Meta Integration® Model Bridge (MIMB)
"Metadata Integration" Solution

MIMB Bridge Documentation

MIMB Import Bridge from OpenStack Swift Object Store File System - New Beta Bridge

Bridge Specifications

Vendor OpenStack
Tool Name Swift Object Store File System
Tool Version 1.0
Tool Web Site
Supported Methodology [File System] Multi-Model, Data Store (NoSQL / Hierarchical) via REST API

Tool: OpenStack / Swift Object Store File System version 1.0 via REST API
Metadata: [File System] Multi-Model, Data Store (NoSQL / Hierarchical)
Component: OpenStackSwift version 11.2.0

This import bridge crawls a data lake implemented on OpenStack Swift to detect (reverse engineer) metadata from all its data files (for data cataloging purposes).



This import bridge supports the following file formats:
- Delimited (Flat) files such as CSV (see details below)
- Positional (Fixed Length) files typically from mainframe (see details below)
- COBOL COPYBOOK files typically from mainframe (see details below)
- Open Office Excel XML .XLSX (see details below)
- JSON (JavaScript Object Notation)
- Apache Avro
- Apache Parquet
- Apache ORC

as well as the compressed versions of the above formats:
- ZIP (as a compression format, not as an archive format)
- LZ4
- Snappy (as standard Snappy format, not as Hadoop native Snappy format)
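As a rough illustration of how such compressed files can be told apart (a minimal sketch assuming detection by magic bytes; the bridge's actual detection logic is not documented here):

def sniff_compression(path):
    """Guess a compression format from the file's first bytes."""
    with open(path, 'rb') as f:
        magic = f.read(4)
    if magic.startswith(b'PK\x03\x04'):
        return 'zip'            # ZIP local file header signature
    if magic == b'\x04\x22\x4d\x18':
        return 'lz4'            # LZ4 frame magic number 0x184D2204 (little-endian)
    return None                 # standard Snappy has no magic number; other heuristics are needed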


This import bridge detects (reverse engineers) the metadata from a data file of type Delimited File (also known as Flat File).
The detection of such Delimited File is not based on file extensions (such as .CSV, .PSV) but rather by sampling the file content.

The import bridge can detect a header row and use it to create the field names; otherwise, generic field names are created.

The import bridge samples up to 1000 rows in order to automatically detect the field separators which by default include:
', (comma)', '; (semicolon)', ': (colon)', '\t (tab)', '| (pipe)', '0x1 (ctrl+A)'
More separators can be added in the auto detection process (including double characters), see the Miscellaneous parameter.

During the sampling, the import bridge also detects the file data types, such as DATE, NUMBER, STRING.
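The following is a minimal sketch of this kind of separator auto-detection (an illustration only, not the bridge's actual implementation; the candidate list and the 1000-row cap mirror the description above):

import collections

SEPARATORS = [',', ';', ':', '\t', '|', '\x01']    # the default candidates listed above

def detect_separator(path, max_rows=1000, encoding='utf-8'):
    counts = collections.defaultdict(list)         # candidate -> occurrences per sampled row
    with open(path, 'r', encoding=encoding, errors='replace') as f:
        for row_number, line in enumerate(f):
            if row_number >= max_rows:
                break
            for sep in SEPARATORS:
                counts[sep].append(line.count(sep))
    best, best_count = None, 0
    for sep, per_row in counts.items():
        # A plausible separator occurs the same non-zero number of times on every row.
        if per_row and min(per_row) > 0 and len(set(per_row)) == 1 and per_row[0] > best_count:
            best, best_count = sep, per_row[0]
    return best                                    # None when no candidate is consistent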


This import bridge creates metadata for data files of type Positional File (also known as Fixed Length File).
Such metadata cannot be automatically detected (reverse engineered) by sampling the data files (e.g. customers.dat, or even just customers with no extension).
Therefore, this import bridge imports a 'Positional File Definition' file, which must have the extension .positional_file_definition
(e.g. a customers.dat.positional_file_definition file will create the metadata of a file named customers.dat with the fields defined inside).
This is the equivalent of an RDBMS DDL for positional files. With such a long extension, this data definition file can coexist with the actual data files in each file system directory containing them.

The 'Positional File Definition' file format is defined as follows:
- Format file must start with the following header:
column name, position, width, data type, comment
- All positions must be unique and greater than or equal to 1.
- The file format is invalid when some columns have positions and others don't.
- When no columns have positions but all have widths, the application assumes the columns are ordered and calculates positions based on the widths, e.g.
a,,4 -> a,1,4
b,,25 -> b,5,25
- When positions are present, the application uses widths for documentation only.
- Types and comments are used as documentation only.
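For example, a hypothetical customers.dat.positional_file_definition describing a three-field customers.dat could look like:

column name, position, width, data type, comment
id,1,6,NUMBER,customer identifier
name,7,25,STRING,customer full name
signup_date,32,8,DATE,format YYYYMMDD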


This import bridge can only import COBOL COPYBOOK files (which contain the data definitions); it therefore does not detect (reverse engineer) metadata from actual COBOL data files.
The detection of such COBOL COPYBOOK file is not based on file extensions (such as .CPY) but rather by sampling the file content.

This bridge creates a 'Physical Hierarchical Model' which reflects a truly flat, byte-position defined record structure, which is useful for stitching to DI/ETL processes. The physical model therefore has all the physical elements required to define a flat record, i.e. ONE table with all the elements (including multiple columns for OCCURS elements when the proper import bridge parameter is set).

Note that this import bridge does not currently support the COPY verb, and reports a parsing error at the line and position at which the COPY statement begins. In order to import Copybooks with the Copy Statement, create an expanded Copybook file with the included sections already in place (replacing the COPY verb). Most COBOL compilers have the option to output only the preprocessed Copybooks with the COPY and REPLACE statements expanded.
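As an illustration (hypothetical copybook and member names), a copybook using the COPY verb must be replaced by its expanded equivalent before import:

Before expansion (rejected with a parsing error at the COPY statement):

       01  CUSTOMER-RECORD.
           05  CUST-ID          PIC 9(6).
           COPY CUSTADDR.

After expansion (importable):

       01  CUSTOMER-RECORD.
           05  CUST-ID          PIC 9(6).
           05  CUST-ADDRESS.
               10  CUST-STREET  PIC X(30).
               10  CUST-CITY    PIC X(20).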

Q: Why is the default start column '6' (six) and the default end column '72' (seventy-two)?
A: The import bridge parser counts columns starting at 0 (zero), rather than 1 (one). Thus, the defaults leave the standard first six columns for line numbers, next column for comment indicators, and last 8 columns (out of 80) for additional line comment information.
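Under this 0-based counting, the defaults correspond to the standard 80-column COBOL source layout (interpretation of the answer above):

Columns (0-based)   Content
0-5                 sequence (line) numbers - before the default start column
6                   comment indicator - the default start column
7-71                COBOL source text
72-79               additional line comment information - at and after the default end column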


This import bridge detects (reverse engineers) the metadata from a data file of type Excel XML format (XLSX).
The detection of such Excel file is based on file extension .XLSX.

The import bridge can detect a header row and use it to create the field names; otherwise, generic field names are created.

The import bridge samples up to 1000 rows to detect the file data types, such as DATE, NUMBER, STRING.

If an Excel file has multiple sheets, each sheet is imported as the equivalent of a file/table with the same sheet name.
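A minimal sketch of the per-sheet import and header detection described above (an illustration only, using the openpyxl library, which the bridge itself does not necessarily use):

import openpyxl

def list_sheet_fields(path):
    workbook = openpyxl.load_workbook(path, read_only=True)
    for sheet in workbook.worksheets:
        rows = sheet.iter_rows(values_only=True)
        header = next(rows, None)
        # A first row made only of non-empty strings is treated as field names;
        # otherwise generic field names are generated.
        if header and all(isinstance(cell, str) and cell for cell in header):
            fields = list(header)
        else:
            fields = ['COLUMN_%d' % (i + 1) for i in range(len(header or ()))]
        print(sheet.title, fields)   # each sheet becomes the equivalent of a file/table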

The import bridge uses the machine's locale to read files, and allows you to specify the character set encoding the files use (see the -file.encoding option below).

Refer to the current general known limitations bundled in Documentation/ReadMe/MIMBKnownLimitations.html

Provide a troubleshooting package with:
- the debug log (can be set in the UI or in conf/ with MIR_LOG_LEVEL=6)
- the metadata backup if available (can be set in the Miscellaneous parameter with -backup option, although this common option is not implemented on all bridges for technical reasons).

Bridge Parameters

Parameter Name     Description                                                        Type
REST Endpoint      Your REST Endpoint to sign programmatic requests to the service.  STRING
Auth V1 Endpoint   Your Auth V1 Endpoint to authenticate the import bridge.          STRING
User               Username                                                           STRING
Password           Password                                                           PASSWORD
Container          Container name.                                                    STRING
Miscellaneous      Miscellaneous options, as documented below.                        STRING

INTRODUCTION
Specify miscellaneous options starting with a dash and optionally followed by parameters, e.g.
-connection.cast MyDatabase1="MICROSOFT SQL SERVER"
Some options can be used multiple times if applicable, e.g.
-connection.rename NewConnection1=OldConnection1 -connection.rename NewConnection2=OldConnection2;
As the list of options can become a long string, it is possible to load it from a file which must be located in ${MODEL_BRIDGE_HOME}\data\MIMB\parameters and have the extension .txt. In such a case, all options must be defined within that file as the only value of this parameter, e.g.
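For instance, a hypothetical ${MODEL_BRIDGE_HOME}\data\MIMB\parameters\swift-options.txt (file name chosen for illustration) could contain:

-java.memory 4G
-processing.max.threads 4
-delimited.add.separators ~
-cache.clear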

-java.memory <Java Memory's maximum size> (previously -m)

1G by default on a 64-bit JRE, or as set in conf/, e.g.
-java.memory 8G
-java.memory 8000M

-java.parameters <Java Runtime Environment command line options> (previously -j)

This option must be the last one in the Miscellaneous parameter as all the text after -java.parameters is passed "as is" to the JRE, e.g.
-java.parameters -Dname=value -Xms1G
The following option must be set when a proxy is used to access the internet (this is critical to access the third-party software library download sites, and exceptionally a few other tool sites) in order to download the necessary third-party software libraries.
Note: The majority of proxies are concerned with encrypting (HTTPS) the traffic outside of the company, and trust the inside traffic that can access the proxy over HTTP. In this case, an HTTPS request reaches the proxy over HTTP, where the proxy HTTPS-encrypts it.
-java.parameters -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 -Dhttp.proxyUser=user -Dhttp.proxyPassword=pass


-model.name <model name>

Override the model name, e.g.
-model.name "My Model Name"

-prescript <script name>

This option allows running a script before the bridge execution.
The script must be located in the bin directory (or as specified with M_SCRIPT_PATH in conf/), and have a .bat or .sh extension.
The script path must not include any parent directory symbol (..).
The script should return exit code 0 to indicate success, or another value to indicate failure.
For example:
-prescript "script.bat arg1 arg2"

-postscript <script name>

This option allows running a script after successful execution of the bridge.
The script must be located in the bin directory (or as specified with M_SCRIPT_PATH in conf/), and have a .bat or .sh extension.
The script path must not include any parent directory symbol (..).
The script should return exit code 0 to indicate success, or another value to indicate failure.
For example:
-postscript "script.bat arg1 arg2"


-cache.clear

Clears the cache before the import, and therefore runs a full import without incremental harvesting.

If the model was not changed and the -cache.clear parameter is not used (incremental harvesting), then a new version will not be created.
If the model was not changed and the -cache.clear parameter is set (full source import instead of incremental), then a new version will be created.

-backup <directory>

This option allows saving the bridge input metadata for further troubleshooting. The provided <directory> must be empty.

The primary use of this option is for data store import bridges, in particular JDBC based database import bridges.

Note that this option is not operational on some bridges including:
- File based import bridges (as such input files can be used instead)
- DI/BI repository import bridges (as the tool's repository native backup can be used instead)
- Some API based import bridges (e.g. COM based) for technical reasons.

Data connections are produced by the import bridges, typically from ETL/DI and BI tools, to refer to the source and target data stores they use. These data connections are then used by metadata management tools to connect them (metadata stitching) to their actual data stores (e.g. databases, file systems, etc.) in order to produce the full end-to-end data flow lineage and impact analysis. The name of each data connection is unique by import model. The data connection names used within DI/BI design tools are used when possible, otherwise connection names are generated to be short but meaningful, such as the database / schema name, the file system path, or the Uniform Resource Identifier (URI). The following options allow manipulating the connections. These options replace the legacy options -c, -cd, and -cs.

-connection.cast ConnectionName=ConnectionType

Casts a generic database connection (e.g. ODBC/JDBC) to a precise database type (e.g. ORACLE) for SQL Parsing, e.g.
-connection.cast "My Database"="MICROSOFT SQL SERVER".
The list of supported data store connection types includes:

-connection.rename OldConnection=NewConnection

Renames an existing connection to a new name, e.g.
-connection.rename OldConnectionName=NewConnectionName
Multiple existing database connections can be renamed and merged into one new database connection, e.g.
-connection.rename MySchema1=MyDatabase -connection.rename MySchema2=MyDatabase

-connection.split oldConnection.Schema1=newConnection

Splits a database connection into one or multiple database connections.
A single database connection can be split into one connection per schema, e.g.
-connection.split MyDatabase
All database connections can be split into one connection per schema, e.g.
-connection.split *
A database connection can be explicitly split, creating a new database connection, by appending a schema name to a database, e.g.
-connection.split MyDatabase.schema1=MySchema1

-connection.map SourcePath=DestinationPath

Maps a source path to a destination path. This is useful for file system connections when different paths point to the same object (directory or file).
On Hadoop, a process can write into a CSV file specified with the HDFS full path, but another process reads from a Hive table implemented (external) by the same file specified using a relative path with default file name and extension, e.g. /user1/folder=hdfs://host:8020/users/user1/folder/file.csv
On Linux, a given directory (or file) like /data can be referred to by multiple symbolic links like /users/john and /users/paul, e.g. /data=/users/John /data=/users/paul
On Windows, a given directory like C:\data can be referred to by multiple network drives like M: and N:, e.g. C:\data=M:\ C:\data=N:\

-connection.casesensitive ConnectionName

Overrides the default case-insensitive matching rules for the object identifiers inside the specified connection, provided the detected type of the data store supports this configuration (e.g. Microsoft SQL Server, MySQL, etc.), e.g.
-connection.casesensitive "My Database"

-connection.level AggregationLevel

Specifies the aggregation level for the external connections, e.g. -connection.level catalog
The list of the supported values:
schema (default)

-file.encoding <Encoding value>

Uses the encoding value to read the text files (e.g. delimited and fixed width).
The supported languages are listed below with the actual encoding value between parentheses at the end of each line, e.g.
-file.encoding shift_jis

Central and Eastern European (ISO-8859-2)
Central and Eastern European (Windows-1250)
Chinese Traditional (Big5)
Chinese Simplified (GB18030)
Chinese Simplified (GB2312)
Cyrillic (ISO-8859-5)
Cyrillic (Windows-1251)
DOS (IBM-850)
Greek (ISO-8859-7)
Greek (Windows-1253)
Hebrew (ISO-8859-8)
Hebrew (Windows-1255)
Japanese (Shift_JIS)
Korean (KS_C_5601-1987)
Thai (TIS620)
Thai (Windows-874)
Turkish (ISO-8859-9)
Turkish (Windows-1254)
UTF 8 (UTF-8)
UTF 16 (UTF-16)
Western European (ISO-8859-1)
Western European (ISO-8859-15)
Western European (Windows-1252)
Locale encoding
No encoding conversion

-processing.max.threads <number> (previously -tps)

Allows for parallel processing up to a maximum number of threads (by default 1), e.g.
-processing.max.threads 10

-processing.max.time <time> (previously -tl)

Sets a time limit for processing all files. Time can be specified in seconds, minutes, or hours, e.g.
-processing.max.time 3600s
-processing.max.time 60m
-processing.max.time 1h

-processing.max.files <number> (previously -fl)

Sets a maximum number of files to process (there are no limits by default), e.g.
-processing.max.files 100

Note: exercise caution when using this option to handle a large number of files, which may be in partition directories. Instead, the Partition directories parameter should be used to properly declare any partition directory. That declaration will not only limit the number of similar files to be processed, but will also produce a proper model of the data lake as a partition rather than a large number of files.

-partitions.latest (previously -fresh.partition.models)

Uses ONLY the latest modified files when processing partitions defined in the Partition directories parameter.

-partitions.disable.detection (previously -disable.partitions.autodetection)

Disables the automatic partition detection (when the "Partition directories" option is empty).

-cache.reuse

Reuses what was already downloaded in the cache by disabling the downloading of dependencies.

-hadoop.key <Hadoop configuration options> (previously -hadoop)

Sets key values for the Hadoop libraries (none by default), e.g.
-hadoop.key key1=val1;key2=val2

-path.substitute <path> <new path> (previously -subst)

Substitutes a root path by a new one, e.g.
-path.substitute K: C:\test


-detailed.log

Prints all processed file paths into the debug log.

-delimited.disable.header.parsing (previously -delimited.no_header)

Disables the parsing of the header of delimited files (headers are parsed by default to detect field names).
Use this option if the delimited file has no header, or to disable the import of the header (if the field names are sensitive).

-delimited.top.rows.skip <number> (previously -delimited.top_rows_skip)

Skips a number of rows at the top of delimited files (by default 0).
Use this option if the delimited files contain several rows of description at the beginning, e.g.
-delimited.top.rows.skip 1

-delimited.add.separators <comma separated separators> (previously -delimited.extra_separators)

Adds extra possible separators when parsing delimited files.
By default, the detected separators include: ', (comma)', '; (semicolon)', '\t (tab)', '| (pipe)', '0x1 (ctrl+A)', 'BS (\u0008)', ': (colon)'
Note that the extra separators can be multi-character, e.g.
-delimited.add.separators ~,||,|~

-parquet.max.compressed.size <value> (previously -parquet.compressed.max.size)

Ignores any Parquet archive files with a compressed size bigger than the provided value (default value is 10,000,000 bytes), e.g.
-parquet.max.compressed.size 10000000


Bridge Mapping

Meta Integration Repository (MIR)          "OpenStack Swift Object Store File System - New Beta Bridge"
(based on the OMG CWM standard)            File System (File)

Attribute                   Array Elementary Item, Field, Attribute, Array Field, Elementary Item, Fixed Width Field, Partition Field
  Comment                   Comment
  Name                      Name
  Position                  Position, Offset
Class                       Array Element, Group Item, Array Group Item, Array Object, Element, Object, Sheet
  Name                      Name
PropertyElementTypeScope    UDPs
  Name                      Name
  Scope                     Scope
PropertyType                UDP
  DataType                  Data Type
  DesignLevel               Design Level
  Name                      Name
  Position                  Position
StoreModel                  Cobol File, Parquet File, Delimited File, Avro File, Json File, Collection, Orc File, Xml File, Excel File, File, Fixed Width File
  Name                      Name
  NativeType                Type
TypeValue                   Condition Name
  Name                      Name
  Value                     Value

Last updated on Mon, 17 Jun 2024 17:48:15

Copyright © 1997-2024 Meta Integration Technology, Inc. All Rights Reserved.

Meta Integration® is a registered trademark of Meta Integration Technology, Inc.
All other trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.