PowerExchange Adapters for Informatica
- PowerExchange Adapters for Informatica 10.5.3
| Property | Description |
|---|---|
| Tracing Level | Sets the amount of detail that appears in the log file. You can choose terse, normal, verbose initialization, or verbose data. Default is normal. |
| Concurrent Threads | Number of concurrent connections used to read data from Microsoft Azure Data Lake Storage Gen2. When reading a large file or object, you can spawn multiple threads to process the data. Configure Block Size to partition a large file into smaller parts. Default is 10. |
| Filesystem Name Override | Overrides the file system name. |
| Source Type | The type of source from which you want to read data. You can select File or Directory. Default is File. Directory read is not applicable to binary files. |
| Allow Wildcard Characters | Indicates whether you want to use wildcard characters in the source file name when you run a mapping to read data from an Avro, flat, JSON, ORC, or Parquet file. For more information about wildcard characters, see Wildcard Characters. |
| Directory Override | Overrides the default directory path. You can specify an absolute or a relative directory path. |
| File Name Override | Overrides the file name. |
| Recursive Directory Read | Indicates whether you want to read objects stored in subdirectories in the native environment, on the Spark engine, and on the Databricks Spark engine. For more information about recursive directory read, see Reading Files from Subdirectories. |
| Block Size | Partitions a large file or object into smaller parts, each of the specified block size. When reading a large file, consider partitioning it into smaller parts and configure Concurrent Threads to spawn the required number of threads to process the data in parallel. |
| Timeout Interval | The number of seconds to wait when attempting to connect to the server. A timeout occurs if the connection cannot be established in the specified amount of time. Default is 0. |
| Compression Formats | Compresses data when you read data from Microsoft Azure Data Lake Storage Gen2. If you select a .GZ source object and select None as the compression format, the mapping runs successfully but data is not written to the target object. |
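The wildcard matching that Allow Wildcard Characters enables can be pictured with a small sketch. This is illustrative only, not the adapter's internal code; the file names are hypothetical, and the sketch assumes the common glob convention where `*` matches any run of characters and `?` matches a single character.

```python
import fnmatch

# Hypothetical source object names in a Data Lake Storage Gen2 file system.
names = ["sales_2023.parquet", "sales_2024.parquet", "inventory.csv"]

# A wildcard file name override such as "sales_*.parquet" would select
# only the objects whose names fit the pattern.
matches = fnmatch.filter(names, "sales_*.parquet")
print(matches)  # ['sales_2023.parquet', 'sales_2024.parquet']
```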
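The effect of Recursive Directory Read can be sketched with a local analogy: instead of reading only the objects directly under the source directory, the reader also descends into every subdirectory. The directory layout below is a made-up example, not a layout the adapter requires.

```python
import os
import tempfile

# Build a small tree: one file at the top level, one in a nested
# subdirectory (a partition-style path, purely for illustration).
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "year=2024", "month=01"))
for rel in ["top.avro", os.path.join("year=2024", "month=01", "part-0.avro")]:
    open(os.path.join(root, rel), "w").close()

# A recursive read visits every subdirectory and collects all files.
found = sorted(
    os.path.relpath(os.path.join(dirpath, name), root)
    for dirpath, _, files in os.walk(root)
    for name in files
)
print(found)
```

With recursion disabled, only `top.avro` would be read; with it enabled, the nested `part-0.avro` is picked up as well.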
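How Block Size and Concurrent Threads cooperate can be sketched as follows: the file is cut into fixed-size blocks, and a pool of worker threads processes the blocks in parallel. This is a conceptual sketch with made-up sizes, not the adapter's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def read_in_blocks(data: bytes, block_size: int, threads: int):
    """Split data into block_size chunks and process them concurrently,
    mirroring the interaction of Block Size and Concurrent Threads."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        # Each worker "processes" one block; here it just measures the block.
        sizes = list(pool.map(len, blocks))
    return blocks, sizes

# A 1000-byte object with a 256-byte block size yields 4 blocks
# (three full blocks and one 232-byte remainder).
blocks, sizes = read_in_blocks(b"x" * 1000, block_size=256, threads=10)
print(len(blocks), sum(sizes))  # 4 1000
```

A smaller block size produces more blocks and therefore more opportunities for parallelism, at the cost of per-block overhead.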