PowerExchange Adapters for Informatica 10.2.1
The same Parquet file written in the native environment and on the Spark engine has different sizes. The Data Integration Service cannot read a Parquet file written by the Spark engine in the native environment.
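A size difference alone does not make a Parquet file invalid; engines can legitimately produce files of different sizes for the same data because of differing compression and row-group settings. As a minimal, generic sanity check (a sketch, not Informatica tooling), a file can be confirmed to at least be a Parquet container by its 4-byte magic:

```python
def looks_like_parquet(data: bytes) -> bool:
    # Every valid Parquet file begins and ends with the 4-byte magic 'PAR1'.
    # Two files of different sizes can both pass this check and both be valid.
    return len(data) >= 8 and data.startswith(b"PAR1") and data.endswith(b"PAR1")
```

This only verifies the container framing; it says nothing about whether a given reader supports the compression codec or encodings used inside the file.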
A mapping fails on the Spark engine if the Blob Name Override or Blob Container Override field contains a space or a special character.
When you read or write a blob whose name contains special characters, the mapping fails on the Spark engine.
For a write operation that you run on the Spark engine, if the folder path contains special characters, the Data Integration Service creates a new folder.
When you run a mapping in the native environment and on the Spark engine to write an Avro file, the target files that the two modes generate have different sizes. On the Spark engine, the Data Integration Service compresses the target file with the snappy compression codec.
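The codec used for an Avro container file is recorded in its header metadata under the key "avro.codec", so the compression difference between the two target files can be confirmed without a full Avro library. The helper below is a hypothetical diagnostic sketch that parses just the header (magic, then a map of metadata entries encoded as zigzag-varint-prefixed strings and bytes, per the Avro container file spec):

```python
def _read_long(buf: bytes, pos: int):
    # Avro longs are zigzag-encoded variable-length integers.
    shift = 0
    result = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            break
        shift += 7
    return (result >> 1) ^ -(result & 1), pos  # zigzag decode

def avro_codec(header: bytes) -> str:
    """Return the 'avro.codec' metadata value from an Avro container
    file header, or 'null' if the codec entry is absent."""
    assert header[:4] == b"Obj\x01", "not an Avro container file"
    pos = 4
    meta = {}
    while True:
        count, pos = _read_long(header, pos)
        if count == 0:
            break  # end of the metadata map
        if count < 0:
            # A negative count is followed by the block size in bytes.
            _, pos = _read_long(header, pos)
            count = -count
        for _ in range(count):
            klen, pos = _read_long(header, pos)
            key = header[pos:pos + klen].decode("utf-8")
            pos += klen
            vlen, pos = _read_long(header, pos)
            meta[key] = header[pos:pos + vlen]
            pos += vlen
    return meta.get("avro.codec", b"null").decode("utf-8")
```

Reading the first kilobyte or so of each target file and passing it to this helper should show "snappy" for the Spark-engine output and "null" (uncompressed) for the native output, under the behavior described above.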
When a JSON file contains special characters, the Data Integration Service does not read the data correctly in Spark mode.
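When triaging this kind of issue, it helps to first confirm that the JSON itself round-trips cleanly when the encoding is handled explicitly, so the fault can be isolated to the reader rather than the file. A generic sketch (not Informatica code):

```python
import json

# Round-trip a record containing non-ASCII "special" characters through
# UTF-8. If this succeeds, the JSON file itself is well-formed and the
# problem lies with how the consuming engine decodes it.
record = {"name": "Müller", "city": "São Paulo", "note": "naïve"}
encoded = json.dumps(record, ensure_ascii=False).encode("utf-8")
decoded = json.loads(encoded.decode("utf-8"))
```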
The Data Integration Service stops responding during data preview when an Avro or a Parquet file contains NULL values in columns of the bytes or bytes_array data type.
The write operation for a flat file fails in the native environment when you select single quotes or double quotes as the text qualifier.
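The expected effect of a text qualifier on a flat-file write can be reproduced with Python's standard csv module; this is an illustrative sketch of the general mechanism, not Informatica's implementation:

```python
import csv
import io

def write_flat_file(rows, qualifier):
    # Write rows using the chosen text qualifier. With QUOTE_MINIMAL,
    # only fields that contain the delimiter (or the qualifier itself)
    # are wrapped in the qualifier character.
    buf = io.StringIO()
    writer = csv.writer(buf, quotechar=qualifier, quoting=csv.QUOTE_MINIMAL)
    writer.writerows(rows)
    return buf.getvalue()

rows = [["id", "note"], ["1", "value, with a comma"]]
double_quoted = write_flat_file(rows, '"')
single_quoted = write_flat_file(rows, "'")
```

Both outputs are valid flat files; only the qualifier wrapping the comma-bearing field differs.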
The Data Integration Service appends an extra blank line at the end of the file when you read or write a flat file in the native environment or in Spark mode.
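A file affected this way ends in a doubled newline. As a hypothetical post-processing workaround, the extra terminator can be stripped while preserving the final record's own newline:

```python
def strip_trailing_blank_line(data: bytes) -> bytes:
    # A flat file ending in "\n\n" (or "\r\n\r\n") carries an extra
    # blank record; drop one trailing newline, keeping the newline
    # that terminates the last real record.
    for newline in (b"\r\n", b"\n"):
        if data.endswith(newline * 2):
            return data[: -len(newline)]
    return data
```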
When you read data from or write data to Microsoft Azure Blob Storage, the entire blob is downloaded to the staging directory even if you cancel the mapping.