PowerExchange Adapters for Informatica
- PowerExchange Adapters for Informatica 10.4.0
- All Products
For the write operation, when you run a mapping on the Spark engine and the folder path contains special characters, the Data Integration Service creates a new folder.
When you refresh the source or target schema at runtime in a dynamic mapping, the values that you specify for the delimiter, text qualifier, and escape character for a flat file are not honored and the default values are used instead. This might lead to unexpected results in the target.
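The effect of falling back to default delimiter and qualifier values can be illustrated outside Informatica with Python's csv module; the pipe delimiter and tilde qualifier below are assumed examples, not values from the product:

```python
import csv
import io

# A flat file that uses '|' as the delimiter and '~' as the text qualifier.
data = "id|name\n1|~Smith, John~\n"

# Parsed with the intended settings, the row splits correctly.
good = list(csv.reader(io.StringIO(data), delimiter="|", quotechar="~"))

# Parsed with typical defaults (comma delimiter, double-quote qualifier),
# the line splits on the embedded comma and the qualifier is kept as data.
bad = list(csv.reader(io.StringIO(data)))

print(good[1])  # ['1', 'Smith, John']
print(bad[1])   # ['1|~Smith', ' John~']
```

The second result is the kind of silently misparsed row that reaches the target when the runtime refresh discards the configured values.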
When you read data from a Microsoft Azure SQL Data Warehouse source and use the Create Target option to create a Microsoft Azure Blob Storage target, the mapping fails if the Microsoft Azure Blob Storage connection uses SAS authentication.
When you run a mapping in the native environment to read a flat file that contains Unicode characters, spaces, null values, single quotes, or values that start with a dollar sign, the Data Integration Service adds double quotes to those values when writing data to the target.
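The observed output resembles what a standard CSV writer produces when quoting is forced for every value; a minimal Python sketch of that behavior, with assumed sample values:

```python
import csv
import io

# Sample values of the kinds affected: a dollar-sign prefix, a single
# quote, an embedded space, and a null (empty) value.
rows = [["$price", "O'Brien", "a b", ""]]

buf = io.StringIO()
# QUOTE_ALL wraps every value in double quotes on output, whether or
# not the value needs them.
csv.writer(buf, quoting=csv.QUOTE_ALL).writerows(rows)
print(buf.getvalue())  # "$price","O'Brien","a b",""
```

This is only an analogy for the behavior described above, not the Data Integration Service implementation.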
When you enable Mapping Flow in a mapping that reads data from a flat file source and writes to a flat file target, the mapping fails with an error in the native environment.
Workaround: Remove the FileName field from the imported source object and rerun the mapping.
When you use the Create Target option to create a Microsoft Azure Blob Storage target and select Flat as the Resource Format, fields are not propagated to the target.
Workaround: Enable column projection, create the fields manually in the target file, and run the mapping.
When you import an object from a subdirectory whose name contains a space, data preview fails.
When you read or write a blob that has special characters, the mapping fails on the Spark engine.
When a JSON file contains special characters, the Data Integration Service does not read the data correctly on the Spark engine.
When you read or write a flat file in the native environment or on the Spark engine, the Data Integration Service adds an extra blank line at the end of the file.
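If a downstream consumer is sensitive to the extra blank line, one option is to strip it after the mapping completes; a minimal post-processing sketch, where the file path is supplied by the caller:

```python
def strip_trailing_blank_line(path):
    """Remove trailing empty lines from a flat file in place,
    keeping the newline that ends the last data record."""
    with open(path, "rb") as f:
        data = f.read()
    # Handle both LF and CRLF line endings.
    while data.endswith(b"\r\n\r\n") or data.endswith(b"\n\n"):
        data = data[:-2] if data.endswith(b"\r\n\r\n") else data[:-1]
    with open(path, "wb") as f:
        f.write(data)
```

Run this against the target file after the mapping finishes, before handing the file to any consumer that rejects blank records.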
When you read data from or write data to Microsoft Azure Blob Storage and you cancel the mapping, the entire blob is still downloaded to the staging directory.
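Because canceled mappings can leave fully downloaded blobs behind, periodically purging old files from the staging directory avoids filling the disk; a sketch under the assumption that the staging directory path and age threshold come from your own service configuration:

```python
import os
import time

def purge_staging(staging_dir, max_age_hours=24):
    """Delete files in the staging directory older than max_age_hours.

    staging_dir is assumed to be the Data Integration Service staging
    directory; confirm the path and a safe age threshold for your
    environment before scheduling this.
    """
    cutoff = time.time() - max_age_hours * 3600
    for name in os.listdir(staging_dir):
        path = os.path.join(staging_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
```

Schedule it (for example, via cron) during a window when no mappings are running, so an in-flight download is never mistaken for a stale file.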