PowerExchange Adapters for Informatica
- PowerExchange Adapters for Informatica 10.5
| Bug | Description |
| --- | --- |
| OCON-26004 | When you run a mapping in the native environment to read a complex file, the Microsoft Azure Blob Storage connection uses the shared access signature authorization type, and the Source transformation contains an override for the container name, the mapping fails with the following error:<br>`java.lang.Exception: [MPSVCCMN_10094] The Mapping Service Module failed to run the job with ID [XK6nqtfUEeqypGWC8kJ51Q] because of the following error: [EdtmExec_00007] Exception: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException` |
| OCON-17082 | When you import an object from subdirectories whose names contain a space, data preview fails. |
| Bug | Description |
| --- | --- |
| BDM-19847 | When you run a mapping on the Spark engine for a write operation and the folder path contains special characters, the Data Integration Service creates a new folder. |
| OCON-22511 | When you read data from a Microsoft Azure SQL Data Warehouse source and use the Create Target option to create a Microsoft Azure Blob Storage target, the mapping fails if the Microsoft Azure Blob Storage connection uses SAS authentication. |
| OCON-20605 | When you run a mapping in the native environment to read a flat file that contains Unicode characters, a space, null values, single quotes, or a value that starts with a dollar sign, the Data Integration Service adds double quotes to the values when it writes data to the target. |
| OCON-17642 | When you enable Mapping Flow in a mapping that reads data from a flat file source and writes to a flat file target, the mapping fails in the native environment.<br>Workaround: Remove the FileName field from the imported source object and rerun the mapping. |
| OCON-17443 | When you use the Create Target option to create a Microsoft Azure Blob Storage target and select Flat as the Resource Format, fields are not propagated to the target.<br>Workaround: Enable column projection, create the fields manually in the target file, and run the mapping. |
| OCON-12420 | When you read or write a blob that contains special characters, the mapping fails on the Spark engine. |
| OCON-12352 | When a JSON file contains special characters, the Data Integration Service does not read the data correctly in Spark mode. |
| OCON-12318 | The Data Integration Service adds an extra blank line at the end of the file when you read or write a flat file in the native environment or in Spark mode. |
| OCON-10125 | When you read data from or write data to Microsoft Azure Blob Storage, the entire blob is downloaded to the staging directory even if you cancel the mapping. |