PowerExchange Adapters for Informatica 10.5
When you use the FileName port in a complex file target and run a mapping with mapping flow enabled in the native environment, the Data Integration Service does not create the folder structure as expected.
When you read an ORC file on the Spark engine or the Databricks Spark engine, the Data Integration Service fails to read special characters in the ORC file.
When you read a JSON file in the native environment, the Data Integration Service incorrectly writes the values of double data type to the target in exponential format.
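The exponential-format behavior described above is common to many JSON serializers, which render very large or very small doubles in scientific notation. A minimal Python sketch (a generic illustration, not the Data Integration Service's own code) shows the effect and one generic mitigation: formatting the double as a fixed-point string before serialization.

```python
import json

# Very small and very large doubles serialize in exponential notation.
record = {"amount": 0.0000001234, "total": 98765432100000000000.0}
print(json.dumps(record))  # values appear as 1.234e-07 and 9.87654321e+19

# One generic workaround when fixed-point output is required:
# format each double as a string before serializing.
fixed = {k: f"{v:.10f}" for k, v in record.items()}
print(json.dumps(fixed))
```

The trade-off of the string workaround is that downstream consumers must parse the values back to numbers.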
When you import a flat file data object with default precision and update the precision value at run time, the Data Integration Service uses the default value instead of the value provided at run time.
When you select Text qualifier and specify the escape character in the flat file schema format properties, extra columns are added while previewing the data.
Workaround: Select Text qualifier to import the object. After the object import, you can select No quotes in the Schema tab.
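The underlying parsing behavior can be sketched with Python's csv module (a hypothetical illustration, not the product's parser): whether an escape character is honored changes how many columns a row yields, which is why mismatched escape and text-qualifier settings can surface as extra columns in data preview.

```python
import csv

# A row where a backslash is intended to escape the first delimiter.
row = 'a\\,b,c'

# Escape character honored: the escaped comma stays inside the field.
with_escape = next(csv.reader([row], escapechar='\\'))
print(with_escape)     # ['a,b', 'c'] -> 2 columns

# Escape character ignored: the same comma splits the field.
without_escape = next(csv.reader([row]))
print(without_escape)  # ['a\\', 'b', 'c'] -> 3 columns
```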
When you run a mapping to write a complex file to a Microsoft Azure Data Lake Storage Gen2 target on the Databricks Spark engine, the Data Integration Service does not overwrite all the target files from the previous mapping run in the target directory.
This issue occurs for all target files except the partition file.
When you import an object from subdirectories whose names contain a space, data preview fails.
When you create a pass-through mapping to read data from a flat file and write data to another flat file target and enable run-time linking, '#' is added to the first column name in the target file when you run the mapping for the first time on the Spark engine. The mapping fails at subsequent runs.
When you run a mapping in the native environment to read a flat file that contains Unicode characters, a space, null values, single quotes, or a value that starts with a dollar sign, the Data Integration Service adds double quotes to the values when writing data to the target.
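The quoting behavior above resembles a writer that quotes defensively rather than minimally. A small Python csv sketch (a generic illustration, not the product's writer) contrasts the two policies on values like those listed:

```python
import csv
import io

# Sample values containing a space, a single quote, and a dollar sign.
values = ['has space', "it's", '$100']

# Defensive policy: every value is wrapped in double quotes.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL).writerow(values)
print(buf.getvalue())  # "has space","it's","$100"

# Minimal policy: only values containing the delimiter, quote
# character, or a line break are quoted; these values are not.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerow(values)
print(buf.getvalue())  # has space,it's,$100
```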
You cannot configure Azure Government endpoints in mappings that run on the Databricks Spark engine.