PowerExchange Adapters for Informatica
- PowerExchange Adapters for Informatica 10.4.1
When you run a mapping to read data from a Parquet file on the Spark engine, the mapping fails with the following error:
Spark task failed with the following error: [com.informatica.sdk.dtm.ExecutionException
This issue occurs when you use the Cloudera CDP 7.1 distribution.
When you set the -DINFA_HADOOP_DIST_DIR=hadoop\<Distro> option in the developerCore.ini file and import a Parquet file, the format of the imported metadata differs by distribution. For Cloudera CDP 7.1, the metadata is imported as string; for other supported distributions, it is imported as UTF8.
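For reference, a sketch of how the option can appear in the developerCore.ini file on a client machine. The CDP_7.1 folder name is illustrative only; the actual distribution folder name depends on your installation:

```ini
; developerCore.ini (illustrative fragment)
; Point the Developer tool at the Hadoop distribution directory.
; "CDP_7.1" is an assumed folder name for the Cloudera CDP 7.1 distribution.
-DINFA_HADOOP_DIST_DIR=hadoop\CDP_7.1
```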
When you run a mapping to write a complex file to a Microsoft Azure Data Lake Storage Gen2 target on the Databricks Spark engine, the Data Integration Service does not overwrite all the target files from the previous mapping run in the target directory.
This issue occurs for all target files except the partition file.
When you read an ORC file on the Spark engine or the Databricks Spark engine, the Data Integration Service fails to read special characters in the ORC file.
When you read a JSON file in the native environment, the Data Integration Service incorrectly writes double data type values to the target in exponential format.
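As a generic illustration of the behavior described above (this is not Informatica's serialization code), many JSON writers emit very small or very large double values in exponential notation unless the value is formatted explicitly:

```python
import json

# Standard JSON serializers use the shortest repr of a double, which
# switches to exponential notation for very small or large values.
print(json.dumps({"rate": 0.00001}))  # {"rate": 1e-05}

# Formatting the value as a fixed-point string avoids the exponential
# form, at the cost of writing it as a string rather than a number.
print(json.dumps({"rate": format(0.00001, ".5f")}))  # {"rate": "0.00001"}
```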
When you import a flat file data object with default precision and update the precision value at run time, the Data Integration Service uses the default value instead of the value provided at run time.
Updated July 14, 2021