PowerExchange Adapters for Informatica
- PowerExchange Adapters for Informatica 10.4.1
- All Products
When you set the -DINFA_HADOOP_DIST_DIR=hadoop\&lt;Distro&gt; option in the developerCore.ini file and import a Parquet file, the format of the imported metadata differs based on the distribution. For Cloudera CDP 7.1, the metadata is imported as string. For other supported distributions, the metadata is imported as UTF8.
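A minimal sketch of how this option might appear in the developerCore.ini file. The distribution folder name CDH_7.1 below is an assumption for illustration; substitute the folder that matches the Hadoop distribution shipped with your installation:

```ini
; Fragment of developerCore.ini (folder name is an assumed example)
-DINFA_HADOOP_DIST_DIR=hadoop\CDH_7.1
```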
When you run a mapping in the native environment to write data to a complex file target, choose to overwrite the target data, and the target file name does not contain a file format extension such as .avro or .parquet, the Data Integration Service does not delete the target data before writing the new data.
When you run a mapping on the Blaze engine to read data from and write data to sequence file complex file data objects using a Kerberos connection, the mapping fails with the following exception:
"GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos target)"
When you run a mapping on the Spark engine to read data from an empty JSON complex file source and write data to a complex file target, the mapping should fail. Instead, the mapping runs successfully and the Data Integration Service generates an empty target file.
Updated July 14, 2021