Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for Google BigQuery
  6. PowerExchange for Google Cloud Storage
  7. PowerExchange for Greenplum
  8. PowerExchange for HBase
  9. PowerExchange for HDFS
  10. PowerExchange for Hive
  11. PowerExchange for JDBC V2
  12. PowerExchange for JD Edwards EnterpriseOne
  13. PowerExchange for Kudu
  14. PowerExchange for LDAP
  15. PowerExchange for Microsoft Azure Blob Storage
  16. PowerExchange for Microsoft Azure Cosmos DB SQL API
  17. PowerExchange for Microsoft Azure Data Lake Storage Gen1
  18. PowerExchange for Microsoft Azure Data Lake Storage Gen2
  19. PowerExchange for Microsoft Azure SQL Data Warehouse
  20. PowerExchange for Microsoft Dynamics CRM
  21. PowerExchange for MongoDB
  22. PowerExchange for Netezza
  23. PowerExchange for OData
  24. PowerExchange for Salesforce
  25. PowerExchange for SAP NetWeaver
  26. PowerExchange for Snowflake
  27. PowerExchange for Teradata
  28. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for HDFS (10.5)

Known Issues

The following table describes known issues:

OCON-25655
When you run a mapping in the native environment to write data to a complex file target, you choose to overwrite the target data, and the target file name does not contain a file format extension such as .avro or .parquet, the Data Integration Service does not delete the existing target data before writing new data.

OCON-23920
When the Blaze engine runs a mapping on a Kerberos-enabled Hadoop cluster to read data from and write data to sequence complex file data objects, the mapping fails with the following exception:
"GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos target)"

OCON-24348
When you run a mapping on the Spark engine to read data from an empty JSON complex file source and write data to a complex file target, the mapping should fail. Instead, the mapping runs successfully and the Data Integration Service generates an empty target file.

OCON-23230
You cannot use multiple levels of partitioning when you run a mapping to write data to a complex file target with the FileName port enabled.

OCON-23124
When you run a mapping in the native environment to write data to a complex file with the FileName port and mapping flow enabled, the Data Integration Service generates an incorrect folder structure and writes the data to a single file.

OCON-23084
When you run a mapping to read from a complex file source and write to a complex file target, and the source object schema contains unsupported data types, the mapping fails.

OCON-21852
When you import a complex file data object in JSON format, the import fails with the following error:
Array must contain at least 1 element for projection

OCON-17103
When you run a mapping on the Spark engine to read data from a complex file and the source path contains a wildcard character, the log file does not display the source file names.

OCON-16280
When you create a complex file data object from a JSON file and rename the field names to an invalid field name, the task fails with the following error:
Encountered an error saving the data object

OCON-15862
When you run a mapping to read data from a complex file source in JSON format and enable compression, the mapping runs successfully but the Data Integration Service fails to read data from the source.
