Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for Cassandra JDBC
  6. PowerExchange for DataSift
  7. PowerExchange for Facebook
  8. PowerExchange for Google Analytics
  9. PowerExchange for Google BigQuery
  10. PowerExchange for Google Cloud Spanner
  11. PowerExchange for Google Cloud Storage
  12. PowerExchange for Greenplum
  13. PowerExchange for HBase
  14. PowerExchange for HDFS
  15. PowerExchange for Hive
  16. PowerExchange for JD Edwards EnterpriseOne
  17. PowerExchange for JDBC V2 (10.4.0)
  18. PowerExchange for LDAP
  19. PowerExchange for LinkedIn
  20. PowerExchange for MapR-DB (10.4.0)
  21. PowerExchange for Microsoft Azure Blob Storage
  22. PowerExchange for Microsoft Azure Cosmos DB SQL API
  23. PowerExchange for Microsoft Azure Data Lake Storage Gen1
  24. PowerExchange for Microsoft Azure Data Lake Storage Gen2
  25. PowerExchange for Microsoft Azure SQL Data Warehouse
  26. PowerExchange for Microsoft Dynamics CRM
  27. PowerExchange for MongoDB
  28. PowerExchange for Netezza
  29. PowerExchange for OData
  30. PowerExchange for Salesforce
  31. PowerExchange for Salesforce Marketing Cloud
  32. PowerExchange for SAP NetWeaver
  33. PowerExchange for Snowflake (10.4.0)
  34. PowerExchange for Tableau
  35. PowerExchange for Tableau V3
  36. PowerExchange for Teradata Parallel Transporter API
  37. PowerExchange for Twitter
  38. PowerExchange for Web Content-Kapow Katalyst
  39. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for HDFS Known Limitations (10.4.0)

The following table describes known limitations:

Bug          Description
OCON-23230   You cannot use multiple-level partitioning when you run a mapping to write data to a complex file target with the Filename port enabled.
OCON-23124   When you run a mapping in the native environment to write data to a complex file target with the filename port and mapping flow enabled, the Data Integration Service generates an incorrect folder structure and writes the data to a single file.
OCON-23122   When you run a mapping in the native environment to read data from or write data to a complex file object in ORC format, the mapping fails.
OCON-23084   When you run a mapping to read from a complex file source and write to a complex file target, and the source object schema contains unsupported data types, the mapping fails.
OCON-21852   When you import a complex file data object in JSON format, the import fails with the following error:
             Array must contain at least 1 element for projection
             See the JSON sketch after this table.
OCON-17103   When you run a mapping on the Spark engine to read data from a complex file and the source path contains a wildcard character, the log file does not display the source file names.
OCON-16280   When you create a complex file data object from a JSON file, the task fails with the following error:
             Encountered an error saving the data object
OCON-15862   When you run a mapping to read data from a complex file source in JSON format with compression enabled, the mapping runs successfully but the Data Integration Service fails to read data from the source.
OCON-12579   If you set the Hive warehouse directory in a Hadoop connection to an encrypted HDFS directory and the impersonation user does not have the DECRYPT_EEK permission, complex file mappings run indefinitely on the Hive engine. See the ACL sketch after this table.
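
For OCON-21852 and OCON-16280, the error text suggests that schema projection needs at least one element in every array in the sample JSON file. The following is a minimal sketch of a sample file shaped to satisfy that requirement; the field names are hypothetical placeholders for your own schema, and the assumption that an empty array triggers the error is inferred from the message, not confirmed by these release notes:

    {
      "records": [
        {
          "id": 1,
          "name": "example"
        }
      ]
    }

A file containing "records": [] would give the importer no element from which to infer a type during projection.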
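
For OCON-12579, one possible mitigation, assuming the mapping hangs because the impersonation user cannot decrypt the encryption zone's data encryption keys, is to grant that user the DECRYPT_EEK operation in the Hadoop KMS key ACLs. A minimal kms-acls.xml sketch, where warehouse_key and infa_user are hypothetical names for the encryption zone key and the impersonation user:

    <property>
      <name>key.acl.warehouse_key.DECRYPT_EEK</name>
      <value>infa_user</value>
      <description>Allow the impersonation user to decrypt EEKs for the warehouse key.</description>
    </property>

The KMS hot-reloads kms-acls.xml, so the change takes effect without a restart. This is a sketch of the standard Hadoop KMS mechanism, not a fix documented for this limitation.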
