Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for Google BigQuery
  6. PowerExchange for Google Cloud Storage
  7. PowerExchange for Greenplum
  8. PowerExchange for HBase
  9. PowerExchange for HDFS
  10. PowerExchange for Hive
  11. PowerExchange for JDBC V2
  12. PowerExchange for JD Edwards EnterpriseOne
  13. PowerExchange for Kudu
  14. PowerExchange for LDAP
  15. PowerExchange for Microsoft Azure Blob Storage
  16. PowerExchange for Microsoft Azure Cosmos DB SQL API
  17. PowerExchange for Microsoft Azure Data Lake Storage Gen1
  18. PowerExchange for Microsoft Azure Data Lake Storage Gen2
  19. PowerExchange for Microsoft Azure SQL Data Warehouse
  20. PowerExchange for MongoDB
  21. PowerExchange for Netezza
  22. PowerExchange for OData
  23. PowerExchange for Salesforce
  24. PowerExchange for SAP NetWeaver
  25. PowerExchange for Snowflake
  26. PowerExchange for Teradata
  27. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for Hive (10.5)

Fixed Issues

The following issues are fixed in this release:

BDM-33990: When you run a mapping on the Spark engine to write data to a Hive table with complex data types such as struct and the column names contain special characters, the mapping fails.
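
For context, a minimal HiveQL sketch of the scenario this issue describes; the table and column names are hypothetical:

  CREATE TABLE demo_struct_tbl (
    id INT,
    `cust#info` STRUCT<name: STRING, age: INT>  -- special character in the column name
  )
  STORED AS ORC;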

Known Issues

The following known issues apply to this release:

OCON-28029: When you import a Hive mapping with the truncate table target option enabled for a non-ACID Hive table and run it in the native environment, the truncate query does not work and the mapping fails.
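
Whether a Hive table is ACID depends on how it was created. A minimal sketch, with a hypothetical table name; the transactional table property is what makes the table ACID:

  CREATE TABLE demo_orders_acid (
    order_id INT,
    amount DECIMAL(10,2)
  )
  CLUSTERED BY (order_id) INTO 4 BUCKETS
  STORED AS ORC
  TBLPROPERTIES ('transactional'='true');  -- omit this property for a non-ACID table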

BDM-37718: When the Blaze engine runs a mapping on a Cloudera CDH version 6.3.4 cluster to write data that contains single-byte ASCII delimiters to a Hive target, the mapping incorrectly writes the delimiters to the target.
Workaround: Create the Hive target table with the delimiter or octal code character and then run the mapping.
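
A minimal sketch of the workaround, assuming a hypothetical target table and the SOH character (octal 001) as the delimiter:

  CREATE TABLE demo_target_tbl (
    col1 STRING,
    col2 STRING
  )
  ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\001'  -- single-byte delimiter written as an octal escape
  STORED AS TEXTFILE;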

BDM-33998: When you run a mapping on the Spark engine to write data to a Hive table with complex data types such as struct and the column name contains a reserved word, the mapping fails.
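
For illustration, a hypothetical table whose column name is a Hive reserved word:

  CREATE TABLE demo_reserved_tbl (
    `date` STRING,  -- "date" is a Hive reserved word, so it must be quoted with backticks
    payload STRUCT<k: STRING, v: INT>
  )
  STORED AS ORC;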

OCON-18287: When you alter a Hive table in the native environment by adding a new column and a column name in the table contains a special character, the mapping fails.
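
For context, a hypothetical statement of the kind this issue describes:

  ALTER TABLE demo_tbl ADD COLUMNS (`new#col` STRING);  -- column name with a special character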

OCON-25343: When you change the table metadata in the Developer tool by modifying an existing data type in a Hive mapping that reads data from a Hive source, and you do not synchronize the Physical Data Object (PDO), the mapping fails with the following error:
FAILED: SemanticException [Error 10044]: Line 1:23 Cannot insert into target table because column number/types are different
Workaround: Synchronize the Physical Data Object and run the mapping again.
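
The underlying Hive error is a generic schema-mismatch check. For illustration, an insert whose column list no longer matches the target definition raises the same SemanticException; the table names are hypothetical:

  -- demo_target_tbl is assumed to have two columns (id INT, amount DECIMAL(10,2));
  -- selecting only one column triggers Error 10044.
  INSERT INTO TABLE demo_target_tbl SELECT id FROM demo_source_tbl;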

OCON-25211: When you run a mapping to read data from a Hive table with hierarchical (Htype) data types in columns and use the sort option to override the default SQL query, the mapping fails.
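
For illustration, and assuming the sort option appends an ORDER BY clause to the generated query, the failing override resembles the following; the table and column names are hypothetical:

  SELECT id, attrs FROM demo_htype_tbl ORDER BY id;  -- attrs is a hierarchical (struct) column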

OCON-25180: When you synchronize a Hive object that contains complex data types in the Developer tool, the links between the Hive objects in the mapping are not retained.
