PowerExchange Adapters for Informatica
- PowerExchange Adapters for Informatica 10.5.3
- All Products
When you run a mapping on the Spark engine to write data to a Hive table with complex data types such as struct and the column names have special characters, the mapping fails.
When you import a Hive mapping with the truncate table target option enabled for a Hive non-ACID table and run the mapping in the native environment, the truncate query does not work and the mapping fails.
When the Blaze engine runs a mapping on a Cloudera CDH version 6.3.4 cluster to write data that contains single-byte ASCII delimiters to a Hive target, the mapping incorrectly writes the delimiters to the target.
Workaround: Create a Hive target table with the delimiter or octal code character and then run the mapping.
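As an illustration of the workaround, the target table can be created with the delimiter declared explicitly in the DDL. The following HiveQL sketch uses hypothetical table and column names; '\001' is the octal code for the Ctrl-A delimiter character:

```sql
-- Hypothetical Hive target table. FIELDS TERMINATED BY accepts the
-- delimiter character itself or its octal code (here \001, Ctrl-A).
CREATE TABLE sales_target (
  order_id INT,
  region   STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
STORED AS TEXTFILE;
```

After creating the table with the delimiter or octal code character, run the mapping against it.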
When you run a mapping on the Spark engine to write data to a Hive table with complex data types such as struct and the column name contains a reserved word, the mapping fails.
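For context, HiveQL normally requires a reserved word used as a column name to be escaped with backticks. The following sketch with hypothetical identifiers shows the kind of column naming this issue concerns:

```sql
-- Hypothetical example: `date` and `order` are reserved words in HiveQL,
-- so the identifiers must be quoted with backticks in the DDL and in queries.
CREATE TABLE orders (
  `date`  STRING,
  `order` INT
);
SELECT `date`, `order` FROM orders;
```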
When you alter a Hive table by adding a new column in the native environment and the new column name contains a special character, the mapping fails.
When you change the table metadata in the Developer tool by modifying an existing data type in a Hive mapping that reads data from a Hive source and do not synchronize the physical data object (PDO), the mapping fails with the following error:
FAILED: SemanticException [Error 10044]: Line 1:23 Cannot insert into target table because column number/types are different
Workaround: Synchronize the physical data object and run the mapping again.
When you run a mapping to read data from a Hive table with hierarchical (Htype) data type columns and use the sort option to override the default SQL query, the mapping fails.
When you synchronize a Hive object that contains complex data types in the Developer tool, the links between the Hive objects in the mapping are not retained.