Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Connections
  4. Mappings in the Hadoop Environment
  5. Mapping Objects in the Hadoop Environment
  6. Monitoring Mappings in the Hadoop Environment
  7. Mappings in the Native Environment
  8. Profiles
  9. Native Environment Optimization
  10. Data Type Reference
  11. Function Reference
  12. Parameter Reference

Hive Targets

A mapping that is running in the Hadoop environment can write to a Hive target.
Consider the following limitations when you configure a Hive target in a mapping that runs in the Hadoop environment:
  • The Data Integration Service does not run pre-mapping or post-mapping SQL commands against a Hive target. You cannot validate and run a mapping with PreSQL or PostSQL properties for a Hive target.
  • A mapping fails to run if the number or order of columns in the Hive target definition differs from the number or order of columns in the relational table in the Hive database.
  • A mapping fails to run when you use Unicode characters in a Hive target definition.
  • You must truncate the target table to overwrite data in a Hive table with Hive version 0.7. The Data Integration Service ignores the write, update override, delete, insert, and update strategy properties when it writes data to a Hive target.
  • The Data Integration Service can truncate the partition of the Hive target into which it inserts data. To truncate the partition, you must enable both the option to truncate the partition in the Hive target and the option to truncate the target table, as shown in the example after this list.
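For reference, the two truncate options roughly correspond to the following HiveQL operations. The table name, partition column, and partition value are hypothetical placeholders used only for illustration.

    -- Truncate only the partition that receives the inserted data
    TRUNCATE TABLE sales_data PARTITION (country = 'US');

    -- Truncate the entire target table before the write
    TRUNCATE TABLE sales_data;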
A mapping that runs on the Blaze engine can have partitioned and bucketed Hive tables as targets. The Blaze engine also supports Hive tables that are enabled for locking. The Blaze engine can create or replace Hive target tables and can truncate a partition in the Hive target table. To truncate a partition, you must enable both the option to truncate the partition in the Hive target and the option to truncate the target table.
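The following HiveQL DDL is a minimal sketch of a partitioned and bucketed Hive table of the kind that a Blaze mapping can write to. The table and column names are hypothetical.

    -- Hypothetical partitioned and bucketed Hive target table
    CREATE TABLE sales_data (
        order_id INT,
        amount   DOUBLE
    )
    PARTITIONED BY (country STRING)
    CLUSTERED BY (order_id) INTO 8 BUCKETS
    STORED AS ORC;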
In a mapping that runs on the Spark engine or the Blaze engine, you can create a custom DDL query that creates or replaces a Hive table at run time. However, with the Blaze engine, you cannot use a backtick (`) character in the DDL query. HiveQL requires the backtick character when an identifier in the query contains special characters or reserved keywords.
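The following sketch shows a custom DDL query that uses backticks to quote identifiers that contain a reserved keyword and a special character. The names are hypothetical; a query like this is valid HiveQL on the Spark engine but fails on the Blaze engine because of the backtick restriction.

    -- Hypothetical custom DDL with backtick-quoted identifiers
    CREATE TABLE IF NOT EXISTS archive_orders (
        `date`     STRING,   -- reserved keyword, must be quoted
        `order id` INT,      -- identifier with a space, must be quoted
        amount     DOUBLE
    )
    STORED AS ORC;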
A mapping that runs on the Blaze engine can write to Hive ACID tables. To write to a Hive ACID table, the mapping must contain an Update Strategy transformation connected to the Hive target. The update strategy expression must flag each row for insert.
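For context, a Hive ACID table is a transactional table. The following DDL is a minimal sketch of such a table; in Hive 1.x and 2.x, ACID tables must be bucketed, stored as ORC, and created with the transactional table property. The table and column names are hypothetical.

    -- Hypothetical Hive ACID (transactional) target table
    CREATE TABLE customer_updates (
        customer_id INT,
        status      STRING
    )
    CLUSTERED BY (customer_id) INTO 4 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional' = 'true');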


Updated November 09, 2018