Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Connections
  4. Mappings in the Hadoop Environment
  5. Mapping Objects in the Hadoop Environment
  6. Processing Hierarchical Data on the Spark Engine
  7. Stateful Computing on the Spark Engine
  8. Monitoring Mappings in the Hadoop Environment
  9. Mappings in the Native Environment
  10. Profiles
  11. Native Environment Optimization
  12. Data Type Reference
  13. Complex File Data Object Properties
  14. Function Reference
  15. Parameter Reference

Hive Mappings

Depending on the mapping environment, you can read data from or write data to Hive.
In the native environment, you can read data from Hive. To read data from Hive, complete the following steps (see the sketch after the list):
  1. Create a Hive connection.
  2. Configure the Hive connection mode to access Hive as a source or target.
  3. Use the Hive connection to create a data object to read from Hive.
  4. Add the data object to a mapping and configure the mapping to run in the native environment.
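The steps above describe the Developer tool workflow. As a point of comparison only, the following minimal PySpark sketch reads a Hive table directly through the Hive metastore; the table name default.customer is a placeholder and is not part of the product workflow.

    # Minimal sketch: read a Hive table as a DataFrame. Assumes Spark is
    # configured against the Hive metastore; the table name is a placeholder.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("read_hive_example")
             .enableHiveSupport()      # use the Hive metastore as the catalog
             .getOrCreate())

    customer_df = spark.table("default.customer")
    customer_df.show(10)
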
You can write data to Hive in the Hadoop environment. To write data to Hive, complete the following steps (see the sketch after the list):
  1. Create a Hive connection.
  2. Configure the Hive connection mode to access Hive as a source or target.
  3. Use the Hive connection to create a data object to write to Hive.
  4. Add the data object to a mapping and configure the mapping to run in the Hadoop environment.
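Again for illustration only, and not part of the Developer tool workflow, a write to a Hive target could look like the following PySpark sketch; the sample rows and the table name default.customer_tgt are placeholders.

    # Minimal sketch: write a DataFrame to a Hive table. Placeholder data and
    # target table name; assumes access to the Hive metastore.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("write_hive_example")
             .enableHiveSupport()
             .getOrCreate())

    df = spark.createDataFrame([(1, "Alice"), (2, "Bob")], ["id", "name"])
    df.write.mode("append").saveAsTable("default.customer_tgt")   # Hive target
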
You can define the following types of objects in a Hive mapping:
  • A Read transformation to read data from Hive
  • Transformations
  • A target or an SQL data service. You can write to Hive if you run the mapping in a Hadoop cluster.
Validate and run the mapping. You can deploy the mapping and run it or add the mapping to a Mapping task in a workflow.
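To show how the pieces fit together, the sketch below mirrors the read-transform-write structure of a Hive mapping in PySpark: a read from a Hive source, a Filter-style transformation, and a write to a Hive target. All table and column names are assumed for illustration and are not defined by the product.

    # Illustrative read -> transform -> write pipeline against Hive.
    # Table names and the "amount" column are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("hive_mapping_example")
             .enableHiveSupport()
             .getOrCreate())

    orders = spark.table("default.orders")                   # Read object
    large_orders = orders.filter(F.col("amount") > 1000)     # Filter transformation
    large_orders.write.mode("overwrite").saveAsTable("default.orders_large")  # target
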


Updated December 13, 2018