PowerExchange for Hive User Guide

PowerExchange for Hive Overview

Use PowerExchange for Hive to read data from a Hive source and write data to a Hive target.
When you read data from Hive, you can run the mapping in the native environment. In the native environment, the Data Integration Service runs the mapping from the Developer tool. If you have Data Engineering Integration, you can also run a mapping with a Hive source in the Hadoop environment. You might run a mapping in the Hadoop environment to optimize performance when you need to process large amounts of data.
When you write data to a Hive target, you run the mapping in the native or Hadoop environment. You can select the following engines in the Hadoop environment:
  • Blaze engine
  • Spark engine
When you run a mapping in the Hadoop environment, the Data Integration Service converts the mapping to a Blaze or Spark execution plan, based on the engine that you select, and runs the execution plan on the Hadoop cluster.
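To illustrate the kind of work a Spark execution plan performs, the following sketch reads a Hive source table through Spark SQL and writes the result to a Hive target table. The application name, database, and table names are assumptions for the example. In PowerExchange for Hive, the Data Integration Service generates the equivalent plan from the mapping, so you do not write this code yourself.

```python
# Illustrative sketch only: the Data Integration Service generates the
# execution plan from the mapping. This shows a comparable Hive read and
# write expressed directly in Spark.
from pyspark.sql import SparkSession

# Hive support lets Spark resolve tables through the Hive metastore.
spark = (
    SparkSession.builder
    .appName("hive_read_write_sketch")   # hypothetical application name
    .enableHiveSupport()
    .getOrCreate()
)

# Read from a Hive source table (database and table names are assumptions).
orders = spark.sql(
    "SELECT * FROM sales_db.orders WHERE order_date >= '2023-01-01'"
)

# Write the rows to a Hive target table.
orders.write.mode("append").saveAsTable("sales_db.orders_recent")
```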
During mapping development, you validate a Hive mapping for the native environment or the Hadoop environment. You use a Hive connection to connect to Hive to read and write Hive data.
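The Hive connection stores the host, port, and authentication details that the Data Integration Service uses to reach Hive at run time. As a rough analogy only, and outside the product, a direct HiveServer2 connection through the PyHive library looks like the following sketch. The host, user, database, and table names are assumptions for the example.

```python
# Illustrative sketch only: in the product you define a Hive connection
# object in the Developer tool instead of writing connection code. This
# assumes the PyHive library and a reachable HiveServer2 endpoint.
from pyhive import hive

conn = hive.Connection(
    host="hive-server.example.com",  # hypothetical HiveServer2 host
    port=10000,                      # default HiveServer2 port
    username="etl_user",             # hypothetical user
    database="sales_db",             # hypothetical database
)

cursor = conn.cursor()
cursor.execute("SELECT order_id, order_date FROM orders LIMIT 10")
for row in cursor.fetchall():
    print(row)
```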
For more information about configuring and running a mapping in the Hadoop environment, see the Informatica Data Engineering Integration User Guide.
