The Data Integration Service can use the Hive engine to run Model repository mappings or profiles on a Hadoop cluster.
To run a mapping or profile with the Hive engine, the Data Integration Service creates HiveQL queries based on the transformation or profiling logic. The Data Integration Service submits the HiveQL queries to the Hive driver. The Hive driver converts the HiveQL queries to MapReduce jobs, and then sends the jobs to the Hadoop cluster.
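For example, a mapping that filters and aggregates source data might translate into a HiveQL query like the sketch below. The table and column names are illustrative assumptions, not the SQL that the Data Integration Service actually generates. Prefixing a query with EXPLAIN shows the stage plan that the Hive driver compiles into MapReduce jobs:

```sql
-- Illustrative HiveQL only; table and column names are assumptions,
-- not the queries the Data Integration Service generates.
SELECT customer_region, SUM(order_amount) AS total_amount
FROM orders
WHERE order_status = 'SHIPPED'
GROUP BY customer_region;

-- EXPLAIN displays the map and reduce stages that the Hive driver
-- submits to the Hadoop cluster for this query.
EXPLAIN
SELECT customer_region, SUM(order_amount) AS total_amount
FROM orders
WHERE order_status = 'SHIPPED'
GROUP BY customer_region;
```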
The following diagram shows how a Hadoop cluster processes the MapReduce jobs sent from the Hive driver:
The following events occur when the Hive driver sends jobs to the Hadoop cluster:
1. The Hive driver sends the MapReduce jobs to the Resource Manager in the Hadoop cluster.
2. The Resource Manager sends the job requests to the Node Manager, which retrieves a list of data nodes that can process the MapReduce jobs.
3. The Node Manager assigns the MapReduce jobs to the data nodes.
The Hive driver also connects to the Hive metadata database through the Hive metastore to determine where to create temporary tables. The Hive driver uses the temporary tables to process the data, and drops them after the task completes.
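The temporary-table lifecycle described above follows the general HiveQL pattern sketched below. The table names are hypothetical; the actual names and storage location are resolved through the Hive metastore at run time:

```sql
-- Illustrative pattern only; the real temporary table names and
-- storage location are determined from the Hive metastore.

-- Stage intermediate data into a temporary table.
CREATE TABLE w_staging_orders
AS SELECT * FROM orders WHERE order_status = 'SHIPPED';

-- Process the data through the temporary table.
SELECT customer_region, COUNT(*) AS shipped_orders
FROM w_staging_orders
GROUP BY customer_region;

-- Remove the temporary table after the task completes.
DROP TABLE IF EXISTS w_staging_orders;
```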