Table of Contents

  1. Preface
  2. Introduction to Big Data Management Administration
  3. Big Data Management Engines
  4. Authentication and Authorization
  5. Running Mappings on a Cluster with Kerberos Authentication
  6. Configuring Access to an SSL/TLS-Enabled Cluster
  7. Cluster Configuration
  8. Cluster Configuration Privileges and Permissions
  9. Cloud Provisioning Configuration
  10. Queuing
  11. Tuning for Big Data Processing
  12. Connections
  13. Multiple Blaze Instances on a Cluster

Big Data Management Administrator Guide

Hive Engine Architecture

The Data Integration Service can use the Hive engine to run Model repository mappings or profiles on a Hadoop cluster.
To run a mapping or profile on the Hive engine, the Data Integration Service creates HiveQL queries based on the transformation or profiling logic. The Data Integration Service submits the HiveQL queries to the Hive driver. The Hive driver converts the HiveQL queries to MapReduce or Tez jobs, and then sends the jobs to the Hadoop cluster.
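For example, transformation logic such as a filter followed by a load might translate into HiveQL along these lines. This is a hypothetical sketch: the table and column names are illustrative, and the queries that the Data Integration Service actually generates use internal names and are more complex.

    -- Hypothetical HiveQL for a filter-and-load step; names are illustrative.
    INSERT INTO TABLE target_orders
    SELECT order_id, customer_id, order_total
    FROM source_orders
    WHERE order_total > 1000;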
Effective in version 10.2.1, the MapReduce mode of the Hive run-time engine is deprecated, and Informatica will drop support for it in a future release. The Tez mode remains supported.
The Tez engine can process jobs on Hortonworks HDP, Azure HDInsight, and Amazon Elastic MapReduce clusters. On Cloudera CDH and Apache Hadoop clusters, jobs can run only on the MapReduce engine.
When you run a mapping on the Spark engine and the mapping launches Hive tasks, those tasks run on either the MapReduce or the Tez engine, depending on the cluster. For example, a Hortonworks HDP cluster launches Hive tasks on the MapReduce or Tez engine, while a Cloudera CDH cluster launches Hive tasks on the MapReduce engine.
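In standard Hive, the execution engine for a session is selected with the hive.execution.engine property. The following sketch shows the Hive-level mechanism only; in Big Data Management, the run-time engine is chosen through the Hadoop connection and cluster configuration rather than set manually in a Hive session.

    -- Standard Hive session property that selects the execution engine.
    -- Valid values include mr (MapReduce) and tez.
    SET hive.execution.engine=tez;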
The following image shows how a Hadoop cluster processes MapReduce or Tez jobs sent from the Hive driver:
The following events occur when the Hive driver sends jobs to the Hadoop cluster:
  1. The Hive driver sends the MapReduce or Tez jobs to the Resource Manager in the Hadoop cluster.
  2. The Resource Manager sends the job requests to the Node Manager, which retrieves a list of data nodes that can process the MapReduce or Tez jobs.
  3. The Node Manager assigns MapReduce or Tez jobs to the data nodes.
  4. The Hive driver also connects, through the Hive metastore, to the Hive metadata database to determine where to create temporary tables. The Hive driver uses temporary tables to process the data, and it removes the temporary tables after the task completes (see the HiveQL sketch after this list).
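The following hypothetical HiveQL illustrates step 4: a temporary table holds intermediate results and is visible only to the session that created it. The table and column names are illustrative; the temporary tables that the Hive driver actually creates use internal names and locations.

    -- Hypothetical sketch of staging intermediate data in a temporary table.
    -- CREATE TEMPORARY TABLE is standard HiveQL (Hive 0.14 and later); the
    -- table is session-scoped and is dropped when the session ends.
    CREATE TEMPORARY TABLE stage_orders AS
    SELECT order_id, customer_id, order_total
    FROM source_orders
    WHERE order_total > 1000;

    INSERT INTO TABLE target_orders
    SELECT order_id, customer_id, order_total
    FROM stage_orders;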
