Table of Contents


  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Connections
  4. Mappings in a Hadoop Environment
  5. Mapping Objects in a Hadoop Environment
  6. Mappings in the Native Environment
  7. Profiles
  8. Native Environment Optimization
  9. Data Type Reference
  10. Function Reference
  11. Parameter Reference

Blaze Engine Architecture

To run a mapping on the Informatica Blaze engine, the Data Integration Service submits jobs to the Blaze engine executor. The Blaze engine executor is a software component that enables communication between the Data Integration Service and the Blaze engine components on the Hadoop cluster.
The following Blaze engine components appear on the Hadoop cluster:
  • Grid Manager. Manages tasks for batch processing.
  • Orchestrator. Schedules and processes parallel data processing tasks on a cluster.
  • DTM Process Manager. Manages the DTM Processes.
  • DTM Processes. Operating system processes started to run DTM instances.
  • Data Exchange Framework. Shuffles data between different processes that process the data on cluster nodes.
The following image shows how a Hadoop cluster processes jobs sent from the Blaze engine executor:
The image shows the Informatica domain on the left-hand side. The Informatica domain contains the following components: Data Integration Service, Logical Data Transformation Manager, Blaze engine executor. The Hadoop cluster contains the following components: Resource Manager, Grid Manager, Orchestrator, Node Manager, DTM Process Manager, DTM Process, Data Exchange Framework.
The following events occur when the Data Integration Service submits jobs to the Blaze engine executor:
  1. The Blaze engine executor receives a job request from the LDTM.
  2. The Blaze engine executor communicates with the Grid Manager to initialize the Blaze engine components on the Hadoop cluster, and it queries the Grid Manager for an available Orchestrator.
  3. The Orchestrator communicates with the Grid Manager and the Resource Manager for available resources on the Hadoop cluster.
  4. The Resource Manager sends a job request to the Node Manager on the Hadoop cluster.
  5. The Node Manager sends the tasks to the DTM Processes through the DTM Process Manager.
  6. The DTM Processes communicate with the Data Exchange Framework to send and receive data across processing units that run on the cluster nodes.
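The event sequence above can be sketched in code. The following is an illustrative Python model of the message flow only; every class and method name is hypothetical and does not reflect Informatica's actual API.

```python
# Hypothetical sketch of the Blaze job submission flow (steps 1-6 above).
# Names model the documented message sequence, not a real Informatica API.

class GridManager:
    """Manages tasks for batch processing; tracks Orchestrators (assumption)."""
    def __init__(self):
        self.orchestrators = ["orchestrator-1"]

    def available_orchestrator(self):
        # Step 2: the executor queries the Grid Manager for an Orchestrator.
        return self.orchestrators[0]


class DTMProcessManager:
    def dispatch(self, job):
        # Step 5: tasks reach the DTM Processes through the DTM Process Manager.
        # Step 6: DTM Processes would exchange data via the Data Exchange Framework.
        return [f"{job}:task-{i}" for i in range(2)]


class NodeManager:
    def run(self, job):
        # Step 5: the Node Manager hands tasks to the DTM Process Manager.
        return DTMProcessManager().dispatch(job)


class ResourceManager:
    def request_resources(self, job):
        # Step 4: the Resource Manager sends the job request to a Node Manager.
        return NodeManager().run(job)


class BlazeEngineExecutor:
    """Receives job requests from the LDTM and drives the cluster-side flow."""
    def submit(self, job):
        # Step 1: a job request arrives from the LDTM (the `job` argument here).
        grid = GridManager()
        orchestrator = grid.available_orchestrator()        # steps 2-3
        results = ResourceManager().request_resources(job)  # steps 4-6
        return orchestrator, results


orchestrator, results = BlazeEngineExecutor().submit("mapping-job")
print(orchestrator, results)
# orchestrator-1 ['mapping-job:task-0', 'mapping-job:task-1']
```

The sketch collapses step 3 (the Orchestrator negotiating resources with the Grid Manager and Resource Manager) into a single lookup for brevity; the point is the one-way chain of delegation from the executor down to the DTM Processes.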

Updated July 03, 2018