Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Mappings in the Hadoop Environment
  4. Mapping Sources in the Hadoop Environment
  5. Mapping Targets in the Hadoop Environment
  6. Mapping Transformations in the Hadoop Environment
  7. Processing Hierarchical Data on the Spark Engine
  8. Configuring Transformations to Process Hierarchical Data
  9. Processing Unstructured and Semi-structured Data with an Intelligent Structure Model
  10. Stateful Computing on the Spark Engine
  11. Monitoring Mappings in the Hadoop Environment
  12. Mappings in the Native Environment
  13. Profiles
  14. Native Environment Optimization
  15. Cluster Workflows
  16. Connections
  17. Data Type Reference
  18. Function Reference
  19. Parameter Reference

Big Data Management User Guide

Blaze Engine Architecture

To run a mapping on the Informatica Blaze engine, the Data Integration Service submits jobs to the Blaze engine executor. The Blaze engine executor is a software component that enables communication between the Data Integration Service and the Blaze engine components on the Hadoop cluster.
The following Blaze engine components appear on the Hadoop cluster:
  • Grid Manager. Manages tasks for batch processing.
  • Orchestrator. Schedules and processes parallel data processing tasks on a cluster.
  • Blaze Job Monitor. Monitors Blaze engine jobs on a cluster.
  • DTM Process Manager. Manages the DTM Processes.
  • DTM Processes. Operating system processes started to run DTM instances.
  • Data Exchange Framework. Shuffles data between different processes that process the data on cluster nodes.
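The relationships among these components can be pictured with a short sketch. The following Python code is illustrative only: it assumes a simplified model of the Grid Manager, Orchestrator, DTM Process Manager, and DTM Processes, and the class and method names are hypothetical rather than part of the Informatica product.

# Hypothetical sketch only. These classes model the component roles described
# above; they are not part of the Informatica product or any Hadoop API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DTMProcess:
    """Operating system process started to run DTM instances."""
    process_id: int
    tasks: List[str] = field(default_factory=list)

@dataclass
class DTMProcessManager:
    """Manages the DTM Processes and hands tasks to them."""
    processes: List[DTMProcess]

    def dispatch(self, task: str) -> None:
        # Illustrative policy: assign the task to the least-loaded DTM process.
        min(self.processes, key=lambda p: len(p.tasks)).tasks.append(task)

@dataclass
class Orchestrator:
    """Schedules and processes parallel data processing tasks on the cluster."""
    process_manager: DTMProcessManager

    def schedule(self, tasks: List[str]) -> None:
        for task in tasks:
            self.process_manager.dispatch(task)

@dataclass
class GridManager:
    """Manages tasks for batch processing and provides an available Orchestrator."""
    orchestrator: Orchestrator

    def get_available_orchestrator(self) -> Orchestrator:
        return self.orchestrator

# Example wiring: one Orchestrator driving two DTM Processes.
manager = DTMProcessManager([DTMProcess(1), DTMProcess(2)])
grid = GridManager(Orchestrator(manager))
grid.get_available_orchestrator().schedule(["partition-0", "partition-1", "partition-2"])

In this simplified model, the Grid Manager hands out an Orchestrator, which distributes tasks through the DTM Process Manager to the DTM Processes, mirroring the component descriptions above.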
The following image shows how a Hadoop cluster processes jobs sent from the Blaze engine executor:
The following events occur when the Data Integration Service submits jobs to the Blaze engine executor:
  1. The Blaze engine executor communicates with the Grid Manager to initialize Blaze engine components on the Hadoop cluster, and it queries the Grid Manager for an available Orchestrator.
  2. The Grid Manager starts the Blaze Job Monitor.
  3. The Grid Manager starts the Orchestrator and sends the Orchestrator information back to the LDTM (Logical Data Transformation Manager).
  4. The LDTM communicates with the Orchestrator.
  5. The Grid Manager requests available resources for the Orchestrator from the Resource Manager.
  6. The Resource Manager handles resource allocation on the data nodes through the Node Manager.
  7. The Orchestrator sends the tasks to the DTM Processes through the DTM Process Manager.
  8. The DTM Process Manager continually communicates with the DTM Processes.
  9. The DTM Processes continually communicate with the Data Exchange Framework to send and receive data across processing units that run on the cluster nodes.
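The same sequence can be summarized as an ordered list of sender-to-receiver exchanges. The following Python sketch is hypothetical and for illustration only; the type, function, and message descriptions are assumptions and do not correspond to an Informatica or Hadoop API.

# Hypothetical sketch only. The sender and receiver names summarize the
# sequence above; nothing here corresponds to an Informatica or Hadoop API.
from typing import List, NamedTuple

class Exchange(NamedTuple):
    sender: str
    receiver: str
    purpose: str

def blaze_submission_trace() -> List[Exchange]:
    """Return the exchanges, in order, for one Blaze job submission."""
    return [
        Exchange("Blaze engine executor", "Grid Manager",
                 "initialize Blaze engine components and request an available Orchestrator"),
        Exchange("Grid Manager", "Blaze Job Monitor", "start job monitoring"),
        Exchange("Grid Manager", "LDTM", "start the Orchestrator and return its information"),
        Exchange("LDTM", "Orchestrator", "communicate the mapping job"),
        Exchange("Grid Manager", "Resource Manager", "request available resources for the Orchestrator"),
        Exchange("Resource Manager", "Node Manager", "allocate resources on the data nodes"),
        Exchange("Orchestrator", "DTM Process Manager", "send tasks to the DTM Processes"),
        Exchange("DTM Process Manager", "DTM Processes", "continually manage the running tasks"),
        Exchange("DTM Processes", "Data Exchange Framework", "send and receive data across processing units"),
    ]

# Print the numbered trace, matching the nine events listed above.
for step, exchange in enumerate(blaze_submission_trace(), start=1):
    print(f"{step}. {exchange.sender} -> {exchange.receiver}: {exchange.purpose}")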
