Table of Contents


  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Mappings in the Hadoop Environment
  4. Mapping Sources in the Hadoop Environment
  5. Mapping Targets in the Hadoop Environment
  6. Mapping Transformations in the Hadoop Environment
  7. Processing Hierarchical Data on the Spark Engine
  8. Configuring Transformations to Process Hierarchical Data
  9. Processing Unstructured and Semi-structured Data with an Intelligent Structure Model
  10. Stateful Computing on the Spark Engine
  11. Monitoring Mappings in the Hadoop Environment
  12. Mappings in the Native Environment
  13. Profiles
  14. Native Environment Optimization
  15. Cluster Workflows
  16. Connections
  17. Data Type Reference
  18. Function Reference
  19. Parameter Reference

Blaze Engine Logs

The mapping run log appears in the LDTM log on the domain and in the tasklet logs on the Hadoop cluster.
You can find information about the mapping run on the Blaze engine in the following log files:
LDTM log
The LDTM logs the results of the mapping run on the Blaze engine. You can view the LDTM log from the Developer tool or the Monitoring tool for a mapping job.
You can configure the Data Integration Service to log details about the mapping execution to the session log. To enable logging of LDTM mapping execution details, set the log tracing level to verbose initialization or verbose data.
Informatica recommends setting the tracing level to verbose data only for debugging. Do not use the verbose data tracing level for concurrent jobs in a production environment.
Mapping execution details include the following information:
  • Start time, end time, and state of each task
  • Blaze Job Monitor URL
  • Number of total, succeeded, and failed/cancelled tasklets
  • Number of processed and rejected rows for sources and targets
  • Data errors, if any, for transformations in each executed segment
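To illustrate how such details can be pulled out of a session log with standard tools, the following sketch scans a stand-in excerpt for the tasklet counts. The log line shown is a hypothetical example for illustration, not the exact LDTM log format:

```shell
# Write a stand-in session-log excerpt to a temporary file.
# The line below is a hypothetical example of an execution-detail
# entry, not the exact LDTM log format.
log=$(mktemp)
printf 'Tasklets total: 12, succeeded: 11, failed/cancelled: 1\n' > "$log"

# Extract the succeeded-tasklet count with an extended regex.
grep -Eo 'succeeded: [0-9]+' "$log"   # prints: succeeded: 11

rm -f "$log"
```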
Blaze component and tasklet logs
The Blaze engine stores tasklet and Blaze component log events in temporary and permanent directories on the Hadoop cluster.
The following list describes the Blaze log properties and where to configure them:
Temporary directory for tasklet and component logs (infagrid.node.local.root.log.dir)
An administrator must create this directory with read, write, and execute permissions on all nodes of the Hadoop cluster.
Configure this property in the Advanced properties of the Blaze configuration in the Hadoop connection. Default is
Blaze Staging Directory
Permanent directory on HDFS that contains the tasklet log events.
Configure this property in the Blaze Configuration properties of the Hadoop connection.
Delete temporary tasklet logs
Boolean property that determines whether to delete tasklet logs from the temporary directory after the logs are copied to the permanent directory.
Configure this property in the Advanced properties of the Blaze configuration in the Hadoop connection. To retain the logs in the temporary directory, set the property to false. Default is true.
Log directory for the DTM process
The Data Integration Service stores logs in this location if you do not configure the infagrid.node.local.root.log.dir property or the Blaze Staging Directory property.
This property is configured by default when the cluster configuration is imported.
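The administrator steps for the temporary log directory can be sketched as follows. The paths are assumptions for illustration only, and in practice the commands must run on every node of the cluster (for example, through pdsh or Ansible):

```shell
# Assumed example path for the temporary tasklet/component log
# directory -- substitute the path configured for your cluster.
TMP_LOG_DIR="/tmp/blaze/tasklet-logs"

# Create the directory with read, write, and execute permissions
# for all users, as the Blaze engine requires.
mkdir -p "$TMP_LOG_DIR"
chmod 777 "$TMP_LOG_DIR"

# The permanent directory (Blaze Staging Directory) lives on HDFS
# and would be created with the hdfs CLI, for example:
#   hdfs dfs -mkdir -p /blaze/staging   # assumed path
#   hdfs dfs -chmod 777 /blaze/staging
```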

Updated October 23, 2019