Table of Contents


  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Connections
  4. Mappings in the Hadoop Environment
  5. Mapping Objects in the Hadoop Environment
  6. Processing Hierarchical Data on the Spark Engine
  7. Stateful Computing on the Spark Engine
  8. Monitoring Mappings in the Hadoop Environment
  9. Mappings in the Native Environment
  10. Profiles
  11. Native Environment Optimization
  12. Data Type Reference
  13. Complex File Data Object Properties
  14. Function Reference
  15. Parameter Reference

Blaze Engine Logs


The mapping run log appears in the LDTM log on the domain and in the tasklet logs on the Hadoop cluster.
You can find information about the mapping run on the Blaze engine in the following log files:
LDTM log
The LDTM logs the results of the mapping run on the Blaze engine. You can view the LDTM log from the Developer tool or the Monitoring tool for a mapping job.
You can configure the Data Integration Service to log details about the mapping execution to the session log. To enable logging of LDTM mapping execution details, set the log tracing level to verbose initialization or verbose data.
Informatica recommends setting the tracing level to verbose data only for debugging. Do not use the verbose data tracing level when you run concurrent jobs in production.
Mapping execution details include the following information:
  • Start time, end time, and state of each task
  • Blaze Job Monitor URL
  • Number of total, succeeded, and failed/cancelled tasklets
  • Number of processed and rejected rows for sources and targets
  • Data errors, if any, for transformations in each executed segment
Blaze component and tasklet logs
The Blaze engine stores tasklet and Blaze component log events in temporary and permanent directories on the Hadoop cluster. The log file directories are specified by properties in the file in the following directory for each Hadoop distribution:
<Informatica Installation directory>/services/shared/hadoop/<distribution directory>/infaConf
The temporary directory is specified by the following property in the file:
An administrator must create a directory with read, write, and execute permissions on all nodes on the Hadoop cluster.
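A minimal sketch of this directory-creation step, assuming a hypothetical path /tmp/infa/blazeLogs (substitute the path that you configure for the temporary log directory property). The step must be repeated on every node in the cluster:

```python
import os
import stat

# Hypothetical temporary log directory; substitute the path you set for the
# Blaze temporary log directory property in your environment.
log_dir = "/tmp/infa/blazeLogs"

# Create the directory, including any missing parent directories.
os.makedirs(log_dir, exist_ok=True)

# Grant read, write, and execute permissions to owner, group, and others,
# because Blaze tasklets may run under different operating system users.
os.chmod(log_dir, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
```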
For example, configure the following path for the property:
After the mapping completes, the Data Integration Service moves the tasklet log events from the temporary directory to a permanent directory on HDFS. The permanent directory is specified by the following property in the file:
For example, configure the following path for the property:
If you want to retain the tasklet logs in the temporary directory, set the value of the following property in the file to false:
If you do not configure the temporary or permanent directories, the tasklet log events appear in the directory configured for the DTM Process. You can get the directory for the DTM Process from the value of the yarn.nodemanager.local-dirs property in yarn-site.xml on the cluster node.
The following sample code describes the yarn.nodemanager.local-dirs property:
<property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/var/lib/hadoop-yarn/cache/${}/nm-local-dir</value>
    <description>List of directories to store local files.</description>
</property>
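As a sketch of how this value can be read programmatically, the following parses a yarn-site.xml fragment that mirrors the sample above. In practice you would read the file from the cluster node; a location such as /etc/hadoop/conf/yarn-site.xml is an assumption here, because the path varies by distribution:

```python
import xml.etree.ElementTree as ET

# Fragment mirroring the sample property above; in practice, read the
# contents of yarn-site.xml from the cluster node instead.
yarn_site = """<configuration>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/var/lib/hadoop-yarn/cache/${}/nm-local-dir</value>
  </property>
</configuration>"""

root = ET.fromstring(yarn_site)

# Find the <value> of the property whose <name> matches.
local_dirs = next(
    prop.findtext("value")
    for prop in root.iter("property")
    if prop.findtext("name") == "yarn.nodemanager.local-dirs"
)
print(local_dirs)
```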

Updated November 09, 2018