Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Mappings in the Hadoop Environment
  4. Mapping Sources in the Hadoop Environment
  5. Mapping Targets in the Hadoop Environment
  6. Mapping Transformations in the Hadoop Environment
  7. Processing Hierarchical Data on the Spark Engine
  8. Configuring Transformations to Process Hierarchical Data
  9. Processing Unstructured and Semi-structured Data with an Intelligent Structure Model
  10. Stateful Computing on the Spark Engine
  11. Monitoring Mappings in the Hadoop Environment
  12. Mappings in the Native Environment
  13. Profiles
  14. Native Environment Optimization
  15. Cluster Workflows
  16. Connections
  17. Data Type Reference
  18. Function Reference
  19. Parameter Reference

Monitoring Azure HDInsight Cluster Workflow Jobs

You can access mapping log URLs through the Monitoring tab in the Administrator tool to monitor workflow jobs that run on an Azure HDInsight cluster. The log location depends on the run-time engine that each mapping uses.

Blaze and Spark engines

To access the monitoring URL for mappings that run on the Blaze or Spark engine, expand the workflow and the mapping in the Monitoring tab. Select the Grid Task and view the value of the Monitoring URL property in the lower pane. Use this path to find the log.
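On HDInsight, the Monitoring URL typically points to a YARN ResourceManager page that embeds the application ID, which you can use to look up the corresponding application logs. As a minimal sketch (the example URL and host name are hypothetical, not values from this guide), you could extract the application ID from a copied Monitoring URL like this:

```python
import re


def extract_application_id(monitoring_url):
    """Return the YARN application ID embedded in a monitoring URL,
    or None if the URL does not contain one.

    YARN application IDs follow the pattern
    application_<clusterTimestamp>_<sequence>."""
    match = re.search(r"application_\d+_\d+", monitoring_url)
    return match.group(0) if match else None


# Hypothetical example; the real value comes from the Monitoring URL property.
url = "https://headnode.example.com:8088/cluster/app/application_1571800000000_0042"
print(extract_application_id(url))  # application_1571800000000_0042
```

With the application ID in hand, you could then retrieve the aggregated logs on the cluster, for example with the standard `yarn logs -applicationId <id>` command.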

Hive engine

To access the monitoring URL for mappings that run on the Hive engine, expand the workflow and the mapping in the Monitoring tab. Select a Hive Query job, and then expand the MR Job Details node in the lower pane. The Job ID is hyperlinked, but the link does not lead directly to the log. To find the job monitoring log, copy the URL from the link and use the path it contains to locate the log. Repeat these steps for each Hive Query job.
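A MapReduce job ID (`job_<clusterTimestamp>_<sequence>`) shares its numeric suffix with the YARN application ID for the same run, so one way to locate the log behind a Hive Query job is to translate the Job ID into an application ID. This is a sketch of that conversion, not a step from this guide:

```python
def job_to_application_id(job_id):
    """Convert a MapReduce job ID (job_<clusterTimestamp>_<sequence>)
    to the matching YARN application ID, which shares the same
    numeric suffix."""
    prefix = "job_"
    if not job_id.startswith(prefix):
        raise ValueError("not a MapReduce job ID: %s" % job_id)
    return "application_" + job_id[len(prefix):]


# Hypothetical Job ID copied from the MR Job Details node.
print(job_to_application_id("job_1571800000000_0101"))  # application_1571800000000_0101
```

The resulting application ID can then be used to find the run in the YARN ResourceManager UI or to fetch logs with `yarn logs -applicationId <id>`.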


Updated October 23, 2019