Table of Contents

  1. Preface
  2. Introduction to Big Data Management Administration
  3. Authentication
  4. Running Mappings on a Cluster with Kerberos Authentication
  5. Authorization
  6. Cluster Configuration
  7. Cloud Provisioning Configuration
  8. Data Integration Service Processing
  9. Connections
  10. Multiple Blaze Instances on a Cluster
  11. Monitoring REST API

Big Data Management Administrator Guide

MappingExecutionPlans
With the MappingExecutionPlans REST API, you can view the execution plan for Hadoop jobs.

GET Request

To request mapping execution plan information, use the following URL:
<RESTOperationsHubService_Host>:<RESTOperationsHubService_Port>/RestOperationsHub/services/v1/MappingService/MappingExecutionPlans('jobId')
The following table describes the attributes in the MappingExecutionPlans GET URL:

userName (String)
Required. User name to connect to the domain. You can pass the input value as a header.

encryptedpassword (String)
Required. Password for the user. Encrypt the password with the pmpasswd command line program. You can pass the input value as a header.

securityDomain (String)
Optional. The security domain to which the domain user belongs. You can pass the input value as a header.

jobId (String)
Required. The entity argument that contains the job ID of the mapping. You can pass the input value as a query.
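
For example, the following minimal Python sketch sends the GET request with the attributes listed above passed as headers. The host, port, job ID, and credential values are placeholders for illustration; replace them with values from your environment, and encrypt the password with pmpasswd first.

import requests

# Placeholder values (assumptions): replace with your REST Operations Hub
# host and port, and the job ID of the mapping you want to inspect.
REST_OPS_HUB = "http://myhost.example.com:8085"
JOB_ID = "job_1234"

url = (
    f"{REST_OPS_HUB}/RestOperationsHub/services/v1/MappingService/"
    f"MappingExecutionPlans('{JOB_ID}')"
)

# userName, encryptedpassword, and securityDomain are passed as headers,
# as described in the table above.
headers = {
    "userName": "Administrator",
    "encryptedpassword": "<encrypted password>",
    "securityDomain": "Native",  # optional
}

response = requests.get(url, headers=headers)
response.raise_for_status()
print(response.json())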

GET Response

Returns mapping execution plan information for the specified job ID.
The following table describes the MappingExecutionPlans attributes present in the body of the response for the Spark or Blaze environment:

name (Integer)
Name of the script.

content (String)
For Spark, the Data Integration Service translates the mapping to a Scala program and an optional set of commands. The execution plan shows the commands and the Scala program code.
For Blaze, the content comprises the session task, instances, and type.

depends (String)
Tasks that the script depends on. Tasks include other scripts and Data Integration Service tasks, such as the Start task.
The following table describes the MappingExecutionPlans attributes present in the content section of the response body for the Blaze environment:

Session Task
The response can include three types of session tasks: the pre-session task, the main session task, and the post-session task. The main session task can contain submappings.

Type
The type of the session task, which contains a set of segments and DAG vertices.

Instances
The transformation or object name.
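
The following sketch shows one way to summarize the scripts in a parsed response using the name, content, and depends attributes described above. The "value" envelope used here is an assumption about the JSON shape, not a documented contract; inspect the raw response in your environment before relying on it.

from typing import Any, Dict, List

def summarize_execution_plan(plan: Dict[str, Any]) -> None:
    # Assumption: the parsed JSON exposes the execution plan scripts as a
    # list under "value", each with the "name", "content", and "depends"
    # fields described in the tables above. Adjust for your actual response.
    scripts: List[Dict[str, Any]] = plan.get("value", [plan])
    for script in scripts:
        print("Script:", script.get("name"))
        print("Depends on:", script.get("depends"))
        content = script.get("content") or ""
        # For Spark, content holds the generated Scala program and commands.
        # For Blaze, it describes the session tasks, their types, and instances.
        print(content[:200])

# Example usage, continuing from the request sketch in the GET Request section:
# summarize_execution_plan(response.json())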
