Table of Contents

  1. Preface
  2. Introduction to Big Data Management Administration
  3. Authentication
  4. Running Mappings on a Cluster with Kerberos Authentication
  5. Authorization
  6. Cluster Configuration
  7. Cloud Provisioning Configuration
  8. Data Integration Service Processing
  9. Connections
  10. Multiple Blaze Instances on a Cluster
  11. Monitoring REST API

Big Data Management Administrator Guide

Sample Retrieve Advanced Mapping Statistics

This sample use case uses the REST API to retrieve details of the advanced mapping statistics for a mapping run in the Spark environment.
To retrieve the advanced mapping statistics for a mapping with the Job ID _TNoO9ELEeiimY76kFyfuw, use the following request URL:
<RESTOperationsHubService_Host>:<RESTOperationsHubService_Port>/RestOperationsHub/services/v1/MappingService/MappingAdvancedStats('_TNoO9ELEeiimY76kFyfuw')
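
For example, a small Python script can call this URL and parse the JSON response. The following is a minimal sketch only: the host, port, user name, and password are placeholder values, and the HTTP basic-authentication call is an assumption; use whatever authentication method your REST Operations Hub Service is configured for.

# Minimal sketch: request advanced mapping statistics from the REST Operations Hub.
# HOST, PORT, and the credentials are placeholders; adjust them for your domain.
import requests

HOST = "myhost.example.com"        # <RESTOperationsHubService_Host> (placeholder)
PORT = 8085                        # <RESTOperationsHubService_Port> (placeholder)
JOB_ID = "_TNoO9ELEeiimY76kFyfuw"  # Job ID of the mapping run

url = (
    f"http://{HOST}:{PORT}/RestOperationsHub/services/v1/MappingService"
    f"/MappingAdvancedStats('{JOB_ID}')"
)

# Basic authentication with domain credentials is an assumption in this sketch.
response = requests.get(url, auth=("Administrator", "password"))
response.raise_for_status()
stats = response.json()

print(stats["mappingStat"]["status"])       # for example, "COMPLETED"
print(stats["mappingStat"]["mappingName"])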

Advanced Mapping Statistics Output

{ "@odata.context": "$metadata#MappingAdvancedStats/$entity", "jobId": "_TNoO9ELEeiimY76kFyfuw", "mappingStat": { "status": "COMPLETED", "mappingName": "HDFSTgtAppend_MultiPartition_SparkMode", "applicationName": "HDFSTargetAppend", "serviceName": "DIS_HDP_2.6", "logFileName": "/data/Informatica/10.2.2_252/logs/node252/services/DataIntegrationService/disLogs/ms/DEPLOYED_MAPPING_HDFSTargetAppend_HDFSTgtAppend_MultiPartition_SparkMode-TNoO9ELEeiimY76kFyfuw_20181016_115325_006.log", "startTime": 1539671005830, "endTime": 1539671244752, "executorType": "SPARK", "executingNode": "node252", "userName": "Administrator", "securityDomain": "Native" }, "detailedStats": { "memoryData": [], "createdTime": [ 1539671058384 ], "cpuData": [], "sourceTargetStats": [ { "instanceName": "Read_students_5", "isSource": true, "bytes": [ -1 ], "rows": [ 10 ] }, { "instanceName": "Read_students_HDFS_src", "isSource": true, "bytes": [ -1 ], "rows": [ 10 ] }, { "instanceName": "Read_student", "isSource": true, "bytes": [ -1 ], "rows": [ 10 ] }, { "instanceName": "Write_HDFSAppendTarget", "isSource": false, "bytes": [ -1 ], "rows": [ 28 ] } ], "lastPurgeTime": 0 }, "summaryStats": { "processStatSummary": { "avgCpuUsage": 0, "avgMemUsage": 0 }, "sourceTxStats": [ { "instanceName": "Read_students_5", "groupStats": [ { "processedBytes": -1, "processedRows": 10, "bytesThrougput": -1, "rowsThroughput": 10, "errorRows": -1, "errorBytes": -1, "groupName": "Read_students_5", "firstRowTime": 0 } ] }, { "instanceName": "Read_student", "groupStats": [ { "processedBytes": -1, "processedRows": 10, "bytesThrougput": -1, "rowsThroughput": 10, "errorRows": -1, "errorBytes": -1, "groupName": "Read_student", "firstRowTime": 0 } ] }, { "instanceName": "Read_students_HDFS_src", "groupStats": [ { "processedBytes": -1, "processedRows": 10, "bytesThrougput": -1, "rowsThroughput": 5, "errorRows": -1, "errorBytes": -1, "groupName": "Read_students_HDFS_src", "firstRowTime": 0 } ] } ], "targetTxStats": [ { "instanceName": "Write_HDFSAppendTarget", "groupStats": [ { "processedBytes": -1, "processedRows": 28, "bytesThrougput": -1, "rowsThroughput": 14, "errorRows": -1, "errorBytes": -1, "groupName": "Write_HDFSAppendTarget", "firstRowTime": 0 } ] } ] } }
