Spark task details

For mapping tasks that are based on elastic mappings, the mappings are translated into Spark tasks that process the data logic simultaneously. You can view details for each Spark task listed under <Spark task name> Results.
Each Spark task includes the following details:
Status
  Status of the Spark task. The Spark task can have one of the following statuses:
  • Running. The task is running.
  • Succeeded. The task completed successfully.
  • Failed. The task did not complete because it encountered errors.
  • Stopped. The task was stopped.
  • Unknown. The status of the task is unknown.
  If the Secure Agent fails while the elastic job is running, the status of the Spark tasks continues to display Running. You must cancel the job and run it again.
Start time
  Date and time when the Spark task started.
End time
  Date and time when the Spark task ended.
Duration
  Amount of time that the Spark task ran.
Memory Per Executor
  Amount of memory that each Spark executor uses.
Cores Per Executor
  Number of cores that each Spark executor uses.
Driver and Agent Job Logs
  Select Download to download the Spark driver and agent job logs.
Advanced Log Location
  The log location that is configured in the elastic configuration for the elastic cluster. You can navigate to the advanced log location to view and download the agent job log, the Spark driver log, and the Spark executor logs.
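The Memory Per Executor and Cores Per Executor values correspond to the standard Apache Spark executor settings that the task ran with. For reference, a minimal PySpark sketch that sets these values when a session is built; the property names are standard Spark configuration, and the values shown are placeholder examples, not recommendations:

    from pyspark.sql import SparkSession

    # Standard Spark executor sizing properties. Monitor reports these
    # values as "Memory Per Executor" and "Cores Per Executor".
    spark = (
        SparkSession.builder
        .appName("executor-sizing-example")
        .config("spark.executor.memory", "4g")  # memory per executor
        .config("spark.executor.cores", "2")    # cores per executor
        .getOrCreate()
    )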
Each Spark task is translated into Spark jobs, which are further broken down into stages.
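This breakdown follows standard Apache Spark semantics: each action in a task triggers a Spark job, and each shuffle boundary inside a job starts a new stage. A minimal PySpark sketch of the pattern; the data and column names are illustrative only:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("job-stage-example").getOrCreate()

    # The groupBy forces a shuffle, so the job that the collect() action
    # triggers splits into two stages: one that reads and partially
    # aggregates the input, and one that merges the shuffled partials.
    df = spark.range(1_000_000)
    counts = df.groupBy((df.id % 10).alias("bucket")).count()
    counts.collect()  # the action that triggers the Spark job

You can view the following details for each Spark job and stage: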
Job Name
  Name of the Spark job or stage.
Start time
  Date and time when the Spark job or stage started. The start time might be "NA" for aborted tasks.
End time
  Date and time when the Spark job or stage ended. The end time might be "NA" for aborted tasks.
Duration
  Amount of time that the Spark job or stage ran.
Total Tasks
  Number of tasks that the Spark job or stage attempted.
Successful Tasks
  Number of tasks that the Spark job or stage successfully completed.
Failed Tasks
  Number of tasks that the Spark job or stage failed to complete.
Running Tasks
  Number of tasks that the Spark job or stage is currently running.
Input Size / Records
  Size of the file and number of records that the Spark job or stage read as input.
Output Size / Records
  Size of the file and number of records that the Spark job or stage wrote as output.
Status
  Status of the Spark job or stage. The status can be one of the following values:
  • Running. The job or stage is running.
  • Success. The job or stage completed successfully.
  • Failed. The job or stage did not complete because it encountered errors.
  • Aborted. The job or stage did not complete because the user aborted the mapping task.
  After you abort a mapping task, there might be some lag time before the Monitor service shows the status as Aborted.
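Because a failed Secure Agent can leave Spark tasks showing Running indefinitely, and an aborted mapping task can lag before it shows Aborted, automation that watches these statuses should poll with a timeout rather than trust a single read. A minimal sketch of that pattern, assuming a hypothetical get_task_status helper that wraps however you retrieve the status; it is a placeholder, not a real Monitor API:

    import time

    TERMINAL = {"Succeeded", "Success", "Failed", "Stopped", "Aborted"}

    def get_task_status(task_id: str) -> str:
        """Hypothetical placeholder: return the current Monitor status
        for the given Spark task. Replace with your own lookup."""
        raise NotImplementedError

    def wait_for_task(task_id: str, timeout_s: int = 3600, poll_s: int = 30) -> str:
        """Poll until the task reaches a terminal status or the timeout
        expires. Repeated polling absorbs the lag before an aborted task
        shows as Aborted; the timeout guards against the stuck-Running
        case that occurs when the Secure Agent fails mid-run, after which
        the job must be cancelled and run again."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            status = get_task_status(task_id)
            if status in TERMINAL:
                return status
            time.sleep(poll_s)
        return "Timeout"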


Updated June 17, 2021