Monitor

Database ingestion job details

For database ingestion jobs, you can view job-specific details on the Job Overview, Object Detail, and Alerts tabs. Access these tabs by drilling down on a job from the My Jobs page in Mass Ingestion or from the All Jobs tab on the Mass Ingestion page in Monitor.

Job Overview tab

On the Job Overview tab, view detailed information for the entire job, including the name of the associated task, the load type, source and target connection names, current state, number of records read and written, start and end times, and run duration. For incremental load jobs and combined initial and incremental load jobs, you can also download the job log.
The following image shows the Job Overview tab for a completed database ingestion job:
The diagram at the top of the page displays the calculated data throughput, in rows per second, if the job has successfully propagated data to the target, regardless of the job's current state. If the calculated value is 0, indicating that no data has flowed to the target, the throughput is not displayed.
The following table describes the job overview properties:
Property
Description

Job Name
The name of the job. Database ingestion job names have the format <task name>-<job instance number>.

Task Name
The name of the associated ingestion task.

Task Type
The type of task, which is Database Ingestion Task.

Load Type
The type of load operation that the job performs. Options are:
  • Initial Load. Loads a snapshot of source data read at a specific point in time to a target.
  • Incremental Load. Loads incremental data changes to a target on a continuous basis, until the job is stopped or ends.
  • Initial and Incremental Load. Performs an initial load and then automatically switches to an incremental load.

Task Location
The project or project folder that contains the ingestion task definition.

Started By
The name of the user who started the job.

Source Connection
The name of the source connection.

Target Connection
The name of the target connection.

State
The state of the job, which can be one of the following values:
  • Up and Running. The job is running.
  • Running with Warning. The job is running with a warning. This state can also occur when one or more table-specific subtasks fail but some subtasks are still running.
  • On Hold. The job is in a paused state while the Mass Ingestion Databases (DBMI) agent is being updated.
  • Stopping. The job is stopping in response to a Stop request.
  • Stopped. The job was intentionally stopped.
  • Failed. The job ended abnormally, the task deployment to the job failed, or one or more table-specific subtasks failed. For an initial load job, this state is also reported if the job was stopped.
  • Deploying. The job is being deployed.
  • Deployed. The job has been deployed.
  • Aborting. The job is stopping immediately in response to an Abort request.
  • Aborted. The job has been aborted.
  • Undeploying. The job is being undeployed.
  • Undeployed. The job has been undeployed.
  • Completed. The job completed successfully.

Subtasks
The number of subtasks that the database ingestion job used to propagate data from source tables to the target. When a job runs, it uses a separate subtask to process each source table.

Records Read
The number of records that were read from the source.

Records Written
The number of records that were successfully propagated to an Amazon S3, Apache Kafka, flat file, or Microsoft Azure Data Lake Storage target.
For a Microsoft Azure SQL Data Warehouse target, the number of records written to the intermediate Microsoft Azure Data Lake Storage files.
For a Snowflake target, the number of records written to the internal staging area that is created when the job runs.

Start Time
The date and time when the job was deployed.

End Time
The date and time when the job ended because it completed processing, was stopped, or failed. This field is not displayed for running jobs.

Duration
The amount of time, in hh:mm:ss format, that the job ran before it ended.

Log
For incremental load jobs and combined initial and incremental load jobs, you can download the job execution log for the entire job run. Select one of the following log types:
  • Complete Log. The entire log, including all types of messages. It is available for any job that ran, regardless of its state.
  • Error. The error log, which includes messages only for errors that occurred. It is available for Failed jobs only. Use this log to determine the reason for the job failure, for example, a failed deployment. If the log file ends with an ellipsis (...), the log has been truncated because of its length. In this case, download the Complete Log to see all error messages.
To download a log to your local system, click the Download icon.
For initial load jobs, you can get the job log for a specific source object from the Object Detail tab.
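The properties above lend themselves to lightweight scripting, for example when polling job status from your own tooling. The following sketch assumes job names follow the documented task name-job instance number format and uses the state values listed above; the helper names are hypothetical, not part of the product:

```python
# States from the Job Overview tab after which a job will not progress further.
TERMINAL_STATES = {"Stopped", "Failed", "Aborted", "Undeployed", "Completed"}

def split_job_name(job_name: str) -> tuple[str, int]:
    """Split '<task name>-<job instance number>' into its two parts.
    Splits on the last hyphen, since a task name may itself contain hyphens."""
    task_name, instance = job_name.rsplit("-", 1)
    return task_name, int(instance)

def keep_polling(state: str) -> bool:
    """Return True while the job can still change state."""
    return state not in TERMINAL_STATES

print(split_job_name("orders-cdc-task-3"))  # ('orders-cdc-task', 3)
print(keep_polling("Up and Running"))       # True
print(keep_polling("Completed"))            # False
```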

Object Detail tab

On the Object Detail tab, view statistics and status information by source object from the last run of a database ingestion job.
The following image shows the Object Detail tab for a database ingestion job:
The following table describes the properties that are displayed for each object:
Column
Description

Object
The name of the source table for which data was propagated to the target.
For an incremental load job or a combined initial and incremental load job, click the arrow icon to the left of the object name to display detailed counts of LOBs, Inserts, Deletes, Updates, and DDL statements processed. For a combined initial and incremental load job, the Unload Count field is also displayed to show the number of records that the initial load portion of processing read from the source. The following usage notes apply to the detailed CDC counts:
  • The counts are only for the current job run. If you stop and restart the job, the counts start over from zero. Do not use these counts to identify the number of rows written to the target.
  • The counts are based on rows read from the source and do not reflect the records written to the target. Target write operations might be optimized by combining operations and reducing the number of physical writes. In this case, the counts might not match the number of write operations.
  • The value N/A means that the count is not applicable for the count type or has not yet been calculated.
  • The Unload Count might not reflect the number of source records at the time the job is started or resynchronized because of a delay in the start of unload processing. Between the time of the unload request and the start of unload processing, rows might be added to or deleted from the source table.

Target Object
The name of the target object that is mapped to the source table.

Records Read
For an initial load job, the number of records that were read from the source. For other load types, this information is available only at the job level on the Job Overview tab.

Records Written
For an initial load job, the number of records that were successfully propagated to an Amazon S3, Apache Kafka, flat file, or Microsoft Azure Data Lake Storage target. For other load types, this information is available only at the job level on the Job Overview tab.
For a Microsoft Azure SQL Data Warehouse target, the number of records written to the intermediate Microsoft Azure Data Lake Storage files.
For a Snowflake target, the number of records written to the internal staging area that is created when the job runs.

Task Duration
For an initial load job, the amount of time that the subtask that processed the source table ran before it completed or was stopped. For other load types, this information is available only at the job level on the Job Overview tab.
When a job runs, it uses a separate subtask to process each source table.

Stage
For a combined initial and incremental load job, this column shows the stage in the transition from initial load processing to CDC processing for the table-specific job subtask. This column does not appear for other load types.
The stage can be one of the following values:
  • Not Started. Initial load processing has not yet started for the table, or, if an error occurred and the table is in the Error on Retry state, the next attempt to process the table has not yet started.
  • Started. Initial load processing has started.
  • Unloading. The subtask is unloading data from the table as part of initial load processing.
  • Unloaded. The subtask has finished unloading data from the table as part of initial load processing.
  • Completed. The subtask completed initial load processing of the table.
  • Normal. The subtask completed initial load processing of the table and has started CDC processing of the table.
  • Cancelled. Initial load processing was cancelled or stopped.
  • Error. The subtask detected an error in the source table.

State
The state of the job subtask for the source object.
If the job stops running, the subtask state reflects the state last collected before the job ended. For example, the job might be aborted while the subtask is still in a Running state.
The state can be one of the following values:
  • Queued. The subtask has not yet started running.
  • Starting. The subtask is starting.
  • Started. For a combined initial and incremental load job, the subtask has started.
  • Running. The subtask is running.
  • On Hold. The subtask, as well as the job, is in a paused state while the Mass Ingestion Databases (DBMI) agent is being updated.
  • Completed. The subtask completed processing successfully.
  • Stopping. The subtask is stopping in response to a Stop request.
  • Stopped. The subtask has stopped.
  • Aborting. The subtask is ending immediately in response to an Abort request.
  • Aborted. The subtask has been aborted.
  • Failed. The subtask ended unexpectedly.
  • Error. The subtask is in error and is no longer writing data to the target table. For a combined initial and incremental load job, the subtask might be running and processing incremental change data, but no data is being sent to the target.
  • Error on Retry. An error occurred on the last retry of subtask processing, and the subtask is now waiting to retry processing again.
If a DDL change occurs on a source table and you then resume the job, the table subtask state might not change as expected until the first DML operation occurs on the source table.

Log
For initial load jobs, you can download the job execution log for a source object. Select one of the following log types:
  • Complete Log. The complete log for the object subtask from job execution.
  • Error. The log that contains error messages. This log type is available only for a Failed subtask.
For incremental load jobs, you can get the complete log for the entire job run from the Job Overview tab.
For combined initial and incremental load jobs, you can download the Stage Log. This log covers the transition from initial to incremental loading for a source object.
To download a log locally, click the Download icon.
If you undeployed the job, you can download the log for a table only if the associated task has not been deleted.

Actions menu > Resync
For a subtask in a combined initial and incremental load job, if the subtask stage is Normal and the subtask state is any state other than Queued or Starting, the Actions menu is displayed at the right end of the subtask row. From the Actions menu, you can select Resync to resynchronize the source and target objects. For more information, see "Resynchronizing source and target objects" in the Mass Ingestion help.
This tab shows information about the latest job run. It is blank for jobs that have not run or are resuming.
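If you collect the per-object CDC counts from this tab for your own reporting, the N/A convention described above needs explicit handling. A minimal sketch with hypothetical helper names (remember that these counts reflect rows read from the source, not rows written to the target):

```python
def to_count(value: str) -> int:
    """Interpret a detailed CDC count cell. 'N/A' means the count is not
    applicable or not yet calculated, so treat it as contributing nothing."""
    return 0 if value == "N/A" else int(value)

def total_changes(row: dict[str, str]) -> int:
    """Sum the per-object change counts shown on the Object Detail tab."""
    return sum(to_count(row.get(key, "N/A"))
               for key in ("Inserts", "Updates", "Deletes"))

row = {"Inserts": "120", "Updates": "45", "Deletes": "N/A"}
print(total_changes(row))  # 165
```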

Alerts tab

On the Alerts tab, view alert messages that appear for certain events.
The Alerts tab displays alert messages when a source schema change is detected. Messages are displayed for all detected schema changes, even if you set the schema drift options for the associated task to Ignore.
The following table describes the columns of information that are displayed for each alert message:
Column
Description

Level
The severity level of the alert message.

Code
An alphanumeric code that identifies the alert type.

Details
A description of the event that raised the alert message.

Time
The date and time when the event occurred.
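If you export the alert rows for your own tooling, the four columns map naturally onto a small record type. The following sketch uses a hypothetical Alert class for illustration; the product does not define this structure:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Alert:
    """One alert row; the fields mirror the columns of the Alerts tab."""
    level: str        # severity level of the message
    code: str         # alphanumeric code that identifies the alert type
    details: str      # description of the event that raised the alert
    time: datetime    # date and time when the event occurred

def recent_alerts(alerts: list[Alert], level: str) -> list[Alert]:
    """Return the alerts of a given severity level, newest first."""
    return sorted((a for a in alerts if a.level == level),
                  key=lambda a: a.time, reverse=True)
```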


Updated August 03, 2020