When you run a mapping or other job, the Data Integration Service stores information about the job in a queue before submitting the job for processing. Queuing protects against data loss when the node that is processing a job fails.
When you run a mapping job or workflow mapping task, the Data Integration Service adds the job to the queue. The job state appears as "Queued" in the Administrator tool contents panel. When resources are available, the Data Integration Service takes a job from the queue and runs it.
The queuing process uses the following types of queues:
Local queue. A file-based queue on an individual compute node. Each Data Integration Service compute node contains a local queue.
Distributed queue. A queue stored in the Model repository database and accessible from any available Data Integration Service node. If the Data Integration Service is enabled for big data recovery, it uses this queue.
The following image shows both types of queues and their locations:
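To make the distinction concrete, the two queue types can be sketched as objects with a common put/take interface. This is an illustrative model only, not an Informatica API: the class names are hypothetical, the local queue's file storage is modeled in memory, and a shared in-process list stands in for the Model repository database table.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Job:
    """A queued job; the state string mirrors what the Administrator tool shows."""
    job_id: str
    state: str = "Queued"

class LocalQueue:
    """File-based queue on a single compute node (modeled in memory here).
    Each compute node holds its own instance, so jobs are visible only
    to the node that queued them."""
    def __init__(self):
        self._jobs = deque()

    def put(self, job: Job) -> None:
        self._jobs.append(job)

    def take(self):
        return self._jobs.popleft() if self._jobs else None

class DistributedQueue:
    """Queue stored in the Model repository database, so any available node
    can take jobs. A class-level list stands in for the shared database."""
    _shared = []

    def put(self, job: Job) -> None:
        DistributedQueue._shared.append(job)

    def take(self):
        return DistributedQueue._shared.pop(0) if DistributedQueue._shared else None
```

Note that two `DistributedQueue` instances (standing in for two nodes) see the same jobs, while two `LocalQueue` instances do not; this is the property that lets another node recover queued jobs when one node fails.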
When you run a job that cannot be queued, the Data Integration Service starts the job immediately. If sufficient resources are not available, the job fails, and you must run it again when resources become available.
The following jobs cannot be queued:
Jobs that cannot be deployed, such as previews and profiles
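The dispatch behavior described above can be summarized in a short sketch. This is a simplified model, not Informatica code: the job-type names and the `submit` function are hypothetical, and resource availability is reduced to a boolean.

```python
# Job types that the Data Integration Service can add to a queue
# (illustrative set; actual queueable types are mappings and
# workflow mapping tasks).
QUEUEABLE = {"mapping", "workflow_mapping_task"}

def submit(job_type: str, job_id: str, queue: list, resources_available: bool) -> str:
    """Return the resulting job state for a submitted job.

    Queueable jobs wait in the queue until resources are available.
    Non-deployable jobs such as previews and profiles cannot be queued:
    they start immediately and fail if resources are unavailable.
    """
    if job_type in QUEUEABLE:
        queue.append({"id": job_id, "state": "Queued"})
        return "Queued"
    if not resources_available:
        # The job fails; it must be rerun when resources free up.
        return "Failed"
    return "Running"
```

Under this model, a mapping submitted while the node is busy simply waits in the queue, whereas a preview submitted at the same moment fails outright, which is exactly why queuing protects against data loss for deployable jobs.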