An administrator can enable data engineering recovery to recover a job configured to run on the Spark engine when a Data Integration Service node stops unexpectedly.
When a Data Integration Service node fails before a running job is complete, the Data Integration Service sends the job to another node. The new node resumes processing job tasks from the point at which the node failure occurred. Recovery occurs upon node startup.
To use data engineering recovery, you must configure jobs to run on the Spark engine and submit them from the infacmd client.
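For example, a mapping that is deployed to an application can be submitted from the infacmd command line with the ms RunMapping command. The following command is an illustrative sketch: the domain, service, user, application, and mapping names are placeholders, and the option abbreviations shown are the commonly used RunMapping options, so verify them against the infacmd reference for your version.

    infacmd ms RunMapping -dn MyDomain -sn MyDataIntegrationService -un Administrator -pd MyPassword -a MyApplication -m MyMapping

In this sketch, -dn and -sn identify the domain and the Data Integration Service, -un and -pd supply the user credentials, -a names the deployed application, and -m names the mapping to run.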
An administrator configures data engineering recovery in Data Integration Service properties. For more information about data engineering recovery, see the