An administrator can enable big data job recovery so that the Data Integration Service can recover a big data job that is configured to run on the Spark engine when a Data Integration Service node stops unexpectedly.
When a Data Integration Service node fails before a running job is complete, the Data Integration Service sends the job to another node, which resumes processing job tasks from the point at which the node failure occurred. Recovery occurs upon node startup.
To use big data job recovery, you must configure jobs to run on the Spark engine and submit the jobs from the infacmd client.
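As a rough illustration of submitting a job from the infacmd client, the following sketch runs a deployed mapping with the infacmd ms RunMapping command. The domain, service, user, application, and mapping names shown here are placeholders, not values from this document; check the command reference for your version before relying on specific options.

```shell
# Illustrative only: submit a mapping to the Data Integration Service
# from the infacmd client. All argument values below are placeholders.
infacmd.sh ms RunMapping \
    -dn MyDomain \          # domain name (placeholder)
    -sn MyDIS \             # Data Integration Service name (placeholder)
    -un admin \             # user name (placeholder)
    -pd password \          # password (placeholder)
    -a MyApplication \      # deployed application name (placeholder)
    -m MySparkMapping       # mapping configured to run on the Spark engine
```

Jobs submitted this way, with the Spark engine selected as the run-time engine, are the jobs that big data job recovery can resume after a node failure.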
An administrator configures big data recovery in Data Integration Service properties. For more information about big data job recovery, see the