Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Mappings
  4. Sources
  5. Targets
  6. Transformations
  7. Cluster Workflows
  8. Profiles
  9. Monitoring
  10. Hierarchical Data Processing
  11. Hierarchical Data Processing Configuration
  12. Hierarchical Data Processing with Schema Changes
  13. Intelligent Structure Models
  14. Stateful Computing
  15. Connections
  16. Data Type Reference
  17. Function Reference

User Guide

Big Data Job Recovery

An administrator can enable big data job recovery to recover a big data job configured to run on the Spark engine when a Data Integration Service node stops unexpectedly.
When a Data Integration Service node fails before a running job is complete, the Data Integration Service sends the job to another node, which resumes processing job tasks from the point at which the node failure occurred. Recovery occurs upon node startup.
To use big data job recovery, you must configure jobs to run on the Spark engine and submit them from the infacmd client.
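As a minimal illustration of submitting a job from the infacmd client, the following Python sketch wraps the infacmd ms runMapping command. The domain, service, application, mapping names, and credentials are placeholders, and the exact set of options may vary by version; check the infacmd command reference for your installation.

```python
import subprocess

# Placeholder values -- substitute your own environment details.
INFACMD = "infacmd.sh"            # infacmd.bat on Windows
DOMAIN = "Domain_BDM"             # Informatica domain name
DIS_SERVICE = "DIS_Spark"         # Data Integration Service name
APPLICATION = "app_CustomerLoad"  # deployed application
MAPPING = "m_CustomerLoad"        # mapping configured to run on the Spark engine

def run_mapping_on_spark():
    """Submit a deployed mapping to the Data Integration Service via infacmd.

    Jobs submitted through infacmd and configured to run on the Spark engine
    are eligible for big data job recovery if a Data Integration Service node
    stops unexpectedly.
    """
    cmd = [
        INFACMD, "ms", "runMapping",
        "-dn", DOMAIN,
        "-sn", DIS_SERVICE,
        "-un", "Administrator",   # placeholder user name
        "-pd", "secret",          # placeholder password
        "-a", APPLICATION,
        "-m", MAPPING,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    result.check_returncode()

if __name__ == "__main__":
    run_mapping_on_spark()
```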
An administrator configures big data job recovery in the Data Integration Service properties. For more information about big data job recovery, see the Big Data Management Administrator Guide.
