Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Mappings
  4. Sources
  5. Targets
  6. Transformations
  7. Data Preview
  8. Cluster Workflows
  9. Profiles
  10. Monitoring
  11. Hierarchical Data Processing
  12. Hierarchical Data Processing Configuration
  13. Hierarchical Data Processing with Schema Changes
  14. Intelligent Structure Models
  15. Stateful Computing
  16. Appendix A: Connections
  17. Appendix B: Data Type Reference
  18. Appendix C: Function Reference

Big Data Job Recovery

An administrator can enable big data job recovery to recover big data jobs that run on the Spark engine when a Data Integration Service node stops unexpectedly.
When a Data Integration Service node fails before a running job completes, the Data Integration Service sends the job to another node, which resumes processing job tasks from the point at which the node failure occurred. Recovery occurs when the node restarts.
To use big data recovery, you must configure jobs to run on the Spark engine and submit jobs from the infacmd client.
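Because recovery applies only to Spark jobs submitted through the infacmd client, a submission might look like the sketch below. The domain, service, application, and mapping names are hypothetical placeholders, and option names can vary by product version; the script only assembles and prints the infacmd ms RunMapping command rather than executing it.

```shell
#!/bin/sh
# Hypothetical names -- substitute your own domain, service, application,
# and mapping. Every value here is a placeholder, not a default.
DOMAIN="Domain_BDM"
DIS="DIS_Spark"          # Data Integration Service with recovery enabled
APP="app_CustomerLoad"   # deployed application that contains the mapping
MAPPING="m_CustomerLoad" # mapping configured to run on the Spark engine

# Assemble the infacmd ms RunMapping invocation. Submitting the job
# through infacmd (rather than another client) is what makes it eligible
# for big data recovery. This sketch prints the command instead of
# running it; add user and password options as your domain requires.
CMD="infacmd ms RunMapping -dn $DOMAIN -sn $DIS -a $APP -m $MAPPING"
echo "$CMD"
```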
An administrator configures big data recovery in the Data Integration Service properties. For more information about big data job recovery, see the Big Data Management Administrator Guide.
