Table of Contents

  1. Preface
  2. Introduction to Informatica Data Engineering Integration
  3. Mappings
  4. Mapping Optimization
  5. Sources
  6. Targets
  7. Transformations
  8. Python Transformation
  9. Data Preview
  10. Cluster Workflows
  11. Profiles
  12. Monitoring
  13. Hierarchical Data Processing
  14. Hierarchical Data Processing Configuration
  15. Hierarchical Data Processing with Schema Changes
  16. Intelligent Structure Models
  17. Blockchain
  18. Stateful Computing
  19. Appendix A: Connections Reference
  20. Appendix B: Data Type Reference
  21. Appendix C: Function Reference

Hadoop Environment Logs

The Data Integration Service generates log events when you run a mapping in the Hadoop environment.
You can view logs for the Blaze and Spark engines, including log events related to different types of errors, such as Hadoop connection failures or job failures.
When you run a mapping on the Spark engine, the logs also include the Scala code that the Logical Data Translation Generator creates from the Informatica mapping.
You can view reject files in the reject file directory specified for the Data Integration Service.
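As a minimal sketch of inspecting reject files outside the Developer tool, the following Python snippet counts rejected rows per file in a reject file directory. The directory path and the .bad file extension are assumptions for illustration; use the reject file directory that is configured for your Data Integration Service.

    import glob
    import os

    # Hypothetical reject file directory for illustration only. Substitute the
    # reject file directory configured for the Data Integration Service.
    REJECT_DIR = "/data/informatica/reject_files"

    def summarize_reject_files(reject_dir):
        # Count rows in each reject file, assuming the .bad extension.
        counts = {}
        for path in glob.glob(os.path.join(reject_dir, "*.bad")):
            with open(path, errors="replace") as handle:
                counts[os.path.basename(path)] = sum(1 for _ in handle)
        return counts

    if __name__ == "__main__":
        for name, rows in sorted(summarize_reject_files(REJECT_DIR).items()):
            print(f"{name}: {rows} rejected rows")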
