Table of Contents

  1. Preface
  2. Introduction to Informatica Data Engineering Integration
  3. Mappings
  4. Mapping Optimization
  5. Sources
  6. Targets
  7. Transformations
  8. Python Transformation
  9. Data Preview
  10. Cluster Workflows
  11. Profiles
  12. Monitoring
  13. Hierarchical Data Processing
  14. Hierarchical Data Processing Configuration
  15. Hierarchical Data Processing with Schema Changes
  16. Intelligent Structure Models
  17. Blockchain
  18. Stateful Computing
  19. Appendix A: Connections Reference
  20. Appendix B: Data Type Reference
  21. Appendix C: Function Reference

Databricks Spark Engine Execution Details

You can view the details of a Databricks Spark engine execution plan from the Administrator tool or Developer tool.
The Databricks Spark engine execution plan shows the Scala code to run on the Databricks Spark engine.
The following image shows the execution plan for a mapping to run on the Databricks Spark engine. The execution plan dialog box displays the Script Name, Script, and Depends On fields.
The Databricks Spark engine execution plan has the following details:
  • Script ID. Unique identifier for the Databricks Spark engine script.
  • Script. Scala code that the Data Integration Service generates based on the mapping logic, as illustrated in the sketch after this list.
  • Depends on. Tasks that the script depends on. Tasks include other scripts and Data Integration Service tasks.
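To give a concrete sense of what the Script detail contains, the following is a minimal, hypothetical sketch of the kind of Scala code a Databricks Spark engine script could run for a simple mapping: read a source file, apply a Filter transformation and an Expression transformation as Spark DataFrame operations, and write the target. The object name, file paths, and column names are illustrative assumptions only; the code that the Data Integration Service actually generates is more elaborate and is not meant to be edited by hand.

  // Hypothetical sketch only. The generated script differs in structure and naming.
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions._

  object MappingScriptSketch {
    def main(args: Array[String]): Unit = {
      // On Databricks a SparkSession already exists; getOrCreate reuses it.
      val spark = SparkSession.builder().appName("m_CustomerLoad_sketch").getOrCreate()

      // Source: read the mapping source object (hypothetical DBFS path).
      val source = spark.read
        .option("header", "true")
        .csv("dbfs:/FileStore/demo/customers.csv")

      // Transformation logic: a Filter transformation followed by an Expression
      // transformation, expressed as DataFrame operations.
      val target = source
        .filter(col("country") === "US")
        .withColumn("full_name", concat_ws(" ", col("first_name"), col("last_name")))

      // Target: write the mapping target (hypothetical Delta Lake location).
      target.write.format("delta").mode("overwrite").save("dbfs:/FileStore/demo/customers_us")
    }
  }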
