Data Preview Process

When you preview data, the Data Integration Service determines whether to run the job in the native environment or on the Spark engine. The preview point is the object in the mapping that you choose to view data for.
If the preview point or any transformation upstream of it contains hierarchical data, is a Python transformation, or is a transformation that is supported only on the Spark engine, the Data Integration Service pushes the data preview job to the Spark engine. Otherwise, the Data Integration Service runs the data preview job in the native environment.
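The selection rule amounts to a walk upstream from the preview point. The following Python sketch illustrates that rule only; the Transformation class, its attributes, and the requires_spark function are hypothetical and are not part of any Informatica API.

```python
# Illustrative sketch of the engine selection rule. The Transformation class,
# its attributes, and requires_spark() are hypothetical, not an Informatica API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Transformation:
    name: str
    has_hierarchical_data: bool = False
    is_python_tx: bool = False
    spark_only: bool = False
    upstream: List["Transformation"] = field(default_factory=list)

def requires_spark(preview_point: Transformation) -> bool:
    """Return True if the preview job must be pushed to the Spark engine."""
    # Inspect the preview point and every transformation upstream of it.
    stack = [preview_point]
    seen = set()
    while stack:
        tx = stack.pop()
        if id(tx) in seen:
            continue
        seen.add(id(tx))
        if tx.has_hierarchical_data or tx.is_python_tx or tx.spark_only:
            return True
        stack.extend(tx.upstream)
    # No Spark-only condition found, so the job runs in the native environment.
    return False
```

For example, if an upstream transformation in the sketch is marked as a Python transformation, requires_spark returns True and the preview would be routed to the Spark engine.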
The Data Integration Service uses the following process when it pushes a data preview job to the Spark engine:
  1. The Data Integration Service generates a mapping that includes a target based on the preview point.
  2. The Data Integration Service passes the mapping and the preview request to the Spark engine.
  3. The Spark engine runs the mapping and stages the data based on the configured staging directories.
  4. The Data Integration Service passes the staged data to the Developer tool and then deletes the staged data.
  5. The results of the preview appear in the data viewer of the Developer tool.
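The sequence in steps 1 through 5 can be summarized as generate, run, stage, fetch, and clean up. The sketch below is a rough outline under assumed helper names; every function in it is a hypothetical stand-in, not an Informatica interface, and the stubs exist only so the outline runs end to end.

```python
# Rough outline of the Spark preview flow. All helpers are hypothetical stubs,
# not an Informatica interface; they exist only so the sketch is runnable.
import shutil
import tempfile
from pathlib import Path

def generate_preview_mapping(preview_point):             # step 1 (stub)
    return {"target": preview_point}

def run_on_spark(mapping, staging_dir: Path) -> Path:    # steps 2-3 (stub)
    out = Path(tempfile.mkdtemp(dir=staging_dir))
    (out / "part-00000").write_text("col_a,col_b\n1,2\n")
    return out

def read_staged_data(staged_path: Path):                 # step 4 (stub)
    return (staged_path / "part-00000").read_text().splitlines()

def preview_on_spark(preview_point, staging_dir: Path):
    """Generate the mapping, run it, fetch the staged data, then clean up."""
    mapping = generate_preview_mapping(preview_point)     # 1. build the mapping
    staged = run_on_spark(mapping, staging_dir)           # 2-3. run and stage
    try:
        rows = read_staged_data(staged)                   # 4. fetch staged data
        print("\n".join(rows))                            # 5. show in the viewer
    finally:
        shutil.rmtree(staged, ignore_errors=True)         # 4. delete staged data

# Example usage with a throwaway staging directory:
# preview_on_spark("Exp_Customer", Path(tempfile.mkdtemp()))
```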
When you run data preview, the Data Integration Service validates the validation environments that you selected in the Run-time view.
