When you preview data, the Data Integration Service determines whether to run the job in the native environment or on the Spark engine.
The preview point is the object in a mapping that you choose to view data for. If the preview point or any upstream transformation contains hierarchical data, a Python transformation, or any transformation supported only on the Spark engine, the Data Integration Service pushes the job to the Spark engine. Otherwise, the Data Integration Service runs the data preview job in the native environment.
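The engine-selection rule can be sketched as a small, purely illustrative Python function. The transformation schema, the `SPARK_ONLY` set, and the function name are assumptions for illustration, not the product's actual API; the real set of Spark-only transformations is broader than shown.

```python
# Hypothetical sketch of how the Data Integration Service might choose
# an engine for a data preview job. Illustrative only.
SPARK_ONLY = {"Python"}  # assumption: real product recognizes more types

def choose_engine(transformations):
    """Return the engine for a preview job.

    transformations: the preview point plus all upstream transformations,
    each modeled as a dict with 'type' and 'hierarchical' keys
    (an assumed, illustrative schema).
    """
    for t in transformations:
        # Hierarchical data or a Spark-only transformation forces Spark.
        if t.get("hierarchical") or t["type"] in SPARK_ONLY:
            return "SPARK"
    # Otherwise the job runs in the native environment.
    return "NATIVE"
```

For example, a mapping whose preview point reads only flat relational data through a Filter transformation would preview natively, while adding a Python transformation anywhere upstream pushes the job to Spark.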
The Data Integration Service uses the following process when it pushes a data preview job to the Spark engine:
1. The Data Integration Service generates a mapping that includes a target based on the preview point.
2. The Data Integration Service passes the mapping and the preview request to the Spark engine.
3. The Spark engine runs the mapping and stages the data in the configured staging directories.
4. The Data Integration Service passes the staged data to the Developer tool and then deletes the staged data.
The results of the preview appear in the data viewer of the Developer tool.
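The staging flow above can be sketched as a minimal, purely illustrative Python script. The file layout, the string-based "mapping", and the function name are assumptions made for the sketch, not the product's actual implementation:

```python
import os

def run_spark_preview(mapping, preview_point, staging_dir):
    """Illustrative sketch of the staged preview flow (assumed names)."""
    # 1. Generate a mapping with a target based on the preview point.
    preview_mapping = mapping + [f"target:{preview_point}"]

    # 2-3. Stand-in for the Spark engine running the mapping and
    # staging the results in the configured staging directory.
    staged_file = os.path.join(staging_dir, "preview.out")
    with open(staged_file, "w") as f:
        f.write("\n".join(preview_mapping))

    # 4. Pass the staged data back (to the Developer tool in the real
    # product) and delete the staged data afterward.
    with open(staged_file) as f:
        rows = f.read().splitlines()
    os.remove(staged_file)
    return rows
```

The key design point the sketch mirrors is that the staged data is transient: it exists only long enough to be handed back to the Developer tool, after which the staging directory is cleaned up.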
When you run data preview, the Data Integration Service validates the validation environments you have selected in the