You can validate and run mappings in the native environment or in a non-native environment, such as Hadoop or Databricks.
The Data Integration Service validates whether a mapping can run in the selected environment. You must validate the mapping for an environment before you run it in that environment.
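If the mapping is deployed to an application, you can also trigger the run step from a script. The following is a minimal sketch, assuming a Python wrapper around the infacmd ms runMapping command; the domain, service, user, application, and mapping names are placeholders, and you should verify the options against the infacmd command reference for your version.

    import subprocess

    # Minimal sketch: run a deployed mapping through infacmd.
    # All names below are placeholders for your environment.
    result = subprocess.run(
        [
            "infacmd.sh", "ms", "runMapping",
            "-dn", "MyDomain",        # Informatica domain name (placeholder)
            "-sn", "MyDIS",           # Data Integration Service name (placeholder)
            "-un", "Administrator",   # user name (placeholder)
            "-pd", "secret",          # password (placeholder)
            "-a", "MyApplication",    # deployed application (placeholder)
            "-m", "MyMapping",        # mapping to run (placeholder)
        ],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)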
You can configure a mapping to run in the following environments:
Native environment
When you run a mapping in the native environment, the Data Integration Service processes the mapping. You can run the mapping from the Developer tool.
Hadoop environment
When you select the Hadoop environment, the Data Integration Service pushes the mapping to a compute cluster and processes it on the Spark engine. The service generates an execution plan to run the mapping on the Spark engine; a conceptual sketch of the corresponding Spark logic follows this overview.
Databricks environment
When you run a mapping in the Databricks environment, the Data Integration Service pushes the mapping logic to the Databricks Spark engine, the Apache Spark engine packaged for Databricks.
You can view the execution plan in the Developer tool before you run the mapping and in the Administrator tool after you run the mapping.
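To make the push-down concrete, consider a mapping with one source, a filter transformation, and one target. The following is a conceptual sketch only, not the code that the Data Integration Service generates; the paths, column name, and filter condition are illustrative placeholders for the kind of Spark logic the mapping corresponds to.

    from pyspark.sql import SparkSession

    # Conceptual sketch of mapping logic pushed to the Spark engine.
    spark = SparkSession.builder.appName("mapping_sketch").getOrCreate()

    # Read the mapping source (placeholder path)
    source_df = spark.read.parquet("/data/source/orders")

    # Apply the mapping transformation logic (placeholder filter)
    result_df = source_df.filter(source_df["amount"] > 0)

    # Write to the mapping target (placeholder path)
    result_df.write.mode("overwrite").parquet("/data/target/orders_clean")

    spark.stop()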
For more information about the non-native environment, see the