The validation environment properties indicate whether the Developer tool validates the mapping definition for the native or non-native execution environment.
You can configure the following properties for the Validation Environments:
Native
Default environment. The Data Integration Service runs the mapping in a native environment.
Hadoop
Run the mapping in the Hadoop environment. The Data Integration Service pushes the transformation logic to the Hadoop cluster through a Hadoop connection. Select the engine to process the mapping. You can select the Blaze or Spark engine.
Databricks
Run the mapping in the Databricks environment. The Data Integration Service pushes the transformation logic to the Databricks cluster through a Databricks connection. The Databricks cluster processes the mapping on the Databricks Spark engine.
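The relationship between each validation environment and its engines can be summarized in code. The following is a minimal, hypothetical Python sketch; the names mirror the list above and are illustrative only, not part of any Informatica API.

```python
# Hypothetical sketch: the validation environments and their engines,
# modeled as plain Python data. Names are illustrative only.
VALIDATION_ENVIRONMENTS = {
    "native": ("native",),               # run directly by the Data Integration Service
    "hadoop": ("blaze", "spark"),        # pushed to the Hadoop cluster; choose Blaze or Spark
    "databricks": ("spark-databricks",), # pushed to the Databricks cluster (Databricks Spark)
}
```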
You can use a mapping parameter to indicate the execution environment for the mapping. When you select the execution environment, click Assign Parameter. Configure a string parameter. Set the default value to native, hadoop, or spark-databricks.
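As an illustration of how such a string parameter might resolve at run time, here is a minimal, hypothetical Python sketch. The function name and argument names are assumptions for illustration; they are not part of the Developer tool or the Data Integration Service.

```python
# Hypothetical sketch: resolving the execution environment from a string
# mapping parameter, falling back to the parameter's default value.
VALID_VALUES = {"native", "hadoop", "spark-databricks"}

def resolve_execution_environment(parameter_value=None, default="native"):
    """Return the execution environment, using the default when unset."""
    value = (parameter_value or default).lower()
    if value not in VALID_VALUES:
        raise ValueError(f"unknown execution environment: {value!r}")
    return value

# The parameter value selects where the mapping runs:
assert resolve_execution_environment("spark-databricks") == "spark-databricks"
assert resolve_execution_environment() == "native"  # default value applies
```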
When you validate the mapping, validation occurs for each engine that you choose in the Validation Environments. The validation log might contain validation errors specific to each engine. If the mapping is valid for at least one engine, the mapping is valid. The errors for the other engines appear in the validation log as warnings. If the mapping is valid for multiple engines, you can view the execution plan to determine which engine will run the job.
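The following hypothetical Python sketch illustrates this validation rule. The function name and log format are assumptions for illustration, not the actual Data Integration Service implementation.

```python
# Hypothetical sketch of the validation rule described above: a mapping is
# valid if it validates for at least one selected engine; errors for the
# remaining engines are reported as warnings in the validation log.
def validate_mapping(errors_by_engine):
    """errors_by_engine maps an engine name to its list of validation errors."""
    valid_engines = [e for e, errs in errors_by_engine.items() if not errs]
    log = []
    for engine, errs in errors_by_engine.items():
        # Downgrade errors to warnings when another engine validated cleanly.
        severity = "WARNING" if valid_engines else "ERROR"
        for err in errs:
            log.append(f"{severity} [{engine}]: {err}")
    return bool(valid_engines), log

# Example: valid on Spark, invalid on Blaze. The mapping is valid overall,
# and the Blaze errors appear in the log as warnings.
is_valid, log = validate_mapping({
    "blaze": ["Transformation X is not supported on Blaze."],
    "spark": [],
})
assert is_valid
assert log == ["WARNING [blaze]: Transformation X is not supported on Blaze."]
```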
The following image shows a sample validation log: