indicate whether the Developer tool validates the mapping definition for the native execution environment or the Hadoop execution environment.
You can configure the following properties for the Validation Environments:
Native
Default environment. The Data Integration Service runs the mapping in a native environment.
Hadoop
Run the mapping in the Hadoop environment. The Data Integration Service pushes the transformation logic to the Hadoop cluster through a Hadoop connection, and the Hadoop cluster processes the data. Select the engines that can process the mapping. You can select the Blaze, Spark, or Hive engine.
You can use a mapping parameter to indicate the execution environment. When you select the execution environment, click Assign Parameter. Configure a string parameter. Set the default value to Native or Hadoop.
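If you parameterize the execution environment, you can override the default value when you run the mapping, for example through a parameter file. The following sketch shows what such an entry might look like. The project, mapping, and parameter names are placeholders, the surrounding parameter file structure is abbreviated, and the exact format can vary by version:

    <project name="MyProject">
        <mapping name="m_CustomerLoad">
            <!-- Hypothetical parameter that selects the execution environment -->
            <parameter name="p_ExecutionEnvironment">Hadoop</parameter>
        </mapping>
    </project>

Setting the parameter value to Native or Hadoop in this way lets you switch execution environments without editing the mapping.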
When you validate the mapping, validation occurs for each engine that you choose in the Validation Environments. The validation log might contain validation errors specific to each engine. If the mapping is valid for at least one engine, the mapping is valid. The errors for the other engines appear in the validation log as warnings. If the mapping is valid for multiple Hadoop engines, you can view the execution plan to determine which engine will run the job. You can view the execution plan in the Data Viewer view.
The following image shows validation errors for the Blaze, Spark, and Hive engines: