Data Engineering Integration
- Data Engineering Integration 10.4.1
| Name | Description |
|---|---|
| Connection | Configure for the Hadoop and Databricks environments. Defines the connection information that the Data Integration Service requires to push the mapping execution to the compute cluster. Select the non-native connection to run the mapping in the compute cluster. You can assign a user-defined parameter for the non-native connection. |
| Runtime Properties | Configure for the Hadoop environment. You can configure run-time properties for the Hadoop environment in the Data Integration Service, in the Hadoop connection, and in the mapping. You can override a property configured at a higher level by setting its value at a lower level. For example, if you configure a property in the Data Integration Service custom properties, you can override it in the Hadoop connection or in the mapping. The Data Integration Service processes property overrides based on the following priorities: |
| Reject File Directory | Configure for the Hadoop environment. The directory for Hadoop mapping files on HDFS when you run mappings in the Hadoop environment. The Blaze engine can write reject files to the Hadoop environment for flat file, HDFS, and Hive targets. The Spark engine can write reject files to the Hadoop environment for flat file and HDFS targets. Choose one of the following options: |
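The override behavior described for Runtime Properties (Data Integration Service custom properties overridden by the Hadoop connection, which is in turn overridden by the mapping) amounts to a precedence merge. The sketch below is illustrative only, not Informatica code; the property names shown are hypothetical examples, not actual settings.

```python
# Illustrative sketch of run-time property override precedence.
# Later sources override earlier ones:
# Data Integration Service < Hadoop connection < mapping.
# Property names/values here are hypothetical, not Informatica settings.

def resolve_properties(service_props, connection_props, mapping_props):
    """Merge property dictionaries from lowest to highest priority."""
    resolved = {}
    for source in (service_props, connection_props, mapping_props):
        resolved.update(source)  # higher-priority values win on conflict
    return resolved

service = {"spark.executor.memory": "2g", "spark.executor.cores": "2"}
connection = {"spark.executor.memory": "4g"}  # overrides the service value
mapping = {"spark.executor.cores": "4"}       # overrides the service value

print(resolve_properties(service, connection, mapping))
# {'spark.executor.memory': '4g', 'spark.executor.cores': '4'}
```

The merge order is the whole mechanism: a value set in the mapping always wins because it is applied last.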
| Name | Description |
|---|---|
| Pushdown type | Configure for the Hadoop environment. Choose one of the following options: |
| Pushdown Compatibility | Configure for the Hadoop environment. Optionally, if you choose full pushdown optimization and the mapping contains an Update Strategy transformation, you can choose a pushdown compatibility option or assign a pushdown compatibility parameter. Choose one of the following options: |
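Pushdown optimization moves transformation logic into the data source rather than processing rows in the integration engine. As a rough, non-Informatica illustration of the idea, a filter can either be applied after fetching every row or pushed into the query so the database does the work; the table and column names below are hypothetical.

```python
# Rough illustration of the pushdown concept: filtering client-side
# versus pushing the filter into the SQL sent to the database.
# Table/column names are hypothetical, not Informatica metadata.

def without_pushdown(rows, min_amount):
    """Fetch everything, then filter in the integration engine."""
    fetched = list(rows)  # the full table travels over the wire
    return [r for r in fetched if r["amount"] >= min_amount]

def with_pushdown(min_amount):
    """Push the filter into the query so only matching rows are returned."""
    return f"SELECT id, amount FROM orders WHERE amount >= {min_amount}"

rows = [{"id": 1, "amount": 50}, {"id": 2, "amount": 500}]
print(without_pushdown(rows, 100))  # only the 500-amount row survives
print(with_pushdown(100))           # the database evaluates the WHERE clause
```

With full pushdown, the engine generates statements like the second form for as much of the mapping logic as the source supports, which is why transformation compatibility (for example, with an Update Strategy transformation) matters.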
| Name | Description |
|---|---|
| Maximum Rows Read | Reserved for future use. |
| Maximum Runtime Interval | Reserved for future use. |
| State Store | Reserved for future use. |