Metadata Manager 10.4.0
When the Spark engine runs a mapping, property values set in the cluster configuration override values set using Environment SQL in the Hadoop connection.
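As a hypothetical illustration of this precedence (the property name below is an example only and does not come from this note), suppose both the cluster configuration and the Hadoop connection's Environment SQL set the same Hive property:

```
# Cluster configuration property (takes precedence on the Spark engine):
hive.exec.max.dynamic.partitions=2000

# Environment SQL in the Hadoop connection (overridden on the Spark engine):
SET hive.exec.max.dynamic.partitions=5000;
```

In this scenario, a mapping that runs on the Spark engine uses the value 2000 from the cluster configuration, not the value 5000 from the Environment SQL.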
When an Update Strategy transformation contains an insert, update, or delete operation and the mapping writes to a JDBC target, the Spark events report an incorrect number of inserted, updated, or deleted rows.
When a mapping that runs on Spark uses an Update Strategy transformation, table constraints might cause a BatchUpdateException and mapping failures.
Workaround: In the mapping Runtime Properties, set the Spark.JdbcNumPartition property to 1 so that the Spark engine processes all rows in a single partition.
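The workaround amounts to a single advanced property entry. A sketch of how the entry might look in the mapping Runtime Properties (execution parameters), assuming a simple name=value format:

```
Spark.JdbcNumPartition=1
```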
When you monitor statistics for a Data Integration Service configured with file-based queuing, jobs in the queued state incorrectly appear to be running.
When a dynamic mapping that runs on Spark uses an Update Strategy transformation, adding a column to the Hive target table schema causes the mapping to fail.
When you import a cluster workflow, the import wizard does not provide an option to choose the non-native connection that was associated with the Create Cluster task.
Workaround: After you import the workflow, manually assign a Databricks or Hadoop connection to the Create Cluster task.
If you use the numberOfErrorRows system-defined mapping output in a mapping that runs on the Spark engine, the engine returns an incorrect value for the mapping output.