Table of Contents

  1. Abstract
  2. Installation and Upgrade
  3. 10.4.0 Fixed Limitations and Closed Enhancements
  4. 10.4.0 Known Limitations
  5. Cumulative Known Limitations
  6. Emergency Bug Fixes Merged into 10.4.0
  7. Informatica Global Customer Support

Data Engineering Integration Known Limitations (Cumulative)

The following list describes known limitations that are carried forward from previous releases. Each entry shows the bug number followed by a description of the limitation:
BDM-29679: When the Spark engine runs a mapping, property values set in the cluster configuration override values set using Environment SQL in the Hadoop connection.
BDM-23550: When a mapping contains an Update Strategy transformation with an insert, update, or delete operation and writes to a JDBC target, the Spark events report an incorrect number of inserted, updated, or deleted rows.
BDM-23392: When a mapping that runs on Spark uses an Update Strategy transformation, table constraints might cause a BatchUpdateException and mapping failures.
Workaround: Edit the Spark.JdbcNumPartition setting in the mapping Runtime Properties to reduce the number of partitions to 1 so that all rows are processed as a single partition. A standalone Spark sketch illustrating the effect of a single JDBC write partition follows this list.
BDM-23317: When you monitor statistics for the Data Integration Service configured with file-based queuing, the jobs incorrectly appear to run even when the job state is queued.
BDM-22490: When a dynamic mapping that runs on Spark uses an Update Strategy transformation, adding a column to the Hive target table schema causes the mapping to fail.
BDM-20856: When you import a cluster workflow, the import wizard does not include the option to choose the non-native connection that was associated with the Create Cluster task.
Workaround: After you import the workflow, manually assign a Databricks or Hadoop connection to the Create Cluster task.
BDM-20697: If you use the numberOfErrorRows system-defined mapping output in a mapping that runs on the Spark engine, the engine returns an incorrect value for the mapping output.
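
For BDM-23392, the following is a minimal sketch in standalone PySpark, not an Informatica mapping, that illustrates how writing through a single JDBC partition serializes the rows sent to a constrained target. The source path, JDBC URL, table name, and credentials are hypothetical placeholders.

    # Hypothetical standalone PySpark example; all connection details are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("single-partition-jdbc-write").getOrCreate()

    # Hypothetical source data that would otherwise be written from many partitions.
    df = spark.read.parquet("/data/staged_updates")

    (df.coalesce(1)                        # collapse to one partition: one writer task
       .write
       .format("jdbc")
       .option("url", "jdbc:postgresql://dbhost:5432/targetdb")  # hypothetical target
       .option("dbtable", "target_table")
       .option("user", "etl_user")
       .option("password", "etl_password")
       .option("numPartitions", 1)         # cap JDBC write parallelism at one partition
       .mode("append")
       .save())

With a single write partition, only one task opens a JDBC connection to the target, so batches are applied sequentially rather than concurrently, which mirrors the effect of reducing the partition count to 1 in the mapping Runtime Properties.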