Test Data Management (All Products)
When the Spark engine runs a mapping, the mapping fails with an error similar to the following:
SEVERE: Data integration service failed to create DTM instance because of the following error: java.lang.UnsupportedOperationException: PARAM_READ ...
When the Spark engine runs a mapping that contains a Rank transformation with flat file sources and targets on an Amazon EMR 6.0 cluster, the mapping fails.
The Spark engine writes an incorrect date to a Hive target on Amazon EMR 6.0 when the mapping source is a flat file.
When you run an EDC resource that profiles a large number of objects (400 or more), some of the DTM processes (pmdtmsvc2) hang indefinitely.
When Spark execution stops unexpectedly, a NullPointerException occurs and data is not collected for subsequent mapping runs.
When the Spark engine runs a Sqoop mapping on an Amazon EMR 6.0 cluster to write data with date or time data types to Netezza, Greenplum, or Microsoft SQL Server, the mapping fails.