Common Content for Data Engineering
- Common Content for Data Engineering 10.2.2 HotFix 1
- All Products
Bug | Description
---|---
BDM-29679 | When the Spark engine runs a mapping, property values set in the cluster configuration override values set through Environment SQL in the Hadoop connection.
BDM-26560 | A mapping fails because the Model Repository Service cannot communicate with the Data Integration Service. This happens when the following conditions are true:<br>Workaround: Configure both the JDBC connection and the secured JDBC parameter in the JDBC connection string. For example:<br>Configuring the JDBC connection and the secured JDBC parameter separately in the Model Repository Service is the default behavior.
BDM-26248 | If you specify a compression codec in a custom query, the Blaze engine fails to compress HDFS files with that codec on every Hadoop distribution except Hortonworks HDP 3.1.
BDM-26109 | Spark mappings might fail during concurrent mapping runs on a CDH 6.1 cluster with the following Java run-time error:<br>`"<file> is not a Parquet file. expected magic number at tail"`
BDM-26206 | A mapping with flat file sources and targets that runs on the Spark engine on a WANdisco-enabled Hortonworks HDP 2.6.5 cluster fails.<br>Workaround:
BDM-26003 | If a Hive target table is partitioned by a Date data type, a partition in the Hive target table cannot be truncated on a Hortonworks HDP 3.1 cluster.<br>Workaround: Add the following property to the end of the Data Access Connection String: `?hive.optimize.sort.dynamic.partition=true`
BDM-25475 | If you delete Erasure Coding (EC) content in the cluster staging directory and then run a mapping, the mapping might fail because the automatic installer copy sometimes fails to create the XOR codec for raw EC.
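The BDM-26560 workaround places the secured JDBC parameter directly in the JDBC connection string. A hypothetical illustration only; the host, port, service name, and the `EncryptionMethod=SSL` parameter are placeholder assumptions, not values from this document:

```
jdbc:informatica:oracle://dbhost.example.com:1521;ServiceName=orcl;EncryptionMethod=SSL
```

Consult the connection-string reference for your database type for the actual secured parameter names.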
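The BDM-26003 workaround appends a Hive session property to the end of the Data Access Connection String. A hypothetical example, assuming a standard `jdbc:hive2` connection string in which the host, port, and database are placeholders:

```
jdbc:hive2://hiveserver.example.com:10000/default?hive.optimize.sort.dynamic.partition=true
```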
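The BDM-26109 error message refers to the 4-byte magic string `PAR1` that a valid Parquet file carries at both its head and tail; a file left incomplete by a failed concurrent write loses the tail magic, which is what the Spark reader complains about. A minimal Python sketch of that check (the helper name is ours, not part of any product):

```python
import os

PARQUET_MAGIC = b"PAR1"  # 4-byte marker at the start and end of every Parquet file

def has_parquet_magic(path):
    """Return True if the file begins and ends with the Parquet magic bytes."""
    if os.path.getsize(path) < 8:
        # Too small to hold both head and tail magic markers.
        return False
    with open(path, "rb") as f:
        head = f.read(4)
        f.seek(-4, os.SEEK_END)  # jump to the last 4 bytes
        tail = f.read(4)
    return head == PARQUET_MAGIC and tail == PARQUET_MAGIC
```

A file that fails this check is the kind of artifact the concurrent-run failure can leave behind in the target directory.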