Common Content for Data Engineering
- Common Content for Data Engineering 10.2.2 HotFix 1
- All Products
Bug | Description
---|---
OCON-19759 | When you use the Hortonworks Connector for Teradata to run a dynamic mapping on the Spark engine to write data of the Date data type to a Teradata target, the data gets corrupted.
OCON-19601 | When you run a Sqoop optimized mapping on the Spark engine to read data from a Sqoop source and write data to a Hive target, and the mapping contains an SQL query, the Data Integration Service ignores the SQL query and writes data to the Hive target.
OCON-19495 | When you run a Sqoop mapping to read data and the custom SQL override query contains aliases for column names, the mapping fails on the Spark engine.
OCON-19312 | When you run a Sqoop pass-through mapping on the Hortonworks HDP 3.1 distribution to write to a Hive target, the mapping is not optimized in certain scenarios.
OCON-19140 | When you run a Sqoop mapping on the Spark engine to write data of the Date data type to Teradata, the mapping fails.
BDM-25442 | When you write data of the Optimized Row Columnar (ORC) file format to an external Hive target table in HDP 3.1, mappings sometimes fail on the Blaze engine.
BDM-24961 | A mapping that accesses Hive sources and targets and stages temporary results in the staging directory fails with an error. Examples of mappings that stage temporary results are mappings with an SQL override query on a Hive source, mappings that contain an Update Strategy transformation, and mappings that create a transactional table. The mapping fails when you run it on the Blaze engine, or on the Spark engine on HDP 3.1; on the Spark engine on HDP 2.6, it succeeds. This issue occurs because the property hive.server2.enable.doAs is set to FALSE, so the cluster is not configured to use user impersonation.
BDM-24919 | When you read a Hive table created with a compression codec in HDP 3.1, the mapping fails to run on the Blaze engine.
BDM-24191 | When you enable Hive on Spark in the cluster for a Hive task, the cluster launches Spark jobs and the Data Integration Service processes the HiveServer2 tasks through MapReduce or Tez.
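For the OCON-19495 entry, the failing pattern is a Sqoop custom SQL override whose columns carry aliases. A minimal sketch of such a query; the table and column names here are hypothetical and only illustrate the pattern:

```sql
-- Hypothetical Sqoop free-form override query. The column aliases
-- (employee_id AS id, full_name AS name) are the pattern that causes
-- the mapping to fail on the Spark engine per OCON-19495.
-- $CONDITIONS is the placeholder Sqoop requires in free-form queries.
SELECT employee_id AS id, full_name AS name
FROM employees
WHERE $CONDITIONS
```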
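The BDM-24961 entry names the Hive property hive.server2.enable.doAs as the condition under which the failure occurs. If your cluster's security policy permits user impersonation, the property is set in hive-site.xml; a sketch of the standard Hive property entry (whether enabling it is appropriate depends on your cluster's security configuration):

```xml
<!-- hive-site.xml: enable HiveServer2 user impersonation so that queries
     run as the connecting user rather than the hiveserver2 service user.
     BDM-24961 occurs when this property is set to false. -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
```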