Common Content for Data Engineering
- Common Content for Data Engineering 10.2.2 Service Pack 1
- All Products
Bug | Description
---|---
BDM-24961
| A mapping that accesses Hive sources and targets and which stages temporary results in the staging directory fails with an error like:
Examples of mappings that stage temporary results include mappings with a SQL override query on a Hive source, mappings that contain an Update Strategy transformation, and mappings that create a transactional table.
The mapping fails when you run it on the Blaze engine, or on the Spark engine on HDP 3.1. When you run it on the Spark engine on HDP 2.6, the mapping succeeds.
This issue occurs because the property hive.server2.enable.doAs is set to FALSE, which means the cluster is not configured to use user impersonation.
Workaround:
If the cluster uses Sentry or Ranger authorization, verify that the Blaze working directory is managed by Sentry or Ranger, and that the impersonation user and the Hive user have full permissions on the Blaze working directory.
If the cluster does not use Sentry or Ranger authorization, edit hive-site.xml to set the property hive.server2.enable.doAs to TRUE.
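A minimal hive-site.xml fragment for this workaround (the property is hive.server2.enable.doAs; other settings in the file are unchanged, and the affected Hive services typically need a restart for the change to take effect):

```xml
<!-- hive-site.xml: enable user impersonation so that queries run as the
     connecting user rather than as the HiveServer2 service user -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
```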
|
BDM-24919
| The Blaze engine does not read compressed source data from a Hive table when the following conditions are true:
Workaround: Run mappings with an external Hive table source on the Spark engine.
|
BDM-24789
| If a mapping on the Blaze engine reads data from a Hive table and you specify database names in both the Data Access Connection String and the runtime properties, a SQL override uses the database from the Hive connection instead of the database from the data object.
|
BDM-24787
| If a mapping on the Spark engine reads data from a Hive table and you specify the database name in the data object, a SQL override uses the database in the Hive connection instead of the database in the data object.
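To illustrate the behavior described in BDM-24787 and BDM-24789: the SQL override resolves unqualified table names against the Hive connection's database, not the data object's. A hypothetical mitigation, not stated in this note, is to qualify each table in the override with the intended database so the query does not depend on how the default database is resolved (table, column, and database names below are illustrative):

```sql
-- Hypothetical SQL override: qualify the table with the database you intend,
-- so the query does not rely on the database resolved from the Hive connection.
SELECT c.customer_id, c.customer_name   -- example columns, not from the source
FROM sales_db.customers c               -- sales_db stands in for the data object's database
WHERE c.active = 1;
```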
|
OCON-19697
| When you use a Hive connection to create a physical data object with a schema name that is different from the one specified in the connection string, you cannot reliably use a custom query to read data from the Hive source, because the mapping results might be inconsistent.
|
OCON-19687
| If you configure the -DINFA_HADOOP_DIST_DIR property to a value other than the default value in the developerCore.ini file, importing a complex file fails on the Developer tool machine.
Workaround: Remove the value configured for the -DINFA_HADOOP_DIST_DIR property and reimport the object.
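As a sketch of the workaround: the override in developerCore.ini is a single JVM-argument line like the one below (the path is illustrative, and Eclipse-style .ini files do not support comments, so the explanation stays here). Deleting this line restores the default distribution directory; then reimport the object.

```
-DINFA_HADOOP_DIST_DIR=C:\custom\hadoop\distribution
```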
|
OCON-19627
| A mapping can fail on the Spark engine with Amazon EMR version 5.2.0 when the following conditions are true:
|
OCON-19518
| A mapping can fail on the Spark engine with MapR version 6.1.0 when the following conditions are true:
|
OCON-19467
| When you select a Hive table from the Select a resource dialog box, tables from other databases are also selected.
Workaround: In the Connection Explorer, collapse the other databases and expand only the database that you want to select from.
|
OCON-19466
| You cannot use the Show Default Schema Only option in the Connection Explorer to show tables that use the default schema and to add tables from non-default schemas.
|
OCON-19451
| When you run a Hive mapping to read data from a custom data object and the source has two Hive tables with the same name but in different databases, the mapping fails.
|
OCON-19312
| When you run a Sqoop pass-through mapping on the Hortonworks HDP 3.1 distribution to write to a Hive target, the mapping is not optimized in the following scenarios:
|
OCON-16421
| When you run a mapping in the native environment to write data to a Hive target, the mapping runs successfully. However, the session log displays the following error:
|