Table of Contents

  1. Abstract
  2. 10.2.2 HotFix 1 Installation and Upgrade
  3. 10.2.2 HotFix 1 Fixed Limitations
  4. 10.2.2 HotFix 1 Known Limitations
  5. 10.2.2 Service Pack 1 Fixed Limitations
  6. 10.2.2 Service Pack 1 Known Limitations
  7. 10.2.2 Fixed Limitations and Closed Enhancements
  8. 10.2.2 Known Limitations
  9. Emergency Bug Fixes Merged into 10.2.2 HotFix 1
  10. Informatica Global Customer Support

Big Data Release Notes

Big Data Management Known Limitations

The following entries describe known limitations. Each entry lists the bug number, followed by a description:
BDM-24961
A mapping that accesses Hive sources and targets, and that stages temporary results in the staging directory, fails with an error similar to the following:
SEVERE: [GRIDDTM_1016] The Integration Service failed to execute grid mapping with following error [An internal exception occurred with message: java.lang.RuntimeException: Failure to execute Query CREATE EXTERNAL TABLE
Examples of mappings that stage temporary results are mappings with a SQL override query to a Hive source, mappings that contain an Update Strategy Transformation, or mappings that create a transactional table.
The mapping fails when you run it on the Blaze engine, or on the Spark engine on HDP 3.1. When you run it on the Spark engine on HDP 2.6, it succeeds.
This issue occurs because the property hive.server2.enable.doAs is set to FALSE, so the cluster is not configured to use user impersonation.
Workaround:
  • If the cluster uses Sentry or Ranger authorization, verify that the Blaze working directory is managed by Sentry or Ranger, and verify that the impersonation user and the Hive user have full permissions on the Blaze working directory.
  • If the cluster does not use Sentry or Ranger authorization, edit hive-site.xml to set the property hive.server2.enable.doAs to TRUE (a hive-site.xml sketch follows this entry).
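For reference, a minimal hive-site.xml fragment that turns on user impersonation might look like the following sketch. Confirm the change with your Hadoop administrator first, because many distributions manage this property through their cluster manager rather than through direct edits to the file:

  <property>
    <name>hive.server2.enable.doAs</name>
    <value>true</value>
    <!-- Runs HiveServer2 operations as the connecting user instead of the service user. -->
  </property>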
BDM-24919
The Blaze engine cannot read compressed source data from a Hive table when the following conditions are true:
  • The table was created with a compression format.
  • The table data was not loaded through an Informatica mapping.
Workaround: Run mappings with an external Hive table source on the Spark engine.
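As an illustration, a table like the following hypothetical example, created and loaded directly in Hive rather than through an Informatica mapping, would trigger this limitation on the Blaze engine (the table and column names are invented):

  -- Compressed external table created outside Informatica.
  CREATE EXTERNAL TABLE sales_ext (id INT, amount DOUBLE)
  STORED AS ORC
  TBLPROPERTIES ('orc.compress'='SNAPPY');

  -- Data loaded directly in Hive, not through an Informatica mapping.
  INSERT INTO sales_ext SELECT id, amount FROM sales_raw;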
BDM-24789
If a mapping on the Blaze engine reads data from a Hive table, and you specify database names in both the Data Access Connection String and the runtime properties, a SQL override uses the database from the Hive connection instead of the database specified in the data object.
BDM-24787
If a mapping on the Spark engine reads data from a Hive table, and you specify the database name in the data object, a SQL override uses the database from the Hive connection instead of the database specified in the data object.
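For both BDM-24789 and BDM-24787, one way to keep the override unambiguous, whichever database the connection resolves to, is to qualify the table name with its database inside the override query itself. This is a hedged suggestion rather than a documented workaround, and the names below are invented:

  -- Fully qualified table reference in a SQL override.
  SELECT order_id, amount
  FROM sales_db.orders
  WHERE order_date >= '2019-01-01';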
OCON-19697
When you use a Hive connection to create a physical data object with a schema name that differs from the schema specified in the connection string, do not use a custom query to read data from the Hive source, because the mapping results might be inconsistent.
OCON-19687
If you configure the -DINFA_HADOOP_DIST_DIR property to a value other than the default value in the developerCore.ini file, importing a complex file fails on the Developer tool machine.
Workaround: Remove the value configured for the -DINFA_HADOOP_DIST_DIR property and reimport the object.
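For context, the property appears in developerCore.ini as a JVM argument on its own line, similar to the following sketch. The path shown is a hypothetical example; the actual default depends on your installation directory and Hadoop distribution package:

  -DINFA_HADOOP_DIST_DIR=hadoop\hortonworks_3.1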
OCON-19627
A mapping can fail on the Spark engine with Amazon EMR version 5.2.0 when the following conditions are true:
  • The mapping includes a transformation that reads one or more reference tables.
  • A column name in a reference table uses mixed-case characters.
OCON-19518
A mapping can fail on the Spark engine with MapR version 6.1.0 when the following conditions are true:
  • The mapping includes a transformation that reads one or more reference tables.
  • A column name in a reference table uses mixed-case characters.
OCON-19467
When you select a Hive table from the Select a resource dialog box, tables from other databases are also selected.
Workaround: In the Connection Explorer, collapse the other databases and expand only the database that contains the table you want to select.
OCON-19466
You cannot use the Show Default Schema Only option in the Connection Explorer to show tables that use the default schema and to add tables from non-default schemas.
OCON-19451
When you run a Hive mapping that reads data from a custom data object, and the source has two Hive tables with the same name in different databases, the mapping fails.
OCON-19312
When you run a Sqoop pass-through mapping on the Hortonworks HDP 3.1 distribution to write to a Hive target, the mapping is not optimized in the following scenarios:
  • You enabled the Create or replace table at runtime target schema strategy but did not specify a DDL query to create or replace the Hive target table (a sketch of such a query follows this entry).
  • You enabled the Retain Existing Target Schema option for a Hive target.
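A DDL query for the first scenario is a Hive CREATE TABLE statement that you supply for the target. The following is a hypothetical sketch with invented names; see the Informatica documentation for the exact placeholder syntax that the DDL query option supports:

  CREATE TABLE IF NOT EXISTS target_db.orders (
    order_id INT,
    amount DOUBLE,
    order_date STRING
  )
  STORED AS ORC;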
OCON-16421
When you run a mapping in the native environment to write data to a Hive target, the mapping runs successfully. However, the session log displays the following error:
java.lang.ArrayIndexOutOfBoundsException
