Table of Contents


  1. Abstract
  2. 10.2.2 HotFix 1 Installation and Upgrade
  3. 10.2.2 HotFix 1 Fixed Limitations
  4. 10.2.2 HotFix 1 Known Limitations
  5. 10.2.2 Service Pack 1 Fixed Limitations
  6. 10.2.2 Service Pack 1 Known Limitations
  7. 10.2.2 Fixed Limitations and Closed Enhancements
  8. 10.2.2 Known Limitations
  9. Emergency Bug Fixes Merged into 10.2.2 HotFix 1
  10. Informatica Global Customer Support

Big Data Release Notes

Big Data Management Fixed Limitations (10.2.2 HF1)

Review the Release Notes of previous releases for information about previous fixed limitations.
The following entries describe fixed limitations:
OCON-19759: When you use the Hortonworks Connector for Teradata to run a dynamic mapping on the Spark engine to write data of the Date data type to a Teradata target, the data gets corrupted.
OCON-19601: When you run a Sqoop optimized mapping on the Spark engine to read data from a Sqoop source and write data to a Hive target, and the mapping contains an SQL query, the Data Integration Service ignores the SQL query and writes data to the Hive target.
OCON-19495: When you run a Sqoop mapping to read data and the custom SQL override query contains aliases for column names, the mapping fails on the Spark engine.
OCON-19312: When you run a Sqoop pass-through mapping on the Hortonworks HDP 3.1 distribution to write to a Hive target, the mapping is not optimized in the following scenarios:
  • You have enabled the Create or replace table at runtime target schema strategy and you do not specify a DDL query for create or replace table in the Hive target.
  • You have enabled the Retain Existing Target Schema option for a Hive target.
OCON-19140: When you run a Sqoop mapping on the Spark engine to write data of the Date data type to Teradata, the mapping fails.
BDM-25442: When you write data of the Optimized Row Columnar (ORC) file format to an external Hive target table in HDP 3.1, mappings sometimes fail on the Blaze engine.
BDM-24961: A mapping that accesses Hive sources and targets and stages temporary results in the staging directory fails with an error similar to the following:
SEVERE: [GRIDDTM_1016] The Integration Service failed to execute grid mapping with following error [An internal exception occurred with message: java.lang.RuntimeException: Failure to execute Query CREATE EXTERNAL TABLE
Examples of mappings that stage temporary results are mappings with an SQL override query on a Hive source, mappings that contain an Update Strategy transformation, and mappings that create a transactional table.
The mapping fails when you run it on the Blaze engine, or on the Spark engine on HDP 3.1. When you run it on the Spark engine on HDP 2.6, it succeeds.
This issue occurs because the property hive.server2.enable.doAs is set to FALSE, so the cluster is not configured to use user impersonation. A sample impersonation setting appears after this list.
BDM-24919: When you read a Hive table created with the compression codec in HDP 3.1, the mapping fails to run on the Blaze engine.
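
For reference, user impersonation on the cluster is controlled by the hive.server2.enable.doAs property in hive-site.xml. The following fragment is a minimal sketch that enables impersonation; the file location and any related Hadoop proxyuser settings are assumptions that depend on your distribution.

  <!-- hive-site.xml: run HiveServer2 operations as the connecting end user (impersonation) -->
  <property>
    <name>hive.server2.enable.doAs</name>
    <value>true</value>
  </property>
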
The following entry describes a closed enhancement request:
BDM-24191: When you enable Hive on Spark in the cluster for a Hive task, the cluster launches Spark jobs, and the Data Integration Service processes the HiveServer2 tasks through MapReduce or Tez. A sample Hive on Spark setting appears after this entry.
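
For context, Hive on Spark is typically enabled by setting the Hive execution engine to Spark. The following hive-site.xml fragment is a minimal sketch of that setting; the Spark libraries and any additional spark.* properties that the cluster requires are assumptions that depend on your distribution.

  <!-- hive-site.xml: use Spark as the Hive execution engine (Hive on Spark) -->
  <property>
    <name>hive.execution.engine</name>
    <value>spark</value>
  </property>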
