Table of Contents

  1. Abstract
  2. Informatica 10.2.2 Service Pack 1
  3. Support Changes
  4. Verify System Requirements
  5. Upgrade Path
  6. Installation
  7. Post-Installation Steps
  8. 10.2.2 Service Pack 1 Fixed Limitations
  9. 10.2.2 Service Pack 1 Known Limitations
  10. 10.2.2 Fixed Limitations and Closed Enhancements
  11. 10.2.2 Known Limitations
  12. Emergency Bug Fixes Merged into 10.2.2 Service Pack 1
  13. Informatica Global Customer Support

Big Data Release Notes

Third-Party Known Limitations

The following entries describe third-party known limitations, listed by bug number:

BDM-25070
The Spark engine fails to truncate external partitioned Hive tables on a Hortonworks HDP 3.1 cluster.
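
This matches open-source Spark's own behavior, which refuses to truncate external tables. A minimal PySpark sketch of that behavior, assuming a Hive-enabled Spark session; the table name and location are hypothetical:

    from pyspark.sql import SparkSession

    # Hypothetical sketch of the underlying Spark behavior; requires Hive support.
    spark = (
        SparkSession.builder
        .appName("truncate-external-sketch")
        .enableHiveSupport()
        .getOrCreate()
    )

    spark.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS sales_ext (amount DOUBLE)
        PARTITIONED BY (region STRING)
        LOCATION 'hdfs:///tmp/sales_ext'
    """)

    # Spark raises AnalysisException ("Operation not allowed: TRUNCATE TABLE
    # on external tables") rather than truncating the data.
    spark.sql("TRUNCATE TABLE sales_ext")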

BDM-24903
When you run a mapping on the Spark engine with a partitioned Hive target, the mapping writes date/timestamp data in UTC format, resulting in incorrect data. The issue occurs when "Truncate target table" and "Truncate target partition" are enabled in the advanced properties of the mapping target object.
For more information, see the related Apache Hive issues.
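
One way to confirm the symptom is to read a known timestamp back from the target and compare it against the value you loaded. A hedged PySpark sketch; the table and column names are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # spark.sql.session.timeZone (a standard Spark setting) controls how Spark
    # interprets and renders timestamps when reading the table back.
    print(spark.conf.get("spark.sql.session.timeZone"))

    # Hypothetical check: a value loaded as 2019-03-01 10:00:00 in UTC+05:30
    # reads back as 2019-03-01 04:30:00 if the write was shifted to UTC.
    spark.sql("SELECT order_ts FROM orders_part WHERE order_id = 1").show(truncate=False)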

BDM-24885
When you run a Sqoop, Amazon S3, or Amazon Redshift mapping on the Blaze engine and on a Hortonworks HDP 3.1 cluster that uses Ranger KMS authorization, the mapping fails with the following error:
    org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Attempt to obtain new INITIATE credentials failed! (null))
Hortonworks HDP ticket reference number: 00220585

BDM-24616
On Hortonworks HDP 3.1, a mapping that launches a HiveServer2 task fails when the mapping is configured to create, at run time, a partitioned Hive target table with a char data type column.
Hortonworks HDP ticket reference number: 00219433
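
For reference, the following PySpark sketch shows the shape of table definition this limitation concerns. The DDL and names are hypothetical, and creating the table through Spark here only illustrates the schema; the failing path is the HiveServer2 task:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # Hypothetical DDL: a partitioned Hive table with a char-typed column,
    # the combination this limitation concerns when created at run time.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS customers_part (name CHAR(20))
        PARTITIONED BY (region STRING)
        STORED AS ORC
    """)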

BDM-24440
A mapping that runs on a Cloudera CDH 6.x cluster cannot read data from a file that contains Unicode characters in the file name.
Workaround: Set the LANG environment variable in the Hadoop connection.
Cloudera CDH ticket reference number: 356638
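
In Informatica, set LANG through the Hadoop connection properties. For comparison, in a standalone Spark job the equivalent is to export a UTF-8 locale to the executors; a minimal sketch, assuming the en_US.UTF-8 locale and a hypothetical file name:

    from pyspark.sql import SparkSession

    # spark.executorEnv.<name> is a standard Spark property for exporting an
    # environment variable to executors; the locale value is an assumption.
    spark = (
        SparkSession.builder
        .appName("unicode-filename-read")
        .config("spark.executorEnv.LANG", "en_US.UTF-8")
        .getOrCreate()
    )

    # Hypothetical file whose name contains Unicode characters.
    df = spark.read.csv("hdfs:///data/übergrößen.csv", header=True)
    df.show()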

BDM-21486
If a mapping runs on an Azure HDInsight cluster, the mapping fails when you use a SQL override on a Date/Time column in a Hive source.
Apache Hive ticket reference number: HIVE-12200

BDM-20346
The Spark engine fails to run a built-in Hive SerDe of type CSV on a Hortonworks HDP 3.1 cluster.
Hortonworks HDP ticket reference number: 00220586
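
A built-in Hive SerDe of type CSV means a table declared with Hive's stock OpenCSVSerde class. The following hedged PySpark sketch shows such a table definition; the table name and SerDe properties are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # org.apache.hadoop.hive.serde2.OpenCSVSerde is Hive's built-in CSV SerDe;
    # the Spark engine cannot process tables like this on HDP 3.1.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS csv_serde_demo (id STRING, name STRING)
        ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
        WITH SERDEPROPERTIES ('separatorChar' = ',', 'quoteChar' = '"')
        STORED AS TEXTFILE
    """)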

IDL-12225
You cannot preview a JSON file if the data lake is on Hortonworks HDP 3.1.0.

OCON-19673
In a Cloudera CDH 6.1 environment, you cannot import Parquet files into the Developer tool because CDH 6.1 represents arrays in Parquet files as bag data types.
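
To check whether a Parquet file is affected, you can inspect its schema. A minimal PySpark sketch with a hypothetical path; Spark prints the logical array type, while the physical layout written by Hive nests array elements in a repeated group, the "bag" representation that the import cannot map:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical file path. Columns that appear here as array<...> are the
    # ones stored physically in the Hive "bag" layout on CDH 6.1.
    spark.read.parquet("hdfs:///data/example.parquet").printSchema()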

OCON-19610
When you run a Hive mapping to write data to a Hive target in Avro file format on the Hortonworks HDP 3.1 distribution, the mapping fails with the following error:
    Storage schema reading not supported

OCON-19506
When you run a mapping on the Spark engine to write data to a Google Cloud Storage flat file target and the data size is greater than 750 MB, multiple Spark jobs fail because of transient errors in Google Cloud Storage. However, the mapping completes successfully and reports the following error:
    java.io.IOException: java.io.IOException: Write end dead

OCON-19488
When you use the Sqoop Hortonworks Connector for Teradata to run Teradata mappings on the Blaze or Spark engine with a Hortonworks HDP 3.1 cluster, the mapping fails with the following error:
    NoClassDefFoundError: com/cloudera/sqoop/manager/ManagerFactory
Workaround: To run the mapping on the Spark engine, download version 1.6.0 of the TDCH driver from Cloudera: https://www.cloudera.com/downloads/connectors/sqoop/teradata/1-6c5.html
There is no workaround for the Blaze engine.
