Table of Contents

  1. Abstract
  2. 10.2.2 HotFix 1 Installation and Upgrade
  3. 10.2.2 HotFix 1 Fixed Limitations
  4. 10.2.2 HotFix 1 Known Limitations
  5. 10.2.2 Service Pack 1 Fixed Limitations
  6. 10.2.2 Service Pack 1 Known Limitations
  7. 10.2.2 Fixed Limitations and Closed Enhancements
  8. 10.2.2 Known Limitations
  9. Emergency Bug Fixes Merged into 10.2.2 HotFix 1
  10. Informatica Global Customer Support

Big Data Release Notes

Big Data Management Known Limitations

The following entries describe known limitations, listed by bug number:
BDM-29679
When the Spark engine runs a mapping, property values set in the cluster configuration override values set using Environment SQL in the Hadoop connection.
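For example, suppose the Hadoop connection sets a property through Environment SQL (the property name and value here are illustrative only):
  SET hive.exec.compress.output=true;
If the cluster configuration also sets hive.exec.compress.output=false, the Spark engine uses the cluster configuration value (false) when it runs the mapping.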
BDM-26560
A mapping fails because the Model Repository Service cannot communicate with the Data Integration Service. This happens when the following conditions are true:
  • The Model Repository Service is configured with an SSL-enabled database.
  • The JDBC connection string and the secured JDBC parameter are configured separately on the Model Repository Service.
Additionally, the Data Integration Service does not restart successfully if it is enabled for big data recovery.
Workaround: Configure both the JDBC connection and the secured JDBC parameter in the JDBC connection string. For example:
jdbc:informatica:db2://hostname.company.com:60012;DatabaseName=DB1a2b3c;EncryptionMethod=SSL;HostNameInCertificate=hostname.company.com;ValidateServerCertificate=true;TrustStore=/home/toolprod/INFA_Automation/newssldb/DB2_SSL_CERT;TrustStorePassword=informatica
By default, the Model Repository Service configures the JDBC connection and the secured JDBC parameter separately.
BDM-26248
If you specify a compression codec in a custom query, the Blaze engine fails to compress HDFS files using the codec on every Hadoop distribution except Hortonworks HDP 3.1.
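For example, a custom query might specify a codec before the SELECT statement (the codec and table name are illustrative only):
  SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
  SELECT * FROM orders;
On distributions other than Hortonworks HDP 3.1, the Blaze engine writes the HDFS files without applying the codec.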
BDM-26109
Spark mappings might fail during concurrent mapping runs on a CDH 6.1 cluster with the following Java run-time error:
"<file> is not a Parquet file. expected magic number at tail"
BDM-26206
A mapping with flat file sources and targets fails when it runs on the Spark engine on a WANdisco-enabled Hortonworks HDP 2.6.5 cluster.
Workaround (a shell sketch follows these steps):
  1. Copy the following .jar files from the /usr/hdp/<version>/hadoop/client directory on the cluster to the <Informatica home>/services/shared/spark/lib_spark_<version> directory on each node where the Informatica Data Integration Service is installed:
    • hadoop-common-x.x.x.jar
    • hadoop-auth-x.x.x.jar
    where "x.x.x" is the .jar file version that WANdisco uses. For example, find the file hadoop-common-2.7.3.2.6.5.0-292.jar in the /usr/hdp/2.6.5.0-292/hadoop/client directory on the cluster.
  2. Restart the Data Integration Service on each node.
  3. On the cluster node where the HDFS sources and targets reside, grant 733 permission on HDFS.
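The following is a minimal shell sketch of steps 1 and 3; the user name, host name, version strings, Informatica home, and HDFS path are assumptions for illustration:
  # Step 1: copy the WANdisco client .jar files from the cluster to the
  # Spark library directory (run on each Data Integration Service node).
  scp hadoopadmin@clusternode:/usr/hdp/2.6.5.0-292/hadoop/client/hadoop-common-2.7.3.2.6.5.0-292.jar \
      /opt/informatica/services/shared/spark/lib_spark_2.2.0/
  scp hadoopadmin@clusternode:/usr/hdp/2.6.5.0-292/hadoop/client/hadoop-auth-2.7.3.2.6.5.0-292.jar \
      /opt/informatica/services/shared/spark/lib_spark_2.2.0/
  # Step 3: grant 733 permission on the HDFS directory that holds the
  # flat file sources and targets (run on the cluster).
  hdfs dfs -chmod 733 /data/flatfile_staging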
BDM-26003
If a Hive target table is partitioned on a column of the Date data type, partitions in the target table cannot be truncated on a Hortonworks HDP 3.1 cluster.
Workaround: Add the following property to the end of the Data Access Connection String: ?hive.optimize.sort.dynamic.partition=true
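For example, a Data Access Connection String with the property appended might look like the following (the host, port, and database name are illustrative only):
  jdbc:hive2://hostname.company.com:10000/default?hive.optimize.sort.dynamic.partition=true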
BDM-25475
When you delete Erasure Coding (EC) content from the cluster staging directory and then run a mapping, the mapping might fail because the automatic installer copy sometimes fails to create the XOR codec for raw EC.
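To check which erasure coding policy applies to the staging directory before you rerun the mapping, you can run the following Hadoop command; the directory path is an assumption for illustration:
  hdfs ec -getPolicy -path /infa/cluster_staging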
