Table of Contents


  1. Abstract
  2. Installation and Upgrade
  3. 10.1.1 HotFix 1 Fixed Limitations and Closed Enhancements
  4. 10.1.1 HotFix 1 Known Limitations
  5. 10.1.1 Update 2 Fixed Limitations and Closed Enhancements
  6. 10.1.1 Update 2 Known Limitations
  7. 10.1.1 Update 1 Fixed Limitations and Closed Enhancements
  8. 10.1.1 Update 1 Known Limitations
  9. 10.1.1 Fixed Limitations and Closed Enhancements
  10. 10.1.1 Known Limitations
  11. Informatica Global Customer Support

Big Data Known Limitations (10.1.1 HotFix 1)

The following known limitations apply:
When you run TDCH mappings on the Hive engine to write time data to a Teradata target, the nanosecond part is truncated.
If the Model Repository Service fails over unsuccessfully while the Data Integration Service on a grid fails over, the Data Integration Service cannot load and run deployed applications.
Mappings using an HBase connection and the Spark run-time engine fail on a Cloudera 5.11 cluster with Kerberos authentication enabled.
Workaround: Log in to each node of the Cloudera cluster and use the kdestroy command to delete the Kerberos credentials cache. Then re-run the mapping.
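The per-node workaround can be scripted. The following is a minimal sketch, assuming password-less SSH to the cluster nodes; the host names are placeholders, not part of the original workaround. Pass 1 to preview the commands without executing them:

```shell
#!/bin/sh
# Sketch: run kdestroy on every node of the Cloudera cluster to delete the
# Kerberos credentials cache. The node names and password-less SSH access
# are assumptions, not part of the original workaround text.
NODES="node1.example.com node2.example.com"

clear_krb_caches() {
  # $1 = 1 prints the commands (dry run); anything else executes them.
  dry_run=$1
  for node in $NODES; do
    if [ "$dry_run" = "1" ]; then
      echo "ssh $node kdestroy"
    else
      # kdestroy removes the active credentials cache on that node.
      ssh "$node" kdestroy
    fi
  done
}

# Preview first, then run for real:
clear_krb_caches 1
```

After the cache is cleared on every node, re-run the mapping.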
Mappings fail when they use the Spark engine to run on a Hortonworks 2.5 SUSE cluster where Transparent Encryption is enabled on HDFS and the Hive warehouse directory specified in the Hadoop connection is in an encrypted zone.
Mappings that are configured to use Snappy compression fail on a Hortonworks or Cloudera SUSE Linux cluster where Kerberos authentication is enabled.
Workaround:
  1. Copy the Hadoop native libraries from the SUSE 11 cluster to the machine that hosts the Data Integration Service.
    Copy the libraries from the following location on the cluster:
    /usr/hdp/<Hadoop version>/hadoop/lib/native
    Copy the libraries to the following location:
    <Informatica installation directory>/services/shared/hadoop/<Hadoop distribution name>/lib/native
  2. Modify the file in the following directory so that it points to the location of the native Hadoop libraries on the cluster instead of the default <Hadoop distribution directory>/lib/native:
    <Informatica installation directory>/services/shared/hadoop/<Hadoop distribution name>_<version>/InfaConf/
    1. Find the value of the property.
    2. Replace it with the location of the native Hadoop libraries on the cluster. For example:
      /usr/hdp/<Hadoop version>/hadoop/lib/native
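The steps above can be sketched as a small shell helper. Note that the original text does not name the file under InfaConf or the property inside it, so the `native.lib.dir` key and the demo paths below are hypothetical stand-ins; substitute the actual file and property from your installation.

```shell
#!/bin/sh
# Step 1 (sketch): copy the native libraries from the cluster to the
# Data Integration Service machine, for example with scp (placeholder paths):
#   scp -r user@clusternode:/usr/hdp/<Hadoop version>/hadoop/lib/native \
#       "<Informatica installation directory>/services/shared/hadoop/<distribution>/lib/native"

# Step 2 (sketch): rewrite a key=value line in a properties file to point at
# the cluster's native library directory. The file name and property key are
# hypothetical; the original text does not specify them.
update_native_lib_path() {
  # $1: properties file, $2: property key, $3: new native library directory.
  props_file=$1; key=$2; new_dir=$3
  sed -i "s|^${key}=.*|${key}=${new_dir}|" "$props_file"
}

# Demonstration against a throwaway properties file:
demo=$(mktemp)
printf 'native.lib.dir=/old/lib/native\n' > "$demo"
update_native_lib_path "$demo" native.lib.dir /usr/hdp/current/hadoop/lib/native
cat "$demo"
```

The in-place `sed` edit keeps every other line of the file untouched, which matters because the InfaConf file holds other Hadoop settings.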

Updated January 17, 2019