Table of Contents


  1. Abstract
  2. Informatica Installation
  3. Informatica Upgrade
  4. Informatica Closed Enhancements
  5. Informatica Fixed Limitations
  6. Informatica Known Limitations
  7. Informatica Third-Party Limitations
  8. Informatica Global Customer Support

Big Data Fixed Limitations

Review the Release Notes of previous releases for information about previous fixed limitations.
The following limitations are fixed in this release:
Mapping that reads from a flat file source and writes to an HDFS target fails to run on a CDH 5 cluster.
Mapping with a Teradata lookup that uses a JDBC connection fails in the Hadoop environment.
Mapping with a UUID_UNPARSE function returns null values in the Hadoop environment.
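For context, an unparse-style UUID function converts a 16-byte binary UUID into its canonical 36-character string form rather than returning null. The following Python sketch illustrates that expected behavior; it is not Informatica code, and the helper name `uuid_unparse` is hypothetical:

```python
import uuid

def uuid_unparse(raw: bytes) -> str:
    """Convert a 16-byte binary UUID to its canonical 36-character
    string form (the behavior an UNPARSE-style function provides)."""
    if len(raw) != 16:
        raise ValueError("a UUID is exactly 16 bytes")
    return str(uuid.UUID(bytes=raw))

# Round-trip a freshly generated UUID through its binary form.
raw = uuid.uuid4().bytes
print(uuid_unparse(raw))
```

A fixed mapping should always produce a string of this shape for valid binary input, never a null value.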
The Monitoring tool displays the status of mappings in the Hadoop environment as running even though the mappings complete.
Files are not removed from the Hive scratch directory even after the mapping that is run in the Hadoop environment completes.
Mapping that contains sources with SQL overrides and Joiner transformations fails to run in the Hadoop environment.
When you run a mapping with a JDBC source and target in the Hive environment, the mapping fails on Hortonworks version 2.2 with the following error in the job logs:
2015-01-13 17:23:08,919 INFO [IPC Server handler 5 on 50241] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1421061665984_0216_m_000000_0: Error: Mapping execution failed with the following error: ODL_26128 Database error encountered in connection object [insplash_stghdlr_base] with the following error message: [The Data Integration Service could not find the run-time OSGi bundle for the adapter [com.informatica.adapter.infajdbc.InfaJDBCConnectInfo] for the operating system [LINUX]. Copy the adapter run-time OSGi bundle and verify that you have set the correct library name in the plugin.xml file
When you test an HDFS connection in the Developer tool, the test does not verify if the NameNode URI is correct.
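Until the connection test validates the NameNode URI, reachability can be checked out of band. The sketch below is a hypothetical helper, not part of any Informatica API; it only confirms that the host and port in a URI such as `hdfs://namenode.example.com:8020` accept a TCP connection:

```python
import socket
from urllib.parse import urlparse

def namenode_reachable(uri: str, timeout: float = 5.0) -> bool:
    """Return True if the host:port in an HDFS NameNode URI accepts
    a TCP connection. This checks reachability only, not that the
    service listening there is actually an HDFS NameNode."""
    parsed = urlparse(uri)
    host = parsed.hostname
    port = parsed.port or 8020  # common default NameNode RPC port
    if not host:
        raise ValueError(f"no host in URI: {uri!r}")
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A connection test that passes while this check fails suggests the NameNode URI in the connection definition was never actually contacted.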
When you perform a data preview for a mapping that has a Hive, HBase, HDFS, or complex file source and is configured for user impersonation, the Data Integration Service uses the SPN of the Data Integration Service user to perform the data preview.
Reference tables are not removed from the Hive warehouse directory when you run the mapping in the Hadoop environment.
If you upgrade the Data Integration Service, the Hadoop Kerberos keytab and Hadoop Kerberos service principal name properties do not appear in the Data Integration Service properties.
When a data domain discovery profile runs as a part of enterprise discovery on Cloudera CDH 5.2, the profile fails with a run-time error.
Mapping with a Data Masking transformation fails to run in the Hadoop environment.
When you run a mapping in the Hadoop environment, the Data Integration Service does not use the scratch directory for the user specified in the Hive connection. As a result, the mapping fails due to a user permission issue.

Updated October 25, 2018