Table of Contents


  1. Abstract
  2. Informatica Installation
  3. Informatica Upgrade
  4. Informatica Closed Enhancements
  5. Informatica Fixed Limitations
  6. Informatica Known Limitations
  7. Informatica Third-Party Limitations
  8. Informatica Global Customer Support

Big Data Known Limitations

The following known limitations apply:

A Blaze engine mapping hangs in the Developer tool, and the Monitoring tool displays no status for the mapping, because a synchronization error occurs between Blaze engine components.
Workaround: Run the Blaze engine mapping again.

Mappings that read from one of the following sources fail to run in the native environment when the Data Integration Service is configured to run jobs in separate remote processes:
  • Flat file or complex file in the Hadoop Distributed File System (HDFS)
  • Hive table
  • HBase table
Workaround: On the Compute view for the Data Integration Service, configure the INFA_HADOOP_DIST_DIR environment variable for each node with the compute role. Set the environment variable to the same value as the Hadoop Distribution Directory execution option of the Data Integration Service.
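The workaround above can be sketched at the shell level. This is a minimal illustration only: the installation path below is a hypothetical placeholder, and in a real deployment you set the variable on the Compute view in the Administrator tool rather than in a shell profile.

```shell
# Hypothetical sketch: make INFA_HADOOP_DIST_DIR on a compute node match the
# Hadoop Distribution Directory configured for the Data Integration Service.
# The path is an example placeholder; substitute your actual distribution directory.
export INFA_HADOOP_DIST_DIR="/opt/informatica/services/shared/hadoop/cloudera_cdh5u4"

# Confirm the variable is visible to processes spawned on this node.
echo "$INFA_HADOOP_DIST_DIR"
```

The value must be identical on every node with the compute role; a mismatch between nodes reproduces the failure described above.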
The DEF framework creates too many file descriptors for each Blaze grid segment and does not clear them until the mapping ends.

A mapping with a Hive target that contains more than 4,000 columns takes a long time to complete.

Data corruption occurs for a mapping in the Hadoop environment that contains an Oracle source with a new line character.

You cannot monitor jobs that use the Blaze engine if the Application Timeline Server uses Kerberos authentication.
Workaround: Do not use Kerberos authentication with the Application Timeline Server.

A mapping fails in the native environment when it contains a Hive binary data type on an IBM BigInsights or Pivotal cluster.

When you run the stopBlazeService command, some component logs might not be written to the aggregate log files on HDFS.
Workaround: View the Blaze engine logs in the directory configured for the Blaze engine logs.

When you use an ODBC connection to write time data to a Netezza database, the mapping fails. This issue occurs when you run the mapping on Cloudera 5u4.

If a Netezza column has the same precision and scale and contains 0 as a data value, the data is corrupted when the Data Integration Service writes it to the target. This issue occurs when you use a Netezza connection and run the mapping on Cloudera 5u4.

The output data differs between a mapping run in the native environment and one run in the Hadoop environment when an Aggregator transformation uses the MAX and MIN decimal functions.

You cannot validate a mapping with an Update Strategy transformation after you specify a primary key, or preview data for a set of primary keys, on a Hive table.

A mapping fails to validate when it contains Timestamp with Time Zone data type columns that are not connected to any transformation or target.

When a mapping that contains a Hive source or target runs in the Hadoop environment, the summary statistics for the mapping do not appear in the Monitoring tool.

The path of the resource file in a complex file object appears as a recursive path of directories that starts with the root directory and ends with a string.

A mapping with a Hive source and target that uses an ABS function within an IIF function fails in the Hadoop environment.

A mapping in the Hadoop environment fails when it contains a Hive source and a filter condition that uses the default table name prefixed to the column name.
Workaround: Edit the filter condition to remove the table name prefix from the column name, and run the mapping again.

A mapping in the Hadoop environment fails when the name of its Hadoop connection is 128 characters long.

Mapping validation errors occur when you validate a mapping that has complex data types in the Hive environment.
Workaround: Run the mapping in the native environment.

Updated October 25, 2018