Table of Contents

  1. Abstract
  2. Installation and Upgrade
  3. 10.1.1 HotFix 1 Fixed Limitations and Closed Enhancements
  4. 10.1.1 HotFix 1 Known Limitations
  5. 10.1.1 Update 2 Fixed Limitations and Closed Enhancements
  6. 10.1.1 Update 2 Known Limitations
  7. 10.1.1 Update 1 Fixed Limitations and Closed Enhancements
  8. 10.1.1 Update 1 Known Limitations
  9. 10.1.1 Fixed Limitations and Closed Enhancements
  10. 10.1.1 Known Limitations
  11. Informatica Global Customer Support

Big Data Fixed Limitations (10.1.1 Update 2)

Review the Release Notes of previous releases for information about limitations fixed in those releases.
The following fixed limitations are listed by bug number:

PLAT-13738: When you use Sqoop and join two tables that contain a column with the same name, the mapping fails. (457072) (A SQL sketch of this scenario follows this list.)

OCON-7632: When you run a Sqoop mapping with unconnected target ports on the Hive engine, the mapping fails with the following error message:
    Invalid column reference

OCON-6756: In a Sqoop mapping, if you add a Filter transformation to filter timestamp data from a Teradata source and export the data to a Teradata target, the mapping runs successfully on the Blaze engine. However, the Sqoop program does not write the timestamp data to the Teradata target.

OCON-1316: The Union transformation produces incorrect results for Sqoop mappings that you run on the Hortonworks distribution by using the Tez engine. (460889)

BDM-8816: When a mapping reads from a string port and writes to a decimal port in a Hive target, the mapping that runs on the Hive engine writes NULL values if the decimal port precision is greater than 28. (A DDL sketch of such a target follows this list.)

BDM-8791: A mapping that reads from a partitioned Hive source fails on the Blaze engine if it uses parameterized filter conditions on a string port.

BDM-8768: When mappings run on the Hive engine, the connection to the Hadoop cluster intermittently fails if the cluster uses Kerberos authentication.

BDM-8588: If the Hadoop cluster uses Kerberos authentication, mappings with HDFS sources and targets fail on the Spark engine because the mappings cannot access the default Hive warehouse directory.

BDM-7174: A mapping that reads from and writes to a Hive database hosted on HDFS fails with an error message about access to the default Hive warehouse directory.

BDM-6652: When the Blaze engine reads from a compressed Hive table in text format, the mapping fails if the TBLPROPERTIES clause is not set for the Hive table. (A DDL sketch showing where this clause appears follows this list.)

BDM-5328: When you add a filter condition for an Oracle source that has RAPartitioning enabled, the Data Integration Service generates an incorrect WHERE clause and ignores the filter condition. This issue occurs when you run the mapping on the Hive engine. (An illustrative query follows this list.)

BDM-5200: A mapping with a DATE_DIFF function returns invalid results when the mapping runs on the Hadoop cluster.

BDM-4960: Mappings that use Snappy compression fail on Hadoop clusters if the cluster uses Kerberos authentication.

BDM-4639: Data is corrupted when a mapping reads from or writes to a tab-delimited Hive table: the reader returns NULL values for columns, and the writer corrupts the delimiter. (A DDL sketch of such a table follows this list.)

BDM-4624: When Hive tables are delimited by certain Unicode characters, a mapping that runs on the Blaze engine reads data incorrectly.

BDM-4598: If the Data Integration Service becomes unavailable while running mappings with Hive sources and targets on the Blaze engine, the lock acquired on a Hive target table might not be released.

BDM-4564: Mappings that use the Blaze engine hang if machines in the cluster are behind a firewall, and a misleading error message is displayed.
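
The PLAT-13738 entry involves a Sqoop join between two tables that share a column name. The following query is a minimal sketch of that scenario; the table and column names are hypothetical, and the explicit aliases show the usual SQL way to disambiguate the duplicate "id" columns.

    -- Both hypothetical tables contain a column named "id".
    -- Sqoop mappings that joined such tables failed before this fix;
    -- aliasing the duplicate columns disambiguates them in the output.
    SELECT o.id AS order_id,
           c.id AS customer_id,
           c.name
    FROM orders o
    JOIN customers c ON o.customer_id = c.id;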
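
For BDM-8816, the trigger was a decimal port with precision greater than 28. The DDL below is a minimal sketch of such a Hive target; the table and column names are invented for illustration.

    -- Hypothetical Hive target with a decimal column whose precision
    -- exceeds 28. Before the fix, string data written to this column
    -- on the Hive engine arrived as NULL.
    CREATE TABLE amounts_target (
      txn_id STRING,
      amount DECIMAL(38,10)
    );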
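
BDM-6652 concerns the TBLPROPERTIES clause of a compressed Hive table in text format. The DDL below is a sketch of where that clause sits in a table definition; the table and the property key/value pair are placeholders, since the exact property a table needs depends on the table and its compression codec.

    -- Hypothetical text-format Hive table. The TBLPROPERTIES key/value
    -- shown is a placeholder, not a specific required property.
    CREATE TABLE sales_text (
      sale_id INT,
      region STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE
    TBLPROPERTIES ('property.name'='value');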
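
For BDM-5328, the symptom was a generated source query that dropped the filter condition. The query below is a hypothetical illustration of what a correct Oracle source query looks like once the filter is pushed into the WHERE clause; the table, columns, and literal are invented.

    -- Correctly generated query: the filter condition appears in the
    -- WHERE clause. Before the fix, this clause was omitted when
    -- RAPartitioning was enabled and the mapping ran on the Hive engine.
    SELECT order_id, region, amount
    FROM orders
    WHERE region = 'WEST';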
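
BDM-4639 affects tab-delimited Hive tables. The DDL below is a minimal sketch, with hypothetical names, of the kind of table definition involved.

    -- A hypothetical tab-delimited Hive table of the kind affected by
    -- BDM-4639: reads returned NULL columns and writes corrupted the
    -- tab delimiter before the fix.
    CREATE TABLE events_tsv (
      event_id INT,
      label STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE;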


Updated January 17, 2019