Table of Contents

  1. Abstract
  2. Installation and Upgrade
  3. 10.1.1 HotFix 1 Fixed Limitations and Closed Enhancements
  4. 10.1.1 HotFix 1 Known Limitations
  5. 10.1.1 Update 2 Fixed Limitations and Closed Enhancements
  6. 10.1.1 Update 2 Known Limitations
  7. 10.1.1 Update 1 Fixed Limitations and Closed Enhancements
  8. 10.1.1 Update 1 Known Limitations
  9. 10.1.1 Fixed Limitations and Closed Enhancements
  10. 10.1.1 Known Limitations
  11. Informatica Global Customer Support

Big Data Third-Party Known Limitations (10.1.1 Update 2)


The following entries describe third-party known limitations, listed by bug number:
OCON-7974
  Sqoop mappings fail when a column name contains spaces.
  Apache ticket reference number: SQOOP-2737
OCON-7505
  Sqoop mappings that read byte or varbyte data from a Teradata source and write it to a Teradata target fail on the Blaze engine. This issue occurs if you use Cloudera Connector Powered by Teradata.
  Cloudera ticket reference number: 124305
OCON-7504
  When you use Sqoop to read time data from a Teradata source and write it to a Teradata target, only milliseconds are written to the target. This issue occurs if you run the Teradata Parallel Transporter mapping on a Cloudera cluster and on the Blaze engine.
  Cloudera ticket reference number: 124302
OCON-7503
  When you use Sqoop to read time data from a Teradata source and write it to a Teradata target, the fractional seconds are corrupted. This issue occurs if you use Cloudera Connector Powered by Teradata or Hortonworks Connector for Teradata, and you run the mapping on the Blaze engine.
  Cloudera ticket reference number: 124306
BDM-5431
  When the Blaze engine runs mappings as the impersonation user on a MapR cluster, the following error message appears:
  [org.apache.hadoop.security.AccessControlException: User impUser3(user id 2002) does not have access
  Workaround: Disable user impersonation. In the Hadoop connection properties, in the Common Properties area, set the impersonation user to the user ID of the Data Integration Service user.
  MapR case number 00045006.
BDM-5022
  When you run a mapping with the Spark engine on a MapR cluster, the execution engine ignores the staging directory that you configured in the "SPARK HDFS Staging Directory" property of the Hadoop connection and uses the default MapR staging directory instead.
  By default, the engine uses the directory /user/<Data Integration Service username>/. If you configured the impersonation user, the engine uses the directory /user/<impersonation user>. If you configured the operating system profile (OSP) user, the engine uses the directory /user/<OSP user>.
  MapR case number 00045736.
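The staging-directory fallback described for BDM-5022 can be sketched as follows. This is an illustrative model only: the function name and user names are hypothetical, and the precedence order between the OSP user and the impersonation user is an assumption, not a documented detail of the execution engine.

```python
# Illustrative sketch of the BDM-5022 fallback behavior on MapR.
# The engine ignores the configured "SPARK HDFS Staging Directory" and
# derives a /user/<name> path from whichever user is configured.
# Precedence shown here (OSP user, then impersonation user, then the
# Data Integration Service user) is an assumption for illustration.
def mapr_staging_directory(dis_user, impersonation_user=None, osp_user=None):
    """Return the staging directory the Spark engine falls back to on MapR."""
    if osp_user:
        # An operating system profile (OSP) user is configured.
        return f"/user/{osp_user}"
    if impersonation_user:
        # An impersonation user is configured in the Hadoop connection.
        return f"/user/{impersonation_user}"
    # Default: the Data Integration Service user.
    return f"/user/{dis_user}"

print(mapr_staging_directory("disuser"))                                  # /user/disuser
print(mapr_staging_directory("disuser", impersonation_user="impUser3"))   # /user/impUser3
print(mapr_staging_directory("disuser", osp_user="ospUser1"))             # /user/ospUser1
```

The sketch only mirrors the three cases listed above; it does not model any other connection property.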
BDM-1363
  Big Data Management on the MapR Hadoop distribution does not support the use of operating system profiles when you run mappings on the Blaze engine.
  MapR case number 00045006.


Updated January 17, 2019