Table of Contents

  1. Abstract
  2. Installation and Upgrade
  3. 10.2 HF1 Fixed Limitations and Closed Enhancements
  4. 10.2 HF1 Known Limitations
  5. 10.2 Fixed Limitations and Closed Enhancements
  6. 10.2 Known Limitations
  7. Emergency Bug Fixes Merged into 10.2
  8. Informatica Global Customer Support

Third-Party Known Limitations (10.2)

The following entries describe third-party known limitations. Each entry lists the bug number followed by the description:
PLAT-14849
On AIX operating systems, when you enable secure communication to an SAP HANA database with the SSL protocol, mappings terminate unexpectedly.
SAP ticket reference number: 0001101086
(410495)
PLAT-14827
The mapping fails in the Hive environment if the user name or password for a target IBM DB2 table is longer than eight characters. The following error appears in the Hadoop cluster logs:
Caused by: java.io.IOException: Mapping execution failed with the following error: WRT_8001 Error connecting to database... WRT_8001 [Session Write_EMP_OUT5_MAPPING_3285816766724683 Username test_it2 DB Error -1 [IBM][CLI Driver] SQL30082N Security processing failed with reason "24" ("USERNAME AND/OR PASSWORD INVALID"). SQLSTATE=08001
Workaround: Verify that the IBM DB2 database user name and password are not longer than eight characters. (410437)
PLAT-14796
When a MySQL table name contains special characters, the Developer tool does not import all the columns. This issue occurs when you use the DataDirect ODBC and JDBC drivers to import the metadata. (395943)
DataDirect ticket reference number: 00322369
PLAT-14658
When you preview data from the SAP HANA database for a decimal data type with a precision of 38 digits, the data preview runs continuously. When you run the mapping, the mapping run fails with an error.
SAP ticket reference number: 0000624569 2015
(414220)
PLAT-14653
When you import Timestamp with Time Zone metadata, the scale appears as 0 instead of 6 for the data type.
DataDirect reference number: 00310850
(413119)
OCON-9943
If you configure Sqoop to import time data from a Netezza database, the mapping fails.
Apache ticket reference number: SQOOP-2978
OCON-9881
If you export data to Oracle through Sqoop and the columns contain mixed case characters, the mapping fails. This issue occurs when you run the mapping on a Cloudera cluster.
OCON-8786
If you configure Sqoop to export data of the Clob or DBClob data type to IBM DB2 z/OS targets, the mapping fails.
OCON-8561
If you configure Sqoop to export data of the Money data type to Microsoft SQL Server targets, the mapping fails.
OCON-8387
If you configure TDCH and Sqoop and run a mapping on the Blaze or Spark engine to export data of the Time data type, only milliseconds are written to the target. The nanosecond part is truncated.
Cloudera ticket reference number: 124306
OCON-8332
If you configure Sqoop to export data of the Clob or DBClob data type to IBM DB2 targets, the mapping fails.
OCON-7974
If you configure Sqoop and a column name contains spaces, the mapping fails.
Apache ticket reference number: SQOOP-2737
OCON-7687
If you export data through Sqoop and the columns contain mixed case characters, the mapping fails.
OCON-7669
If you configure Sqoop and OraOop, and export data to an Oracle target that contains mixed case characters in the table name, the mapping fails.
Workaround: Use the generic Oracle JDBC driver to export data.
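For reference, a minimal sketch of Sqoop arguments that select the generic Oracle JDBC driver instead of OraOop; the driver and connection-manager class names are the standard Sqoop and Oracle ones, so verify them against your driver version:
  --driver oracle.jdbc.OracleDriver --connection-manager org.apache.sqoop.manager.GenericJdbcManager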
OCON-7620
If you import data from an IBM DB2 source through Sqoop and the table name contains mixed case characters, the mapping fails.
Sqoop JIRA issue number: SQOOP-3211
OCON-7505
Sqoop mappings that read byte or varbyte data from a Teradata source and write it to a Teradata target fail on the Blaze engine. This issue occurs if you use Cloudera Connector Powered by Teradata.
Cloudera ticket reference number: 124305
OCON-7504
When you use Sqoop to read data of the Timestamp data type from a Teradata source and write it to a Teradata target, only milliseconds are written to the target. This issue occurs if you run the Teradata Parallel Transporter mapping on a Cloudera cluster and on the Blaze engine.
Cloudera ticket reference number: 124302
OCON-7503
When you use Sqoop to read time data from a Teradata source and write it to a Teradata target, the fractional seconds get corrupted. This issue occurs if you use Cloudera Connector Powered by Teradata or Hortonworks Connector for Teradata, and run the mapping on the Blaze engine.
Cloudera ticket reference number: 124306
OCON-7459
When you export data to an IBM DB2 target through Sqoop, the mapping fails if all of the following conditions are true:
  • You create or replace the IBM DB2 target table at run time.
  • The IBM DB2 target table name or column names contain mixed case characters.
  • You run the mapping on a Cloudera 5u8 cluster.
Apache ticket reference number: SQOOP-3212
OCON-7431
When you read time data from a Teradata source and write it to a Teradata target, the fractional seconds get corrupted. This issue occurs if you run the Teradata Parallel Transporter mapping on a Hortonworks cluster and on the Blaze engine.
Cloudera ticket reference number: 124302
OCON-7365
Sqoop mappings fail on MapR 5.2 clusters.
Workaround: Add the following property in the mapred-site.xml file on all nodes of the cluster, and restart the Hadoop services and cluster:
<property>
  <name>mapreduce.jobhistory.address</name>
  <value><Host_Name>:10020</value>
</property>
OCON-7219
When you run a Sqoop mapping on the Blaze engine to export Teradata float data, the data is truncated after the decimal point.
Cloudera support ticket number: 113716
OCON-7214
Sqoop mappings fail on the Blaze engine if you use a custom query with the Order By clause to import data.
Sqoop JIRA issue number: SQOOP-3064
OCON-7213
The Sqoop program does not honor the --num-mappers and -m arguments when you export data and run the mapping on the Blaze or Spark engine.
Sqoop JIRA issue number: SQOOP-2837
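For example, export arguments such as the following are not honored when the mapping runs on the Blaze or Spark engine:
  --num-mappers 4
  -m 4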
OCON-7211
When you run a Sqoop mapping to import data from or export data to Microsoft SQL Server databases that are hosted on Azure, the mapping fails.
Sqoop JIRA issue number: SQOOP-2349
OCON-417
Sqoop does not read the OraOop arguments that you configure in the oraoop-site.xml file.
Workaround: Specify the OraOop arguments as part of the Sqoop arguments in the mapping. (455750)
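For example, instead of setting an OraOop property in the oraoop-site.xml file, pass it as a -D option in the Sqoop arguments of the mapping. A sketch that assumes the documented OraOop property oraoop.import.consistent.read:
  -Doraoop.import.consistent.read=true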
OCON-2847
Loading a Microsoft SQL Server resource fails when TLS encryption is enabled for the source database and the Metadata Manager repository is a Microsoft SQL Server database with TLS encryption enabled. (452471)
DataDirect case number: 00343832
OCON-1100
When you export data to an IBM DB2 z/OS database through Sqoop and do not configure the batch argument, the mapping fails.
Workaround: Configure the batch argument in the mapping and run the mapping again. (459671)
Apache ticket reference number: SQOOP-2980
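For example, add the batch argument to the Sqoop arguments of the mapping:
  --batch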
OCON-1080
When you use Sqoop for a data object and update its properties in the associated Read or Write transformation, the mapping terminates with an IVector error message.
Workaround: Create a new data object and mapping. (453097)
IDE-1677
When you run a data domain discovery profile with multiple data domains on the MapR 4.0.2 YARN or MapR 4.0.2 classic Hadoop distribution, the profile run fails. (448529)
BDM-9585
Mappings fail on the Spark engine when you configure an SQL override to access a Hive view.
Apache Spark ticket reference number: SPARK-21154
BDM-8932
Complex file data objects cannot parse a multi-line JSON file on the Spark engine.
Apache Spark ticket reference number: SPARK-18352
BDM-5478
You cannot use impersonation users or operating system profiles to run mappings on an Azure HDInsight cluster that uses ADLS or WASB. HDInsight clusters that use ADLS or WASB are single-user clusters.
BDM-5431
When the Blaze engine runs mappings as the impersonation user on a MapR cluster, the following error message appears:
[org.apache.hadoop.security.AccessControlException: User impUser3(user id 2002) does not have access
Workaround: Disable user impersonation. In the Hadoop connection properties, in the Common Properties area, set the impersonation user to the user ID of the Data Integration Service user.
MapR case number: 00045006
BDM-4682
A mapping fails when it contains an Update Strategy transformation and writes to a partitioned Hive target on an Amazon EMR cluster.
Apache ticket reference number: HIVE-17221
BDM-1363
Big Data Management on the MapR Hadoop distribution does not support the use of operating system profiles when you run mappings using the Blaze engine.
MapR case number: 00045006
BDM-10455
Inserts into a bucketed table can sometimes fail when you use Hive on Tez as the execution engine. The issue is more likely if the table is a Hive ACID table and a delete operation is performed before the inserts.
Apache ticket reference number: TEZ-3814
BDM-10410
For IBM BigInsights and Hortonworks clusters, inserts into a Hive target table on Amazon S3 can fail if the table is enabled for transactions.
BDM-10100
Impersonation is not honored and mappings fail on a MapR cluster when the following conditions are true:
  • The Data Integration Service uses operating system profiles.
  • The hive.metastore.sasl.enabled property is set to true in the hive-site.xml configuration set.
MapR case number: 00052930
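For reference, the triggering property as it would appear in the hive-site.xml configuration set; the surrounding configuration is omitted:
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>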


Updated June 29, 2020