Table of Contents

  1. Abstract
  2. Support Changes
  3. Installation and Upgrade
  4. 10.4.1 Fixed Limitations and Closed Enhancements
  5. 10.4.1 Known Limitations
  6. Cumulative Known Limitations
  7. Emergency Bug Fixes Merged into 10.4.1
  8. Informatica Global Customer Support

Release Notes

Third-Party Known Limitations (Cumulative)

The following known limitations are carried forward from previous releases. Each entry lists the bug number, followed by the description and any third-party ticket reference:
IIS-4126
When you create a Databricks Delta table in a streaming mapping, the Databricks Delta target does not support casts for the float, boolean, date, smallint, and tinyint data types.
When you create a Databricks Delta table, use the following data types instead (see the sketch after this list):
  • double (for float data type)
  • integer (for boolean, smallint, and tinyint data types)
  • timestamp (for date data type)
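For example, a minimal PySpark sketch of creating the Delta target with the substituted data types; the table and column names are hypothetical:

  from pyspark.sql import SparkSession

  # Hypothetical target table: double replaces float, integer replaces
  # boolean, smallint, and tinyint, and timestamp replaces date.
  spark = SparkSession.builder.getOrCreate()
  spark.sql("""
      CREATE TABLE IF NOT EXISTS sales_target (
          amount     DOUBLE,     -- was FLOAT
          is_active  INTEGER,    -- was BOOLEAN
          region_id  INTEGER,    -- was SMALLINT
          status     INTEGER,    -- was TINYINT
          order_date TIMESTAMP   -- was DATE
      ) USING DELTA
  """)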
EIC-25554
Lineage of the Qlik Sense resource in Enterprise Data Catalog is incomplete when you use the global dimension of the drill-down type.
MITI ticket reference number: INFAEDC-1261
PLAT-14849
On AIX operating systems, when you enable secure communication to an SAP HANA database with the SSL protocol, mappings terminate unexpectedly. (410495)
SAP ticket reference number: 0001101086
PLAT-14796
When a MySQL table name contains special characters, the Developer tool does not import all the columns. This issue occurs when you use the DataDirect ODBC and JDBC drivers to import the metadata. (395943)
DataDirect ticket reference number: 00322369
PLAT-14658
When you preview data from the SAP HANA database for a decimal data type with a precision of 38 digits, the data preview runs continuously. When you run the mapping, the mapping run fails with an error. (414220)
SAP ticket reference number: 0000624569 2015
PLAT-14653
When you import metadata for the Timestamp with Time Zone data type, the scale appears as 0 instead of 6. (413119)
DataDirect ticket reference number: 00310850
OCON-9943
If you configure Sqoop to import time data from a Netezza database, the mapping fails.
Apache ticket reference number: SQOOP-2978
OCON-9881
If you export data to Oracle through Sqoop and the columns contain mixed case characters, the mapping fails. This issue occurs when you run the mapping on a Cloudera cluster.
OCON-9377
When you configure Sqoop and run a Teradata Parallel Transporter mapping on a Cloudera cluster to export data of the Byte or Varbyte data type to a Teradata target, the mapping fails on the Blaze engine.
OCON-9376
If you configure Sqoop to export data of the Blob or Clob data type to a Teradata target, TDCH mappings fail on the Spark engine.
OCON-8850
If you configure Sqoop to export data of the Timestamp data type from a Hive source to a Microsoft Azure SQL Data Warehouse target, the mapping fails.
OCON-8786
If you configure Sqoop to export data of the Clob or DBClob data type to IBM DB2 z/OS targets, the mapping fails.
OCON-8779
If you configure Sqoop to export data of the Real data type to IBM DB2 z/OS targets, the mapping fails.
OCON-8561
If you configure Sqoop to export data of the Money data type to Microsoft SQL Server targets, the mapping fails.
OCON-8387
If you configure TDCH and Sqoop and run a mapping on the Blaze or Spark engine to export data of the Time data type, only milliseconds are written to the target. The nanosecond part is truncated.
Cloudera ticket reference number: 124306
OCON-8332
If you configure Sqoop to export data of the Clob or DBClob data type to IBM DB2 targets, the mapping fails.
OCON-7974
If you configure Sqoop and a column name contains spaces, the mapping fails.
Apache ticket reference number: SQOOP-2737
OCON-7687
If you export data through Sqoop and the columns contain mixed case characters, the mapping fails.
OCON-7669
When you configure Sqoop and OraOop, and export data to an Oracle target that contains mixed case characters in the table name, the mapping fails.
Workaround: Use the generic Oracle JDBC driver to export data (see the sketch below).
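A hedged sketch of this workaround, invoking Sqoop from Python with the generic Oracle JDBC driver; the connection URL, user, table, and export directory are hypothetical:

  import subprocess

  # Hypothetical sketch: passing --driver makes Sqoop fall back to the
  # generic JDBC connector instead of the direct (OraOop) connector.
  subprocess.run([
      "sqoop", "export",
      "--connect", "jdbc:oracle:thin:@//dbhost:1521/ORCL",  # hypothetical
      "--driver", "oracle.jdbc.OracleDriver",
      "--username", "etl_user",
      "--table", "MixedCaseTarget",
      "--export-dir", "/user/etl/staging/mixed_case_target",
  ], check=True)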
OCON-7620
If you import data from an IBM DB2 source through Sqoop and the table name contains mixed case characters, the mapping fails.
Sqoop JIRA issue number: SQOOP-3211
OCON-7505
Sqoop mappings that read byte or varbyte data from a Teradata source and write it to a Teradata target fail on the Blaze engine. This issue occurs if you use Cloudera Connector Powered by Teradata.
Cloudera ticket reference number: 124305
OCON-7504
When you use Sqoop to read data of the Timestamp data type from a Teradata source and write it to a Teradata target, only milliseconds are written to the target. This issue occurs if you run the Teradata Parallel Transporter mapping on a Cloudera cluster and on the Blaze engine.
Cloudera ticket reference number: 124302
OCON-7503
When you use Sqoop to read time data from a Teradata source and write it to a Teradata target, the fractional seconds get corrupted. This issue occurs if you use Cloudera Connector Powered by Teradata or Hortonworks Connector for Teradata, and run the mapping on the Blaze engine.
Cloudera ticket reference number: 124306
OCON-7459
When you export data to an IBM DB2 target through Sqoop, the mapping fails if all of the following conditions are true:
  • You create or replace the IBM DB2 target table at run time.
  • The IBM DB2 target table name or column names contain mixed case characters.
  • You run the mapping on a Cloudera 5u8 cluster.
Apache ticket reference number: SQOOP-3212
OCON-7431
When you read time data from a Teradata source and write it to a Teradata target, the fractional seconds get corrupted. This issue occurs if you run the Teradata Parallel Transporter mapping on a Hortonworks cluster and on the Blaze engine.
Cloudera ticket reference number: 124302
OCON-7219
When you run a Sqoop mapping on the Blaze engine to export Teradata float data, the data is truncated after the decimal point.
Cloudera ticket reference number: 113716
OCON-7216
If a Sqoop source or target contains a column name with double quotes, the mapping fails on the Blaze engine. However, the Blaze Job Monitor incorrectly indicates that the mapping ran successfully and that rows were written into the target.
OCON-7214
Sqoop mappings fail on the Blaze engine if you use a custom query with the Order By clause to import data.
Sqoop JIRA issue number: SQOOP-3064
OCON-7213
The Sqoop program does not honor the --num-mappers and -m arguments when you export data and run the mapping on the Blaze or Spark engine.
Sqoop JIRA issue number: SQOOP-2837
OCON-7212
If there are unconnected ports in a target, Sqoop mappings fail on the Blaze engine. This issue occurs when you run the Sqoop mapping on any cluster other than a Cloudera cluster.
Workaround: Before you run the mapping, create a table in the target database with columns corresponding to the connected ports (see the sketch below).
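A minimal sketch of this workaround, assuming the target database is reachable through any Python DB-API 2.0 driver; the table and column names are hypothetical:

  # Hypothetical sketch: pre-create the target table with only the
  # columns that correspond to connected ports, so the Sqoop export
  # does not depend on a table created at run time.
  DDL = """
      CREATE TABLE orders_target (
          order_id   INTEGER,
          order_date TIMESTAMP,
          amount     DECIMAL(18, 2)
      )
  """

  def precreate_target(connection):
      cursor = connection.cursor()
      cursor.execute(DDL)
      connection.commit()
      cursor.close()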
OCON-7205
When you run a Sqoop mapping on the Blaze engine to export data of the Numeric data type from Netezza, the scale part of the data is truncated.
OCON-2847
Loading a Microsoft SQL Server resource fails when TLS encryption is enabled for the source database and the Metadata Manager repository is a Microsoft SQL Server database with TLS encryption enabled. (452471)
DataDirect ticket reference number: 00343832
OCON-19488
When you use the Sqoop Hortonworks Connector for Teradata to run Teradata mappings on the Blaze or Spark engine with the Hortonworks HDP 3.1 cluster, the mapping fails.
OCON-17245
When you run a Sqoop mapping on an Azure HDInsight 3.6 ADLS cluster, the mapping fails on the Blaze engine.
Microsoft ticket reference number: 118121026003203
OCON-14861
When you use NTLM authentication for a Microsoft SQL Server connection to connect to Microsoft SQL Server 2017 hosted on Linux, the connection fails with the following error:
Login failed. The login is from an untrusted domain and cannot be used with Integrated authentication.
DataDirect ticket reference number: 00448019
OCON-14398
When you use the Microsoft SQL Server connection to write large volumes of data to Microsoft Azure SQL Database in bulk mode, the session performance slows down.
DataDirect ticket reference number: 00421602
OCON-14296
When you use the Microsoft SQL Server connection and configure multiple partitions to write large volumes of data to Microsoft Azure SQL Database, the session performance slows down.
Microsoft ticket reference number: 118062518450002
IIS-3381
On a Hortonworks HDP 3.1 cluster, the Spark engine incorrectly converts the string data type to the binary data type.
Hortonworks ticket reference number: 270
IDE-1677
When you run a data domain discovery profile with multiple data domains on the MapR 4.0.2 YARN or MapR 4.0.2 classic Hadoop distributions, the profile run fails. (448529)
BDM-28598
When the Spark engine processes an input value of zero in a decimal port that is configured with equivalent precision and scale, the engine treats the value as data overflow and the return value is NULL on a Hortonworks HDP 3.1 cluster.
Cloudera ticket reference number: 635063
BDM-25513
When a mapping with Hive ACID source and target tables runs on the Spark engine, the Summary Statistics view does not show any throughput statistics for the mapping job.
Cloudera ticket reference number: 00225986
BDM-25490
BDM-25491
When a mapping that runs on a Hortonworks HDP 3.1 cluster uses an Update Strategy transformation and the Hive Warehouse Connector is enabled, the mapping writes incorrect data to rows marked with DD_INSERT.
Cloudera ticket reference number: 00228458
BDM-24885
When you run a Sqoop, Amazon S3, or Amazon Redshift mapping on the Blaze engine and on the Hortonworks HDP 3.1 cluster that uses Ranger KMS authorization, the mapping fails with the following error:
org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Attempt to obtain new INITIATE credentials failed! (null))
Hortonworks HDP ticket reference number: 00230890
BDM-23420
When you run a mapping on a MapR cluster, a Sorter transformation that contains datetime data in its default value changes the data.
MapR ticket reference number: 00072094
BDM-23104
The Spark engine cannot write data to a bucketed Hive target if the Hadoop distribution is MapR.
MapR ticket reference number: 00074338
BDM-21486
If a mapping runs on an Azure HDInsight cluster, the mapping fails when you use a SQL override on a Date/Time column in a Hive source.
Apache Hive ticket reference number: HIVE-12200
BDM-17020
When you run a mapping that uses a schema in an Avro file, the Spark engine adds a NULL data type to the primitive data types in the schema.
BDM-14422
A mapping fails with an error on the Spark engine when the Hive table contains duplicate columns.
Apache Spark ticket reference number: SPARK-23519
BDM-14410
A mapping fails because the Spark engine cannot read from an empty ORC Hive source.
Apache Spark ticket reference number: SPARK-19809
BDM-10570
The Spark job fails with out of memory errors when a mapping that converts relational data to hierarchical data contains more than three Aggregator and Joiner transformations.
Workaround: To convert relational data to hierarchical data with more than four levels, develop multiple mappings to stage the intermediate data. For example, develop a mapping that converts relational data to hierarchical data with up to three levels. Then use that hierarchical data in another mapping to generate hierarchical data with four levels.
Apache Spark ticket reference number: SPARK-22207
BDM-10455
Inserts into a bucketed table can sometimes fail when you use Hive on Tez as the execution engine. The issue is more probable if the table is a Hive ACID table and a delete operation is performed before the inserts.
Apache ticket reference number: TEZ-3814
BDM-4674
A mapping with an Update Strategy transformation fails at run time if the Hive target has more than one primary key.
Apache Hive ticket reference number: HIVE-19911
