Table of Contents

  1. Abstract
  2. Installation and Upgrade
  3. 10.2 HF1 Fixed Limitations and Closed Enhancements
  4. 10.2 HF1 Known Limitations
  5. 10.2 Fixed Limitations and Closed Enhancements
  6. 10.2 Known Limitations
  7. Emergency Bug Fixes Merged into 10.2
  8. Informatica Global Customer Support

Big Data Fixed Limitations (10.2)

Review the Release Notes of previous releases for information about previous fixed limitations.
The following entries describe fixed limitations, listed by bug number and description:
PLAT-17656
After being upgraded from 9.5.1 to 10.1.1x, a mapping fails with an error indicating an invalid symbol reference.
PLAT-17310
The Data Integration Service on a grid cannot load and run deployed applications when the Model Repository Service fails to fail over while the Data Integration Service fails over.
PLAT-17036
After being upgraded from 9.6.1 to 10.1.1, a mapping fails when high precision is enabled.
PLAT-16902
Applications deployed to a grid take a long time to load.
PLAT-16798
Mappings, applications, or workflows generate a high number of threads when deployed to the Data Integration Service, and the deployment process fails to complete.
PLAT-16432
The mapping execution process generates extraneous .data and .bin files in the temp directory of the Data Integration Service.
PLAT-16118
A mapping with two relational physical data object targets and a parameterized connection property uses the connection configured in the target object instead of the connection configured in the parameter.
PLAT-16114
After being upgraded from 9.6x to 10.1x, a mapping opens in an unsaved state, and the user cannot edit the connection that is associated with the target transformation.
PLAT-16074
Copying a port from a physical data object to the output ports of a customized data object causes mapping transformations to become invalid.
PLAT-14325
You cannot run a mapping in the native environment when the following conditions are true:
  • You select a native validation environment and a Hive or Blaze validation environment for the mapping.
  • The mapping contains a Match transformation.
PLAT-13734
The Developer tool allows you to change an Avro data type in a complex file object to one that Avro does not support. As a result, mapping errors occur at run time.
OCON-9414
If you run a Sqoop mapping on the Hive engine, the mapping parameters that you add in the Additional Sqoop Import Arguments field or the Additional Sqoop Export Arguments field do not take effect.
OCON-7557
If you configure Sqoop to import Oracle data of the Number data type, the Developer tool imports the data with the precision and scale as 0. After you import the data, you cannot edit the scale.
OCON-7073
When you run a Sqoop mapping on a Cloudera cluster that uses Kerberos authentication, you must manually configure mapreduce properties in the yarn-site.xml file on the Data Integration Service node and restart the Data Integration Service. To run the mapping on the Blaze engine, you must also restart the Grid Manager and the Blaze Job Monitor.
OCON-6745
When you use a JDBC connection in a mapping to connect to a Netezza source that contains the Time data type, the mapping fails to run on the Blaze engine.
OCON-605
When you use Sqoop to import data from an Aurora database by using the MariaDB JDBC driver, the mapping stops responding. (456704)
OCON-1216
When you use Sqoop and the first mapper task fails, the subsequent mapper tasks fail with the following error message: "File already exists" (456884)
IDE-2407
Column profile run fails when the following conditions are true:
  1. The profiling warehouse repository is on Microsoft SQL Server and you enable the Use DSN option to use the DSN configured in the Microsoft ODBC Administrator as the connect string.
  2. You create a column profile with data domain discovery and choose the sampling option as Random sample or Random sample (auto), or you create a column profile to perform only data domain discovery.
  3. You run the profile on the Blaze engine in the Hadoop run-time environment set up on an Azure HDInsight cluster.
BDM-960
Mappings with an HDFS connection fail with a permission error on the Spark and Hive engines when all the following conditions are true:
  • The HDFS connection user is different from the Data Integration Service user.
  • The Hadoop connection does not have an impersonation user defined.
  • The Data Integration Service user does not have write access to the HDFS target folder.
BDM-9571
Sqoop mappings that contain a Java transformation fail on the Hive engine.
BDM-9490
The job log for a mapping or profile job being run in the Hadoop mode of execution does not contain the Hadoop job ID when the tracing level is set to NONE.
BDM-9468
Connection reset errors might occur when you run mappings in a Kerberos-enabled domain if login requests from the Data Integration Service overload the Kerberos Key Distribution Center (KDC).
BDM-9318
The Blaze engine does not honor the Start import at line property for Hive targets.
BDM-9266
Mappings using an HBase connection and the Spark run-time engine fail on a Cloudera 5.11 cluster with Kerberos authentication enabled.
BDM-9260
Mapping execution fails on the Blaze engine when you define a parameterized DDL query for a Hive target.
BDM-9220
Mappings fail when they use the Spark engine to run on a Hortonworks 2.5 SUSE cluster where Transparent Encryption is enabled on HDFS and the Hive warehouse directory specified in the Hadoop connection is in an encrypted zone.
BDM-9172
When you run a mapping that processes more than 1 GB of data on an HDInsight cluster, the Blaze engine duplicates the number of rows in the target.
BDM-9151
When you run a workflow on the Spark engine, the Summary Statistics view displays an incorrect row count for mapping tasks.
BDM-9117
The container ID in the Blaze Grid Manager log does not identify the specific Blaze component assigned for that container.
BDM-8894
If a Lookup object uses an ODBC or JDBC connection and the lookup condition contains a mapping parameter, the mapping fails on the Hive engine.
BDM-8806
A mapping fails on the Blaze engine if it contains a filter condition or Filter transformation on a parameterized string source.
BDM-8596
Java transformation performance on the Spark engine is slow.
BDM-7230
The Spark mapping fails when the argument for the functions LTRIM and RTRIM is an empty string or a NULL value.
BDM-7126
Mappings that run on the Spark engine fail if you change the operating system profile user or the impersonation user.
BDM-6840
A mapping executed with the Blaze engine writes an inaccurate row count to the target. The row count includes rejected rows.
BDM-6804
After a mapping is upgraded from 9.6x to 10.1x, the data type is incorrect for date format columns, and the mapping fails.
BDM-6694
When the Blaze engine reads from a compressed Hive table with text format, the mapping fails if the TBLPROPERTIES clause is not set for the Hive table.
BDM-6590
Results are corrupted and inconsistent with native mode results when a mapping runs in Hive mode.
BDM-5465
Mappings that read from or write to partitioned or bucketed Hive sources and targets on Amazon S3 take longer to execute than expected.
BDM-5245
The Big Data Management log collector does not display a Spark job execution log.
BDM-5079
A mapping with a filter condition and an Oracle source that uses RAPartitioning produces incorrect results.
BDM-4795
The execution parameter dialog box for editing mapping runtime properties does not open.
BDM-4652
If you do not configure the Spark HDFS staging directory in a Hadoop connection, Sqoop mappings fail on the Spark engine with a null pointer exception.
BDM-4509
Hive mappings with a Lookup transformation fail when the Hive tables on the cluster have encrypted zones.
BDM-4508
When you run a pass-through mapping on the Spark engine, the mapping fails with an error message about access to the default Hive warehouse directory.
BDM-4471
In a Hortonworks HDP or an Azure HDInsight environment, a mapping that runs on the Hive engine enabled for Tez loads only the first data table to the target if the mapping contains a Union transformation.
BDM-4323
If an SQL override in the Hive source contains a DISTINCT or LIMIT clause, the mapping fails on the Spark engine.
BDM-4230
If the Blaze Job Monitor starts on a node different from the node that it last ran on, the Administrator tool displays the Monitoring URL of the previous node.
BDM-3989
Blaze mappings fail with the error "The Integration Service failed to generate the grid execution plan for the mapping" when any of the following conditions are true:
  • Apache Ranger KMS is not configured correctly on a Hortonworks HDP cluster.
  • Hadoop KMS is not configured correctly for HDFS transparent encryption on a Cloudera CDH cluster.
  • The properties hadoop.kms.proxyuser.<SPN_user>.groups and hadoop.kms.proxyuser.<SPN_USER>.hosts for the Kerberos SPN are not set on the Hadoop cluster.
BDM-3886
Executing a mapping on the Hadoop cluster in Hive mode with a filter expression such as A=NULL or B=C fails with a ClassCastException.
BDM-3744
When a Hadoop cluster is restarted without stopping the components of the Blaze engine, stale Blaze processes remain on the cluster.
BDM-3544
When the proxy user setting is not correctly configured in core-site.xml, a mapping that you run with the Spark engine hangs with no error message.
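The proxy user setting referenced in BDM-3544 is the standard Hadoop impersonation configuration in core-site.xml. A minimal sketch, assuming the impersonating service runs as a user named infa_user (a placeholder; the wildcard values are illustrative and should be restricted in production):

```xml
<!-- Illustrative sketch: replace infa_user with the actual service user. -->
<property>
  <name>hadoop.proxyuser.infa_user.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.infa_user.groups</name>
  <value>*</value>
</property>
```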
BDM-3416
When you run a mapping on a cluster where Ranger KMS authorization is configured, the mapping fails with an "UndeclaredThrowableException" error.
BDM-3267
On a Blaze engine, when an unconnected Lookup expression is referenced in a join condition, the mapping fails if the master source is branched and the Joiner transformation is optimized with a map-side join. The mapping fails with the following error: [TE_7017] Internal error. Failed to initialize transformation [producer0]. Contact Informatica Global Customer Support.
BDM-2641
When mappings fail, the Spark engine does not drop temporary Hive tables used to store data during mapping execution. You can manually remove the tables. (450507)
BDM-10301
When you run a mapping on the Blaze engine that reads from a Hive partition that contains Parquet metadata files, the mapping fails.
BDM-10205
When you set a relational source as distinct and change the source to use a SQL override, the Data Integration Service reads NULL values instead of the source data.


Updated June 29, 2020