Data Services All Products
After being upgraded from 9.5.1 to 10.1.1x, a mapping fails with an error indicating an invalid symbol reference.
The Data Integration Service cannot load and run deployed applications when the Model Repository Service failover is unsuccessful while the Data Integration Service on a grid fails over.
After being upgraded from 9.6.1 to 10.1.1, a mapping fails when high precision is enabled.
Applications deployed to a grid take a long time to load.
Mappings, applications, or workflows generate a high number of threads when deployed to the Data Integration Service, and the deployment process fails to complete.
The mapping execution process generates extraneous .data and .bin files in the temp directory of the Data Integration Service.
A mapping with two relational physical data object targets and a parameterized connection property uses the connection configured in the target object instead of the connection configured in the parameter.
After being upgraded from 9.6x to 10.1x, a mapping opens in an unsaved state, and the user cannot edit the connection that is associated with the target transformation.
Copying a port from a physical data object to the output ports of a customized data object causes mapping transformations to become invalid.
You cannot run a mapping in the native environment when the following conditions are true:
The Developer tool allows you to change an Avro data type in a complex file object to one that Avro does not support. As a result, mapping errors occur at run time.
If you run a Sqoop mapping on the Hive engine, the mapping parameters that you add in the Additional Sqoop Import Arguments field or the Additional Sqoop Export Arguments field do not work.
If you configure Sqoop to import Oracle data of the Number data type, the Developer tool imports the data with the precision and scale as 0. After you import the data, you cannot edit the scale.
When you run a Sqoop mapping on a Cloudera cluster that uses Kerberos authentication, you must manually configure mapreduce properties in the yarn-site.xml file on the Data Integration Service node and restart the Data Integration Service. To run the mapping on the Blaze engine, you must also restart the Grid Manager and Blaze Job Monitor.
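As an illustration, mapreduce properties are added to yarn-site.xml as standard Hadoop property elements. The property names and host below are examples only; the exact set of properties you must configure depends on the cluster.

```xml
<!-- Example yarn-site.xml entries on the Data Integration Service node.
     Property names and values are illustrative; substitute the properties
     your cluster requires. -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>historyserver.example.com:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>historyserver.example.com:19888</value>
</property>
```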
When you use a JDBC connection in a mapping to connect to a Netezza source that contains the Time data type, the mapping fails to run on the Blaze engine.
When you use Sqoop to import data from an Aurora database by using the MariaDB JDBC driver, the mapping stops responding. (456704)
When you use Sqoop and the first mapper task fails, the subsequent mapper tasks fail with the following error message:
File already exists
Column profile run fails when the following conditions are true:
Mappings with an HDFS connection fail with a permission error on the Spark and Hive engines when all the following conditions are true:
Sqoop mappings that contain a Java transformation fail on the Hive engine.
The job log for a mapping or profile job being run in the Hadoop mode of execution does not contain the Hadoop job ID when the tracing level is set to NONE.
Connection reset errors might occur when you run mappings in a Kerberos-enabled domain if login requests from the Data Integration Service overload the Kerberos Key Distribution Center (KDC).
The Blaze engine does not honor the Start import at line property for Hive targets.
Mappings using an HBase connection and the Spark run-time engine fail on a Cloudera 5.11 cluster with Kerberos authentication enabled.
Mapping execution fails on the Blaze engine when you define a parameterized DDL query for a Hive target.
Mappings fail when they use the Spark engine to run on a Hortonworks 2.5 SUSE cluster where Transparent Encryption is enabled on HDFS and the Hive warehouse directory specified in the Hadoop connection is in an encrypted zone.
When you run a mapping that processes more than 1 GB of data on an HDInsight cluster, the Blaze engine duplicates the number of rows in the target.
When you run a workflow on the Spark engine, the Summary Statistics view displays an incorrect row count for mapping tasks.
The container ID in the Blaze Grid Manager log does not identify the specific Blaze component assigned for that container.
If a Lookup object uses an ODBC or JDBC connection and the lookup condition contains a mapping parameter, the mapping fails on the Hive engine.
A mapping fails on the Blaze engine if it contains a filter condition or Filter transformation on a parameterized string source.
Java transformation performance on the Spark engine is slow.
The Spark mapping fails when the argument for the functions LTRIM and RTRIM is an empty string or a NULL value.
Mappings that run on the Spark engine fail if you change the operating system profile user or the impersonation user.
A mapping executed with the Blaze engine writes an inaccurate row count to the target. The row count includes rejected rows.
After a mapping is upgraded from 9.6x to 10.1x, the data type is incorrect for date format columns, and the mapping fails.
When the Blaze engine reads from a compressed Hive table with text format, the mapping fails if the TBLPROPERTIES clause is not set for the Hive table.
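As a sketch, the TBLPROPERTIES clause can be set on an existing Hive table with ALTER TABLE. The table name and the property key/value below are placeholders; the property that the table needs depends on its compression codec.

```sql
-- Hypothetical example: the table name and the property key/value are
-- placeholders. Set the TBLPROPERTIES entry that matches the table's
-- compression codec before running the mapping on the Blaze engine.
ALTER TABLE sales_compressed
SET TBLPROPERTIES ('file.compression' = 'gzip');
```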
Results are corrupted and inconsistent with native mode results when a mapping runs in Hive mode.
Mappings that read from or write to partitioned or bucketed Hive sources and targets on Amazon S3 take longer to execute than expected.
The Big Data Management log collector does not display a Spark job execution log.
A mapping with a filter condition and an Oracle source that uses RAPartitioning produces incorrect results.
The execution parameter dialog box for editing mapping runtime properties does not open.
If you do not configure the Spark HDFS staging directory in a Hadoop connection, Sqoop mappings fail on the Spark engine with a null pointer exception.
Hive mappings with a Lookup transformation fail when the Hive tables on the cluster have encrypted zones.
When you run a pass-through mapping on the Spark engine, the mapping fails with an error message about access to the default Hive warehouse directory.
In a Hortonworks HDP or an Azure HDInsight environment, a mapping that runs on the Hive engine enabled for Tez loads only the first data table to the target if the mapping contains a Union transformation.
If an SQL override in the Hive source contains a DISTINCT or LIMIT clause, the mapping fails on the Spark engine.
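For illustration, an SQL override of the following shape would trigger this issue on the Spark engine. The table and column names are hypothetical.

```sql
-- Hypothetical SQL override for the Hive source. Either the DISTINCT or
-- the LIMIT clause alone is enough to cause the failure described above.
SELECT DISTINCT customer_id
FROM orders
LIMIT 100;
```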
If the Blaze Job Monitor starts on a node different from the node that it last ran on, the Administrator tool displays the Monitoring URL of the previous node.
Blaze mappings fail with the error "The Integration Service failed to generate the grid execution plan for the mapping" when any of the following conditions are true:
Executing a mapping on the Hadoop cluster in Hive mode with a filter expression such as A=NULL or B=C fails with a ClassCastException.
When a Hadoop cluster is restarted without stopping the components of the Blaze engine, stale Blaze processes remain on the cluster.
When the proxy user setting is not correctly configured in core-site.xml, a mapping that you run with the Spark engine hangs with no error message.
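For reference, a proxy user is configured in core-site.xml with the standard Hadoop impersonation properties shown below. The user name "ispuser" is a hypothetical Data Integration Service user, and the wildcard values are examples; restrict hosts and groups as appropriate for your environment.

```xml
<!-- Example proxy user entries in core-site.xml. "ispuser" is a
     hypothetical impersonating user; replace it with the user that the
     Data Integration Service runs as. -->
<property>
  <name>hadoop.proxyuser.ispuser.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.ispuser.groups</name>
  <value>*</value>
</property>
```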
When you run a mapping on a cluster where Ranger KMS authorization is configured, the mapping fails with an "UndeclaredThrowableException" error.
On a Blaze engine, when an unconnected Lookup expression is referenced in a join condition, the mapping fails if the master source is branched and the Joiner transformation is optimized with a map-side join. The mapping fails with the following error: [TE_7017] Internal error. Failed to initialize transformation [producer0]. Contact Informatica Global Customer Support.
When mappings fail, the Spark engine does not drop temporary Hive tables used to store data during mapping execution. You can manually remove the tables. (450507)
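As a minimal sketch of the manual cleanup, the leftover tables can be dropped from the Hive CLI or Beeline once you have identified them. The table name below is a placeholder; the naming pattern of the temporary tables is not documented here.

```sql
-- Placeholder name: substitute the actual temporary table that the failed
-- mapping run left behind, then drop it.
DROP TABLE IF EXISTS temp_mapping_table;
```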
When you run a mapping on the Blaze engine that reads from a Hive partition that contains Parquet metadata files, the mapping fails.
When you set a relational source as distinct and change the source to use a SQL override, the Data Integration Service reads NULL values instead of the source data.