Data Services
All Products

The following known issues are listed by bug number, with a description and a workaround where one is available.

PLAT-14325
You cannot run a mapping in the native environment when the following conditions are true:

PLAT-13734
The Developer tool allows you to change an Avro data type in a complex file object to one that Avro does not support. As a result, mapping errors occur at run time.
Workaround: If you change an Avro data type, verify that it is a supported type. (456866)

PLAT-12073
Mappings that read from one of the following sources fail to run in the native environment when the Data Integration Service is configured to run jobs in separate remote processes:
Workaround: On the Compute view for the Data Integration Service, configure the INFA_HADOOP_DIST_DIR environment variable for each node with the compute role. Set the environment variable to the same value configured for the Data Integration Service Hadoop Distribution Directory execution option. (443164)

OCON-937
When you use an ODBC connection to write time data to a Netezza database, the mapping fails. This issue occurs when you run the mapping on Cloudera 5u4. (440423)

OCON-730
When you export data through Sqoop and there are primary key violations, the mapping fails and bad records are not written to the bad file. (456616)

OCON-7216
If a Sqoop source or target contains a column name with double quotes, the mapping fails on the Blaze engine. However, the Blaze Job Monitor incorrectly indicates that the mapping ran successfully and that rows were written into the target.

OCON-7212
Sqoop mappings fail on the Blaze engine if there are unconnected ports in a target. This issue occurs when you run the Sqoop mapping on any cluster other than a Cloudera 5.8 cluster.
Workaround: Before you run the mapping, create a table in the target database with columns corresponding to the connected ports.

OCON-7208
When you run a Sqoop mapping on the Blaze engine and the columns contain Unicode characters, the Sqoop program reads them as null values.

OCON-7205
When you run a Sqoop mapping on the Blaze engine to export Netezza numeric data, the scale part of the data is truncated.

OCON-7078
Sqoop mappings that import data from or export data to an SSL-enabled database fail on the Blaze engine.

OCON-7076
When you run a Sqoop mapping and abort the mapping from the Developer tool, the Sqoop map-reduce jobs continue to run.
Workaround: On the Sqoop data node, run the following command to kill the Sqoop map-reduce jobs:
yarn application -kill <application_ID>
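For example, assuming the YARN command-line client is available on the node, you can list the running applications to find the application ID of the Sqoop job before you kill it:
yarn application -list -appStates RUNNING
yarn application -kill <application_ID>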

OCON-688
When you enable Sqoop for a logical data object and export data to an IBM DB2 database, the Sqoop export command fails. However, the mapping runs successfully without any error. (456455)

OCON-6745
When you use a JDBC connection in a mapping to connect to a Netezza source that contains the Time data type, the mapping fails to run on the Blaze engine.

OCON-605
When you use Sqoop to import data from an Aurora database by using the MariaDB JDBC driver, the mapping stops responding. (456704)

OCON-471
When you enable Sqoop for a data object and a table or column name contains Unicode characters, the mapping fails. (452114)

OCON-417
Sqoop does not read the OraOop arguments that you configure in the oraoop-site.xml file.
Workaround: Specify the OraOop arguments as part of the Sqoop arguments in the mapping. (455750)
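For example, assuming each entry that you would otherwise place in oraoop-site.xml is passed through the generic -D option, a Sqoop argument takes the following form, where the property name and value are placeholders:
-D<OraOop_property_name>=<OraOop_property_value>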

OCON-1267
The path of the resource file in a complex file object appears as a recursive path of directories starting with the root directory and ending with a string. (437196)

OCON-1216
When you use Sqoop and the first mapper task fails, the subsequent mapper tasks fail with the following error message:
File already exists

OCON-1188
When you export data to a Netezza database through Sqoop and the database contains a column of the float data type, the mapping fails. (456285)

OCON-1100
When you export data to an IBM DB2 z/OS database through Sqoop and do not configure the batch argument, the mapping fails.
Workaround: Configure the batch argument in the mapping and run the mapping again. (459671)
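For example, assuming the workaround refers to the standard Sqoop export batch flag, add the following to the Sqoop arguments of the mapping:
--batch
The --batch argument makes Sqoop use JDBC batch mode for the export.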

OCON-1080
When you use Sqoop for a data object and update its properties in the associated Read or Write transformation, the mapping terminates with an IVector error message.
Workaround: Create a new data object and mapping. (453097)

IDE-1689
Mappings and profiles that use snappy compression fail in HiveServer2 mode on HDP and CDH SUSE clusters.
Workaround:
On the Informatica domain, edit the property that contains the location of the cluster native library:
On the Hadoop cluster:

BDM-960
Mappings with an HDFS connection fail with a permission error on the Spark and Hive engines when all the following conditions are true:
Workaround: In the Hadoop connection, define an impersonation user with write permission to access the HDFS target folder.

BDM-4652
Sqoop mappings fail with a null pointer exception on the Spark engine if you do not configure the Spark HDFS staging directory in the Hadoop connection.

BDM-4621
When you configure SQL override on a mapping with a Hive source and the mapping runs concurrently many times, the mapping might hang.
Workaround: Use the scheduler on the cluster to schedule the MapReduce job to run at a time of low cluster memory use.

BDM-4473
The Data Integration Service fails with out-of-memory errors when you run a large number of concurrent mappings on the Spark engine.
Workaround: Increase the heap memory settings on the machine where the Data Integration Service runs.

BDM-4471
In a Hortonworks HDP or an Azure HDInsight environment, a mapping that runs on the Hive engine enabled for Tez loads only the first data table to the target if the mapping contains a Union transformation.
Workaround: Run the mapping on the Hive engine enabled for MapReduce.

BDM-4323
If an SQL override in the Hive source contains a DISTINCT or LIMIT clause, the mapping fails on the Spark engine.

BDM-4230
If the Blaze Job Monitor starts on a node different from the node that it last ran on, the Administrator tool displays the Monitoring URL of the previous node.
Workaround: Correct the URL with the current Job Monitor host name from the log, or restart the Grid Manager to correct the URL for new jobs that start.

BDM-4107
If a mapping or workflow contains a parameter, the mapping does not return system-defined mapping outputs when run in the Hadoop environment.

BDM-3989
Blaze mappings fail with the error "The Integration Service failed to generate the grid execution plan for the mapping" when any of the following conditions are true:

BDM-3853
When the Blaze engine runs a mapping that uses source or target files in the WASB location on a cluster, the mapping fails with an error like:
The mapping fails because the cluster attempts to decrypt the data but cannot find a file needed to perform the decryption operation.
Workaround: Find the following files on the cluster and copy them to the /usr/lib/python2.7/dist-packages/hdinsight_common directory on the machine that runs the Data Integration Service:

BDM-3744
When a Hadoop cluster is restarted without stopping the components of the Blaze engine, stale Blaze processes remain on the cluster.
Workaround: Kill the stale processes using the pkill command.
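For example, assuming the stale processes can be matched by a common string in their command lines (the pattern below is a placeholder, not a documented process name), confirm the match and then kill the processes:
pgrep -fl <blaze_process_pattern>
pkill -f <blaze_process_pattern>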

BDM-3687
When you run a Sqoop mapping on the Spark engine, the Sqoop map-reduce jobs run in the default yarn queue instead of the yarn queue that you configure.
Workaround: To run a map-reduce job in a particular yarn queue, configure the following property in the Sqoop Arguments field of the JDBC connection:
-Dmapreduce.job.queuename=<NameOfTheQueue>
To run a Spark job in a particular yarn queue, configure the following property in the Hadoop connection:
spark.yarn.queue=<NameOfTheQueue>

BDM-3544
When the proxy user setting is not correctly configured in core-site.xml, a mapping that you run with the Spark engine hangs with no error message.
Workaround: Set the value of the following properties in core-site.xml to “*” (asterisk):
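As a sketch, the standard Hadoop proxy-user properties that typically govern this setting take the following form, assuming these are the entries the workaround refers to; replace the placeholder with the impersonation user configured for the Hadoop connection:
hadoop.proxyuser.<impersonation_user>.hosts=*
hadoop.proxyuser.<impersonation_user>.groups=*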

BDM-3416
When you run a mapping on a cluster where Ranger KMS authorization is configured, the mapping fails with an "UndeclaredThrowableException" error.
To address this issue, choose one of the following workarounds:

BDM-3267
On a Blaze engine, when an unconnected Lookup expression is referenced in a join condition, the mapping fails if the master source is branched and the Joiner transformation is optimized with a map-side join. The mapping fails with the following error: [TE_7017] Internal error. Failed to initialize transformation [producer0]. Contact Informatica Global Customer Support.

BDM-3228
A user who is not in the Administrator group, but who has the privileges and permissions to access the domain and its services, does not have access to the Rest application properties in the Administrator tool when the applications are deployed by another user.

BDM-2641
When mappings fail, the Spark engine does not drop temporary Hive tables used to store data during mapping execution. You can manually remove the tables. (450507)
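For example, assuming you have identified the leftover tables (the database and table names below are placeholders), you can drop them from the Hive command line. Verify that a table is a leftover staging table before you drop it:
hive -e "SHOW TABLES IN <staging_database>;"
hive -e "DROP TABLE IF EXISTS <staging_database>.<temporary_table>;"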

BDM-2222
The Spark engine does not run the footer row command configured for a flat file target. (459942)

BDM-2181
The summary and detail statistics are empty for mappings run on Tez. (452224)

BDM-2141
A mapping with a Hive source and target that uses an ABS function with an IIF function fails in the Hadoop environment. (424789)

BDM-2137
A mapping in the Hadoop environment fails when it contains a Hive source and a filter condition that uses the default table name prefixed to the column name.
Workaround: Edit the filter condition to remove the table name prefixed to the column name and run the mapping again. (422627)

BDM-2136
A mapping in the Hadoop environment fails because the Hadoop connection uses 128 characters in its name. (421834)

BDM-1271
If you define an SQL override in the Hive source and choose to update the output ports based on the custom query, the mapping fails on the Blaze engine.
Updated January 17, 2019