Table of Contents

  1. Abstract
  2. Support Changes
  3. Installation and Upgrade
  4. Hotfix Installation and Rollback
  5. 10.5.1 Fixed Limitations and Closed Enhancements
  6. 10.5.1 Known Limitations
  7. Cumulative Known Issues
  8. Emergency Bug Fixes Merged into 10.5.1
  9. Informatica Global Customer Support

Release Notes (10.5.1)

Third-Party Known Issues (Cumulative)

The following known limitations are carried forward from previous releases. Each entry lists the issue number, a description of the limitation, and the related third-party ticket reference:
BDM-34608, BDM-34615
When the Spark engine runs a mapping with a Hive WASBS source and a flat file target on a non-ESP-enabled Hadoop cluster, it creates separate staging directories for users other than the expected impersonation user.
Workaround for non-ESP-enabled HDInsight 4.0 clusters: Set the following properties on the cluster:
  • Set "Run as end user instead of Hive user" (doAs) to TRUE.
  • Enable hive.metastore.execute.setugi.
Workaround for ESP-enabled HDInsight 4.0 clusters: Set "Run as end user instead of Hive user" (doAs) to FALSE.
Microsoft ticket number: 120072723000878.
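As an illustration of the non-ESP workaround above, the following is a minimal sketch of the equivalent hive-site.xml entries, assuming that the Ambari setting "Run as end user instead of Hive user" corresponds to the hive.server2.enable.doAs property:
  <!-- Run HiveServer2 operations as the end user instead of the hive user (doAs) -->
  <property>
    <name>hive.server2.enable.doAs</name>
    <value>true</value>
  </property>
  <!-- Perform metastore file system operations as the calling user -->
  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>true</value>
  </property>
Restart the affected Hive services after applying the change so that it takes effect.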
BDM-34066
When you import a Cloudera CDH compute cluster configuration directly from the cluster, the cluster configuration does not include the hive-site.xml file.
Workaround: Download all *-site.xml files from the Hadoop cluster as a .zip file and import the compute cluster configuration from the archive file.
Cloudera ticket: 697586
BDM-33828
When the Spark engine runs a mapping on a Cloudera CDP cluster, the mapping fails with the following SPARK_1003 error:
MetaException (message:Permission denied: user [<impersonation user>] does not have [SELECT] privilege on [default])
Workaround: In Ranger, grant the SELECT privilege on the default database to all users who run Spark queries.
Cloudera ticket: CDPD-12622
For more information, see Cloudera 7.1.1 Release Notes.
BDM-31657
When the Spark engine runs a mapping that uses the Hive Warehouse Connector to write to a Hive target, the mapping fails if a column name in the target contains a special character.
The mapping fails with the following error:
java.lang.IllegalArgumentException: Missing required char ':' at 'struct<<table name>>'
Cloudera ticket: 751777
BDM-28598
On a Hortonworks HDP 3.1 cluster, when the Spark engine processes an input value of zero in a decimal port whose precision and scale are equal (for example, a decimal port with precision 5 and scale 5), the engine treats the value as a data overflow and returns NULL.
Cloudera ticket reference number: 635063
BDM-25513
When a mapping with Hive ACID source and target tables runs on the Spark engine, the Summary Statistics view does not show any throughput statistics for the mapping job.
Cloudera ticket reference number: 00225986
BDM-23104
The Spark engine cannot write data to a bucketed Hive target if the Hadoop distribution is MapR.
MapR case number: 00074338
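For reference, a bucketed Hive target of the kind this issue refers to is a table created with a CLUSTERED BY ... INTO ... BUCKETS clause, for example (hypothetical table):
  CREATE TABLE sales_bucketed (
    order_id INT,
    amount   DECIMAL(10,2)
  )
  CLUSTERED BY (order_id) INTO 8 BUCKETS
  STORED AS ORC;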
BDM-4674
A mapping with an Update Strategy transformation fails at run time if the Hive target has more than one primary key.
Apache Hive ticket reference number: HIVE-19911
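For illustration, assuming "more than one primary key" refers to a composite key that spans multiple columns, such a Hive target could be declared as follows (hypothetical table; Hive stores the constraint as informational only):
  CREATE TABLE customer_target (
    cust_id INT,
    region  STRING,
    name    STRING,
    PRIMARY KEY (cust_id, region) DISABLE NOVALIDATE
  );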
BDM-37988
When the Blaze engine runs a mapping on a MapR 6.1 cluster, the mapping fails if it reads from or writes to a Hive table stored in Avro format.
The mapping fails with the following error:
[GRIDDTM_1016] The Integration Service failed to execute grid mapping with following error [An internal exception occurred with message: java.lang.RuntimeException: Failure to execute Query <query> on the hive Server.]
MapR ticket number: 00101221
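For reference, an Avro-backed Hive table of the kind this issue refers to is one created with STORED AS AVRO, for example (hypothetical table):
  CREATE TABLE web_events_avro (
    event_id   STRING,
    event_time TIMESTAMP
  )
  STORED AS AVRO;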
BDM-34066
When you import the cluster configuration directly from Cloudera and you choose to create associated connections, the import wizard fails to create a Hive connection and displays the following error:
[EDR_1069] Failed to resolve the option [connectstring] of connection [HIVE_cco_ac] because the associated cluster configuration [CCC_AC] does not contain the property [hive, metastore, uris].
Workaround:
  1. Create the cluster configuration by importing the configuration from an archive file.
  2. Edit the Hive connection so that the metadata connection string and the data access connection string point to the compute cluster.
Cloudera ticket number: OPSAPS-57696
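For illustration only (hypothetical host name), both connection string fields typically use the HiveServer2 JDBC URL format and should point to the compute cluster, for example:
  Metadata connection string: jdbc:hive2://compute-hs2.example.com:10000/default
  Data access connection string: jdbc:hive2://compute-hs2.example.com:10000/default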
