Metadata Manager 10.1.1
If you run a mapping on HiveServer2 on a SUSE 11 Hortonworks cluster that uses Kerberos authentication, a MySQL connection leak occurs and the mapping fails with the following error:
[HiveServer2-Handler-Pool: Thread-3439]: transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
If you configure user impersonation and run a Sqoop mapping on a Hadoop cluster that uses Kerberos authentication, the mapping fails.
Workaround: Use the Hadoop service principal name in the Hadoop connection and run the mapping. (460997)
The Big Data Management Configuration Utility (Hadoop Configuration Manager) does not create a separate log file for each run.
The Analyst Service does not shut down when you use the infaservice.sh shutdown command.
In a Cloudera CDH environment, mappings fail on the Blaze engine if the Resource Manager is highly available and the cluster uses Kerberos authentication. (BDM-1596)
A mapping fails to run in the Blaze environment if multiple transformation strategies in the mapping identify the same probabilistic model file or classifier model file.
Column profile with data domain discovery fails when the data source is a Hive source, you choose All rows as the sampling option, and you run the profile on the Blaze engine.
When you run mappings on the Spark engine within a very short time span, such as 20 seconds, the mappings fail with OSGI errors.
If the join condition in a Joiner transformation contains string ports with different precision values, the mapping returns an incorrect number of output rows when run on the Blaze engine. (BDM-1585)
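The row-count discrepancy can be pictured with a small sketch in plain Python (hypothetical data, not Informatica APIs): when string join keys are effectively compared at the shorter port's precision, values that differ only beyond that length collide and produce extra matched rows.

```python
def truncate(value: str, precision: int) -> str:
    """Model a string port: values are cut to the port's precision."""
    return value[:precision]

left = ["ABCDE", "ABCXY"]   # port precision 5
right = ["ABC"]             # port precision 3

# Comparing at the shorter precision makes distinct keys collide:
short = [(l, r) for l in left for r in right
         if truncate(l, 3) == truncate(r, 3)]

# Comparing the full values matches nothing:
full = [(l, r) for l in left for r in right if l == r]
```

Here `short` holds two matched rows while `full` holds none, which is the kind of incorrect output row count the issue describes. Aligning the precision of the string ports in the join condition avoids the ambiguity.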
Workflows are rescheduled to a different time instead of the original scheduled time when the Integration Service shuts down unexpectedly and misses the scheduled time.
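The intended behavior can be sketched in plain Python (an illustration under assumed names, not the scheduler's actual code): after a missed run, the next run should be the original schedule advanced by whole intervals, preserving the originally configured time of day rather than deriving a new time from the restart.

```python
from datetime import datetime, timedelta

def next_run(original: datetime, now: datetime, interval: timedelta) -> datetime:
    """Advance the original schedule in whole intervals until it is in the
    future, keeping the originally configured time of day."""
    run = original
    while run <= now:
        run += interval
    return run
```

For a daily 02:00 schedule whose run was missed, the next run stays at 02:00 on the following day instead of shifting to the restart time.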
When you run mappings on the Spark engine, the mapping run fails with a compilation error.
Cause: The cluster uses an instance of Java other than the Java that ships with Informatica Big Data Management.
Big Data Management supports Hortonworks Hadoop clusters that use Java 1.8. When the cluster uses Java 1.7, mappings that you run on the Hive engine fail with a Java version error.
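To confirm which Java the cluster nodes run, compare major versions from the `java -version` string. A minimal parser sketch in plain Python (a hypothetical helper, not part of any Informatica tooling) that handles both the legacy `1.x` scheme and the newer numbering:

```python
def java_major(version: str) -> int:
    """Extract the major Java version from a version string.

    Legacy strings like '1.8.0_181' report the major version in the second
    field; newer strings like '11.0.2' report it in the first.
    """
    parts = version.split(".")
    return int(parts[1]) if parts[0] == "1" else int(parts[0])
```

A node reporting `1.7.0_80` would need an upgrade to 1.8 before Hive engine mappings can run.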
When you export data to an Oracle database through Sqoop, the mapping fails in certain situations. This issue occurs when all of the following conditions are true:
Lookup performance on the Spark engine is very slow when the lookup data contains null values.
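One common mitigation, sketched here in plain Python (a hypothetical helper, not a Spark or Informatica API), is to drop null keys before building the lookup table: a null lookup key can never produce a match, yet every null row still has to be hashed and compared, and a large share of nulls skews the work onto one partition.

```python
def build_lookup(rows, key):
    """Index rows by key, skipping null keys that can never match."""
    return {row[key]: row for row in rows if row[key] is not None}

data = [
    {"id": 1, "name": "alpha"},
    {"id": None, "name": "beta"},
    {"id": 2, "name": "gamma"},
]
lookup = build_lookup(data, "id")
```

Filtering null keys out of the lookup source, for example with a filter condition in the mapping, removes the null-heavy comparisons before the data reaches the engine.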
When you generate and execute a DDL script to create or replace a Hive target table in the Blaze run-time environment, the mapping fails.
When you synchronize a Hive view in the Developer tool, the links from the mapping source or the connections are not retained. (BDM-2255)
When a Hadoop cluster uses Kerberos authentication, the mapping that writes to HDFS in the native run-time environment fails with the following error if the KDC service ticket has expired: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (BDM-2190)
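A defensive check can be sketched in plain Python (a hypothetical helper; in practice you would renew the ticket, for example with kinit, before the run): verify that the service ticket's expiry leaves enough margin for the mapping to start.

```python
from datetime import datetime, timedelta

def ticket_is_fresh(expires_at: datetime, now: datetime,
                    margin: timedelta = timedelta(minutes=10)) -> bool:
    """Return True when the Kerberos ticket stays valid past the safety margin."""
    return now + margin < expires_at
```

A ticket expiring five minutes from now fails a ten-minute margin, signaling that it should be renewed before the mapping writes to HDFS.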
If you run multiple concurrent mappings on the Spark engine, performance might be slow and the log messages indicate that resources are not available. The Data Integration Service indicates that the mapping failed even though it is still running in the cluster.
The MRX_MAPPINGS view does not show any MAPPING objects even though mappings exist in the repository.