Table of Contents

  1. Abstract
  2. Support Changes
  3. Installation and Upgrade
  4. 10.4.1 Fixed Limitations and Closed Enhancements
  5. 10.4.1 Known Limitations
  6. Cumulative Known Limitations
  7. Emergency Bug Fixes Merged into 10.4.1
  8. Informatica Global Customer Support

Release Notes

Data Engineering Integration Known Limitations (10.4.1)

The following known limitations were found in 10.4.1:
BDM-36415
If a mapping uses a parameter value to specify the execution environment, and you specify "native" or "Hadoop" as an argument when you run one of the following commands, the Data Integration Service ignores the command and the mapping is not updated:
  • infacmd mrs disableMappingValidationEnvironment
  • infacmd mrs enableMappingValidationEnvironment
  • infacmd dis disableMappingValidationEnvironment
  • infacmd dis enableMappingValidationEnvironment
The issue occurs because the commands do not support parameter resolution.
Workaround: Apply EBF-20109 and pass the additional argument ValidationEnvironmentFilterName with the command, for example ValidationEnvironmentFilterName=Hadoop, as in the sketch below.
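For reference, a minimal invocation sketch, assuming EBF-20109 is applied. The -dn, -un, -pd, and -sn options are the standard infacmd connection options; the other placeholders stand in for the arguments you already pass and are not documented syntax:
infacmd.sh mrs enableMappingValidationEnvironment -dn <domain name> -un <user name> -pd <password> \
  -sn <Model Repository Service name> <existing command arguments> ValidationEnvironmentFilterName=Hadoop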
BDM-34065
When you import the cluster configuration from a compute cluster configured on Cloudera and you choose to create associated connections, the creation process fails to create the Hive connection and displays the following error:
[EDR_1069] Failed to resolve the option [connectString] of connection [HIVE_cco_ac] because the associated cluster configuration [CCO_AC] does not contain the property [hive.metastore.uris].
Workaround:
  • Create the cluster configuration using the import from archive file option.
  • For the Hive connection, edit the Metadata Connection String and the Data Access Connection String so that the JDBC connection points to the compute cluster, as in the sketch below.
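For reference, a hedged sketch of a corrected connect string; the host name and port are placeholders, and any Kerberos or SSL parameters depend on your compute cluster:
jdbc:hive2://<node running HiveServer2 on the compute cluster>:10000/default
Set both the Metadata Connection String and the Data Access Connection String to a value of this form.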
BDM-34036
When the Spark engine runs a mapping or previews data on a Cloudera CDP cluster, the job writes incorrect date/time values to the target when the date is earlier than the year 1583 and the mapping uses one of the following sources:
  • Azure Blob
  • Google BigQuery
  • JDBC V2
  • Sqoop
  • Complex file objects
BDM-33922
In a Kerberos-enabled domain, a mapping that runs on the Spark engine using an impersonation user fails if the following conditions are true:
  • The cluster staging directory is in an encryption zone.
  • The impersonation user is configured as a proxy user for an Active Directory user.
  • The proxy user, and thus the mapping impersonation user, has the DECRYPT privilege in the encryption zone.
  • The Active Directory user does not have the DECRYPT privilege in the encryption zone.
  • The Hadoop staging user is set to the mapping impersonation user.
Workaround: Provide the DECRYPT privilege in the encryption zone to the Active Directory user, or use a cluster staging directory that is outside of the encryption zone. For example, use the /tmp directory.
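To check whether a candidate staging directory falls inside an encryption zone, you can run the HDFS crypto commands from a cluster node; the directory path is a placeholder, and listing zones requires HDFS administrator privileges:
# List all HDFS encryption zones (requires HDFS administrator privileges)
hdfs crypto -listZones
# Show the encryption information for a specific path; a path outside a zone reports no encryption info
hdfs crypto -getFileEncryptionInfo -path /tmp/<staging directory>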
BDM-33915
When the Blaze engine performs a lookup on a compressed Hive table of text format, the mapping fails with the following error:
"[CORE_3] An internal exception occurred with message: java.io.IOException: incorrect header check java.io.IOException: incorrect header check"
BDM-33914
When the Blaze engine reads a modified partitioned Hive table of text format, the source read fails with the following error:
"[CORE_3] An internal exception occurred with message: java.lang.RuntimeException: No data found for parttion state java.lang.RuntimeException: No data found for parttion state"
BDM-33894
When the Spark engine runs a mapping on a Cloudera CDP cluster with dynamic resource allocation enabled, the mapping fails with the following SPARK_1003 error:
"Failed to connect to <node running Spark shuffle service>"
BDM-33822
When the Blaze engine reads a partitioned table to which partitions have been added with the Hive ALTER TABLE ... ADD PARTITION command (see the example after this list), the Blaze engine does not recognize the new partition data. This error condition occurs with the following file formats:
  • Avro
  • ORC
  • Parquet
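For illustration, a statement of the kind that triggers this condition, submitted here through Beeline; the connect string, table name, partition value, and location are hypothetical:
beeline -u "jdbc:hive2://<node running HiveServer2>:10000/default" \
  -e "ALTER TABLE web_logs ADD PARTITION (log_date='2020-06-30') LOCATION '/data/web_logs/log_date=2020-06-30';"
After such a statement runs, the Blaze engine does not pick up the data in the new partition for these formats.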
BDM-33585
When a mapping that runs on the Blaze engine reads from or writes to an encryption zone in HDFS, the mapping fails.
Workaround: Configure the user that runs a mapping on Blaze to be the same as the impersonation user. To use a different user, give DECRYPT privileges to the Blaze user.
BDM-22677
When you create a Cloudera CDP cluster configuration and connection during installation, the installer uses the default engine type, MRv2, rather than Tez in the Hadoop connection.
Workaround: In the Administrator tool, edit the Hadoop connection Common Properties and set Engine Type to Tez.
CORE-8958
When you run a mapping that reads from a Hive source on a Cloudera CDP cluster, the mapping fails with the following error:
java.lang.NoClassDefFoundError: org/apache/jute/Record
Workaround: For CDP Private Cloud, in the Hive connection properties, set Metadata Connection String to the following value:
jdbc:hive2://<node running HiveServer2>:10000/default;principal=hive/_HOST@PLATFORMKRB.COM;ssl=true;
This workaround does not apply to CDP Public Cloud.
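If you want to verify the connect string before you update the Hive connection, one option is to test it with Beeline from a cluster node. This is a hedged sketch: the keytab path and user principal are placeholders, and it assumes the Kerberos and SSL setup shown in the connect string above:
# Obtain a Kerberos ticket for a user that is allowed to query Hive (keytab and principal are placeholders)
kinit -kt /path/to/<user>.keytab <user>@PLATFORMKRB.COM
# Run a trivial query through the same JDBC URL that you plan to set as the Metadata Connection String
beeline -u "jdbc:hive2://<node running HiveServer2>:10000/default;principal=hive/_HOST@PLATFORMKRB.COM;ssl=true;" -e "SELECT 1;"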
OCON-24969
When you create a custom SQL query in a Read transformation to read from a Hive source that contains complex data types, you cannot select the elements of the complex data field in the Advanced Query properties.
Workaround: Manually specify the name of the element in the query.

Third-Party Known Limitations

The following issues are third-party known limitations:
BDM-33967
Databricks clusters do not support concurrent mapping processing. While very limited concurrency is sometimes possible, concurrency support depends on multiple factors and is difficult to predict.
