Table of Contents

  1. Abstract
  2. Installation and Upgrade
  3. 10.4.0 Fixed Limitations and Closed Enhancements
  4. 10.4.0 Known Limitations
  5. Cumulative Known Limitations
  6. Emergency Bug Fixes Merged into 10.4.0
  7. Informatica Global Customer Support

Release Notes

Data Engineering Integration Known Limitations (10.4.0)

The following table describes known limitations that were found in 10.4.0:
Bug
Description
OCON-23296
When you run a mapping in the native environment to write data to a complex file object in Avro format with fixed data type, the mapping runs successfully but the Data Integration Service writes incorrect data to the target.
Workaround: Change the data type in the target schema to bytes and run the mapping again.
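Outside of Informatica, one way to apply this workaround is to patch the target Avro schema before regenerating the complex file object. The snippet below is a minimal sketch under that assumption; the schema and field names are hypothetical. It rewrites every fixed field as bytes:

```python
import json

# Hypothetical Avro target schema that uses the "fixed" type,
# which triggers the incorrect target data described above.
schema = {
    "type": "record",
    "name": "Target",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "digest", "type": {"type": "fixed", "name": "Digest", "size": 16}},
    ],
}

def fixed_to_bytes(avro_schema):
    """Return a copy of the schema with every fixed field changed to bytes."""
    patched = json.loads(json.dumps(avro_schema))  # deep copy via round-trip
    for field in patched["fields"]:
        field_type = field["type"]
        if isinstance(field_type, dict) and field_type.get("type") == "fixed":
            field["type"] = "bytes"
    return patched

patched = fixed_to_bytes(schema)
print(patched["fields"][1]["type"])  # -> bytes
```

The round-trip copy keeps the original schema intact so you can diff the two versions before updating the target object.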
OCON-23234
A Sqoop mapping configured for incremental data extraction fails when the incremental key value for the -infa-incremental-key parameter is in lower case.
OCON-23113
When you run a mapping in the native environment to write data to a complex file object in Parquet format with fixed_len_byte_array data type, the mapping runs successfully but the Data Integration Service writes incorrect data to the target.
Workaround: Change the data type in the target schema to binary and run the mapping again.
For example:
physicalType: int32 for precision <= 9, int64 for 9 < precision <= 18, BINARY for precision > 18; logicalType: decimal(precision, scale)
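The decimal rule quoted above can be expressed as a small helper. This is an illustrative sketch of the mapping, not Informatica code:

```python
def parquet_decimal_physical_type(precision: int) -> str:
    """Physical Parquet type used for decimal(precision, scale),
    following the rule quoted in the description above."""
    if precision <= 9:
        return "int32"
    if precision <= 18:
        return "int64"
    return "BINARY"

print(parquet_decimal_physical_type(19))  # -> BINARY
```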
CM-8210
Intelligent Structure Discovery does not process the Union data type in ORC input. When you try to base a model on a sample ORC file that contains Union data, the model creation fails with the following error: "Failed to create the model. Select a different sample file."
BDM-35220
When you use a Create Cluster task to create a Databricks cluster and you specify cluster tags in a file, the task fails with the following error:
SEVERE: Failed to create the cluster due to the following error: [Failed to Create Databricks Cluster with name [<name>] due to Empty value found for Cluster tags after parsing the given input: [<filepath>].]
Workaround: In the cluster tags file contents, include a comma after the last key-value pair. For example:
<key1>=<value1>,<key2>=<value2>,<key3>=<value3>,
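If you generate the cluster tags file from a script, it is easy to emit the trailing comma that the Create Cluster task's parser expects. A minimal sketch (the tag names are hypothetical):

```python
def format_cluster_tags(tags: dict) -> str:
    """Serialize tags as <key>=<value> pairs, including the trailing
    comma after the last pair that the Create Cluster task requires."""
    return "".join(f"{key}={value}," for key, value in tags.items())

print(format_cluster_tags({"team": "etl", "env": "dev"}))  # -> team=etl,env=dev,
```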
BDM-30018
If a mapping configured for HDP 3.1 or Dataproc 1.4 on the Spark engine reads data from a Hive table and you specify database names in both the Data Access Connection String and in the runtime properties, a SQL override uses the database specified in the Hive connection instead of the database in the data object.
BDM-32012
When you specify a file path as the value for the Databricks Create Cluster task advanced properties Cluster Tags, Spark Configurations, or Environment Variables, the Create Cluster task fails with the following error:
java.lang.RuntimeException: Failed to create Databricks Cluster with name [<name>] due to invalid character at [<position>] in [<file location>].
BDM-30008
If you select the target schema strategy FAIL - Fail mapping if target schema is different and choose to truncate the target Hive table, but the target schema from the mapping flow is the same as the schema of the target table, the Spark engine appends data to the target table instead of truncating it.
BDM-29520
When you cancel a data preview job on the Spark engine, the behavior is unpredictable.
BDM-29441
Mappings configured to run using Blaze LLAP fail with a "Table not found" error because the Blaze engine does not support LLAP.
Perform one of the following tasks to avoid the error:
  • In the Blaze engine advanced properties for the Hadoop connection, set the following property to point to a database-specific schema.
    For example, set
    hive.hiveserver2.jdbc.url="jdbc:hive2://<host>:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive"
  • In the advanced tab of the data source object, set the run-time owner name property.
BDM-29258
When you enable the Data Integration Service for data engineering recovery, the Data Integration Service logs point to the BigDataJobRecovery Data Integration Service option instead of the DataEngineeringRecovery option that the Administrator tool displays.
BDM-29037
Creating an SQL data service for a flat file data source in the Developer tool intermittently fails with a null pointer exception.
BDM-28956
When you preview data with an Expression transformation configured for windowing as the preview point, the job might fail.
BDM-28953
Mapping validation fails on the Spark engine when you create a complex target from a transformation that includes a complex port and either of the following conditions is true:
  • The complex port name exceeds 128 characters.
  • The name of one or more elements of the complex port exceeds 128 characters.
BDM-28937
The Data Integration Service does not recover mappings configured to run on the Spark engine that have mapping outputs even if you enable data engineering recovery for the mapping.
BDM-28882
When you enable data engineering recovery, the Monitor view of the Administrator tool does not display summary statistics for recovered jobs if the Data Integration Service stops after the application is submitted to the cluster.
BDM-28537
Midstream parsing complex functions PARSE_JSON and PARSE_XML cannot parse the data if the JSON keys or XML tags have a period or other special characters.
Workaround: Remove the special characters from the JSON keys or XML tags.
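One way to apply this workaround before the payload reaches PARSE_JSON is to pre-process the data and normalize the keys. The sketch below is an assumption-laden illustration (the sample payload and the choice of underscore as the replacement character are not from the source); it swaps disallowed characters in JSON keys for underscores:

```python
import json
import re

def sanitize_keys(obj):
    """Recursively replace characters other than letters, digits, and
    underscores in JSON object keys, so keys no longer contain periods
    or other special characters."""
    if isinstance(obj, dict):
        return {re.sub(r"[^0-9A-Za-z_]", "_", key): sanitize_keys(value)
                for key, value in obj.items()}
    if isinstance(obj, list):
        return [sanitize_keys(item) for item in obj]
    return obj

raw = '{"order.id": 42, "items": [{"unit-price": 9.5}]}'
clean = sanitize_keys(json.loads(raw))
print(json.dumps(clean))  # {"order_id": 42, "items": [{"unit_price": 9.5}]}
```

A similar renaming pass over XML tag names would address the PARSE_XML case.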
BDM-28514
The Developer tool accepts more than two arguments in the PARSE_JSON and PARSE_XML complex functions.
Workaround: Use only two arguments with the PARSE_JSON and PARSE_XML complex functions.
BDM-28405
When you compare two different mappings that contain an Aggregator transformation, the compare report shows the cacheDir value as an object.
{
  "property" : "Mapping.transformations.AggregatorTx[Aggregator]<'Aggregator'>.cacheDir$[PredefinedParameter]<'TempDir'>",
  "sourceValue" : "|object exists|",
  "sinkValue" : "|object doesn't exist|"
},
{
  "property" : "Mapping.transformations.AggregatorTx[Aggregator]<'Aggregator'>.cacheDir$[PredefinedParameter]<'CacheDir'>",
  "sourceValue" : "|object doesn't exist|",
  "sinkValue" : "|object exists|"
}
BDM-27924
The infacmd dis queryDesignTimeObjects and queryRunTimeObjects commands fail when you assign reserved keywords such as where, name, project, and folder as values in a query.
Workaround: Do not use reserved keywords as values in a query.
BDM-27438
When a mapping configured with HDP 3.1 on the Blaze engine writes to an external ORC target, the mapping stages data to a temporary Parquet table and uses HS2 to write to the target, which might degrade performance because of the additional staging tables.
BDM-27234
The Developer tool accepts complex data type definition names with special characters, which cannot be used in a mapping. When you validate expressions with special characters in the name, parse errors occur.
Workaround: Do not use special characters in complex data type definition names.
BDM-27224
You cannot update the tag description in the following situations:
  • When you create a new tag by using infacmd dis tag or REST API PUT /core/v1/objects/tag/{tag} but a tag already exists with the same tag name.
  • When you replace all tags by using infacmd dis replaceAllTag or REST API POST /core/v1/objects/tag/{tag}.
B2BT-899
When an intelligent structure model is used midstream, Intelligent Structure Discovery passes only the first array element of each array in the input. All other array elements are discarded.
For example, if a "Companies" root element contains an array with multiple "Company" elements, Intelligent Structure Discovery passes only the first "Company" element.
B2BT-889
When you run data preview on a complex file data object that contains column names with special characters, the Data Integration Service fails to pass the request to the Spark engine.
