Common Content for Data Engineering 10.4.0 (All Products)

The following known issues are listed by bug number. Each entry includes a description and, where available, a workaround.

OCON-23296
When you run a mapping in the native environment to write data to a complex file object in Avro format with the fixed data type, the mapping runs successfully but the Data Integration Service writes incorrect data to the target.
Workaround: Change the data type in the target schema to bytes and run the mapping again.

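As an illustration, a hypothetical Avro field definition before and after the workaround (the field and record names are invented for this sketch):

```
Before: {"name": "checksum", "type": {"type": "fixed", "name": "md5", "size": 16}}
After:  {"name": "checksum", "type": "bytes"}
```
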
OCON-23234
A Sqoop mapping configured for incremental data extraction fails when the incremental key value for the -infa-incremental-key parameter is in lower case.

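For example, assuming a hypothetical source column named ORDER_DATE that is defined in upper case, the following behavior would apply:

```
-infa-incremental-key order_date   fails: the key value is in lower case
-infa-incremental-key ORDER_DATE   runs: the key value matches the column case
```
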
OCON-23113
When you run a mapping in the native environment to write data to a complex file object in Parquet format with the fixed_len_byte_array data type, the mapping runs successfully but the Data Integration Service writes incorrect data to the target.
Workaround: Change the data type in the target schema to binary and run the mapping again. For example, for a decimal(precision, scale) logical type, use the physical type int32 for precision <= 9, int64 for 9 < precision <= 18, and binary for precision > 18.

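As an illustration, a hypothetical Parquet column definition for a decimal with precision greater than 18, before and after the workaround (the column name is invented for this sketch):

```
Before: required fixed_len_byte_array(16) amount (DECIMAL(38, 10));
After:  required binary amount (DECIMAL(38, 10));
```
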
CM-8210
Intelligent Structure Discovery does not process the Union data type in ORC input. When you try to base a model on a sample ORC file that contains Union data, the model creation fails with the following error: "Failed to create the model. Select a different sample file."

BDM-35220
When you use a Create Cluster task to create a Databricks cluster and you specify cluster tags in a file, the task fails with an error.
Workaround: In the cluster tags file, include a comma after the last key-value pair. For example:
<key1>=<value1>,<key2>=<value2>,<key3>=<value3>,

BDM-30018
If a mapping configured for HDP 3.1 or Dataproc 1.4 on the Spark engine reads data from a Hive table and you specify database names in both the Data Access Connection String and the runtime properties, a SQL override uses the database specified in the Hive connection instead of the database in the data object.

BDM-32012
When you specify a file path as the value for the Databricks Create Cluster task advanced properties Cluster Tags, Spark Configurations, or Environment Variables, the Create Cluster task fails with an error.

BDM-30008
If you select the target schema strategy FAIL - Fail mapping if target schema is different and choose to truncate the target Hive table, and the target schema from the mapping flow is the same as the schema of the target table, the Spark engine appends data to the target instead of truncating the target table.

BDM-29520
When you cancel a data preview job on the Spark engine, the behavior is unpredictable.

BDM-29441
Mappings configured to run with LLAP on the Blaze engine fail with a "Table not found" error because the Blaze engine does not support LLAP.
Workaround: Perform one of the following tasks to avoid the error:

BDM-29258
When you enable the Data Integration Service for data engineering recovery, the Data Integration Service logs refer to the BigDataJobRecovery option instead of the DataEngineeringRecovery option that the Administrator tool displays.

BDM-29037
Creating an SQL data service for a flat file data source in the Developer tool intermittently fails with a null pointer exception.

BDM-28956
When you preview data with an Expression transformation configured for windowing as the preview point, the job might fail.

BDM-28953
Mapping validation fails on the Spark engine when you create a complex target from a transformation that includes a complex port and:

BDM-28937
The Data Integration Service does not recover mappings configured to run on the Spark engine that have mapping outputs, even if you enable data engineering recovery for the mapping.

BDM-28882
When you enable data engineering recovery, the Monitor view of the Administrator tool does not display summary statistics for recovered jobs if the Data Integration Service stops after the application is submitted to the cluster.

BDM-28537
The midstream parsing complex functions PARSE_JSON and PARSE_XML cannot parse the data if the JSON keys or XML tags contain a period or other special characters.
Workaround: Remove special characters from the JSON keys or XML tags.

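If you can preprocess the payload before it reaches the mapping, the following is a minimal sketch in Python that replaces special characters in JSON keys. The function and field names are illustrative and not part of the product:

```python
import json
import re

def sanitize_keys(node):
    # Recursively replace characters other than letters, digits,
    # and underscores in JSON keys with underscores.
    if isinstance(node, dict):
        return {re.sub(r"[^0-9A-Za-z_]", "_", key): sanitize_keys(value)
                for key, value in node.items()}
    if isinstance(node, list):
        return [sanitize_keys(value) for value in node]
    return node

raw = '{"order.id": 1, "line items": [{"unit.price": 9.99}]}'
print(json.dumps(sanitize_keys(json.loads(raw))))
# {"order_id": 1, "line_items": [{"unit_price": 9.99}]}
```
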
BDM-28514
The Developer tool accepts more than two arguments in the PARSE_JSON and PARSE_XML complex functions.
Workaround: Use exactly two arguments with the PARSE_JSON and PARSE_XML complex functions.

BDM-28405
When you compare two different mappings that contain an Aggregator transformation, the compare report shows the cacheDir value as object.

BDM-27924
The infacmd dis queryDesignTimeObjects and queryRunTimeObjects commands fail when you use reserved keywords, such as where, name, project, and folder, as values in a query.
Workaround: Do not use reserved keywords as values in a query.

BDM-27438
When a mapping configured for HDP 3.1 on the Blaze engine writes to an external ORC target, the mapping stages data to a temporary Parquet table and uses HS2 to write to the target, which might degrade performance because of the additional staging tables.

BDM-27234
The Developer tool accepts complex data type definition names that contain special characters, even though such names cannot be used in a mapping. When you validate expressions that reference names with special characters, parse errors occur.
Workaround: Do not use special characters in complex data type definition names.

BDM-27224
You cannot update the tag description in the following situations:

B2BT-899
When an intelligent structure model is used midstream, Intelligent Structure Discovery passes only the first array element of each array in the input. All other array elements are discarded.
For example, if a "Companies" root element contains an array with multiple "Company" elements, Intelligent Structure Discovery passes only the first "Company" element.

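A hypothetical input that illustrates the example above (the company names are invented):

```xml
<Companies>
  <Company>Acme</Company>    <!-- passed downstream -->
  <Company>Globex</Company>  <!-- discarded -->
  <Company>Initech</Company> <!-- discarded -->
</Companies>
```
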
B2BT-889
When you run data preview on a complex file data object that contains column names with special characters, the Data Integration Service fails to pass the request to the Spark engine.