Data Services: All Products
The Data Integration Service might become unresponsive the first time it runs multiple mappings in parallel that contain expressions on the Hive engine.
The Data Integration Service fails to delete .bin and .dat files in the disTemp directory after you view the execution plan for a mapping that runs in a Hadoop environment.
When you use Sqoop and define a join condition in the custom query, the mapping fails. (457397)
A mapping that reads data from Netezza fails on the Blaze engine when all of the following conditions are true:
Sqoop mappings that write data to a target fail on Tez.
The target contains duplicate data when map-side join is enabled and a full outer or detail outer join is selected in a Joiner transformation.
Mappings on the Blaze engine fail at run time with a validation error if the data types in the Hive table do not match the data types in the physical data object used as a Source or Target transformation in the mapping.
Memory consumption might impact the performance of an active Java transformation included in a mapping run on the Spark engine.
When you configure Big Data Management for a MapR cluster, the Hadoop Configuration Manager requests an hdfs-site.xml file that is not available on the MapR cluster.
Some edits to the logical data object might corrupt the virtual table logic in a deployed SQL data service. Run-time components that use the SQL data service might fail.
The Hadoop Configuration Manager fails to configure Big Data Management for the MapR Hadoop distribution if the hdfs-site.xml file is not available on the MapR cluster.
The Spark engine fails mappings when the mapping applies an LTRIM or RTRIM function to a column with null values or when the function trims the whole string.
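The expected behavior that the Spark engine mishandles can be sketched in Python. This is a hypothetical illustration of standard trim semantics (the function names `ltrim` and `rtrim` mirror the expression functions but are not Informatica's implementation): a null input should pass through as null, and trimming every character of a string should yield an empty string, not a failed mapping.

```python
def ltrim(value, trim_set=" "):
    """Sketch of LTRIM semantics: remove leading characters in trim_set.

    A null input returns null rather than raising an error, which is one
    of the two cases this known issue describes.
    """
    if value is None:
        return None
    return value.lstrip(trim_set)


def rtrim(value, trim_set=" "):
    """Sketch of RTRIM semantics: remove trailing characters in trim_set."""
    if value is None:
        return None
    return value.rstrip(trim_set)
```

Under these semantics, a null column value trims to null, and a string made up entirely of trim characters trims to an empty string; the known issue is that the Spark engine fails the mapping in exactly these two cases.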
When a mapping runs in an Apache Hive environment, the mapping statistics do not record the correct number of source rows. The mapping reports the number of rows as double the correct number. The issue is visible in the Monitoring tool and in the Mapping task output in a workflow.
When a mapping runs in an Apache Hive environment, the mapping statistics do not record the correct number of target rows. The mapping reports the number of rows as 0. The issue is visible in the Monitoring tool and in the Mapping task output in a workflow.