Metadata Manager 10.1.1 HotFix 1
When you run TDCH mappings on the Hive engine to write time data to a Teradata target, the nanosecond part is truncated.
The Data Integration Service cannot load and run deployed applications when the MRS failover does not complete successfully during a failover of the DIS on a grid.
Mappings using an HBase connection and the Spark run-time engine fail on a Cloudera 5.11 cluster with Kerberos authentication enabled.
Workaround: Log in to each node of the Cloudera cluster and use the kdestroy command to delete the Kerberos credentials cache. Then re-run the mapping.
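The workaround above can be sketched as a small shell loop. This is a hedged illustration only: the node hostnames are placeholders, and the `ssh` commands are echoed rather than executed so you can review them before running them for real (drop the `echo` to execute).

```shell
# Sketch of the documented workaround: on each Cloudera cluster node,
# delete the Kerberos credentials cache with kdestroy, then re-run the mapping.
# NODES is a placeholder assumption -- replace with your actual cluster nodes.
NODES="node1.example.com node2.example.com"
for node in $NODES; do
  # kdestroy removes the default credentials cache on that node;
  # use 'kdestroy -A' to remove all caches in a cache collection.
  echo ssh "$node" kdestroy    # remove 'echo' to actually run the command
done
```

After the caches are cleared on every node, re-run the failed mapping so that fresh Kerberos credentials are obtained.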
Mappings fail when they use the Spark engine to run on a Hortonworks 2.5 SUSE cluster where Transparent Encryption is enabled on HDFS and the Hive warehouse directory specified in the Hadoop connection is in an encrypted zone.
Mappings that are configured to use Snappy compression fail on a Hortonworks or Cloudera SUSE Linux cluster where Kerberos authentication is enabled.