PowerExchange Adapters for Informatica 10.2.2
When you use an ODBC data source to connect to Microsoft Azure SQL Data Warehouse and the imported metadata contains Unicode or special characters, the mapping fails.
When you run a mapping that reads or writes columns of the binary or varbinary data type, the mapping fails with the following error on the Databricks Spark engine:
<DTF-ThreadGroup-3-thread-43> WARNING: [LDTM_6009] Skipping translator [ImfDatabricksTargetTxTranslator] because it encountered the following exception: (write, com.informatica.platform.ldtm.flex.domains.databricks.BatchDatabricksEngine@4248e4bc) = java.lang.NullPointerException.
When you use the default values for the field delimiter or quote character in the advanced source or target properties, mappings fail with incompatible data type or precision errors, or the Data Integration Service writes incorrect data to the target.
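As a general illustration of why delimiter and quote settings matter (this sketch uses Python's csv module, not Informatica, and the sample row is hypothetical), parsing the same record with the wrong quote character splits a quoted field and shifts every column after it, which is the kind of mismatch that surfaces as data type or precision errors downstream:

```python
import csv
import io

# A row whose second field contains the delimiter inside quotes.
raw = 'id,"last,first",amount\n'

# Correct quote character: the embedded comma stays inside one field.
ok = next(csv.reader(io.StringIO(raw), delimiter=',', quotechar='"'))

# Wrong quote character: the quoted field is split in two and the
# column count no longer matches the expected schema.
bad = next(csv.reader(io.StringIO(raw), delimiter=',', quotechar="'"))

print(ok)   # ['id', 'last,first', 'amount']
print(bad)  # ['id', '"last', 'first"', 'amount']
```

Explicitly setting both the delimiter and the quote character to match the data, rather than relying on defaults, avoids this class of misparse.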
When you run a mapping that contains unconnected ports on the Hive engine, the mapping might fail or data corruption might occur.
When you upsert or update data to Microsoft Azure SQL Data Warehouse and more than one column in the source table contains the same value as the target column on which the primary key is defined, the Data Integration Service updates data incorrectly.
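The underlying pitfall is general to SQL updates, not specific to this adapter. The following hypothetical sketch (SQLite, with made-up table and column names) shows how an update keyed on a value that appears in more than one source column makes the join ambiguous, so more than one source row qualifies for the same target row and the value written is effectively arbitrary:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tgt (pk INTEGER PRIMARY KEY, val TEXT)")
con.execute("CREATE TABLE src (a INTEGER, b INTEGER, val TEXT)")
con.execute("INSERT INTO tgt VALUES (1, 'old')")

# Both source columns a and b carry the target key value 1,
# so each source row matches the target row through some column.
con.execute("INSERT INTO src VALUES (1, 9, 'new')")
con.execute("INSERT INTO src VALUES (9, 1, 'other')")

# Two source rows qualify for the same target row; the scalar
# subquery silently picks one of them, so the result is not
# well defined by the data.
con.execute("""
    UPDATE tgt
    SET val = (SELECT val FROM src
               WHERE src.a = tgt.pk OR src.b = tgt.pk)
""")
result = con.execute("SELECT val FROM tgt WHERE pk = 1").fetchone()[0]
print(result)  # one of 'new' or 'other', chosen arbitrarily
```

Ensuring that the matching value appears in exactly one source column per target key removes the ambiguity.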
When you run a mapping on the Hive engine to read data from or write data to Microsoft Azure SQL Data Warehouse, the intermediate files are downloaded to the staging directory even if you cancel the mapping.
When an Azure table contains a bad record, the Data Integration Service fails the mapping instead of rejecting the bad record.