PowerExchange Adapters for Informatica 10.2 HotFix 1
When you read data from Microsoft Azure SQL Data Warehouse and the table contains special characters, the mapping fails.
When you run a mapping on the Hive engine to read data from or write data to Microsoft Azure SQL Data Warehouse, the intermediate files are downloaded to the staging directory even if you cancel the mapping.
When you create a data object to read data from a large table in Microsoft Azure SQL Data Warehouse, preview the data, and set the Read up to how many rows field to 1000, the Data Integration Service downloads the entire table to the staging directory.
You cannot run a Microsoft Azure SQL Data Warehouse mapping on the Blaze engine.
The Hadoop job log does not display reader logs.
The Data Integration Service reads blank char, varchar, nchar, and nvarchar values from Microsoft Azure SQL Data Warehouse as null.
When a mapping fails or when you cancel an operation, the Data Integration Service does not delete the external table and the staging blob files. You must manually delete them, as in the cleanup sketch after this list.
When an Azure table contains a bad record, the Data Integration Service fails the mapping instead of rejecting the bad record.
You cannot run a mapping in the Hadoop environment to delete data.
When you run a mapping on the Hive engine, the Data Integration Service submits the job as the YARN user instead of as the Data Integration Service user.
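The following is a minimal sketch of the manual cleanup mentioned above for a failed or canceled mapping: drop the leftover external table in Azure SQL Data Warehouse and delete the leftover staging blobs. It is not part of the product; the connection strings, the external table name, and the container and blob prefix are hypothetical placeholders that you must replace with the values from your own environment.

```python
# Hypothetical cleanup after a failed or canceled mapping.
# All connection strings, table names, and blob prefixes below are
# placeholders, not values documented by the adapter.
import pyodbc
from azure.storage.blob import ContainerClient

SQLDW_CONN = ("DRIVER={ODBC Driver 17 for SQL Server};"
              "SERVER=yourserver.database.windows.net;"
              "DATABASE=yourdw;UID=youruser;PWD=yourpassword")
BLOB_CONN = ("DefaultEndpointsProtocol=https;AccountName=youraccount;"
             "AccountKey=yourkey;EndpointSuffix=core.windows.net")
STAGING_CONTAINER = "staging"               # placeholder container name
STAGING_PREFIX = "infa_staging/"            # placeholder blob prefix
EXTERNAL_TABLE = "[dbo].[infa_ext_stage]"   # placeholder external table

# Drop the leftover external table if it still exists.
with pyodbc.connect(SQLDW_CONN, autocommit=True) as conn:
    conn.execute(
        "IF OBJECT_ID('dbo.infa_ext_stage') IS NOT NULL "
        f"DROP EXTERNAL TABLE {EXTERNAL_TABLE}")

# Delete the leftover staging blobs under the staging prefix.
container = ContainerClient.from_connection_string(
    BLOB_CONN, STAGING_CONTAINER)
for blob in container.list_blobs(name_starts_with=STAGING_PREFIX):
    container.delete_blob(blob.name)
```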