Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for Cassandra JDBC
  6. PowerExchange for DataSift
  7. PowerExchange for Facebook
  8. PowerExchange for Google Analytics
  9. PowerExchange for Google BigQuery
  10. PowerExchange for Google Cloud Spanner
  11. PowerExchange for Google Cloud Storage
  12. PowerExchange for Greenplum
  13. PowerExchange for HBase
  14. PowerExchange for HDFS (10.4.1)
  15. PowerExchange for HDFS (10.4.0)
  16. PowerExchange for Hive
  17. PowerExchange for JD Edwards EnterpriseOne
  18. PowerExchange for JDBC V2
  19. PowerExchange for LDAP
  20. PowerExchange for LinkedIn
  21. PowerExchange for MapR-DB
  22. PowerExchange for Microsoft Azure Blob Storage
  23. PowerExchange for Microsoft Azure Cosmos DB SQL API
  24. PowerExchange for Microsoft Azure Data Lake Storage Gen1
  25. PowerExchange for Microsoft Azure Data Lake Storage Gen2
  26. PowerExchange for Microsoft Azure SQL Data Warehouse
  27. PowerExchange for Microsoft Dynamics CRM
  28. PowerExchange for MongoDB
  29. PowerExchange for Netezza
  30. PowerExchange for OData
  31. PowerExchange for Salesforce
  32. PowerExchange for Salesforce Marketing Cloud
  33. PowerExchange for SAP NetWeaver
  34. PowerExchange for Snowflake
  35. PowerExchange for Tableau
  36. PowerExchange for Tableau V3
  37. PowerExchange for Teradata Parallel Transporter API
  38. PowerExchange for Twitter
  39. PowerExchange for Web Content-Kapow Katalyst
  40. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for Microsoft Azure Blob Storage Known Limitations (10.4.0)

The following table describes known limitations:

Bug: BDM-19847
Description: For the write operation, when you run a mapping on the Spark engine and the folder path contains special characters, the Data Integration Service creates a new folder.

Bug: OCON-24942
Description: When you refresh the source or target schema at runtime in a dynamic mapping, the values that you specify for the delimiter, text qualifier, and escape character for a flat file are ignored and the default values are used instead. This issue leads to unexpected results in the target. A generic illustration of this behavior appears after this table.

Bug: OCON-22511
Description: When you read data from a Microsoft Azure SQL Data Warehouse source and use the Create Target option to create a Microsoft Azure Blob Storage target, the mapping fails if the Microsoft Azure Blob Storage connection uses SAS authentication.

Bug: OCON-20605
Description: When you run a mapping in the native environment to read a flat file that contains Unicode characters, spaces, null values, single quotes, or values that start with a dollar sign, the Data Integration Service encloses the values in double quotes when it writes the data to the target.

Bug: OCON-17642
Description: When you enable Mapping Flow in a mapping that reads data from a flat file source and writes to a flat file target, the mapping fails in the native environment with the following error:
java.lang.Exception: [MPSVCCMN_10094] The Mapping Service Module failed to run the job with ID [Ic2j9ASPEemTlSYmtVHPww] because of the following error: [EdtmExec_00007] Exception: /tmp/insertd29a7def_bb59_452d_8051_ea4b4630807b9132318161205585091.azb (No such file or directory)
Workaround: Remove the FileName field from the imported source object and rerun the mapping.

Bug: OCON-17443
Description: When you use the Create Target option to create a Microsoft Azure Blob Storage target and select Flat as the Resource Format, fields are not propagated to the target.
Workaround: Enable column projection, create the fields manually in the target file, and then run the mapping.

Bug: OCON-17082
Description: When you import an object from a subdirectory whose name contains a space, data preview fails.

Bug: OCON-12420
Description: When you read or write a blob that has special characters, the mapping fails on the Spark engine.

Bug: OCON-12352
Description: When a JSON file contains special characters, the Data Integration Service does not read the data correctly on the Spark engine.

Bug: OCON-12318
Description: The Data Integration Service adds an extra blank line at the end of the file when you read or write a flat file in the native environment or on the Spark engine.

Bug: OCON-10125
Description: When you read data from or write data to Microsoft Azure Blob Storage, the entire blob is downloaded to the staging directory even if you cancel the mapping.
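
The OCON-24942 limitation is easiest to see through a generic parsing analogy. The following sketch uses the Python standard library's csv module rather than any Informatica code, and the semicolon delimiter and tilde text qualifier are hypothetical non-default settings chosen only for illustration:

    import csv
    import io

    # One record from a hypothetical flat file written with a semicolon
    # delimiter and a tilde (~) text qualifier.
    row = "~Redmond~;WA;98052\n"

    # Parsed with the settings the file was written with: three clean fields.
    with_settings = next(csv.reader(io.StringIO(row), delimiter=";", quotechar="~"))
    print(with_settings)  # ['Redmond', 'WA', '98052']

    # Parsed with the default settings (comma delimiter, double-quote
    # qualifier), analogous to the runtime schema refresh falling back to
    # defaults: the record collapses into a single malformed field.
    with_defaults = next(csv.reader(io.StringIO(row)))
    print(with_defaults)  # ['~Redmond~;WA;98052']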
