Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for DataSift
  6. PowerExchange for Facebook
  7. PowerExchange for Greenplum
  8. PowerExchange for HBase
  9. PowerExchange for HDFS
  10. PowerExchange for Hive
  11. PowerExchange for JD Edwards EnterpriseOne
  12. PowerExchange for LDAP
  13. PowerExchange for LinkedIn
  14. PowerExchange for MapR-DB
  15. PowerExchange for Microsoft Azure Blob Storage
  16. PowerExchange for Microsoft Azure Data Lake Store
  17. PowerExchange for Microsoft Azure SQL Data Warehouse
  18. PowerExchange for Microsoft Dynamics CRM
  19. PowerExchange for MongoDB
  20. PowerExchange for Netezza
  21. PowerExchange for OData
  22. PowerExchange for Salesforce
  23. PowerExchange for SAP NetWeaver
  24. PowerExchange for Snowflake
  25. PowerExchange for Tableau
  26. PowerExchange for Teradata Parallel Transporter API
  27. PowerExchange for Twitter
  28. PowerExchange for Web Content-Kapow Katalyst
  29. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for Microsoft Azure Blob Storage Known Limitations (10.2.1)

The following table describes known limitations:

| Bug | Description |
| --- | --- |
| OCON-12469 | The same Parquet file has different sizes when written in the native environment and on the Spark engine. The Data Integration Service cannot read a Parquet file in the native environment if the file was written on the Spark engine. |
| OCON-12424 | A mapping fails on the Spark engine if the Blob Name Override or Blob Container Override field contains a space or a special character. |
| OCON-12420 | When you read or write a blob that has special characters, the mapping fails on the Spark engine. |
| OCON-12417 | When you run a mapping on the Spark engine to write to a folder path that contains special characters, the Data Integration Service creates a new folder. |
| OCON-12378 | When you run a mapping in the native environment and on the Spark engine to write an Avro file, the target files generated in the two modes have different sizes. The Data Integration Service compresses the target file with the snappy compression format when the mapping runs on the Spark engine. A sketch for checking the compression codec follows this table. |
| OCON-12352 | When a JSON file contains special characters, the Data Integration Service does not read the data correctly in Spark mode. |
| OCON-12351 | The Data Integration Service stops responding during data preview when an Avro or Parquet file contains bytes or bytes_array data types with NULL values. |
| OCON-12327 | The write operation fails for a flat file in the native environment when single or double quotes are selected as the text qualifier. |
| OCON-12318 | The Data Integration Service adds an extra blank line at the end of the file when you read or write a flat file in the native environment or in Spark mode. |
| OCON-10125 | When you read data from or write data to Microsoft Azure Blob Storage, the entire blob is downloaded to the staging directory even if you cancel the mapping. |
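For size discrepancies such as OCON-12469 and OCON-12378, it can help to confirm that the difference comes from the compression codec rather than from missing rows. The following is a minimal sketch, not part of the product documentation: it assumes PyArrow is installed, and `native_out.parquet` and `spark_out.parquet` are hypothetical paths to the two target files. It prints each file's row count and the compression codec of each column chunk so the outputs can be compared.

```python
# Minimal sketch: compare two Parquet outputs to see whether a size
# difference is explained by the compression codec rather than by
# missing rows. Assumes PyArrow is installed; the file paths used
# below are hypothetical.
import pyarrow.parquet as pq

def summarize(path):
    meta = pq.ParquetFile(path).metadata
    print(f"{path}: {meta.num_rows} rows, {meta.num_row_groups} row group(s)")
    # Report the codec and compressed size of each column chunk
    # in the first row group.
    group = meta.row_group(0)
    for i in range(group.num_columns):
        col = group.column(i)
        print(f"  column {col.path_in_schema}: codec={col.compression}, "
              f"compressed={col.total_compressed_size} bytes")

# Hypothetical target files written in the native environment and on Spark.
summarize("native_out.parquet")
summarize("spark_out.parquet")
```

If the row counts match and only the codec differs (for example, UNCOMPRESSED in one file and SNAPPY in the other), the size difference is expected behavior rather than data loss.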