Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for Google BigQuery
  6. PowerExchange for Google Cloud Storage
  7. PowerExchange for Greenplum
  8. PowerExchange for HBase
  9. PowerExchange for HDFS
  10. PowerExchange for Hive
  11. PowerExchange for JDBC V2
  12. PowerExchange for JD Edwards EnterpriseOne
  13. PowerExchange for Kudu
  14. PowerExchange for LDAP
  15. PowerExchange for Microsoft Azure Blob Storage
  16. PowerExchange for Microsoft Azure Cosmos DB SQL API
  17. PowerExchange for Microsoft Azure Data Lake Storage Gen1
  18. PowerExchange for Microsoft Azure Data Lake Storage Gen2
  19. PowerExchange for Microsoft Azure SQL Data Warehouse
  20. PowerExchange for Microsoft Dynamics CRM
  21. PowerExchange for MongoDB
  22. PowerExchange for Netezza
  23. PowerExchange for OData
  24. PowerExchange for Salesforce
  25. PowerExchange for SAP NetWeaver
  26. PowerExchange for Snowflake
  27. PowerExchange for Teradata
  28. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for Microsoft Azure Data Lake Storage Gen2 (10.5)

Fixed Issues

The following table describes fixed issues:

Bug          Description
OCON-24982   When you use the FileName port in a complex file target and run a mapping with mapping flow enabled in the native environment, the Data Integration Service does not create the folder structure as expected.
OCON-24923   When you read an ORC file on the Spark engine or the Databricks Spark engine, the Data Integration Service fails to read the special characters in the ORC file.
OCON-24734   When you read a JSON file in the native environment, the Data Integration Service incorrectly writes the values of the double data type to the target in exponential format. (An illustrative sketch of this symptom appears after the table.)
OCON-23135   When you import a flat file data object with the default precision and update the precision value at run time, the Data Integration Service uses the default value instead of the value provided at run time.
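For context on OCON-24734, the following minimal Python sketch (not Informatica code; the field name "amount" and its value are invented for illustration) shows how a large double read from JSON renders in exponential notation by default, and how an explicit fixed-point format keeps plain decimal notation:

    import json

    # Hypothetical record standing in for the JSON source; "amount" is a
    # double-typed field (name and value invented for illustration).
    record = json.loads('{"amount": 123456789012345678.9}')
    value = record["amount"]

    # Default string conversion of a large double uses exponential
    # notation -- the shape of the OCON-24734 symptom:
    print(str(value))      # 1.2345678901234568e+17

    # An explicit fixed-point format keeps plain decimal notation:
    print(f"{value:.1f}")  # 123456789012345680.0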

Known Issues

The following table describes known issues:

Bug          Description
BDM-28877    When you select No quotes in Text qualifier and specify the escape character in the flat file schema format properties, extra columns are added while previewing the data. Workaround: Select Single quotes or Double quotes in Text qualifier to import the object. After the object import, you can select No quotes in the Schema tab.
OCON-24963   When you run a mapping to write a complex file to a Microsoft Azure Data Lake Storage Gen2 target on the Databricks Spark engine, the Data Integration Service does not overwrite all the target files of the previous mapping in the target directory. This issue occurs for all target files except the partition file.
OCON-23033   When you import an object from subdirectories whose names contain a space, data preview fails.
OCON-21916   When you create a pass-through mapping to read data from a flat file and write data to another flat file target and enable run-time linking, '#' is added to the first column name in the target file the first time you run the mapping on the Spark engine. The mapping fails at subsequent runs.
OCON-20605   When you run a mapping in the native environment to read a flat file that has Unicode characters, a space, null values, single quotes, or a value that starts with a dollar sign, the Data Integration Service adds double quotes to the values when writing data to the target. (An illustrative sketch of this quoting behavior appears after the table.)
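To illustrate the OCON-20605 symptom, here is a minimal Python sketch (not Informatica code; the sample values are invented) using the standard csv module. A writer configured with QUOTE_ALL wraps every value in double quotes, which is the shape of the reported behavior, while QUOTE_MINIMAL quotes only when the delimiter or quote character forces it:

    import csv
    import io

    # Hypothetical row standing in for the flat file data: a Unicode
    # value, a value with a space, an empty (null-like) value, a value
    # with a single quote, and a value starting with a dollar sign.
    rows = [["café", "two words", "", "it's", "$100"]]

    # QUOTE_ALL wraps every field in double quotes, matching the
    # OCON-20605 symptom:
    buf = io.StringIO()
    csv.writer(buf, quoting=csv.QUOTE_ALL).writerows(rows)
    print(buf.getvalue())  # "café","two words","","it's","$100"

    # QUOTE_MINIMAL quotes only fields that contain the delimiter,
    # the quote character, or a line break:
    buf = io.StringIO()
    csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerows(rows)
    print(buf.getvalue())  # café,two words,,it's,$100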

Third-Party Known Issues

The following table describes third-party known issues:

Bug          Description
OCON-18625   You cannot configure the Azure Government endpoints in mappings that run on the Databricks Spark engine. (A sketch of how the Azure Government endpoint differs from the commercial endpoint appears after the table.)
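For background on OCON-18625, the Azure Government cloud uses a different ADLS Gen2 endpoint suffix than the commercial Azure cloud (dfs.core.usgovcloudapi.net rather than dfs.core.windows.net). The following minimal Python sketch (the account and filesystem names are hypothetical) shows how the resulting abfss URIs differ:

    # Hypothetical account and filesystem names; the suffixes are the
    # documented Azure storage endpoint suffixes for the commercial
    # and US Government clouds.
    ACCOUNT = "myadlsaccount"
    FILESYSTEM = "data"

    COMMERCIAL_SUFFIX = "dfs.core.windows.net"
    GOVERNMENT_SUFFIX = "dfs.core.usgovcloudapi.net"

    def adls_gen2_uri(account: str, filesystem: str, suffix: str) -> str:
        """Build an abfss:// URI for an ADLS Gen2 filesystem."""
        return f"abfss://{filesystem}@{account}.{suffix}/"

    print(adls_gen2_uri(ACCOUNT, FILESYSTEM, COMMERCIAL_SUFFIX))
    # abfss://data@myadlsaccount.dfs.core.windows.net/
    print(adls_gen2_uri(ACCOUNT, FILESYSTEM, GOVERNMENT_SUFFIX))
    # abfss://data@myadlsaccount.dfs.core.usgovcloudapi.net/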
