Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for Google Analytics
  6. PowerExchange for Google BigQuery
  7. PowerExchange for Google Cloud Spanner
  8. PowerExchange for Google Cloud Storage
  9. PowerExchange for Greenplum
  10. PowerExchange for HBase
  11. PowerExchange for HDFS
  12. PowerExchange for Hive
  13. PowerExchange for JDBC V2
  14. PowerExchange for JD Edwards EnterpriseOne
  15. PowerExchange for Kudu
  16. PowerExchange for LDAP
  17. PowerExchange for Microsoft Azure Blob Storage
  18. PowerExchange for Microsoft Azure Cosmos DB SQL API
  19. PowerExchange for Microsoft Azure Data Lake Storage Gen1
  20. PowerExchange for Microsoft Azure Data Lake Storage Gen2
  21. PowerExchange for Microsoft Azure SQL Data Warehouse
  22. PowerExchange for Microsoft Dynamics CRM
  23. PowerExchange for MongoDB
  24. PowerExchange for Netezza
  25. PowerExchange for OData
  26. PowerExchange for Salesforce
  27. PowerExchange for SAP NetWeaver
  28. PowerExchange for Snowflake
  29. PowerExchange for Tableau
  30. PowerExchange for Tableau V3
  31. PowerExchange for Teradata Parallel Transporter API
  32. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for Microsoft Azure Data Lake Storage Gen2 (10.5)

Fixed Issues

The following table describes fixed issues:

Bug         Description
OCON-24982  When you use the FileName port in a complex file target and run a mapping with mapping flow enabled in the native environment, the Data Integration Service does not create the folder structure as expected.
OCON-24923  When you read an ORC file on the Spark engine or the Databricks Spark engine, the Data Integration Service fails to read the special characters in the ORC file.
OCON-24734  When you read a JSON file in the native environment, the Data Integration Service incorrectly writes values of the double data type to the target in exponential format (see the sketch after this table).
OCON-23135  When you import a flat file data object with the default precision and update the precision value at run time, the Data Integration Service uses the default value instead of the value that you provide at run time.
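The OCON-24734 behavior reflects a common serialization pitfall: many JSON and flat-file writers fall back to exponential notation for double values of very large or very small magnitude. The following Python sketch is a generic illustration of the pitfall and of one way to force fixed-point output; it is not Informatica code, and normalize_doubles is a hypothetical helper name.

    import json
    from decimal import Decimal

    # Generic illustration (not the Data Integration Service writer):
    # large and tiny doubles often serialize in exponential notation.
    row = {"amount": 12345678901234567.0, "rate": 0.000001234}
    print(json.dumps(row))
    # {"amount": 1.2345678901234568e+16, "rate": 1.234e-06}

    def normalize_doubles(value):
        """Hypothetical helper: render floats in fixed-point notation."""
        if isinstance(value, float):
            # The 'f' format never uses the exponent form.
            return format(Decimal(repr(value)), "f")
        return value

    print({k: normalize_doubles(v) for k, v in row.items()})
    # {'amount': '12345678901234568', 'rate': '0.000001234'}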

Known Issues

The following table describes known issues:

Bug         Description
BDM-28877   When you select No quotes in Text qualifier and specify the escape character in the flat file schema format properties, extra columns are added when you preview the data (see the sketch after this table). Workaround: Select Single quotes or Double quotes in Text qualifier to import the object. After you import the object, you can select No quotes on the Schema tab.
OCON-24963  When you run a mapping to write a complex file to a Microsoft Azure Data Lake Storage Gen2 target on the Databricks Spark engine, the Data Integration Service does not overwrite all the target files of the previous mapping in the target directory. This issue occurs for all target files except the partition file.
OCON-23033  When you import an object from a subdirectory whose name contains a space, data preview fails.
OCON-21916  When you create a pass-through mapping that reads data from a flat file, writes data to another flat file target, and has run-time linking enabled, a '#' character is added to the first column name in the target file the first time you run the mapping on the Spark engine. The mapping fails on subsequent runs.
OCON-20605  When you run a mapping in the native environment to read a flat file that contains Unicode characters, a space, null values, single quotes, or a value that starts with a dollar sign, the Data Integration Service adds double quotes to the values when it writes the data to the target.
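The BDM-28877 symptom matches a general flat-file parsing pitfall: with quoting disabled, a delimiter inside a field is protected only by the escape character, so a parser that ignores the escape splits the record into extra columns. The following Python sketch illustrates the general behavior with the standard csv module; it is not the Data Integration Service parser.

    import csv
    import io

    # A field that contains the delimiter, protected only by an escape.
    data = 'id,comment\n1,value\\, with comma\n'

    # Parsing with no quoting and no escape handling splits the field,
    # which produces an extra column (the BDM-28877 symptom).
    naive = list(csv.reader(io.StringIO(data), quoting=csv.QUOTE_NONE))
    print(naive[1])    # ['1', 'value\\', ' with comma'] -> three columns

    # Honoring the escape character keeps the record at two columns.
    escaped = list(csv.reader(io.StringIO(data), quoting=csv.QUOTE_NONE,
                              escapechar='\\'))
    print(escaped[1])  # ['1', 'value, with comma']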

Third-Party Known Issues

The following table describes third-party known issues:

Bug         Description
OCON-18625  You cannot configure Azure Government endpoints in mappings that run on the Databricks Spark engine.
