Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for Cassandra JDBC
  6. PowerExchange for DataSift
  7. PowerExchange for Facebook
  8. PowerExchange for Google Analytics
  9. PowerExchange for Google BigQuery
  10. PowerExchange for Google Cloud Spanner
  11. PowerExchange for Google Cloud Storage
  12. PowerExchange for Greenplum
  13. PowerExchange for HBase
  14. PowerExchange for HDFS (10.4.1)
  15. PowerExchange for HDFS (10.4.0)
  16. PowerExchange for Hive
  17. PowerExchange for JD Edwards EnterpriseOne
  18. PowerExchange for JDBC V2
  19. PowerExchange for LDAP
  20. PowerExchange for LinkedIn
  21. PowerExchange for MapR-DB
  22. PowerExchange for Microsoft Azure Blob Storage
  23. PowerExchange for Microsoft Azure Cosmos DB SQL API
  24. PowerExchange for Microsoft Azure Data Lake Storage Gen1
  25. PowerExchange for Microsoft Azure Data Lake Storage Gen2
  26. PowerExchange for Microsoft Azure SQL Data Warehouse
  27. PowerExchange for Microsoft Dynamics CRM
  28. PowerExchange for MongoDB
  29. PowerExchange for Netezza
  30. PowerExchange for OData
  31. PowerExchange for Salesforce
  32. PowerExchange for Salesforce Marketing Cloud
  33. PowerExchange for SAP NetWeaver
  34. PowerExchange for Snowflake
  35. PowerExchange for Tableau
  36. PowerExchange for Tableau V3
  37. PowerExchange for Teradata Parallel Transporter API
  38. PowerExchange for Twitter
  39. PowerExchange for Web Content-Kapow Katalyst
  40. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for Microsoft Azure Data Lake Storage Gen2 Known Limitations (10.4.1)

The following table describes known limitations:

Bug: OCON-25687
Description: When you run a mapping to read data from a Parquet file on the Spark engine, the mapping fails with the following error:
Spark task failed with the following error: [com.informatica.sdk.dtm.ExecutionException
This issue occurs when you use the Cloudera CDP 7.1 distribution.

Bug: OCON-25683
Description: When you set the -DINFA_HADOOP_DIST_DIR=hadoop\<Distro> option in the developerCore.ini file and import a Parquet file, the format of the imported metadata differs based on the distribution. For Cloudera CDP 7.1, the metadata is imported as string; for other supported distributions, the metadata is imported as UTF8.

Bug: OCON-24963
Description: When you run a mapping to write a complex file to a Microsoft Azure Data Lake Storage Gen2 target on the Databricks Spark engine, the Data Integration Service does not overwrite all the target files of the previous mapping in the target directory. This issue occurs for all target files except the partition file.

Bug: OCON-24923
Description: When you read an ORC file on the Spark engine or the Databricks Spark engine, the Data Integration Service fails to read the special characters in the ORC file.

Bug: OCON-24734
Description: When you read a JSON file in the native environment, the Data Integration Service incorrectly writes the values of the double data type to the target in exponential format.

Bug: OCON-23135
Description: When you import a flat file data object with the default precision and update the precision value at run time, the Data Integration Service uses the default value instead of the value provided at run time.
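For reference, the OCON-25683 entry above concerns a JVM option added to the developerCore.ini file of the Developer tool. A minimal sketch of the relevant line is shown below; the <Distro> placeholder is not a literal value — replace it with the distribution directory that matches your cluster, as described in your installation's configuration documentation:

```
-DINFA_HADOOP_DIST_DIR=hadoop\<Distro>
```

After editing developerCore.ini, restart the Developer tool so the option takes effect.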
