Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for Cassandra JDBC
  6. PowerExchange for DataSift
  7. PowerExchange for Facebook
  8. PowerExchange for Google Analytics
  9. PowerExchange for Google BigQuery
  10. PowerExchange for Google Cloud Spanner
  11. PowerExchange for Google Cloud Storage
  12. PowerExchange for Greenplum
  13. PowerExchange for HBase
  14. PowerExchange for HDFS
  15. PowerExchange for Hive
  16. PowerExchange for JD Edwards EnterpriseOne
  17. PowerExchange for JDBC V2
  18. PowerExchange for LDAP
  19. PowerExchange for LinkedIn
  20. PowerExchange for MapR-DB
  21. PowerExchange for Microsoft Azure Blob Storage
  22. PowerExchange for Microsoft Azure Cosmos DB SQL API
  23. PowerExchange for Microsoft Azure Data Lake Storage Gen1
  24. PowerExchange for Microsoft Azure Data Lake Storage Gen2
  25. PowerExchange for Microsoft Azure SQL Data Warehouse
  26. PowerExchange for Microsoft Dynamics CRM
  27. PowerExchange for MongoDB
  28. PowerExchange for Netezza
  29. PowerExchange for OData
  30. PowerExchange for Salesforce
  31. PowerExchange for Salesforce Marketing Cloud
  32. PowerExchange for SAP NetWeaver
  33. PowerExchange for Snowflake
  34. PowerExchange for Tableau
  35. PowerExchange for Tableau V3
  36. PowerExchange for Teradata Parallel Transporter API
  37. PowerExchange for Twitter
  38. PowerExchange for Web Content-Kapow Katalyst
  39. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes (10.4.1.2)

PowerExchange for Google Cloud Storage Fixed Issues (10.4.0)

Review the Release Notes of previous releases for information about issues fixed in those releases.
The following table describes fixed issues:

Bug: OCON-18427
Description: When you run a mapping on the Spark engine to write data to a Google Cloud Storage flat file target and the data size is greater than 750 MB, the mapping hangs indefinitely with the following error:
ERROR AbstractGoogleAsyncWriteChannel: Exception not convertible into handled response java.io.IOException: Write end dead
Workaround: Set the following property in the Spark Execution Parameters of the Hadoop connection configured for the Spark engine:
spark.executor.instances=1
When you run the mapping on a single Spark executor instance, you may experience performance degradation.

Bug: OCON-17842
Description: When you use the Create Target option to create a Google Cloud Storage target and select Flat as the Resource Format, fields are not propagated to the target.
Workaround: Create the fields manually in the target file, and then run the mapping.
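As a sketch of the OCON-18427 workaround, the Spark Execution Parameters value reduces to a single standard Spark property in key=value form (the property name is Spark's own; the exact entry format of the connection field may vary by product version):

```
spark.executor.instances=1
```

Because this caps the mapping at a single executor, large writes lose parallelism, which is the performance degradation the note warns about.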
