Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for Cassandra JDBC
  6. PowerExchange for DataSift
  7. PowerExchange for Facebook
  8. PowerExchange for Google Analytics
  9. PowerExchange for Google BigQuery
  10. PowerExchange for Google Cloud Storage
  11. PowerExchange for Greenplum
  12. PowerExchange for HBase
  13. PowerExchange for HDFS
  14. PowerExchange for Hive
  15. PowerExchange for JD Edwards EnterpriseOne
  16. PowerExchange for LDAP
  17. PowerExchange for LinkedIn
  18. PowerExchange for MapR-DB
  19. PowerExchange for Microsoft Azure Blob Storage
  20. PowerExchange for Microsoft Azure Cosmos DB SQL API
  21. PowerExchange for Microsoft Azure Data Lake Store
  22. PowerExchange for Microsoft Azure SQL Data Warehouse
  23. PowerExchange for Microsoft Dynamics CRM
  24. PowerExchange for MongoDB
  25. PowerExchange for Netezza
  26. PowerExchange for Salesforce
  27. PowerExchange for SAP NetWeaver
  28. PowerExchange for Snowflake
  29. PowerExchange for Tableau
  30. PowerExchange for Tableau V3
  31. PowerExchange for Teradata Parallel Transporter API
  32. PowerExchange for Twitter
  33. PowerExchange for Web Content-Kapow Katalyst
  34. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for Google Cloud Storage Known Limitations (10.2.2)

The following known limitations apply:

Bug: OCON-18427
Description: When you run a mapping on the Spark engine to write data to a Google Cloud Storage flat file target and the data size is greater than 750 MB, the mapping hangs indefinitely with the following error:
ERROR AbstractGoogleAsyncWriteChannel: Exception not convertible into handled response java.io.IOException: Write end dead
Workaround: Set the following property in the Spark Execution Parameters of the Hadoop connection configured for the Spark engine:
spark.executor.instances=1
When you run the mapping on a single Spark executor instance, you might experience performance degradation.
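
The workaround property is a standard Spark setting rather than anything specific to the adapter. As a minimal sketch only, assuming a standalone PySpark session outside the Informatica Hadoop connection (where the release notes say to set the property), the following shows the same property being applied and read back; the application name is hypothetical:

# Illustration only: in Informatica, set spark.executor.instances=1 in the
# Spark Execution Parameters of the Hadoop connection. This standalone
# PySpark sketch just demonstrates the standard Spark property itself.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gcs-write-workaround-check")    # hypothetical name
    .config("spark.executor.instances", "1")  # single executor instance
    .getOrCreate()
)

# Read back the effective value of the property.
print(spark.conf.get("spark.executor.instances"))  # expected: "1"
spark.stop()

Pinning the job to a single executor serializes the write path, which is consistent with the performance warning above.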

Bug: OCON-17932
Description: When you specify the Google Cloud Storage Path in the gs://<bucket name> format and run a mapping on the Spark engine to write data to a Google Cloud Storage target, the mapping fails.
Workaround: Specify the Google Cloud Storage Path in the following format:
gs://<bucket name>/<folder_name>
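
In other words, the path must include at least one folder after the bucket name. As a minimal sketch, assuming you want a pre-flight check in your own tooling, the hypothetical helper below rejects bucket-only paths; neither the function nor its name is part of the adapter:

# Hypothetical pre-flight check, not part of PowerExchange: verify that a
# Google Cloud Storage path follows gs://<bucket name>/<folder_name>
# rather than the unsupported gs://<bucket name> form.
def has_folder(gcs_path: str) -> bool:
    prefix = "gs://"
    if not gcs_path.startswith(prefix):
        raise ValueError(f"not a Google Cloud Storage path: {gcs_path}")
    bucket, _, rest = gcs_path[len(prefix):].partition("/")
    return bool(bucket) and bool(rest.strip("/"))

print(has_folder("gs://my-bucket"))         # False: mapping would fail
print(has_folder("gs://my-bucket/output"))  # True: supported format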

Bug: OCON-17842
Description: When you use the Create Target option to create a Google Cloud Storage target and select Flat as the Resource Format, fields are not propagated to the target.
Workaround: Create the fields manually in the target file and run the mapping.