Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for Cassandra JDBC
  6. PowerExchange for DataSift
  7. PowerExchange for Facebook
  8. PowerExchange for Google Analytics
  9. PowerExchange for Google BigQuery
  10. PowerExchange for Google Cloud Spanner
  11. PowerExchange for Google Cloud Storage
  12. PowerExchange for Greenplum
  13. PowerExchange for HBase
  14. PowerExchange for HDFS (10.4.1)
  15. PowerExchange for HDFS (10.4.0)
  16. PowerExchange for Hive
  17. PowerExchange for JD Edwards EnterpriseOne
  18. PowerExchange for JDBC V2
  19. PowerExchange for LDAP
  20. PowerExchange for LinkedIn
  21. PowerExchange for MapR-DB
  22. PowerExchange for Microsoft Azure Blob Storage
  23. PowerExchange for Microsoft Azure Cosmos DB SQL API
  24. PowerExchange for Microsoft Azure Data Lake Storage Gen1
  25. PowerExchange for Microsoft Azure Data Lake Storage Gen2
  26. PowerExchange for Microsoft Azure SQL Data Warehouse
  27. PowerExchange for Microsoft Dynamics CRM
  28. PowerExchange for MongoDB
  29. PowerExchange for Netezza
  30. PowerExchange for OData
  31. PowerExchange for Salesforce
  32. PowerExchange for Salesforce Marketing Cloud
  33. PowerExchange for SAP NetWeaver
  34. PowerExchange for Snowflake
  35. PowerExchange for Tableau
  36. PowerExchange for Tableau V3
  37. PowerExchange for Teradata Parallel Transporter API
  38. PowerExchange for Twitter
  39. PowerExchange for Web Content-Kapow Katalyst
  40. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for Amazon Redshift Known Limitations (10.4.0)

The following issues are known limitations. Each entry lists the bug number followed by its description:
OCON-23228
When you run a mapping that creates an Amazon Redshift target at run time, and the column names from the output ports of a transformation in the pipeline contain a mix of uppercase and lowercase characters, the mapping fails with a null pointer exception.
Workaround: Use an Expression or Case Converter transformation to convert the column names to lowercase before you select Create Target.
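To make the workaround's intent concrete, here is a minimal sketch (outside Informatica, in Python) of normalizing mixed-case column names to lowercase before target creation. The function name and collision check are illustrative assumptions, not part of the product:

```python
# Hypothetical illustration of the workaround's intent: normalize
# mixed-case column names to lowercase before the target is created.
def normalize_columns(columns):
    """Return lowercase column names, flagging any name collisions."""
    lowered = [c.lower() for c in columns]
    if len(set(lowered)) != len(lowered):
        # Two names that differ only by case would collide after lowering.
        raise ValueError("lowercasing produced duplicate column names")
    return lowered

print(normalize_columns(["OrderID", "CustomerName", "total"]))
```

In a mapping, the same renaming would be done on the transformation's output ports rather than in code.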
OCON-10209
When you use the MapR distribution, an Amazon Redshift mapping fails on the Spark engine when it reads from or writes to an Amazon Redshift cluster that has Version 4 authentication, with the following error message:
com.amazonaws.services.s3.model.AmazonS3Exception:
OCON-9834
When you use the Hortonworks 2.6 distribution, an Amazon Redshift mapping fails on the Spark engine when it reads from or writes to an Amazon Redshift cluster that has Version 4 authentication, with the following error message:
Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 9BDDEEB8241688A2)
OCON-9827
When you use the IBM BigInsights 4.2 distribution, an Amazon Redshift mapping fails on the Spark engine when it reads from or writes to an Amazon Redshift cluster that has Version 4 authentication, with the following error message:
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 5EEB36DFAC18DE3B, AWS Error Code: null, AWS Error Message: Bad Request
OCON-9663
When you run an Amazon Redshift mapping to read or write data, the Stop on Errors property does not work.
OCON-8022
If you import an Amazon Redshift table that has a single quote (') in the column name, the mapping fails with the following error message:
[LDTM_0072] [Amazon](500051) ERROR processing query/statement. Error: Parsing failed, Query: unload ('SELECT "adpqa"."sq_col"."id'" FROM "adpqa"."sq_col"') TO 's3://infa.qa.bucket/ 0b0ad503-1c2c-4514-95ac-85a5adb71b3b1489385038407/sq_col_' credentials 'aws_access_key_id=********;aws_secret_access_key=********' ESCAPE DELIMITER ','
OCON-7965
When you run an Amazon Redshift mapping on the Blaze engine to read data from or write data to an Amazon Redshift cluster that requires Version 4 authentication, the mapping fails. This issue occurs if you use the Hortonworks 2.3 distribution.
OCON-7909
When you run an Amazon Redshift mapping on the Blaze engine to read data from or write data to an Amazon Redshift cluster that requires Version 4 authentication, the mapping fails. This issue occurs if you use the MapR 5.2 distribution.
OCON-7322
If you import an Amazon Redshift table that has a single quote (') or a backslash (\) in the table name, the read and write operations fail.
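Both OCON-8022 and OCON-7322 involve identifiers containing characters that break the generated UNLOAD statement. A hypothetical pre-check (not part of PowerExchange) that rejects such table or column names before import might look like this:

```python
# Hypothetical pre-check: reject table or column names containing
# characters known to break the generated UNLOAD/COPY statements
# (single quote, backslash), per OCON-8022 and OCON-7322.
UNSAFE_CHARS = {"'", "\\"}

def is_safe_identifier(name):
    """Return True if the identifier contains no known-unsafe characters."""
    return not any(ch in name for ch in UNSAFE_CHARS)

print(is_safe_identifier("sq_col"))      # True
print(is_safe_identifier("sq_col'id"))   # False
```

Renaming the offending table or column in Redshift avoids the failure entirely.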
OCON-6929
If you do not connect all the ports in an Amazon Redshift mapping and run the mapping on the Hive engine, the mapping fails with the following error:
java.lang.RuntimeException [HIVE_1070]The Integration Service failed to run Hive query [exec3_query_2] for task [exec3] due to following error: Hive error code [10,044] , Hive message [FAILED: SemanticException [Error 10044]
OCON-6921
When you run an Amazon Redshift mapping that contains a timestamp field in the native environment, the Data Integration Service truncates the microsecond values to milliseconds.
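The effect of that truncation can be sketched as follows (an illustration of the behavior described above, not the service's actual code): everything below millisecond precision is dropped.

```python
from datetime import datetime

# Illustration of the truncation described above: a timestamp with
# microsecond precision keeps only whole milliseconds.
def truncate_to_millis(ts):
    return ts.replace(microsecond=(ts.microsecond // 1000) * 1000)

ts = datetime(2020, 1, 15, 10, 30, 45, 123456)
print(truncate_to_millis(ts).microsecond)  # 123000
```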
OCON-6883
When you use the IBM BigInsights 4.2 distribution, an Amazon Redshift mapping fails on the Blaze engine when it reads from or writes to an Amazon Redshift cluster that has Version 4 authentication.
OCON-6785
When the Amazon Redshift source contains both double quotes (") and the delimiter you specified in the mapping, double quotes are truncated in the target. Also, the escape character is retained in the target.
OCON-6583
If you turn the Parallel option off in the unload command and run an Amazon Redshift mapping on the Blaze engine, not all rows from the source are written to the Amazon Redshift target even though the mapping runs successfully.
OCON-6505
If you specify user impersonation in a Hadoop Connection and run an Amazon Redshift mapping on the Hive engine, no data is written to the target even though the mapping runs successfully.
OCON-6346
When you run an Amazon Redshift mapping on the Blaze engine, the success and error files are not generated.
OCON-6266
When you run an Amazon Redshift mapping that compresses the staging files on the Blaze engine, the mapping fails and the staging file compression is ignored.
OCON-6260
When you run an Amazon Redshift mapping on the Blaze engine, the tasklet log does not display the row statistics even if the mapping runs successfully.
OCON-6252
When you run a mapping on the Blaze engine, the Real and Double data type values are rounded off.
Workaround: Use the Numeric data type in place of the Real and Double data types.
OCON-1297
When you configure the following attributes and run an Amazon Redshift mapping in the Hadoop environment, the mapping might fail based on the engine selected for mapping execution:
Read Operation Attributes:
  • S3 Client Encryption
  • Staging Directory Location
Write Operation Attributes:
  • Enable Compression
  • CopyOptions Property File
  • Null value for CHAR and VARCHAR data types
  • S3 Server Side Encryption
  • S3 Client Side Encryption
  • Staging Directory Location
  • Success File Directory
  • Error File Directory
OCON-1275
A mapping with more than one Amazon Redshift object fails in the Hadoop run-time environment for the MapR distribution.
OCON-361
For a Hadoop cluster that uses Kerberos authentication, an Amazon Redshift mapping fails on the Hive engine.
