Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for Google BigQuery
  6. PowerExchange for Google Cloud Storage
  7. PowerExchange for Greenplum
  8. PowerExchange for HBase
  9. PowerExchange for HDFS
  10. PowerExchange for Hive
  11. PowerExchange for JDBC V2
  12. PowerExchange for JD Edwards EnterpriseOne
  13. PowerExchange for Kudu
  14. PowerExchange for LDAP
  15. PowerExchange for Microsoft Azure Blob Storage
  16. PowerExchange for Microsoft Azure Cosmos DB SQL API
  17. PowerExchange for Microsoft Azure Data Lake Storage Gen1
  18. PowerExchange for Microsoft Azure Data Lake Storage Gen2
  19. PowerExchange for Microsoft Azure SQL Data Warehouse
  20. PowerExchange for Microsoft Dynamics CRM
  21. PowerExchange for MongoDB
  22. PowerExchange for Netezza
  23. PowerExchange for OData
  24. PowerExchange for Salesforce
  25. PowerExchange for SAP NetWeaver
  26. PowerExchange for Snowflake
  27. PowerExchange for Teradata
  28. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for Amazon Redshift (10.5)

Fixed Issues

The following table describes fixed issues:

Issue        Description
OCON-27505   If you parameterized an Amazon Redshift table name and upgraded from Informatica version 10.2.2 to version 10.5, the parameterized value of the table name does not appear after the upgrade.

Known Issues

The following table describes known issues:

Issue        Description
OCON-23228   When you run a mapping that creates an Amazon Redshift target at run time, and the column names from the output ports of a transformation in the pipeline contain a mix of uppercase and lowercase characters, the mapping fails with a null pointer exception.
             Workaround: Use an Expression or Case Converter transformation to convert the column names to lowercase before you select Create Target.
OCON-10209   When you use the MapR distribution, an Amazon Redshift mapping fails on the Spark engine when it reads from or writes to an Amazon Redshift cluster that uses Version 4 authentication, with the following error message:
             com.amazonaws.services.s3.model.AmazonS3Exception:
OCON-9834    When you use the Hortonworks 2.6 distribution, an Amazon Redshift mapping fails on the Spark engine when it reads from or writes to an Amazon Redshift cluster that uses Version 4 authentication, with the following error message:
             Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 9BDDEEB8241688A2)
OCON-9663    When you run an Amazon Redshift mapping to read or write data, the Stop on Errors property does not work.
OCON-8022    If you import an Amazon Redshift table that has a single quote (') in the column name, the mapping fails with the following error message (see the quoting sketch after this table):
             [LDTM_0072] [Amazon](500051) ERROR processing query/statement. Error: Parsing failed, Query: unload ('SELECT "adpqa"."sq_col"."id'" FROM "adpqa"."sq_col"') TO 's3://infa.qa.bucket/ 0b0ad503-1c2c-4514-95ac-85a5adb71b3b1489385038407/sq_col_' credentials 'aws_access_key_id=********;aws_secret_access_key=********' ESCAPE DELIMITER ','
OCON-7965    When you run an Amazon Redshift mapping on the Blaze engine to read data from or write data to an Amazon Redshift cluster that requires Version 4 authentication, the mapping fails. This issue occurs if you use the Hortonworks 2.3 distribution.
OCON-7909    When you run an Amazon Redshift mapping on the Blaze engine to read data from or write data to an Amazon Redshift cluster that requires Version 4 authentication, the mapping fails. This issue occurs if you use the MapR 5.2 distribution.
OCON-7322    If you import an Amazon Redshift table that has a single quote (') or a backslash (\) in the table name, the read and write operations fail.
OCON-6921    When you run an Amazon Redshift mapping that contains a timestamp field in the native environment, the Data Integration Service truncates the microsecond values to milliseconds.
OCON-6785    When the Amazon Redshift source contains both double quotes (") and the delimiter that you specified in the mapping, the double quotes are truncated in the target and the escape character is retained in the target.
OCON-6583    If you set the Parallel option to off in the UNLOAD command and run an Amazon Redshift mapping on the Blaze engine, not all rows from the source are written to the Amazon Redshift target, even though the mapping runs successfully. (See the UNLOAD sketch after this table.)
OCON-6346    When you run an Amazon Redshift mapping on the Blaze engine, the success and error files are not generated.
OCON-6266    When you run an Amazon Redshift mapping that compresses the staging files on the Blaze engine, the mapping fails. The staging file compression is ignored.
OCON-6260    When you run an Amazon Redshift mapping on the Blaze engine, the tasklet log does not display the row statistics even if the mapping runs successfully.
OCON-6252    When you run a mapping on the Blaze engine, the Real and Double data type values are rounded off.
             Workaround: Use the Numeric data type in place of the Real and Double data types.
OCON-1297    When you configure the following attributes and run an Amazon Redshift mapping in the Hadoop environment, the mapping might fail, depending on the engine selected for mapping execution:
             Read operation attributes:
               • S3 Client Encryption
               • Staging Directory Location
             Write operation attributes:
               • Enable Compression
               • CopyOptions Property File
               • Null value for CHAR and VARCHAR data types
               • S3 Server Side Encryption
               • S3 Client Side Encryption
               • Staging Directory Location
               • Success File Directory
               • Error File Directory
OCON-1275    A mapping that contains more than one Amazon Redshift object fails in the Hadoop run-time environment on the MapR distribution.
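
The OCON-8022 and OCON-7322 failures both trace to quoting: Amazon Redshift's UNLOAD command takes the SELECT statement as a single-quoted string literal, so any single quote inside an identifier must be doubled before the statement is embedded in that literal. The following minimal sketch illustrates the rule. The schema, table, and column names are taken from the error message above; the bucket name and IAM role are placeholders invented for illustration, and IAM_ROLE stands in for the masked access-key credentials in the logged query.

    -- Broken: the unescaped quote inside the column name "id'" terminates the
    -- string literal early, producing the "Parsing failed" error in OCON-8022.
    --   UNLOAD ('SELECT "adpqa"."sq_col"."id'" FROM "adpqa"."sq_col"') TO ...

    -- Working form: double every single quote that appears inside the quoted query.
    UNLOAD ('SELECT "adpqa"."sq_col"."id''" FROM "adpqa"."sq_col"')
    TO 's3://example-bucket/sq_col_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleUnloadRole'
    DELIMITER ',' ESCAPE;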
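
Similarly for OCON-6583, the Parallel option corresponds to the PARALLEL clause of the Redshift UNLOAD command: PARALLEL ON, the default, writes one file per slice, while PARALLEL OFF writes serially to a single file of up to 6.2 GB. A minimal sketch of the two forms, again with placeholder table, bucket, and role names:

    -- Default behavior: each slice writes its own output file in parallel.
    UNLOAD ('SELECT * FROM example_schema.example_table')
    TO 's3://example-bucket/prefix_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleUnloadRole';

    -- PARALLEL OFF forces a serial unload to one file (up to 6.2 GB per file).
    -- OCON-6583 is observed under this setting on the Blaze engine.
    UNLOAD ('SELECT * FROM example_schema.example_table')
    TO 's3://example-bucket/prefix_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleUnloadRole'
    PARALLEL OFF;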
