Effective in version 10.2.2, PowerExchange for Amazon Redshift includes the following features:
You can read data from or write data to the following regions:
China (Ningxia)
EU (Paris)
You can use Amazon Redshift objects as dynamic sources and targets in a mapping.
You can use octal values of printable and non-printable ASCII characters as a DELIMITER or QUOTE.
You can enter pre-SQL and post-SQL commands to run queries for source and target objects in a mapping.
You can define an SQL query for read data objects in a mapping to override the default query. You can enter an SQL statement supported by the Amazon Redshift database.
You can specify the maximum size of an Amazon S3 object in bytes when you download large Amazon S3 objects in multiple parts.
You can read unique values when you read data from an Amazon Redshift source.
When you upload an object to Amazon S3, you can specify the minimum size of the object and the number of threads to upload the objects in parallel as a set of independent parts (see the upload sketch after this list).
You can choose to retain an existing target table, replace a target table at runtime, or create a new target table if the table does not exist in the target.
You can configure Update Strategy transformations for an Amazon Redshift target in the native environment.
When you write data to Amazon Redshift, you can override the Amazon Redshift target table schema and the table name at run time.
When the connection type is ODBC, the Data Integration Service can push transformation logic to Amazon Redshift sources and targets using source-side and full pushdown optimization.
You can use Server-Side Encryption with AWS KMS (AWS Key Management Service) on Amazon EMR version 5.16 and Cloudera CDH versions 5.15 and 5.16 (see the SSE-KMS sketch after this list).
PowerExchange for Amazon Redshift supports AWS SDK for Java version 1.11.354.
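The multipart upload and download settings map onto standard Amazon S3 multipart behavior. The following is a minimal sketch of that behavior using the TransferManager from the AWS SDK for Java 1.11.x, not PowerExchange itself; the bucket name, object key, part size, and thread count are hypothetical values chosen for illustration.

```java
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.File;
import java.util.concurrent.Executors;

public class MultipartUploadSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical values: 32 MB minimum part size, 4 parallel upload threads.
        long minimumPartSizeBytes = 32L * 1024 * 1024;
        int uploadThreads = 4;

        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(AmazonS3ClientBuilder.defaultClient())
                // Objects larger than the threshold are split into independent parts.
                .withMultipartUploadThreshold(minimumPartSizeBytes)
                .withMinimumUploadPartSize(minimumPartSizeBytes)
                // The parts are uploaded in parallel by this thread pool.
                .withExecutorFactory(() -> Executors.newFixedThreadPool(uploadThreads))
                .build();

        Upload upload = tm.upload("example-staging-bucket",
                "redshift/staging/data.csv", new File("data.csv"));
        upload.waitForCompletion();
        tm.shutdownNow();
    }
}
```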
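Server-Side Encryption with AWS KMS is likewise a standard Amazon S3 capability. The following sketch shows what an SSE-KMS upload looks like with the AWS SDK for Java 1.11.x; the bucket, object key, and KMS key ARN are hypothetical, and the sketch illustrates the encryption setting at the S3 level rather than how PowerExchange configures it.

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;

import java.io.File;

public class SseKmsUploadSketch {
    public static void main(String[] args) {
        // Hypothetical bucket, key, and KMS key ARN used only for illustration.
        String bucket = "example-staging-bucket";
        String key = "redshift/staging/part-0001.csv";
        String kmsKeyArn = "arn:aws:kms:us-east-1:111122223333:key/example-key-id";

        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Ask S3 to encrypt the object at rest with the given AWS KMS key (SSE-KMS).
        PutObjectRequest request =
                new PutObjectRequest(bucket, key, new File("part-0001.csv"))
                        .withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams(kmsKeyArn));

        s3.putObject(request);
    }
}
```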
For more information, see the Informatica PowerExchange for Amazon Redshift 10.2.2 User Guide.