Table of Contents

  1. Preface
  2. Part 1: Version 10.5.5
  3. Part 2: Versions 10.5.4 - 10.5.4.x
  4. Part 3: Versions 10.5.3 - 10.5.3.x
  5. Part 4: Versions 10.5.2 - 10.5.2.1.x
  6. Part 5: Versions 10.5.1 - 10.5.1.1
  7. Part 6: Versions 10.5 - 10.5.0.1
  8. Part 7: Versions 10.4.1 - 10.4.1.3
  9. Part 8: Versions 10.4 - 10.4.0.2
  10. Part 9: Versions 10.2.2 - 10.2.2 HotFix 1
  11. Part 10: Version 10.2.1
  12. Part 11: Versions 10.2 - 10.2 HotFix 2

What's New and Changed (10.5.5)

PowerExchange for Amazon S3

Effective in version 10.2.2, PowerExchange for Amazon S3 includes the following features:
  • You can read data from or write data to the following regions:
    • China (Ningxia)
    • EU (Paris)
    • AWS GovCloud (US)
  • You can use Amazon S3 objects as dynamic sources and targets in a mapping.
  • When you run a mapping in the native environment or on the Spark engine to read data from an Avro, flat, JSON, ORC, or Parquet file, you can use wildcard characters to specify the source directory name or the source file name.
  • You can add one or more tags to the objects stored in an Amazon S3 bucket to categorize them. Each tag contains a key-value pair. You can either enter the key-value pairs or specify the absolute path of a file that contains the key-value pairs (see the tagging sketch after this list).
  • You can specify the maximum threshold size to download an Amazon S3 object in multiple parts.
  • When you upload an object to Amazon S3, you can specify the minimum size of the object and the number of threads to upload the object in parallel as a set of independent parts (see the multipart upload sketch after this list).
  • When you create a data object read or write operation, you can read the data in the FileName port, which contains the endpoint name and the source path of the file.
  • You can add new columns or modify the columns directly in the Ports tab when you create a data object read or write operation.
  • When you create a mapping to read or write an Avro, JSON, ORC, or Parquet file, you can copy the columns of the source transformations, target transformations, or any other transformations from the Ports tab and paste them directly into the data object read or write operation.
  • You can update the Amazon S3 file format without losing the column metadata in the Schema field of the column projection properties, even after you configure the column projection properties for another Amazon S3 file format.
  • You can use Server-Side Encryption with AWS KMS (AWS Key Management Service) on Amazon EMR version 5.16 and Cloudera CDH versions 5.15 and 5.16 (see the SSE-KMS sketch after this list).
  • PowerExchange for Amazon S3 supports AWS SDK for Java version 1.11.354.
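The object-tagging bullet maps onto the S3 tagging API in the AWS SDK for Java 1.11.x that the connector is built against. The following is a minimal sketch of that underlying mechanism, not of the PowerExchange configuration itself; the bucket name, object key, and tag values are hypothetical.

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.ObjectTagging;
    import com.amazonaws.services.s3.model.SetObjectTaggingRequest;
    import com.amazonaws.services.s3.model.Tag;

    import java.util.Arrays;

    public class TagS3Object {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            // Each tag is a key-value pair, like the tags PowerExchange
            // for Amazon S3 attaches to uploaded objects.
            ObjectTagging tags = new ObjectTagging(Arrays.asList(
                    new Tag("department", "finance"),       // hypothetical values
                    new Tag("classification", "internal")));

            // "example-bucket" and the object key are placeholders.
            s3.setObjectTagging(new SetObjectTaggingRequest(
                    "example-bucket", "data/orders.parquet", tags));
        }
    }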
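The upload bullet describes the same knobs that the SDK's TransferManager exposes: a size threshold beyond which an object is split into parts, a minimum part size, and the number of threads that upload parts in parallel. A minimal multipart upload sketch, assuming a hypothetical bucket, key, and local file:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.transfer.TransferManager;
    import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
    import com.amazonaws.services.s3.transfer.Upload;

    import java.io.File;
    import java.util.concurrent.Executors;

    public class MultipartUpload {
        public static void main(String[] args) throws InterruptedException {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            // Objects larger than the threshold are uploaded as independent
            // parts of at least the minimum part size, in parallel.
            TransferManager tm = TransferManagerBuilder.standard()
                    .withS3Client(s3)
                    .withMultipartUploadThreshold(32L * 1024 * 1024) // 32 MB
                    .withMinimumUploadPartSize(8L * 1024 * 1024)     // 8 MB parts
                    .withExecutorFactory(() -> Executors.newFixedThreadPool(8))
                    .build();

            Upload upload = tm.upload("example-bucket", "data/large.orc",
                    new File("/tmp/large.orc")); // hypothetical paths
            upload.waitForCompletion();
            tm.shutdownNow();
        }
    }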
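Server-Side Encryption with AWS KMS is requested per object at upload time. A minimal SSE-KMS sketch in the same SDK, with a hypothetical bucket, file, and KMS key ID:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.PutObjectRequest;
    import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;

    import java.io.File;

    public class SseKmsUpload {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            // The key ID below is a placeholder; constructing
            // SSEAwsKeyManagementParams without an argument uses the
            // account's default aws/s3 KMS key instead.
            PutObjectRequest request = new PutObjectRequest(
                    "example-bucket", "data/orders.avro",
                    new File("/tmp/orders.avro"))
                    .withSSEAwsKeyManagementParams(
                            new SSEAwsKeyManagementParams("1234abcd-hypothetical-key-id"));

            s3.putObject(request);
        }
    }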
For more information, see the Informatica PowerExchange for Amazon S3 10.2.2 User Guide.
