Table of Contents

  1. Preface
  2. Part 1: Version 10.5.2
  3. Part 2: Version 10.5.1 - 10.5.1.1
  4. Part 3: Versions 10.5 - 10.5.0.1
  5. Part 4: Versions 10.4.1 - 10.4.1.3
  6. Part 5: Versions 10.4 - 10.4.0.2
  7. Part 6: Versions 10.2.2 - 10.2.2 HotFix 1
  8. Part 7: Version 10.2.1
  9. Part 8: Version 10.2 - 10.2 HotFix 2

PowerExchange for Amazon S3

PowerExchange for Amazon S3

Effective in version 10.5, PowerExchange for Amazon S3 includes the following features:
  • When you run a mapping on the Spark engine, you can read data from and write data to Avro, ORC, and Parquet files that are organized based on directories (see the directory-read sketch after this list).
  • You can configure Amazon S3-compatible storage to access and manage data that is stored behind an S3-compliant interface, such as Scality RING. Use the Amazon S3 connection to connect to Scality RING and perform read and write operations as you would with Amazon S3 (see the S3-compatible endpoint sketch after this list).
  • You can configure Federated Single Sign-On (SSO) authentication to securely access the Amazon S3 resources.
  • You can configure a cached lookup operation to cache the lookup data when you run a mapping on the Spark engine.
  • You can read and write flat files with and without headers.
  • You can configure the row delimiter and define the qualifier scope when you read and write flat files (see the flat file sketch after this list).
  • You can perform audits for read operations in Amazon S3 mappings that run in the native environment or on the Spark engine.
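
The first sketch below illustrates, in plain PySpark, what reading Parquet files that are organized based on directories looks like on the Spark engine. It is not the PowerExchange configuration itself, which you define through data object and mapping properties; the bucket name and path are hypothetical, and the snippet assumes a cluster whose S3A connector already has credentials configured.

    # Illustrative only: read Parquet part files organized under a directory on S3.
    # The bucket name and path are hypothetical placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("s3-directory-read-sketch").getOrCreate()

    # Reading the parent directory picks up the Parquet files beneath it, and
    # partition directories such as year=2021/month=01 surface as columns.
    df = spark.read.parquet("s3a://example-bucket/sales/")
    df.printSchema()
    df.show(10)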
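
The next sketch shows what S3-compatible storage means at the API level: the same S3 client calls, pointed at a different endpoint. It uses boto3 rather than the PowerExchange connection, which takes the equivalent values as connection properties; the endpoint URL, bucket, and credentials are hypothetical placeholders.

    # Illustrative only: call an S3-compliant interface (for example, a Scality RING
    # deployment) by overriding the endpoint URL. All values are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://ring.example.internal:9000",
        aws_access_key_id="EXAMPLE_ACCESS_KEY",
        aws_secret_access_key="EXAMPLE_SECRET_KEY",
    )

    # The same calls work against native Amazon S3 and against the compatible store.
    for obj in s3.list_objects_v2(Bucket="example-bucket").get("Contents", []):
        print(obj["Key"], obj["Size"])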
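
The last sketch illustrates the flat file options in the same spirit: a header row, a row delimiter, and a text qualifier, expressed as plain Spark CSV reader options. The option names are those of the open source Spark CSV reader, not the connector's format properties, and the bucket and file path are hypothetical.

    # Illustrative only: read a delimited flat file with a header row, an explicit
    # row delimiter, and a text qualifier. Path and option values are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("s3-flat-file-sketch").getOrCreate()

    df = (
        spark.read
        .option("header", "true")   # first row holds column names
        .option("quote", '"')       # text qualifier around field values
        .option("lineSep", "\n")    # row delimiter
        .csv("s3a://example-bucket/orders/orders.csv")
    )
    df.show(5)
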
For more information, see the Informatica 10.5 PowerExchange for Amazon S3 User Guide.
