Table of Contents

  1. Preface
  2. Part 1: Version 10.5.1
  3. Part 2: Versions 10.5 - 10.5.0.1
  4. Part 3: Versions 10.4.1 - 10.4.1.3
  5. Part 4: Versions 10.4 - 10.4.0.2
  6. Part 5: Versions 10.2.2 - 10.2.2 HotFix 1
  7. Part 6: Version 10.2.1
  8. Part 7: Version 10.2 - 10.2 HotFix 2

What's New and Changed (10.5.1)

PowerExchange for Microsoft Azure Blob Storage

Effective in version 10.2 HotFix 1, PowerExchange for Microsoft Azure Blob Storage includes the following functionality:
  • You can run mappings on the Spark engine.
  • You can use the FileName port to store the file name from which the Data Integration Service reads the data at run time.
  • You can read and write .csv, Avro, and Parquet files when you run a mapping in the native environment.
  • You can read a directory when you run a mapping in the native environment.
  • You can generate or skip header rows when you run a mapping in the native environment. On the Spark engine, the header row is created by default.
  • You can append an existing blob. The append operation applies only to .csv files in the native environment.
  • You can override the blob or container name. In the Blob Container Override field, specify the container name or sub-folders in the root container with the absolute path.
  • You can read and write .csv files compressed in the gzip format in the native environment.
For more information, see the Informatica PowerExchange for Microsoft Azure Blob Storage 10.2 HotFix 1 User Guide.
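As an illustration of the gzip-compressed .csv support listed above, the following is a minimal, standalone sketch of what reading a gzip-compressed .csv blob from Azure Blob Storage involves, written with the azure-storage-blob Python SDK. It is not how PowerExchange itself is configured, and the connection string, container name, and blob path are placeholder assumptions.

  # Standalone illustration only: reading a gzip-compressed .csv blob from
  # Azure Blob Storage with the azure-storage-blob Python SDK. This is not a
  # PowerExchange configuration; all names below are placeholders.
  import csv
  import gzip
  import io

  from azure.storage.blob import BlobServiceClient

  # Hypothetical connection string, container, and blob path.
  CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"

  service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
  blob_client = service.get_blob_client(container="sales-data", blob="2021/orders.csv.gz")

  # Download the compressed blob, decompress it in memory, and parse the CSV rows.
  compressed = blob_client.download_blob().readall()
  text = gzip.decompress(compressed).decode("utf-8")
  for row in csv.reader(io.StringIO(text)):
      print(row)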
