Table of Contents

  1. Preface
  2. Part 1: Version 10.5.5
  3. Part 2: Version 10.5.4 - 10.5.4.x
  4. Part 3: Version 10.5.3 - 10.5.3.x
  5. Part 4: Version 10.5.2 - 10.5.2.1.x
  6. Part 5: Version 10.5.1 - 10.5.1.1
  7. Part 6: Versions 10.5 - 10.5.0.1
  8. Part 7: Versions 10.4.1 - 10.4.1.3
  9. Part 8: Versions 10.4 - 10.4.0.2
  10. Part 9: Versions 10.2.2 - 10.2.2 HotFix 1
  11. Part 10: Version 10.2.1
  12. Part 11: Version 10.2 - 10.2 HotFix 2

What's New and Changed (10.5.5)

PowerExchange for Microsoft Azure Blob Storage

Effective in version 10.2 HotFix 1, PowerExchange for Microsoft Azure Blob Storage includes the following functionality:
  • You can run mappings on the Spark engine.
  • You can use the file port to store the name of the file from which the Data Integration Service reads data at run time.
  • You can read and write .csv, Avro, and Parquet files when you run a mapping in the native environment.
  • You can read a directory when you run a mapping in the native environment.
  • You can generate or skip header rows when you run a mapping in the native environment. On the Spark engine, the header row is created by default.
  • You can append data to an existing blob. The append operation applies only to .csv files in the native environment.
  • You can override the blob or container name. In the Blob Container Override field, specify the container name or sub-folders in the root container with the absolute path.
  • You can read and write .csv files compressed in the gzip format in the native environment.
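The compressed .csv files described in the last item are standard gzip streams, so they can be inspected outside of a PowerExchange mapping with ordinary client tools. The following sketch is an illustration only, not PowerExchange functionality: it assumes the azure-storage-blob Python package, and the connection string, container, and blob names are hypothetical placeholders.

# Illustration only: downloading and parsing a gzip-compressed .csv blob
# with the azure-storage-blob Python SDK. All names below are hypothetical.
import csv
import gzip
import io

from azure.storage.blob import BlobServiceClient

# Hypothetical connection details; substitute your own storage account values.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"
CONTAINER_NAME = "example-container"
BLOB_NAME = "output/customers.csv.gz"

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
blob_client = service.get_blob_client(container=CONTAINER_NAME, blob=BLOB_NAME)

# Download the compressed blob, decompress it, and parse it as CSV rows.
compressed_bytes = blob_client.download_blob().readall()
text = gzip.decompress(compressed_bytes).decode("utf-8")

for row in csv.reader(io.StringIO(text)):
    print(row)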
For more information, see the Informatica PowerExchange for Microsoft Azure Blob Storage 10.2 HotFix 1 User Guide.
