Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Mappings
  4. Sources
  5. Targets
  6. Transformations
  7. Cluster Workflows
  8. Profiles
  9. Monitoring
  10. Hierarchical Data Processing
  11. Hierarchical Data Processing Configuration
  12. Hierarchical Data Processing with Schema Changes
  13. Intelligent Structure Models
  14. Stateful Computing
  15. Connections
  16. Data Type Reference
  17. Function Reference

User Guide

Complex File Targets on Amazon S3

Use a PowerExchange for Amazon S3 connection to write data to Amazon S3 data objects.
The following table shows the complex files that a mapping can process within Amazon S3 storage in the Hadoop environment:
File Type   Supported Formats          Supported Engines
Avro        Flat, Hierarchical (1, 2)  Blaze, Spark
JSON        Flat, Hierarchical (1, 2)  Blaze, Spark
ORC         Flat                       Spark
Parquet     Flat, Hierarchical (1, 2)  Blaze, Spark

1. To run on the Blaze engine, the complex file data object must be connected to a Data Processor transformation.
2. To run on the Spark engine, the complex file write operation must be enabled to project columns as complex data type.
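The hierarchical formats in the table store nested structures, such as structs and arrays, inside a single column. As an illustration only, outside of Big Data Management, the following Python sketch uses the pyarrow library to write a Parquet file that contains one nested struct column alongside flat columns; the file name and field names are hypothetical.

import pyarrow as pa
import pyarrow.parquet as pq

# One nested (hierarchical) "address" struct column next to flat columns.
address_type = pa.struct([
    ("street", pa.string()),
    ("city", pa.string()),
    ("zip", pa.string()),
])

table = pa.table({
    "customer_id": pa.array([101, 102], type=pa.int64()),
    "name": pa.array(["Ana", "Bo"], type=pa.string()),
    "address": pa.array(
        [
            {"street": "1 Main St", "city": "Austin", "zip": "78701"},
            {"street": "9 Elm Ave", "city": "Boston", "zip": "02110"},
        ],
        type=address_type,
    ),
})

# Write the hierarchical Parquet file locally; it could then be staged to an
# S3 bucket, for example with the AWS CLI or boto3.
pq.write_table(table, "customers.parquet")

A Parquet or Avro file with this kind of nested schema is what the table above refers to as a hierarchical format; on the Spark engine, the nested column corresponds to a complex data type projected by the write operation.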
