Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Mappings
  4. Sources
  5. Targets
  6. Transformations
  7. Cluster Workflows
  8. Profiles
  9. Monitoring
  10. Hierarchical Data Processing
  11. Hierarchical Data Processing Configuration
  12. Hierarchical Data Processing with Schema Changes
  13. Intelligent Structure Models
  14. Stateful Computing
  15. Connections
  16. Data Type Reference
  17. Function Reference

User Guide

File Targets on Hadoop

A mapping that runs in the Hadoop environment can process complex files and flat files.
To write large volumes of data, you can connect a complex file target that writes data to a directory of files with the same format and properties. You can also write compressed binary files.
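Big Data Management runs such mappings as jobs on the cluster, for example on the Spark engine. The following PySpark sketch is for orientation only: it shows the general concept of a file target that writes a directory of compressed part files sharing one schema, not the code that Big Data Management generates. The HDFS path and column names are hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("file-target-sketch").getOrCreate()

    # Sample rows standing in for the output of a mapping.
    df = spark.createDataFrame(
        [(1, "alpha"), (2, "beta"), (3, "gamma")],
        ["id", "name"],
    )

    # Write a directory of part files that all share the same schema,
    # format, and compression codec -- conceptually what a complex file
    # target does when it writes large volumes of data.
    (df.write
        .mode("overwrite")
        .option("compression", "snappy")                 # compressed binary output
        .parquet("hdfs://namenode:8020/output/orders"))  # hypothetical target directory

Every part file in the resulting directory has the same Parquet format and Snappy compression, which mirrors the behavior described above for a complex file target.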
You can write to complex files in the following storage types in the Hadoop environment:
  • Amazon Simple Storage Service (Amazon S3)
  • Hadoop Distributed File System (HDFS)
  • MapR File System (MapR-FS)
  • Microsoft Azure Blob Storage (Azure Blob Storage)
  • Microsoft Azure Data Lake Store (ADLS)
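The directory path for the target identifies the storage. The following Hadoop-style URIs are shown for orientation only; the bucket, host, and account names are hypothetical, and the exact path format depends on how the connection is configured:

    s3a://my-bucket/output/orders                                      (Amazon S3)
    hdfs://namenode:8020/output/orders                                 (HDFS)
    maprfs:///output/orders                                            (MapR-FS)
    wasbs://container@myaccount.blob.core.windows.net/output/orders    (Azure Blob Storage)
    adl://myaccount.azuredatalakestore.net/output/orders               (ADLS)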
