File Sources on Hadoop

A mapping that runs in the Hadoop environment can process complex files and flat files.
To read large volumes of data, you can connect a complex file source to a directory of files that have the same format and properties. You can also read compressed binary files.
You can read complex files from the following storage types in the Hadoop environment:
  • Amazon Simple Storage Service (Amazon S3)
  • Hadoop Distributed File System (HDFS)
  • Hive
  • MapR File System (MapR-FS)
  • Microsoft Azure Blob Storage (Azure Blob Storage)
  • Microsoft Azure Data Lake Store (ADLS)
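
When a mapping runs in the Hadoop environment, the Data Integration Service generates and submits the engine code for you. For orientation only, the following minimal PySpark sketch shows what an equivalent read of a directory of same-format, compressed binary files looks like on the Spark engine. The directory path and file format are hypothetical examples, not values from this guide.

  from pyspark.sql import SparkSession

  # Create a Spark session. In a Hadoop mapping, the Data Integration
  # Service configures and submits the Spark application for you.
  spark = SparkSession.builder.appName("complex-file-read").getOrCreate()

  # Point the reader at a directory rather than a single file. All files in
  # the directory must have the same format and properties. Parquet is a
  # compressed binary format, so the engine decompresses it transparently.
  df = spark.read.parquet("hdfs:///data/incoming/orders/")

  # The same call works against the other supported storage types by
  # changing the URI scheme, for example s3a:// for Amazon S3 or
  # adl:// for Azure Data Lake Store.
  df.printSchema()
  df.show(5)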
