Table of Contents


  1. Preface
  2. Introduction to Informatica Data Engineering Integration
  3. Mappings
  4. Mapping Optimization
  5. Sources
  6. Targets
  7. Transformations
  8. Python Transformation
  9. Data Preview
  10. Cluster Workflows
  11. Profiles
  12. Monitoring
  13. Hierarchical Data Processing
  14. Hierarchical Data Processing Configuration
  15. Hierarchical Data Processing with Schema Changes
  16. Intelligent Structure Models
  17. Blockchain
  18. Stateful Computing
  19. Appendix A: Connections Reference
  20. Appendix B: Data Type Reference
  21. Appendix C: Function Reference

Complex File Sources on Azure Data Lake Storage Gen2

You can use a PowerExchange for HDFS or a PowerExchange for Microsoft Azure Data Lake Storage Gen2 connection to read data from Azure Data Lake Storage Gen2 data objects.
The PowerExchange for Microsoft Azure Data Lake Storage Gen2 connection is an Azure Blob File System (ABFS) connection. You can use it if you have a PowerExchange for Microsoft Azure Data Lake Storage Gen2 connection license.
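For readers who also work with these files outside of Informatica, the following is a minimal PySpark sketch of how ADLS Gen2 objects are addressed through the ABFS driver that this connection type is based on. The storage account, container, path, and key shown are placeholders, and this is generic Spark usage, not Informatica mapping syntax.

from pyspark.sql import SparkSession

# Placeholder storage account and container names for illustration only.
# Requires the hadoop-azure (ABFS) libraries on the classpath.
account = "examplestorageaccount"
container = "examplecontainer"

spark = (
    SparkSession.builder
    .appName("adls-gen2-abfs-read")
    # Authenticate the ABFS driver with a storage account key; the "spark.hadoop."
    # prefix propagates the setting into the Hadoop configuration.
    .config("spark.hadoop.fs.azure.account.key." + account + ".dfs.core.windows.net",
            "<storage-account-key>")
    .getOrCreate()
)

# abfss:// is the secure URI scheme that the ABFS driver uses for ADLS Gen2 paths.
path = "abfss://" + container + "@" + account + ".dfs.core.windows.net/raw/sales.parquet"

df = spark.read.parquet(path)   # Avro and JSON readers address these paths the same way
df.printSchema()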
The following table shows the complex files that a PowerExchange for Microsoft Azure Data Lake Storage Gen2 mapping can process within Azure Data Lake Storage Gen2 in the Hadoop environment:
File Format                                     | Read | Write | Native                          | Databricks Spark | Spark
Flat                                            | Yes  | Yes   | Yes                             | Yes              | Yes
Binary                                          | Yes  | Yes   | Yes                             | No               | No
Avro (Primitive and hierarchical data types)    | Yes  | Yes   | Yes (Primitive data types only) | Yes              | Yes
JSON (Primitive and hierarchical data types)    | Yes  | Yes   | No                              | Yes              | Yes
Parquet (Primitive and hierarchical data types) | Yes  | Yes   | Yes (Primitive data types only) | Yes              | Yes
Intelligent structure model                     | Yes  | No    | No                              | No               | Yes
The following table shows the complex files that a PowerExchange for HDFS connection can process within Azure Data Lake Storage Gen2 in an HDInsight cluster in the Hadoop environment:
File Type | Supported Formats        | Supported Engines
Avro      | Flat, Hierarchical ¹ ²   | Blaze, Spark
JSON      | Flat ¹, Hierarchical ¹ ² | Blaze, Spark
ORC       | Flat                     | Spark
Parquet   | Flat, Hierarchical ¹ ²   | Blaze, Spark

¹ To run on the Blaze engine, the complex file data object must be connected to a Data Processor transformation.
² To run on the Spark engine, the complex file read operation must be enabled to project columns as complex data type.
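Footnote 2 refers to projecting hierarchical data as complex data types (arrays, structs, maps) rather than as flattened strings. As a rough, non-Informatica illustration of that idea, the hypothetical PySpark snippet below reads a nested JSON file from an abfss:// path and works with the resulting struct and array columns; the file path and field names are invented, and ABFS credentials are assumed to be configured as in the earlier sketch.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode

spark = SparkSession.builder.appName("hierarchical-json-read").getOrCreate()

# Invented path; assumes the ABFS account key was configured as shown earlier.
path = "abfss://examplecontainer@examplestorageaccount.dfs.core.windows.net/raw/orders.json"

# Each record is assumed to hold a "customer" struct and an "orders" array of structs,
# so Spark infers complex data types instead of a single flat string column.
df = spark.read.json(path)
df.printSchema()

# Navigate the struct and flatten the array into one row per order.
flat = (
    df.select(
        col("customer.id").alias("customer_id"),
        explode(col("orders")).alias("order"),
    )
    .select("customer_id", col("order.amount").alias("order_amount"))
)
flat.show()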
