Table of Contents


  1. Preface
  2. Introduction to Informatica Data Engineering Integration
  3. Mappings
  4. Mapping Optimization
  5. Sources
  6. Targets
  7. Transformations
  8. Python Transformation
  9. Data Preview
  10. Cluster Workflows
  11. Profiles
  12. Monitoring
  13. Hierarchical Data Processing
  14. Hierarchical Data Processing Configuration
  15. Hierarchical Data Processing with Schema Changes
  16. Intelligent Structure Models
  17. Blockchain
  18. Stateful Computing
  19. Appendix A: Connections Reference
  20. Appendix B: Data Type Reference
  21. Appendix C: Function Reference

Databricks and Delta Lake Tables

You can use managed and unmanaged Databricks tables as targets in mappings.
Mappings can access managed Databricks tables, including Delta Lake tables. Delta Lake is a data lake resource that stores data in large tables. Databricks uses proprietary Delta software to manage the stored data and provide fast access to it. Delta Lake supports ACID transactions.
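As a point of reference, Delta Lake's ACID support means that a Databricks table created with the Delta format can accept transactional statements such as MERGE. The sketch below uses Databricks SQL syntax; the table and column names are hypothetical and do not come from this guide:

```sql
-- Hypothetical Delta table; USING DELTA marks it as a managed Delta Lake table.
CREATE TABLE customers (id INT, name STRING) USING DELTA;

-- MERGE runs as a single ACID transaction against the Delta table.
MERGE INTO customers AS t
USING customer_updates AS s
  ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.name = s.name
WHEN NOT MATCHED THEN INSERT (id, name) VALUES (s.id, s.name);
```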
Mappings can also access unmanaged Databricks tables built on top of any of the following storage types:
  • Azure blob storage
  • Azure Data Lake Storage (ADLS) Gen1 or Gen2
  • Amazon Web Services (AWS) S3
Mappings that access Delta Lake tables must use the Databricks Spark engine. If you run a Delta Lake mapping in the native environment with a JDBC connection, the mapping succeeds, but no data is written to the target.
You can configure dynamic mappings to apply Databricks source schema changes to Delta Lake targets. See Databricks Schema Evolution.
To use Delta Lake resources in a mapping, create a relational data object and use a JDBC connection to connect to the resource.
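For orientation, a JDBC connection to Databricks typically uses a URL in the Simba Spark driver format. The host, HTTP path, and personal access token below are placeholders for illustration only, not values from this guide:

```text
jdbc:spark://<databricks-instance>.cloud.databricks.com:443/default;transportMode=http;ssl=1;httpPath=<http-path>;AuthMech=3;UID=token;PWD=<personal-access-token>
```

AuthMech=3 with UID=token indicates token-based authentication, where the password field carries the Databricks personal access token.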
To perform data ingestion to a Delta Lake target, use a pass-through mapping with a flat file source.
