You can use managed and unmanaged Databricks tables as targets in mappings.
Delta Lake is an open-source storage layer that stores data in tables on top of cloud object storage. Databricks manages the stored data and provides fast access to it. Delta Lake supports ACID transactions and scalable metadata handling. You can build a lakehouse with Databricks Delta Lake using compute engines such as Spark, PrestoDB, Flink, Trino, and Hive, along with APIs for Scala, Java, Rust, and Python.
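As an illustration of the Delta Lake API rather than of the mapping configuration itself, the following PySpark sketch writes a managed Delta table; the table and column names are hypothetical:

    from pyspark.sql import SparkSession

    # On a Databricks cluster, a Delta-enabled SparkSession already
    # exists as `spark`; the builder call is shown for completeness.
    spark = SparkSession.builder.getOrCreate()

    # Hypothetical sample data.
    orders = spark.createDataFrame(
        [(1, "widget", 9.99), (2, "gadget", 24.50)],
        ["order_id", "item", "amount"],
    )

    # saveAsTable creates a managed Delta table; each write is an
    # ACID transaction recorded in the Delta transaction log.
    orders.write.format("delta").mode("overwrite").saveAsTable("sales.orders")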
Mappings can also access unmanaged Databricks tables built on top of any of the following storage types (see the sketch after this list):
Azure Blob Storage
Azure Data Lake Storage (ADLS) Gen1 or Gen2
Amazon Web Services (AWS) S3
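As a sketch of an unmanaged table, assuming a hypothetical ADLS Gen2 account, container, and path, you can register a Delta table directly over files in object storage:

    # The LOCATION clause makes this an unmanaged (external) table:
    # Databricks tracks the metadata, but the files stay in ADLS Gen2.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS sales.orders_ext
        USING DELTA
        LOCATION 'abfss://data@myaccount.dfs.core.windows.net/delta/orders'
    """)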
Mappings that access Delta Lake tables must use the Databricks Spark engine. If you run a Delta Lake mapping in the native environment with a JDBC connection, the mapping succeeds, but no data is written to the target.
You can configure dynamic mappings to apply Databricks source schema changes to Delta Lake targets. See Databricks Schema Evolution.
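The underlying Delta Lake mechanism for this is schema evolution on write. A minimal PySpark sketch, reusing the hypothetical sales.orders table from above, uses the mergeSchema option to add a new source column to the target:

    # "currency" does not exist in the target yet; with mergeSchema,
    # Delta adds the column to the table schema instead of failing.
    new_orders = spark.createDataFrame(
        [(3, "gizmo", 5.25, "EUR")],
        ["order_id", "item", "amount", "currency"],
    )
    (new_orders.write.format("delta")
        .mode("append")
        .option("mergeSchema", "true")
        .saveAsTable("sales.orders"))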
To use Delta Lake resources in a mapping, create a relational data object and use a JDBC connection to connect to the resource.
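Outside the Developer tool, the same JDBC route can be exercised from Python. This is a minimal sketch, assuming the jaydebeapi package, a downloaded Databricks JDBC driver jar, and placeholder workspace values; in a mapping, the connection details live in the JDBC connection object instead:

    import jaydebeapi

    # All host, path, and token values below are hypothetical placeholders.
    url = (
        "jdbc:databricks://adb-1234567890.azuredatabricks.net:443/default;"
        "transportMode=http;ssl=1;"
        "httpPath=sql/protocolv1/o/0/0123-456789-abcdefgh;"
        "AuthMech=3;UID=token;PWD=<personal-access-token>"
    )
    conn = jaydebeapi.connect(
        "com.databricks.client.jdbc.Driver",  # driver class in the Databricks JDBC jar
        url,
        jars="/path/to/DatabricksJDBC42.jar",
    )
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM sales.orders LIMIT 5")
    print(cursor.fetchall())
    conn.close()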
To perform data ingestion to a Delta Lake target, use a pass-through mapping with a flat file source.
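For reference, the equivalent pass-through operation in PySpark, with hypothetical paths and table names, reads a flat file and writes the rows unchanged to a Delta target:

    # Read a delimited flat file and pass the rows through to Delta.
    flat = (spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("/mnt/landing/orders.csv"))  # hypothetical source path

    flat.write.format("delta").mode("append").saveAsTable("sales.orders_raw")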