You can use managed and unmanaged Databricks tables as a target in mappings.
Mappings can access managed Databricks tables, including Delta Lake tables. Delta Lake is a data lake storage layer that stores data in large tables. Databricks uses proprietary Delta software to manage the stored data and provide fast access to it. Delta Lake supports ACID transactions.
Mappings can also access unmanaged Databricks tables built on top of any of the following storage types:
Azure blob storage
Azure Data Lake Storage (ADLS) Gen1 or Gen2
Amazon Web Services (AWS) S3
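As an illustration of the difference between the two table types, the following sketch creates a managed Delta table and an unmanaged (external) table on cloud storage in a Databricks workspace. It assumes an active Spark session named spark, and the database name, table names, and storage path are hypothetical placeholders, not values required by a mapping.

# Sketch: managed vs. unmanaged Delta tables on Databricks.
# Assumes a Databricks notebook or job where `spark` is the active SparkSession.
# Database, table, and storage path names are hypothetical placeholders.

# Managed table: Databricks controls both the metadata and the underlying files.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales.orders_managed (
        order_id BIGINT,
        amount   DOUBLE
    ) USING DELTA
""")

# Unmanaged (external) table: the data files live in storage that you manage,
# for example ADLS Gen2 (abfss://) or AWS S3 (s3://).
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales.orders_external (
        order_id BIGINT,
        amount   DOUBLE
    ) USING DELTA
    LOCATION 'abfss://datalake@mystorageaccount.dfs.core.windows.net/sales/orders'
""")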
Mappings that access Delta Lake tables must use the Databricks Spark engine. If you run a Delta Lake mapping in the native environment with a JDBC connection, the mapping succeeds, but no data is written to the target.
You can configure dynamic mappings to apply Databricks source schema changes to Delta Lake targets. See Databricks Schema Evolution.
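Schema evolution on a Delta Lake target relies on Delta's ability to merge new source columns into the target schema. A minimal PySpark sketch of that underlying behavior, assuming an active Databricks Spark session and a hypothetical target table, might look like this; it is not the dynamic mapping configuration itself.

# Sketch: Delta Lake schema evolution, assuming `spark` is an active
# Databricks SparkSession and sales.orders is a hypothetical Delta target.

# New source rows that carry an extra column (discount) not yet on the target.
incoming = spark.createDataFrame(
    [(101, 25.0, 0.1)],
    ["order_id", "amount", "discount"],
)

# mergeSchema lets the write add the new column to the target table
# instead of failing on the schema mismatch.
(incoming.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("sales.orders"))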
To use Delta Lake resources in a mapping, create a relational data object and use a JDBC connection to connect to the resource.
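The relational data object handles the connection details in the mapping, but the underlying JDBC access pattern can be sketched roughly as follows. The sketch assumes the Databricks JDBC driver is available on the classpath; the driver class, URL format, host, HTTP path, and access token are assumptions about that driver and are hypothetical placeholders, not Informatica configuration.

# Sketch: reading a Delta Lake table over JDBC, assuming `spark` is an active
# SparkSession and the Databricks JDBC driver jar is on the classpath.
# Host, HTTP path, and access token are hypothetical placeholders; the exact
# URL format depends on the driver version.
jdbc_url = (
    "jdbc:databricks://adb-1234567890123456.7.azuredatabricks.net:443/default;"
    "transportMode=http;ssl=1;AuthMech=3;"
    "httpPath=/sql/1.0/warehouses/abc123;"
    "UID=token;PWD=<personal-access-token>"
)

orders = (spark.read
    .format("jdbc")
    .option("url", jdbc_url)
    .option("driver", "com.databricks.client.jdbc.Driver")
    .option("dbtable", "sales.orders")
    .load())

orders.show()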
To perform data ingestion to a Delta Lake target, use a pass-through mapping with a flat file source.
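Conceptually, the pass-through mapping reads the flat file and writes the rows unchanged to the Delta Lake target. The following PySpark sketch shows that pattern under the assumption of an active Databricks Spark session; the file path and target table name are hypothetical placeholders, and the actual ingestion is configured in the mapping rather than written by hand.

# Sketch: pass-through ingestion from a flat file to a Delta Lake target,
# assuming `spark` is an active Databricks SparkSession. The CSV path and
# target table name are hypothetical placeholders.
source = (spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://landing@mystorageaccount.dfs.core.windows.net/orders/*.csv"))

# No transformations: the rows pass straight through to the Delta target.
(source.write
    .format("delta")
    .mode("append")
    .saveAsTable("sales.orders"))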