Targets in a Streaming Mapping on Hadoop and Databricks

A streaming mapping that runs in the Hadoop environment can include file, database, and streaming targets. A streaming mapping that runs in the Databricks environment can include file and streaming targets.
The following table lists the data objects that you can create based on the type of target you write to in either the Hadoop or Databricks environment:
Target                                     Environment
-----------------------------------------  ------------------
Amazon Kinesis                             Hadoop, Databricks
Amazon S3                                  Hadoop, Databricks
Azure Event Hubs                           Hadoop, Databricks
Cassandra                                  Hadoop
Complex File                               Hadoop
Confluent Kafka                            Databricks
Databricks Delta Lake                      Databricks
Google Cloud Storage                       Hadoop
HBase                                      Hadoop
JMS                                        Hadoop
Kafka                                      Hadoop, Databricks
Kudu                                       Hadoop
MapR Streams                               Hadoop
Microsoft Azure Data Lake                  Hadoop
Microsoft Azure Data Lake Storage Gen2     Databricks
Relational                                 Hadoop
Snowflake                                  Hadoop
In a Databricks environment, you cannot use CSV or Parquet payloads in a streaming mapping with Amazon S3 targets.
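Streaming mappings execute as Spark Structured Streaming jobs on the Hadoop or Databricks Spark engine. The following sketch is not code generated by the Developer tool; it is a minimal, hand-written PySpark illustration of the kind of write a Kafka target performs at run time. The application name, broker address, topic names, and checkpoint path are all hypothetical placeholders.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka-target-sketch").getOrCreate()

    # Read a stream from a Kafka source (broker and topic are placeholders).
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "input-topic")
        .load()
    )

    # Write the stream to a Kafka target. The Kafka sink requires a value
    # column (and optionally a key column) of string or binary type, plus a
    # checkpoint location so the query can recover after a failure.
    query = (
        events.selectExpr("CAST(key AS STRING) AS key",
                          "CAST(value AS STRING) AS value")
        .writeStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("topic", "output-topic")
        .option("checkpointLocation", "/tmp/checkpoints/kafka-target")
        .start()
    )
    query.awaitTermination()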
You can run streaming mappings in the Databricks environment on the AWS or Azure platforms. The following table lists the targets that you can include in a streaming mapping, based on the cloud platform:
Target                                     Cloud Platform
-----------------------------------------  --------------
Amazon Kinesis                             AWS
Amazon S3                                  AWS
Azure Event Hubs                           Azure
Confluent Kafka                            AWS, Azure
Databricks Delta Lake                      AWS, Azure
Kafka                                      AWS, Azure
Microsoft Azure Data Lake Storage Gen2     Azure
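On Databricks, a Databricks Delta Lake target resolves to a Delta-format streaming sink. A minimal PySpark sketch, reusing the streaming DataFrame events from the sketch above and assuming a Databricks cluster where the Delta format is available by default; the table path and checkpoint location are hypothetical placeholders:

    # Append the streaming DataFrame `events` to a Delta Lake target.
    query = (
        events.writeStream
        .format("delta")
        .outputMode("append")
        .option("checkpointLocation", "/delta/checkpoints/events")
        .start("/delta/tables/events")
    )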
