A streaming mapping that runs in the Databricks environment can include streaming sources.
Depending on the type of source that you read from, create one of the following data objects:
Amazon Kinesis
A physical data object that represents data in an Amazon Kinesis data stream. After you create an Amazon Kinesis connection, create an Amazon Kinesis data object to read from Amazon Kinesis Data Streams.
Azure Event Hubs
A physical data object that represents data in Microsoft Azure Event Hubs, a data streaming platform and event ingestion service.
Confluent Kafka
A physical data object that represents data in a Kafka stream or a Confluent Kafka stream. After you configure a Messaging connection, create a Confluent Kafka data object to read data from Kafka brokers or from Confluent Kafka brokers by using a schema registry.
Kafka
A physical data object that represents data in a Kafka stream. After you configure a Messaging connection, create a Kafka data object to read from Apache Kafka brokers.
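Behind the scenes, Databricks executes streaming workloads as Spark Structured Streaming jobs, so a Kafka or Confluent Kafka read ultimately resolves to a set of connection options: a broker list, a topic subscription, and, for Confluent Kafka with a schema registry, the registry URL. The sketch below assembles those options as a plain dictionary; the option names are standard Spark and Confluent settings shown for illustration, not Informatica property names, and the helper function itself is hypothetical.

```python
def kafka_read_options(brokers, topic, schema_registry_url=None):
    """Build the option map a Kafka streaming read typically needs.

    brokers             -- list of host:port broker addresses
    topic               -- topic to subscribe to
    schema_registry_url -- Confluent Schema Registry URL, if Avro schemas
                           are resolved through a registry (optional)
    """
    # "kafka.bootstrap.servers" and "subscribe" are the standard Spark
    # Kafka source options; "schema.registry.url" is the standard
    # Confluent client setting.
    opts = {
        "kafka.bootstrap.servers": ",".join(brokers),
        "subscribe": topic,
    }
    if schema_registry_url is not None:
        opts["schema.registry.url"] = schema_registry_url
    return opts

# A Confluent Kafka read with a schema registry:
confluent = kafka_read_options(
    ["broker1:9092", "broker2:9092"], "clickstream", "http://registry:8081"
)

# A plain Kafka read needs no registry setting:
plain = kafka_read_options(["broker1:9092"], "clickstream")
```

The same option map would be passed to the underlying Spark reader (for example, `spark.readStream.format("kafka").options(**opts)`); omitting the registry URL simply yields a plain Kafka read.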
You can run streaming mappings in the AWS Databricks service in the AWS cloud ecosystem or in the Azure Databricks service in the Microsoft Azure cloud. The following table lists the sources that you can include in a streaming mapping for each Databricks service: