Identify sources from which you need to stream data. You can access data that is in XML, JSON, Avro, flat, or binary format.
In the Hadoop environment, you can use Kafka, Amazon Kinesis stream, and Azure Event Hubs sources to connect to multiple data engineering sources.
Configure the mapping and mapping logic to transform the data.
Run the mapping on the Spark engine in the Hadoop environment or on the Databricks Spark engine in the Databricks environment.
Write the data to Kafka targets, HDFS complex files, HBase, Azure Event Hubs, Amazon S3, Azure Data Lake Storage, JMS, and Kinesis Firehose delivery streams.
Monitor the status of your processing jobs. You can view monitoring statistics for your processing jobs in the Monitoring tool.
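The steps above describe a source-transform-target streaming pipeline. As a minimal, hypothetical sketch of that flow, the pure-Python example below simulates reading JSON messages from a stream, applying mapping logic, and writing the results to a target. It is illustrative only: the `transform` rule, function names, and in-memory source and sink are assumptions, not Informatica mapping syntax or a real Kafka connection.

```python
import json

def transform(record):
    # Hypothetical mapping logic for illustration: drop records
    # without an "id" field and add a derived field.
    if "id" not in record:
        return None
    record["id_upper"] = str(record["id"]).upper()
    return record

def run_pipeline(source_lines, sink):
    """Read JSON records from a streaming source, apply the mapping
    logic, and write the results to a target sink."""
    for line in source_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed input
        out = transform(record)
        if out is not None:
            sink.append(out)

# Simulate a stream of JSON messages (stand-ins for Kafka events).
stream = ['{"id": "a1", "value": 10}', 'not json', '{"value": 5}']
target = []
run_pipeline(stream, target)
print(target)  # only the record that passes the mapping logic is written
```

In a real deployment, the source iterator would be replaced by a streaming source such as a Kafka topic, and the sink by one of the supported targets listed above.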