Use the Developer tool to create and run streaming mappings in either the Databricks or the Hadoop run-time environment and process data in JSON, XML, CSV, or Avro format.
Develop a mapping to read, transform, and write data according to your business needs. When you create a streaming mapping, select the environment and the run-time engine. When you run a streaming mapping, the Data Integration Service pushes the processing to nodes on a Databricks Spark engine in the Databricks cluster or on a Spark engine in the Hadoop cluster.
Use the following steps as a guideline when you develop a streaming mapping:
Create connections that you want to use to access streaming data.
Create input, output, and reusable objects that you want to use in the mapping. Create physical data objects to use as mapping input or output.
Create reusable transformations that you want to use.
Create rules.
Create the streaming mapping.
Add objects to the mapping.
Link ports between mapping objects to create a flow of data from sources to targets, through transformations.
Configure the mapping properties.
Validate the mapping to identify errors.
Save the mapping and run it to see the mapping output.
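Conceptually, a streaming mapping expresses a read, transform, write pipeline over a continuous flow of records. The sketch below illustrates that flow on JSON-formatted records in plain Python; the function names and the sample records are illustrative only and are not the Developer tool or Data Integration Service API.

```python
import json

def read_source(lines):
    """Source object: parse each incoming JSON record."""
    for line in lines:
        yield json.loads(line)

def transform(records):
    """Transformation: drop records without an "amount" and derive a field."""
    for rec in records:
        if rec.get("amount") is not None:
            rec["amount_doubled"] = rec["amount"] * 2
            yield rec

def write_target(records):
    """Target object: collect transformed records (stands in for a real sink)."""
    return list(records)

# Linked together, the objects form a flow of data from source to target,
# through the transformation, mirroring the linked ports in a mapping.
stream = ['{"id": 1, "amount": 9.5}', '{"id": 2}']
out = write_target(transform(read_source(stream)))
print(out)
```

In a real streaming mapping, each stage corresponds to a mapping object whose ports you link in the Developer tool, and the Spark engine executes the equivalent logic on the cluster.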