Table of Contents


  1. Preface
  2. Introduction to Data Engineering Streaming
  3. Data Engineering Streaming Administration
  4. Sources in a Streaming Mapping
  5. Targets in a Streaming Mapping
  6. Streaming Mappings
  7. Transformation in Streaming Mappings
  8. Window Transformation
  9. Appendix A: Connections
  10. Appendix B: Monitoring REST API Reference
  11. Appendix C: Sample Files

Streaming Mappings Overview

Use the Developer tool to create and run streaming mappings in either the Databricks or the Hadoop run-time environment and process data that is in JSON, XML, CSV, or Avro format.
Develop a mapping to read, transform, and write data according to your business needs. When you create a streaming mapping, select the environment and the run-time engine. When you run a streaming mapping, the Data Integration Service pushes the processing to nodes on a Databricks Spark engine in the Databricks cluster or on a Spark engine in the Hadoop cluster.
Use the following steps as a guideline when you develop a streaming mapping:
  1. Create connections that you want to use to access streaming data.
  2. Create input, output, and reusable objects that you want to use in the mapping. Create physical data objects to use as mapping input or output.
  3. Create reusable transformations that you want to use.
  4. Create rules.
  5. Create the streaming mapping.
  6. Add objects to the mapping.
  7. Link ports between mapping objects to create a flow of data from sources to targets, through transformations.
  8. Configure the mapping.
  9. Validate the mapping to identify errors.
  10. Save the mapping and run it to see the mapping output.
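Conceptually, a streaming mapping moves records from sources to targets through transformations. The following minimal Python sketch illustrates that read-transform-write flow for JSON records outside the Developer tool; the record fields, the filter condition, and the in-memory sink are hypothetical and stand in for the connections, transformations, and targets you configure in a real mapping:

```python
import json

def read_stream(lines):
    """Source: parse each incoming JSON line into a record (hypothetical schema)."""
    for line in lines:
        yield json.loads(line)

def transform(records):
    """Transformation: keep only records above a threshold and add a derived field."""
    for rec in records:
        if rec["amount"] > 100:
            rec["flagged"] = True
            yield rec

def write_stream(records, sink):
    """Target: append each transformed record to the sink."""
    for rec in records:
        sink.append(rec)

# Simulated streaming input; in a real mapping, records arrive continuously.
source = ['{"id": 1, "amount": 50}', '{"id": 2, "amount": 250}']
sink = []
write_stream(transform(read_stream(source)), sink)
```

Because the stages are chained generators, each record flows through the pipeline one at a time, mirroring how a streaming engine processes unbounded data rather than loading it all at once.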
