Use the Developer tool to create and run Intelligent Streaming mappings in the Hadoop run-time environment and process data that is in JSON, XML, CSV, or Avro format.
When you create a streaming mapping, select the Hadoop environment and the Spark engine. When you run a streaming mapping, the Data Integration Service pushes the processing to nodes on a Spark engine in the Hadoop cluster.
When you configure the mapping, you can specify the following configurations:
Mapping configuration. The Developer tool uses this configuration when you run mappings from the Run dialog box or from the command line. You must configure a Messaging connection for the mapping and the run-time properties for the Hadoop environment. You can configure the Spark engine properties in the Hadoop connection. You can also use parameters to represent properties in the Hadoop environment if you want to assign different values to those properties between mapping runs.
Run configuration. This configuration applies to mappings that you run through the Developer tool. You can configure the source properties for streaming mappings.
When you run the mapping, the Data Integration Service converts the mapping to a Scala program, packages it in a JAR file, and sends it to the Hadoop cluster. You can view the details of the execution in the Spark execution plan in the Developer tool or the Administrator tool.
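The generated Scala program is not exposed for editing, but conceptually it resembles a Spark Structured Streaming job that reads from the messaging source, applies the mapping logic, and writes to the target. The following sketch is illustrative only; the broker address, topic name, schema, and output paths are assumptions, not values produced by the Data Integration Service.

```scala
// Illustrative sketch of a Spark Structured Streaming job similar in shape
// to what the Data Integration Service generates for a streaming mapping.
// All names here (broker, topic, schema, paths) are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{DoubleType, StringType, StructType}

object StreamingMappingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("streaming-mapping-sketch")
      .getOrCreate()

    // Schema of the incoming JSON messages (assumed for illustration).
    val schema = new StructType()
      .add("sensorId", StringType)
      .add("reading", DoubleType)

    // Read a stream of messages from a Kafka topic
    // (the role the Messaging connection plays in the mapping).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "sensor-events")
      .load()

    // Parse the JSON payload and project the mapped fields.
    val parsed = raw
      .select(from_json(col("value").cast("string"), schema).as("data"))
      .select(col("data.sensorId"), col("data.reading"))

    // Write the transformed stream to the target (HDFS path assumed).
    val query = parsed.writeStream
      .format("parquet")
      .option("path", "/tmp/streaming-mapping/output")
      .option("checkpointLocation", "/tmp/streaming-mapping/checkpoint")
      .start()

    query.awaitTermination()
  }
}
```

Because the job runs on the cluster, the sketch requires a Spark runtime and a reachable Kafka broker; it is a shape reference, not a standalone program you would submit in place of the generated JAR.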