In the case of a mapping failure, to enable the mapping to start reading data from the time of failure, configure the State Store property. The state store directory that you specify is created under the directory that you configure in the execution options for the Data Integration Service.
Specify the following properties to configure how the data is processed:
Maximum Rows Read. Specify the maximum number of rows that are read before the mapping stops running. Default is Read All Rows.
Maximum Runtime Interval. Specify the maximum time to run the mapping before it stops. If you set values for this property and the Maximum Rows Read property, the mapping stops running after one of the criteria is met. The default value enables the mapping to run without stopping.
State Store. Specify the HDFS location on the cluster to store information about the state of the Spark job. You can also configure the state store when you configure the execution options for the Data Integration Service (see the checkpointing sketch below).
You can use these properties to test the mapping.
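Informatica does not document the internals of the state store, but at the Spark level it corresponds to streaming checkpointing: the job periodically writes source offsets and operator state to a durable HDFS location, so a restarted job resumes from the point of failure instead of rereading the stream from the beginning. The following Spark Structured Streaming sketch illustrates the idea; the rate source and the HDFS path are illustrative assumptions, not the program that Informatica generates.

```scala
import org.apache.spark.sql.SparkSession

object StateStoreSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("state-store-sketch")
      .getOrCreate()

    // The rate source stands in for a real streaming source such as Kafka.
    val events = spark.readStream
      .format("rate")
      .option("rowsPerSecond", "10")
      .load()

    // The checkpoint location plays the role of the state store: Spark
    // persists offsets and operator state under this directory, so a
    // restarted query resumes from the time of failure. The path is a
    // hypothetical example.
    val query = events.writeStream
      .format("console")
      .option("checkpointLocation", "hdfs:///tmp/state-store-sketch")
      .start()

    query.awaitTermination()
  }
}
```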
Specify the following streaming properties:
Batch interval. The Spark engine processes the streaming data from sources and publishes the data in batches. The batch interval is the number of seconds after which a batch is submitted for processing (see the sketch after this list).
Cache refresh interval. You can cache a large lookup source or small lookup tables. When you cache the lookup source, the Data Integration Service queries the lookup cache instead of querying the lookup source for each input row. You can configure the interval for refreshing the cache used in a relational Lookup transformation.
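The batch interval corresponds to the batch duration in Spark Streaming's DStream model, where the engine collects source data for the configured number of seconds and then submits it as a batch. A minimal sketch, assuming a 10-second interval and a socket source chosen purely for illustration:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object BatchIntervalSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("batch-interval-sketch")

    // The batch interval: a batch of source data is submitted for
    // processing every 10 seconds.
    val ssc = new StreamingContext(conf, Seconds(10))

    // Count the lines received in each batch and print the result.
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```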
The Developer tool applies configuration properties when you run streaming mappings. Set configuration properties for streaming mappings in the mapping run configuration.
Configure the following source properties:
Read all rows. Reads all rows from the source.
Read up to how many rows. The maximum number of rows to read from the source if you do not read all rows.
Maximum runtime interval. The maximum time to run the mapping before it stops. If you set values for this property and the Maximum Rows Read property, the mapping stops running after one of the criteria is met (see the sketch below).
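The Developer tool enforces these limits itself; purely as an illustration of the behavior, a Spark Structured Streaming query can be bounded in the same way. A sketch with a hypothetical five-minute runtime limit; the rate source is again a stand-in:

```scala
import org.apache.spark.sql.SparkSession

object MaxRuntimeSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("max-runtime-sketch")
      .getOrCreate()

    val events = spark.readStream
      .format("rate") // stand-in for a real streaming source
      .load()

    val query = events.writeStream
      .format("console")
      .start()

    // Enforce a maximum runtime interval: awaitTermination returns false
    // if the timeout elapses first, and the query is then stopped. A row
    // cap ("read up to how many rows") could be enforced similarly with a
    // StreamingQueryListener that tracks numInputRows.
    val maxRuntimeMs = 5 * 60 * 1000L // hypothetical 5-minute limit
    if (!query.awaitTermination(maxRuntimeMs)) {
      query.stop()
    }
  }
}
```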
When you run the mapping, the Data Integration Service converts the mapping to a Scala program, packages it in a JAR file, and sends it to the Hadoop cluster. You can view the details in the Spark execution plan in the Developer tool or the Administrator tool.