A streaming mapping receives data from unbounded data sources. An unbounded data source is one where data flows in continuously with no definite boundary. Sources stream data as events. The Spark engine processes the data and continuously updates the results in a result table.
The following image shows how the Spark engine receives data and publishes data in micro batches:
The Spark engine uses Spark Structured Streaming to process incoming data in micro batches. Spark Structured Streaming receives data from streaming sources such as Kafka and divides the data into micro batches. The Spark engine continuously processes the data stream as a series of small batch jobs, with end-to-end latencies as low as 100 milliseconds and exactly-once fault tolerance guarantees.
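The micro-batch model described above can be illustrated with a minimal pure-Python sketch. This is not the Spark API: the event stream, batch size, and per-key counting aggregation are illustrative assumptions. It shows the core idea that incoming events are grouped into small batches and the result table is updated once per batch rather than per event.

```python
from itertools import islice

def micro_batches(events, batch_size):
    """Group an (effectively unbounded) event iterator into fixed-size micro batches."""
    it = iter(events)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def process_stream(events, batch_size=3):
    """Maintain a running per-key count in a result table,
    updating it once per micro batch, as Structured Streaming does."""
    result_table = {}
    for batch in micro_batches(events, batch_size):
        for key in batch:
            result_table[key] = result_table.get(key, 0) + 1
        # In a real streaming job, the updated result table would be
        # published to a sink here after each micro batch.
    return result_table

# Example: a short finite stream stands in for the unbounded source.
counts = process_stream(["click", "view", "click", "view", "view"])
print(counts)  # {'click': 2, 'view': 3}
```

In an actual Spark Structured Streaming job, the batching, state management, and exactly-once checkpointing are handled by the engine; this sketch only mirrors the processing shape.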