You work at an organization that runs the majority of its data processing jobs on the Spark engine. To ensure that Spark jobs have access to cluster resources, you direct Spark jobs to the queue Spark_only.
To set the YARN queue for Spark jobs, you can configure the following property in the Hadoop connection:
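As a hedged sketch, the property in question is assumed to be the standard Spark property spark.yarn.queue, set in the Spark advanced properties of the Hadoop connection:

```
# Assumed property name; verify against your Informatica version's
# Hadoop connection documentation before using.
spark.yarn.queue=Spark_only
```

With this setting, every Spark job submitted through the connection is routed to the Spark_only YARN queue instead of the default queue.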
The following image shows how a scheduler directs Spark jobs to the queue that you configured:
The scheduler directs the submitted Spark job to the queue Spark_only, while the submitted Blaze and Hive jobs are directed to the default queue.