Shared Content for Data Engineering All Products
Aborts all deployed mapping jobs that are configured to run on the Spark engine. You can choose to abort queued jobs, running jobs, or both.
Creates a cluster configuration from cluster parameters that you specify on the command line.
Lists the mapping options in an application.
Purges all job information from the queue when you enable big data recovery for the Data Integration Service.
Updates mapping options in an application.
Updates the optimization level for multiple mappings in an application.
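Commands like the ones described above are typically run through the infacmd command line. As an illustrative sketch only, the snippet below assembles an infacmd invocation in Python; the plugin name, command name, and option flags (`-dn`, `-sn`, `-un`, `-pd`) follow common infacmd conventions but are assumptions here, so consult the product command reference for the exact syntax.

```python
# Illustrative sketch: building an infacmd argument list for a command
# such as aborting all deployed Spark mapping jobs. Command and flag
# names are assumptions, not confirmed syntax.

def infacmd(plugin, command, **options):
    """Assemble an argument list like:
    infacmd.sh ms abortAllJobs -dn MyDomain -sn MyDIS -un admin -pd secret
    """
    args = ["infacmd.sh", plugin, command]
    for flag, value in options.items():
        args.append("-" + flag)   # e.g. -dn, -sn, -un, -pd (assumed flags)
        args.append(str(value))
    return args

# Hypothetical domain/service names for illustration only.
cmd = infacmd("ms", "abortAllJobs", dn="MyDomain", sn="MyDIS",
              un="admin", pd="secret")
print(" ".join(cmd))
```

The argument list could then be passed to `subprocess.run` in an environment where infacmd is installed; it is printed here only so the shape of the invocation is visible.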