Run jobs in separate processes
You can configure the Data Integration Service to run jobs in separate DTM processes or in the Data Integration Service process. Running jobs in separate processes improves stability because an unexpected interruption in one job does not affect any other job.
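The isolation idea can be illustrated generically (this is not the Informatica DTM API; the helper name and job payloads are hypothetical):

```python
import subprocess
import sys

def run_job_in_separate_process(job_code: str) -> int:
    """Run a job as its own OS process. If the job crashes, only its
    process dies; the caller sees a nonzero exit code and keeps running."""
    result = subprocess.run([sys.executable, "-c", job_code])
    return result.returncode

# A failing job does not interrupt the calling service:
ok_code = run_job_in_separate_process("print('job finished')")
bad_code = run_job_in_separate_process("raise SystemExit(3)")
```

Running both jobs in the same process instead would mean the second job's failure could take down the first.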
Maintain connection pools
You can configure whether the Data Integration Service maintains connection pools for database connections when the service processes jobs. When you configure connection pooling, the Data Integration Service maintains and reuses a pool of database connections. Reusing connections optimizes performance because it minimizes the amount of time and resources used to open and close multiple database connections.
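A minimal sketch of the pooling pattern described above, using SQLite as a stand-in database (the `ConnectionPool` class is illustrative, not part of the product):

```python
import queue
import sqlite3

class ConnectionPool:
    """Open a fixed set of database connections once, then hand them
    out and take them back, instead of opening and closing a new
    connection for every job."""

    def __init__(self, size: int):
        self._pool: queue.Queue = queue.Queue()
        for _ in range(size):
            # Connections are opened up front and reused thereafter.
            self._pool.put(sqlite3.connect(":memory:", check_same_thread=False))

    def acquire(self) -> sqlite3.Connection:
        return self._pool.get()

    def release(self, conn: sqlite3.Connection) -> None:
        self._pool.put(conn)

# Reuse: releasing and re-acquiring yields the same connection object.
pool = ConnectionPool(size=1)
first = pool.acquire()
pool.release(first)
second = pool.acquire()
```

Because `first` and `second` are the same object, no time is spent opening or closing a second connection.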
Maximize parallelism
If your license includes partitioning, you can enable the Data Integration Service to maximize parallelism when it runs mappings and profiles. When you maximize parallelism, the Data Integration Service dynamically divides the underlying data into partitions and processes all of the partitions concurrently. When the Data Integration Service adds partitions, it increases the number of processing threads, which can optimize mapping and profiling performance.
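The partition-and-process-concurrently idea can be sketched as follows (generic illustration; the function names are hypothetical, not the product's API):

```python
from concurrent.futures import ThreadPoolExecutor

def partition(rows: list, n: int) -> list[list]:
    # Dynamically divide the underlying data into n partitions.
    return [rows[i::n] for i in range(n)]

def process_partitioned(rows: list, transform, n_partitions: int = 4) -> list:
    # One processing thread per partition; partitions run concurrently.
    with ThreadPoolExecutor(max_workers=n_partitions) as pool:
        parts = pool.map(lambda part: [transform(r) for r in part],
                         partition(rows, n_partitions))
    # Gather the per-partition results back into one result set.
    return [row for part in parts for row in part]

doubled = process_partitioned(list(range(10)), lambda x: x * 2)
```

Adding partitions raises `max_workers`, increasing the number of processing threads, which is the lever the paragraph above describes.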
Cache result sets and data objects
You can configure the Data Integration Service to cache results for SQL data service queries and web service requests. You can also configure the service to use data object caching to access pre-built logical data objects and virtual tables. When the Data Integration Service caches result sets and data objects, subsequent jobs can take less time to run.
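Result-set caching in general works like the sketch below: identical requests within the cache lifetime are served from memory instead of re-running the query (the class and its interface are illustrative assumptions, not the service's implementation):

```python
import time

class ResultSetCache:
    """Cache query results for a limited time so repeated identical
    requests skip the expensive query."""

    def __init__(self, ttl_seconds: float = 60.0):
        self._cache: dict = {}
        self._ttl = ttl_seconds

    def fetch(self, query: str, run_query):
        entry = self._cache.get(query)
        if entry is not None and time.monotonic() - entry[0] < self._ttl:
            return entry[1]                      # hit: reuse cached result
        result = run_query(query)                # miss: run the query
        self._cache[query] = (time.monotonic(), result)
        return result

# The second identical request is served from the cache:
calls = []
def slow_query(q):
    calls.append(q)
    return [("row", 1)]

cache = ResultSetCache(ttl_seconds=60)
first = cache.fetch("SELECT * FROM t", slow_query)
second = cache.fetch("SELECT * FROM t", slow_query)
```

Only one real query runs, which is why subsequent jobs can take less time.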
Persist virtual data in temporary tables
You can configure the Data Integration Service to persist virtual data in temporary tables. Business intelligence tools can then retrieve data from the temporary table instead of querying the SQL data service repeatedly, which improves SQL data service performance.
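The technique amounts to materializing a virtual result into a temporary table that downstream tools query directly. A minimal SQLite sketch (the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.5)])

# Persist the "virtual" result in a temporary table so a BI tool can
# read it repeatedly without re-running the underlying query.
conn.execute(
    "CREATE TEMP TABLE big_orders AS SELECT * FROM orders WHERE amount > 15"
)
rows = conn.execute("SELECT id, amount FROM big_orders").fetchall()
```

Each subsequent read hits `big_orders` directly, avoiding the cost of re-evaluating the virtual view.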