
Table of Contents


  1. Preface
  2. Data Integration performance tuning overview
  3. Optimizing targets
  4. Optimizing sources
  5. Optimizing mappings
  6. Optimizing mapping tasks
  7. Optimizing advanced clusters
  8. Optimizing system performance

Data Integration Performance Tuning

Enabling concurrent caches

When Data Integration runs a mapping that contains cached Lookup transformations, it builds an in-memory cache for each one when that transformation processes its first row of data. If a mapping contains multiple Lookup transformations, Data Integration builds the caches sequentially as each Lookup transformation receives its first row, which slows Lookup transformation processing.

You can enable concurrent caches to improve performance. When you set the number of additional concurrent pipelines to one or more, Data Integration builds the caches concurrently rather than sequentially. Performance improves most when the mapping contains active transformations that take time to complete, such as Aggregator, Joiner, or Sorter transformations: with multiple concurrent pipelines enabled, Data Integration doesn't wait for those active transformations to finish before it builds a lookup cache, and other Lookup transformations in the pipeline build their caches at the same time.
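The benefit of concurrent cache building can be illustrated with a small sketch. This is not Data Integration's actual implementation; it is a hypothetical Python simulation in which each "cache build" is a fixed delay, comparing the sequential default against a thread pool that builds all caches at once:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def build_cache(name, delay):
    """Simulate building one lookup cache (e.g., reading a lookup table)."""
    time.sleep(delay)
    return name

# Hypothetical lookups: (cache name, seconds to build).
lookups = [("countries", 0.2), ("currencies", 0.2), ("products", 0.2)]

# Sequential: each cache is built only after the previous one finishes.
start = time.perf_counter()
for name, delay in lookups:
    build_cache(name, delay)
sequential = time.perf_counter() - start

# Concurrent: all caches are built in parallel, as with additional
# concurrent pipelines enabled.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(lookups)) as pool:
    list(pool.map(lambda args: build_cache(*args), lookups))
concurrent = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s, concurrent: {concurrent:.2f}s")
```

With three equal-cost caches, the concurrent run takes roughly the time of one build instead of three, which mirrors why the setting helps most when several caches, or slow upstream transformations, would otherwise be waited on one at a time.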
