Table of Contents

  1. Preface
  2. Data Integration performance tuning overview
  3. Optimizing targets
  4. Optimizing sources
  5. Optimizing mappings
  6. Optimizing mapping tasks
  7. Optimizing advanced clusters
  8. Optimizing system performance

Data Integration Performance Tuning

Optimizing the buffer block size

If the Secure Agent machine has limited physical memory and the mapping contains a large number of sources, targets, or partitions, you might need to decrease the buffer block size.
If you're manipulating unusually large rows of data, increase the buffer block size to improve performance. If you don't know the approximate size of the rows, determine the row size by completing the following steps:
  1. On the Explore page, open the mapping.
  2. Open the Target transformation.
  3. Click the Target Fields tab.
  4. Add the precision for all columns in the target.
  5. If you have more than one target in the mapping, repeat steps 2 - 4 for each additional target to calculate the precision for each target.
  6. Repeat steps 2 - 5 for each source definition in the mapping.
  7. Choose the largest precision of all the source and target precisions for the total precision in the buffer block size calculation, as illustrated in the sketch after this list.
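
The following Python sketch illustrates the calculation that steps 4 through 7 describe: sum the column precisions for each source and target, then keep the largest total. The names and precision values are hypothetical placeholders, not values from a real mapping.

    # Hypothetical column precisions, in bytes, read from the Target Fields tab
    # of each target and from each source definition in the mapping.
    column_precisions = {
        "src_orders": [10, 255, 4000, 38],
        "tgt_orders": [10, 255, 4000, 38],
        "tgt_orders_archive": [10, 255, 8000, 38, 2000],
    }

    # Steps 4 - 6: the precision of a source or target is the sum of its column precisions.
    totals = {name: sum(cols) for name, cols in column_precisions.items()}

    # Step 7: the total precision for the buffer block size calculation is the
    # largest of the source and target totals, that is, the widest row in the mapping.
    total_precision = max(totals.values())

    print(totals)
    print("Total precision:", total_precision)
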
The total precision represents the total bytes needed to move the largest row of data. For example, if the total precision equals 33,000, then Data Integration requires 33,000 bytes in the buffer block to move that row. If the buffer block size is only 64,000 bytes, then Data Integration can't move more than one row at a time.
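
As a quick check, the number of rows that fit in one buffer block is roughly the buffer block size divided by the total precision. The sketch below reuses the 33,000-byte example; the target of 10 rows per block is only an illustration, not a recommendation from this guide.

    # Figures from the example above: the widest row needs 33,000 bytes and
    # one buffer block holds 64,000 bytes.
    total_precision = 33_000      # bytes required for the largest row
    buffer_block_size = 64_000    # bytes available in one buffer block

    # Whole rows that fit in a single buffer block.
    rows_per_block = buffer_block_size // total_precision
    print(rows_per_block)         # 1 -> the block can't move more than one row at a time

    # Bytes a block would need to carry, for example, 10 such rows
    # (10 is an arbitrary illustrative target).
    print(10 * total_precision)   # 330,000 bytes
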
To set the buffer block size, open the task and edit the Default Buffer Block Size advanced session property.
As with DTM buffer memory allocation, increasing the buffer block size should improve performance. If you don't see a performance increase, then buffer block size isn't a factor in task performance.
