
Table of Contents

  1. Preface
  2. Data Integration performance tuning overview
  3. Optimizing targets
  4. Optimizing sources
  5. Optimizing mappings
  6. Optimizing mapping tasks
  7. Optimizing advanced clusters
  8. Optimizing system performance

Data Integration Performance Tuning

Secure Agent machine sizing requirements

For optimal performance, size the Secure Agent machine's CPU and memory to match the workload.
A typical cloud data lake to cloud data warehouse mapping might require up to 3 CPU cores and 1 GB of JVM heap memory for a data size of approximately 7.5 GB. The default JVM heap memory is 64 MB. Increasing the DTM buffer block size and buffer pool size increases the memory footprint further.
The following graph illustrates the physical memory (resident memory) usage for a flat file to cloud data warehouse passthrough mapping as the number of partitions increases. The buffer block size is set to 100 MB and the JVM heap memory is set to 1 GB.
The graph shows that resident memory usage increases with the number of partitions. For example, with one partition, resident memory usage is 1536 MB. With eight partitions, resident memory usage is 9626 MB.
Adding partitions to a cloud data lake to cloud data warehouse mapping linearly increases the CPU core requirement. The following graph illustrates the CPU consumption in cores for a flat file to cloud data warehouse passthrough mapping with an increasing number of partitions.
The graph shows that CPU core usage increases with the number of partitions. For example, a mapping with one partition requires 1.8 CPU cores. A mapping with eight partitions requires 14.4 CPU cores.
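The scaling described above can be sketched as a simple linear model fit to the two data points quoted from the graphs. This is an illustrative estimate only, not an Informatica sizing formula; actual usage varies with mapping logic, buffer settings, and JVM options.

```python
def estimate_resources(partitions):
    """Rough linear estimate of CPU cores and resident memory (MB) for a
    flat file to cloud data warehouse passthrough mapping, fit to the two
    measured points above: 1 partition -> 1.8 cores / 1536 MB, and
    8 partitions -> 14.4 cores / 9626 MB."""
    cpu_cores = 1.8 * partitions                        # ~1.8 cores per partition
    memory_mb = 1536 + (9626 - 1536) / 7 * (partitions - 1)  # ~1156 MB per extra partition
    return cpu_cores, round(memory_mb)

# Example: estimate the footprint of a four-partition mapping.
cores, mem = estimate_resources(4)
```

Use such an estimate only as a starting point for capacity planning; verify with the actual resident memory and CPU usage of the Secure Agent machine under load.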
To improve performance, configure the maxDTMProcesses custom property and the JVM options.
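As a sketch, both settings are applied to the Secure Agent's Data Integration Server service in Administrator (Runtime Environments > agent > Edit). The property names below follow Informatica's documented conventions, but the values are placeholders; verify the exact names and supported limits for your agent version.

```
# Custom Configuration property (System Configuration Details):
#   limits how many DTM processes can run concurrently on the agent
maxDTMProcesses = 4

# DTM JVM option: raise the per-DTM heap from the 64 MB default
JVMOption1 = -Xmx1024m
```

Restart the Secure Agent after changing these properties so that new DTM processes pick up the updated settings.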
