Step 2. Provide Docker and Kubernetes Auto Scaling Information
On the Kubernetes Auto Scaling page, provide the required information to connect to the Docker image and pod.
In the Docker and Kubernetes Information section, enter the Docker image name, the pod name, the path and file name of the license key, and the volume directory. Enter an existing directory in which to create the tmpfs volume. Ensure that you specify different file paths for the volume directory and the license key directory.
Optionally, choose to enable Kubernetes autoscaling. The Horizontal Pod Autoscaler scales the number of pods in a deployment or replica set based on the CPU utilization that you specify for the CPU load factor. Kubernetes creates horizontally scalable worker nodes that are added to the grid with the Data Integration Service process enabled.
If you selected Kubernetes autoscaling, enter the following CPU utilization and worker node information:
CPU load factor. Kubernetes monitors the Kubernetes worker nodes and autoscales the Informatica worker nodes only when CPU utilization reaches the percentage that you set. Default is 80 percent.
Maximum number of Informatica worker nodes. The maximum number of worker nodes to create in the domain. Default is 3. The minimum number of worker nodes in the domain is 1.
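These settings map directly onto a Horizontal Pod Autoscaler definition. A minimal sketch using the default values above (80 percent CPU, 1 to 3 worker nodes); the deployment and HPA names are hypothetical:

```yaml
# Hypothetical HPA corresponding to the defaults described above.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: infa-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: infa-worker            # example deployment of Informatica worker nodes
  minReplicas: 1                 # minimum number of worker nodes in the domain
  maxReplicas: 3                 # maximum number of Informatica worker nodes (default)
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # CPU load factor (default is 80 percent)
```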
Optionally, choose to store data outside of the containers by selecting the persistent volume option. If you select persistent volume, you cannot join the domain, and you must specify the persistent volume claim name.
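The persistent volume claim is referenced by name from the pod spec so that data survives container restarts. A sketch, assuming a hypothetical claim name and storage size:

```yaml
# Hypothetical persistent volume claim; the name is what you specify on the page.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: infa-data-claim          # persistent volume claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi              # example size
---
# A pod then mounts the claim by name, for example:
# volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: infa-data-claim
```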
Optionally, choose to expose additional ports in the container. Enter the port numbers as comma-separated values.
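Additional ports entered as comma-separated values (for example, 8080,9090) correspond to extra containerPort entries in the container spec. A sketch with placeholder port numbers:

```yaml
# Hypothetical container fragment exposing two additional ports.
containers:
  - name: infa-container
    image: registry.example.com/infa/worker:latest
    ports:
      - containerPort: 8080      # first additional port
      - containerPort: 9090      # second additional port
```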