Step 2. Provide Kubernetes Auto Scaling Information
On the Kubernetes Auto Scaling page, provide the required information to connect to the Docker image and pod.
In the Docker and Kubernetes information section, enter the Docker image name, the pod name, the path and file name of the license key, and the volume directory. Enter an existing directory in which to create the tmpfs volume. Ensure that you enter different values for the volume directory and the license key directory.
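As an illustration of the tmpfs volume requirement, a memory-backed emptyDir volume can be mounted at the configured volume directory. This is a hedged sketch only; the pod name, image name, and mount path below are placeholders, not values generated by the installer:

```yaml
# Hypothetical illustration of a tmpfs-style (memory-backed) volume.
# The names and the mount path are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: pcis-worker
spec:
  containers:
  - name: pcis
    image: example/pcis-worker:latest   # placeholder image name
    volumeMounts:
    - name: tmpfs-volume
      mountPath: /opt/infa/volume       # the existing volume directory
  volumes:
  - name: tmpfs-volume
    emptyDir:
      medium: Memory                    # Memory medium makes this a tmpfs volume
```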
Optionally, choose to set Kubernetes autoscaling. The Horizontal Pod Autoscaler scales the number of pods in a deployment or replica set based on the CPU utilization that you specify for the CPU load factor. Kubernetes creates horizontally scalable worker nodes that are added to the grid with the PowerCenter Integration Service process enabled.
If you selected Kubernetes autoscaling, enter the following CPU utilization and worker node information:
CPU load factor. Kubernetes monitors the Kubernetes worker nodes and autoscales the Informatica worker nodes only when CPU utilization reaches this percentage. Default is 80 percent.
Maximum number of Informatica worker nodes. The maximum number of worker nodes to create in the domain. Default is 3.
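The two autoscaling settings above correspond to fields of a Kubernetes HorizontalPodAutoscaler. A minimal sketch under the assumption of a deployment named pcis-worker (the resource names are hypothetical; the actual resources are generated for you):

```yaml
# Hedged sketch: how the CPU load factor and maximum worker node count
# map onto a HorizontalPodAutoscaler. Names are placeholders.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pcis-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pcis-worker                  # hypothetical deployment name
  minReplicas: 1
  maxReplicas: 3                       # maximum number of Informatica worker nodes
  targetCPUUtilizationPercentage: 80   # CPU load factor (default 80 percent)
```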
Optionally, choose to expose additional ports in the container, and enter them as comma-separated values.
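For reference, additional ports entered here surface in the container specification roughly as follows. The port numbers and names below are placeholders for illustration only:

```yaml
# Hypothetical sketch of additional exposed ports in a container spec.
containers:
- name: pcis
  image: example/pcis-worker:latest   # placeholder image name
  ports:
  - containerPort: 6005               # placeholder additional port
  - containerPort: 6006               # placeholder additional port
```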