→ As a business requirement, we will write logic in our backend operators to segregate workloads across node-groups based on customer requirements.
→ The initial approach is to create two separate node-groups in the provisioned EKS cluster via Terraform:
  - Node-group 1 (internal):
    - Instance type: t3a.large
    - Scaling: Min 1, Desired 1, Max 5
    - Labels: `type: scoutflo-internal`
  - Node-group 2 (customer):
    - Instance type: will vary app to app
    - Scaling: Min 1, Desired 2, Max 5
    - Labels: `type: scoutflo-customer`
To add specific labels to a node-group via Terraform, use the following block inside the node-group configuration:
```hcl
labels = {
  label_key_1 = "label_value_1"
  label_key_2 = "label_value_2"
  # ...
}
```
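For illustration, here is a minimal sketch of what the two node-groups could look like using the `aws_eks_node_group` resource. The resource names and the cluster, IAM role, subnet, and instance-type references (`aws_eks_cluster.main`, `aws_iam_role.node`, `var.private_subnet_ids`, `var.customer_instance_types`) are placeholders, not our actual Terraform:

```hcl
# Illustrative sketch only: cluster, role, and subnet references are placeholders.
resource "aws_eks_node_group" "scoutflo_internal" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "scoutflo-internal"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.private_subnet_ids
  instance_types  = ["t3a.large"]

  scaling_config {
    min_size     = 1
    desired_size = 1
    max_size     = 5
  }

  labels = {
    type = "scoutflo-internal"
  }
}

resource "aws_eks_node_group" "scoutflo_customer" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "scoutflo-customer"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.private_subnet_ids
  instance_types  = var.customer_instance_types # varies app to app

  scaling_config {
    min_size     = 1
    desired_size = 2
    max_size     = 5
  }

  labels = {
    type = "scoutflo-customer"
  }
}
```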
→ While deploying services to the EKS cluster, we will pin each pod to the appropriate node-group via the nodeSelector configuration in its Helm chart. For example, it would look something like:
```yaml
nodeSelector:
  type: scoutflo-internal # kubecost, cert-manager, nginx-ingress-controller, all internal scoutflo actions (jobs)
```

```yaml
nodeSelector:
  type: scoutflo-customer # all other scoutflo-deploy apps triggered by the end user
```
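As a quick sanity check (a verification step, not part of the charts), the node labels can be confirmed with kubectl's label selector:

```bash
# Show only nodes in the internal node-group
kubectl get nodes -l type=scoutflo-internal

# Show only nodes in the customer node-group
kubectl get nodes -l type=scoutflo-customer
```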
List of nodeSelector variable names per app's Helm chart:
| App | Variable Name |
|---|---|
| Prometheus | server.nodeSelector |
| Grafana | nodeSelector |
| SigNoz | otelCollector.nodeSelector, clickhouse.nodeSelector, frontend.nodeSelector |
| MongoDB | nodeSelector |
| PostgreSQL | primary.nodeSelector |
| Clickhouse | nodeSelector |
| Chatwoot | nodeSelector |
| Ghost | nodeSelector |
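As a usage sketch, the variable from the table can be set at install time with `--set`. The release name, chart reference, and label value below are illustrative (assuming the community Prometheus chart and a customer-facing deployment):

```bash
# server.nodeSelector is the Prometheus variable from the table above
helm upgrade --install prometheus prometheus-community/prometheus \
  --set server.nodeSelector.type=scoutflo-customer
```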