Sorry, specifically namespace ResourceQuotas.
From that page:
When several users or teams share a cluster with a fixed number of nodes, there is a concern that one team could use more than its fair share of resources.
Resource quotas are a tool for administrators to address this concern.
Here is an example ResourceQuota:
$ cat resourceQuota.yaml
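For concreteness, the file would look something like this (a minimal sketch; the name and the 10-pod / 10-CPU / 10Gi limits are placeholder values I'm assuming, sized to the ten-pod pool below):

```yaml
# resourceQuota.yaml -- illustrative values only
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: test
spec:
  hard:
    pods: "10"
    requests.cpu: "10"
    requests.memory: 10Gi
```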
Now imagine that I'm running ten pods (a single Java container per pod), each of which only needs 1 GB of heap pre-allocated, and I perform a red/black deploy (pardon me, this is a V1 provider example):
- Execute a pipeline that targets the namespace "test".
- A second ReplicaSet is created, with a new HorizontalPodAutoscaler whose desired replicas match the original pool (duplicating it to a total of 20 pods, each requesting 1 CPU and 1Gi).
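When the second ReplicaSet tries to create pods past the quota, its events show a quota-rejection error roughly like the following (exact wording varies by Kubernetes version; the ReplicaSet and quota names are placeholders):

```
$ kubectl describe replicaset myapp-v2 -n test
...
Error creating: pods "myapp-v2-xxxxx" is forbidden: exceeded quota: compute-quota,
requested: pods=1,requests.cpu=1,requests.memory=1Gi,
used: pods=10,requests.cpu=10,requests.memory=10Gi,
limited: pods=10,requests.cpu=10,requests.memory=10Gi
```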
In red/black, I'd expect all of the new pods to come up healthy before cutting over. The problem is that the namespace has a quota, and performing the red/black deploy means essentially exceeding it with 2x the pods. This would be a non-issue if we did a 1:1 rolling update, but we'd prefer faster rollbacks when needed.
Assuming my cluster either has room for the doubled pod count, or I've enabled the Cluster Autoscaler, is there any recommendation for how I can scale the ResourceQuota up and down in a pipeline?
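For context, the raw mechanism I'm picturing is just a pair of pipeline stages shelling out to kubectl (quota name and values are placeholders):

```shell
# Pre-deploy stage: double the quota to make room for the overlap window.
kubectl patch resourcequota compute-quota -n test \
  --patch '{"spec":{"hard":{"pods":"20","requests.cpu":"20","requests.memory":"20Gi"}}}'

# Post-cutover stage: restore it once the old ReplicaSet is torn down.
kubectl patch resourcequota compute-quota -n test \
  --patch '{"spec":{"hard":{"pods":"10","requests.cpu":"10","requests.memory":"10Gi"}}}'
```

But I'd rather not hand-roll this if there's a supported way to do it from the pipeline.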