We don't set anything to my knowledge – I certainly don't. I presumed that responsibility would live in Halyard and be conveyed through each service's hal config. I think Jacob was doing some work baselining needs when he was doing the quota analysis work. I think Travis at one time might have monkeyed with constraints on his internal long-running deployment so that Orca and Clouddriver deployed on different nodes (rough sketches of both ideas below).
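For what it's worth, if you do want to pin sizes, I believe Halyard's custom component sizing in ~/.hal/config is the intended hook for this – roughly something like the following (service key and the numbers are illustrative, not a recommendation):

    deploymentEnvironment:
      customSizing:
        spin-clouddriver:        # per-service requests/limits, keyed by service name
          requests:
            cpu: 1000m
            memory: 2Gi
          limits:
            cpu: 1000m
            memory: 2Gi

And the kind of constraint Travis was playing with would presumably be a standard Kubernetes podAntiAffinity rule, applied to (say) the orca deployment's pod template:

    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: spin-clouddriver   # don't schedule orca onto a node already running clouddriver
            topologyKey: kubernetes.io/hostname

I haven't used either in anger here, so treat both as sketches to verify against your Halyard version.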
I do think that an n1-standard-1 [*] is too small as a node, mostly because it has limited RAM. While it doesn't seem like you'd need a lot of RAM, we're running a lot of independent JVMs, and Java (especially with Spring) carries a lot of RAM overhead. Clouddriver and Orca use a lot of RAM as well.
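If RAM pressure does bite, one knob I'd expect to work is capping each JVM's heap through a Halyard per-service settings file (the path and env handling here are from memory, so double-check against your Halyard version):

    # ~/.hal/default/service-settings/clouddriver.yml
    env:
      JAVA_OPTS: "-Xms512m -Xmx2g"   # cap the heap so one service can't starve the node

The same pattern should apply to orca or any other service by dropping a matching <service>.yml into that directory.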
We use a pool of n1-highmem-2 nodes in our validation process and deploy vanilla Spinnaker via Halyard, though Halyard itself is deployed to a VM outside the k8s cluster to ease troubleshooting of catastrophic builds. I am guessing that you'd need at least two nodes (redundancy aside) – we use more because we're doing other things. When we test VM deployments, we use an n1-standard-4 or equivalent on other platforms, and that includes Redis, Halyard, and anything else needed. Those VMs have some RAM headroom, so I'm guessing two nodes for the suite of microservices plus Halyard would be sufficient as a starting point, even with some additional k8s overhead.
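For concreteness, standing up a comparable pool on GKE would look roughly like this (cluster name and zone are placeholders):

    # two n1-highmem-2 nodes (2 vCPUs / 13G each) as a starting point
    gcloud container clusters create spinnaker-test \
        --zone us-central1-a \
        --machine-type n1-highmem-2 \
        --num-nodes 2

That gives you ~26G across the pool, which lines up with the single n1-standard-4 (15G) we use for VM installs plus some slack for k8s system pods.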
At the moment we are using 1.9.x, though we ran 1.8.x at one time without noticing an issue. We've run most versions over the past two years without hitting a compatibility issue.
[*] For the sake of discussion:
    n1-standard-1 = [1 core,  3.75G]
    n1-highmem-2  = [2 cores, 13G]
    n1-standard-4 = [4 cores, 15G]