Halyard with V2 metadata.annotations: Too long

#1

Hi all,

We’ve come across a strange limitation when deploying Spinnaker 1.10.5 on our GKE 1.10.9 cluster.

We have been using Halyard successfully with a v1 provider account with our setup of about 300 namespaces.

Now we’re trying to implement horizontal scaling as described at https://www.spinnaker.io/setup/productionize/scaling/horizontal-scaling/, which requires us to use a v2 provider account for Halyard.

Halyard spits out this error:

The Secret "spin-clouddriver-caching-files-711549625" is invalid: metadata.annotations: Too long: must have at most 262144 characters
at com.netflix.spinnaker.halyard.deploy.spinnaker.v1.service.distributed.kubernetes.v2.KubernetesV2Utils.apply(KubernetesV2Utils.java:214) ~[halyard-deploy.jar:na]
at com.netflix.spinnaker.halyard.deploy.spinnaker.v1.service.distributed.kubernetes.v2.KubernetesV2Utils.createSecret(KubernetesV2Utils.java:247) ~[halyard-deploy.jar:na]
at com.netflix.spinnaker.halyard.deploy.spinnaker.v1.service.distributed.kubernetes.v2.KubernetesV2Service.stageConfig(KubernetesV2Service.java:474) ~[halyard-deploy.jar:na]
at com.netflix.spinnaker.halyard.deploy.spinnaker.v1.service.distributed.kubernetes.v2.KubernetesV2Service.getResourceYaml(KubernetesV2Service.java:156) ~[halyard-deploy.jar:na]
at com.netflix.spinnaker.halyard.deploy.deployment.v1.KubectlDeployer.lambda$deploy$0(KubectlDeployer.java:77) ~[halyard-deploy.jar:na]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382) ~[na:1.8.0_181]
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580) ~[na:1.8.0_181]
at com.netflix.spinnaker.halyard.deploy.deployment.v1.KubectlDeployer.deploy(KubectlDeployer.java:45) ~[halyard-deploy.jar:na]
at com.netflix.spinnaker.halyard.deploy.deployment.v1.KubectlDeployer.deploy(KubectlDeployer.java:37) ~[halyard-deploy.jar:na]
at com.netflix.spinnaker.halyard.deploy.services.v1.DeployService.deploy(DeployService.java:287) ~[halyard-deploy.jar:na]
at com.netflix.spinnaker.halyard.controllers.v1.DeploymentController.lambda$deploy$20(DeploymentController.java:262) ~[halyard-web.jar:na]
at com.netflix.spinnaker.halyard.core.DaemonResponse$StaticRequestBuilder.build(DaemonResponse.java:127) ~[halyard-core.jar:na]
at com.netflix.spinnaker.halyard.core.tasks.v1.TaskRepository.lambda$submitTask$1(TaskRepository.java:48) ~[halyard-core.jar:na]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_181]

What we see is that there’s a huge base64-encoded annotation that basically contains our whole Halyard config, with all the Kubernetes and GCR provider accounts. I think the annotation is needed for some diffing of the secret.
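For anyone who wants to check this themselves, here’s roughly how we measured the annotation size. This is just a sketch; the namespace and secret name are placeholders, and it only works against a resource that kubectl apply actually managed to create:

kubectl -n <namespace> get secret <secret-created-via-apply> \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}' | wc -c
# the 262144-character limit from the error applies to all annotations on the object combined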

I couldn’t find a Spinnaker issue related to this, but just wanted to ask if anyone else has seen this error. Are there ways to work around this problem? When deploying with a v1 provider account, there is no metadata.annotations field in the secrets.

In the v1-deployed Clouddriver we can see that the secrets have been split into three separate secrets, so there probably already is some mechanism for splitting secrets that are too large during deployment.

1 Like

#2

We figured out that this is due to v2’s way of deploying through kubectl apply -f, which automatically sets the kubectl.kubernetes.io/last-applied-configuration annotation on Secrets and ConfigMaps.

In our case, with a lot of accounts in the Clouddriver configuration, that annotation ends up being rather large. There’s a 1 MiB size limit for a Secret/ConfigMap, but the metadata.annotations section is limited to 256 KiB.
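You can reproduce the mechanism outside of Spinnaker with any small manifest. A minimal sketch (demo names, nothing Halyard-specific):

cat <<'EOF' > /tmp/demo-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
stringData:
  clouddriver.yml: "accounts: ..."
EOF
kubectl apply -f /tmp/demo-secret.yaml
kubectl get secret demo-secret \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'
# prints the whole manifest back as JSON; with dozens of large provider accounts inlined
# into the staged config files, that copy alone blows past the 256 KiB annotation budget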

To circumvent the issue for now, we’ve modified Halyard so that it no longer submits resources to Kubernetes with kubectl apply -f, but with kubectl replace --force --save-config=false -f - instead. This deletes the old resource and recreates it without the aforementioned annotation (and it also works if the resource wasn’t present to begin with). Still testing whether this works for all cases and situations though…
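Roughly, the patched invocation looks like this (a sketch of our local change, not upstream Halyard behaviour; the context and file name are placeholders):

kubectl --context <spinnaker-context> replace --force --save-config=false -f - < staged-resource.yaml
# --force deletes and recreates the object; --save-config=false keeps kubectl from writing
# the last-applied-configuration annotation, so the 256 KiB metadata.annotations limit is never hit

The trade-off is that without the saved config, a later kubectl apply on the same resource can’t do its usual three-way merge against the last applied state.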

2 Likes

#3

Hi, I’m having the same issue here. I have a number of custom packer files and scripts that I want to “share” with Rosco, using the methods described here:


But my deploy is failing because of Kubernetes’ limit on metadata annotations for the custom profiles (a.k.a. files):
"message" : "Failed to deploy manifest:\napiVersion: v1\nkind: Secret\nmetadata:\n name: spin-rosco-files-xxx

Is there any way to just use a “regular old filesystem” type volume for the k8s pods/containers (instead of a Secret volume) for this type of data? Nothing in there is actually secret, so I’d rather have the flexibility of a regular volume than the privacy of a Secret volume.
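Not a Halyard-supported option as far as I know, but as a hypothetical sketch of what this could look like if you managed the files outside of Halyard (all names and paths below are made up):

# put the Packer templates into a plain ConfigMap instead of the generated spin-rosco-files secret
kubectl -n <namespace> create configmap rosco-packer-templates --from-file=./packer/
# ...then mount it into the Rosco pod spec as a configMap volume, e.g.:
#   volumes:
#   - name: packer-templates
#     configMap:
#       name: rosco-packer-templates
#   volumeMounts:
#   - name: packer-templates
#     mountPath: /opt/rosco/config/packer
# note: `kubectl create` does not write the last-applied-configuration annotation, so the
# 256 KiB limit is not hit; re-applying the same ConfigMap with `kubectl apply` would write it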

0 Likes

#4

Thank you for your input @Rem.co! We ran into that issue too, with just 37 accounts :frowning:
It seems that the solution you posted is the only way to fix it. Did you manage to test it?

P.S. I also searched through Spinnaker’s issues and PRs on GitHub, but didn’t find any related to this problem.

0 Likes

#5

Just an FYI for anyone looking into this: here’s the GitHub issue, https://github.com/spinnaker/spinnaker/issues/3959, and there’s a feature request linked to it.

0 Likes

#6

@trissanen Thank you for linking!

Meanwhile, we have sent a PR that fixes this particular issue of the long annotation, and it was merged yesterday: https://github.com/spinnaker/halyard/pull/1272

1 Like

#7

Excellent news. Well done! That makes upgrading Halyard easier again.

0 Likes