Unable to add additional provider accounts

#1

Hi all,

I am having trouble adding multiple provider accounts. Spinnaker (via Halyard) deploys just fine to one GKE cluster, but when I try to add a second GKE cluster, I receive this error on the Halyard deployment:

- Prep deployment
  Failure
Problems in
  default.provider.kubernetes.standard-cluster-1-google-account-2:
! ERROR Unable to communicate with your Kubernetes cluster: Failure
  executing: GET at: https://x.x.x.x/api/v1/namespaces. Message: Forbidden!
  User gke_my-gcp-project_us-central1-a_standard-cluster-1 doesn't have
  permission. namespaces is forbidden: User "system:anonymous" cannot list
  namespaces at the cluster scope..
? Unable to authenticate with your Kubernetes cluster. Try using
  kubectl to verify your credentials.

- Failed to prep Spinnaker deployment
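Following the error's suggestion to verify credentials with kubectl, this is the kind of check that reproduces the exact API call Halyard is making (the context name is taken from the error message above; substitute your own):

```shell
# Reproduce the call Halyard makes (GET /api/v1/namespaces),
# using the same context named in the error message
kubectl --context gke_my-gcp-project_us-central1-a_standard-cluster-1 \
    get namespaces
```

If this also fails with Forbidden as "system:anonymous", the problem is in the kubeconfig credentials for that context rather than in Halyard itself.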

I ran through the documented steps to add the provider account for the initial Spinnaker setup, and that worked.

I then tried the following steps to add the second (legacy v1) provider account, but that did not appear to work:

echo -e "$DOCKER_PASSWORD\n" | hal config provider docker-registry account add \
    my-docker-hub-account \
    --address index.docker.io \
    --repositories $DOCKER_REPOSITORIES \
    --sort-tags-by-date true \
    --username $DOCKER_USERNAME \
    --password

CONTEXT=$(kubectl config current-context)  # New context for different GKE cluster

hal config provider kubernetes account add \
    standard-cluster-1-google-account-2 \
    --docker-registries my-docker-hub-account \
    --context $CONTEXT
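For reference, the context used above came from fetching the second cluster's credentials first, roughly like this (cluster name, zone, and project are the ones from the error message; adjust for your own setup):

```shell
# Fetch/refresh credentials for the second cluster into the local kubeconfig
# (cluster name, zone, and project as in the error message above)
gcloud container clusters get-credentials standard-cluster-1 \
    --zone us-central1-a --project my-gcp-project

# get-credentials makes the new context current, so this picks it up
CONTEXT=$(kubectl config current-context)
```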

The Docker Hub account itself works fine: I can see it and its repositories in the pipeline Configuration stage when adding an Automated Trigger of type Docker Registry.

If I instead try a provider version of v2, the deployment succeeds, but I do not see anything in the Spinnaker UI.
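For reference, the v2 attempt was roughly the same add command with the provider version flag (assuming I am using --provider-version correctly; Docker registries are not required for v2 as far as I can tell):

```shell
# Same account added as a v2 (manifest-based) provider instead of v1
hal config provider kubernetes account add \
    standard-cluster-1-google-account-2 \
    --provider-version v2 \
    --context "$CONTEXT"
```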

My goal is to deploy Spinnaker to a GKE cluster, which in turn manages multiple GKE (or other external Kubernetes) clusters. Am I doing something wrong?

Any help is greatly appreciated.

Thanks!


#2

Hey, welcome. The v1 and v2 providers use different authentication mechanisms against the Kubernetes cluster: the v1 provider uses the fabric8 Java client libraries, while the v2 provider shells out to kubectl. Because of that, I believe the v1 provider only supports legacy authentication, but I may be off on this; I will poke around.

You should also check the various ways to set up authentication in Kubernetes; the defaults have shifted over time in GKE. Can you check and compare the authentication settings on your two GKE clusters (e.g. is basic authentication on? is legacy auth enabled?)
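On GKE, something like this should show both settings for each cluster (assuming I am remembering the describe fields correctly; cluster name and zone are from the error in your post, so substitute your own):

```shell
# Basic auth: a non-empty username means basic authentication is enabled
gcloud container clusters describe standard-cluster-1 --zone us-central1-a \
    --format 'value(masterAuth.username)'

# Legacy authorization (ABAC): "True" means legacy auth is enabled
gcloud container clusters describe standard-cluster-1 --zone us-central1-a \
    --format 'value(legacyAbac.enabled)'
```

Run both against each cluster and compare; if the working cluster has basic or legacy auth on and the failing one does not, that would line up with the v1 provider theory above.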
