Unable to fetch file artifact from on-prem gitlab instance


I’ve been trying all week to integrate Spinnaker with our on-prem reality, and I’m having a hard time making it fit. I’m stuck on fetching the k8s deploy manifest from our on-prem GitLab, and this is all I get logged:

Jan 04 17:44:39 spinnaker spin-orca-85dc75fcb6-8jfk9:  com.netflix.spinnaker.kork.web.exceptions.InvalidRequestException: Unmatched expected artifact ExpectedArtifact(matchArtifact=Artifact(type=gitlab/file, name=deploy/dev.yaml, version=dev, location=null, reference=https://git.nosinovacao.pt/api/v4/projects/547/repository/files/deploy%2Fdev.yaml/raw, metadata=null, artifactAccount=null, provenance=null, uuid=null), usePriorArtifact=false, useDefaultArtifact=false, defaultArtifact=Artifact(type=null, name=null, version=null, location=null, reference=null, metadata=null, artifactAccount=null, provenance=null, uuid=null), id=996e92df-d881-4c47-9582-4ae3e31feba2, boundArtifact=null) could not be resolved.

That artifactAccount=null might be the culprit. I couldn’t find any option to configure the GitLab integration with Halyard. But I found that I can set

  baseUrl: "https://git.nosinovacao.pt"
  privateToken: <redacted>
  commitDisplayLength: 8

in the igor.yml config file. However, I’m inclined to say this will only work for commit fetching, not artifact fetching. Is that the case? If so, how does one do that?


There is a baseUrl setting for GitLab in Igor, but I’m not sure you really need it.

The “artifact not resolved” message usually means that you didn’t pass an artifact to the pipeline execution (it wasn’t in the webhook or the trigger). If you don’t pass it to your execution, maybe you could define a default artifact?
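For example, the expected-artifact definition can carry a fallback. A hedged sketch of the shape, with the field names taken from the ExpectedArtifact in the exception above (the account name here is an assumption, not from the logs):

```python
# Shape mirrors the ExpectedArtifact fields in the exception above;
# defaultArtifact is what Spinnaker falls back to when nothing in the
# trigger matches and useDefaultArtifact is true.
expected_artifact = {
    "matchArtifact": {
        "type": "gitlab/file",
        "name": "deploy/dev.yaml",
    },
    "usePriorArtifact": False,
    "useDefaultArtifact": True,
    "defaultArtifact": {
        "type": "gitlab/file",
        "name": "deploy/dev.yaml",
        "version": "dev",
        "reference": "https://git.nosinovacao.pt/api/v4/projects/547"
                     "/repository/files/deploy%2Fdev.yaml/raw",
        "artifactAccount": "my-gitlab-account",  # assumed account name
    },
}
print(expected_artifact["useDefaultArtifact"])
```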

The artifact account is selected in the deployment stage of your pipeline. In some versions of Spinnaker it isn’t shown if you only have one GitLab artifact account, but you can verify it by viewing the JSON representation of the stage. If it’s not set, you’d probably get 403 or 404 errors from GitLab.

As far as I understand, the whole business of fetching artifacts from GitLab is just an HTTP GET with the GitLab token.
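As a concrete sketch of that GET, assuming the v4 raw-file endpoint visible in the exception above (the helper name is mine, not Spinnaker’s):

```python
from urllib.parse import quote
import urllib.request

def raw_file_url(base_url, project_id, path, ref):
    # GitLab's v4 API wants the file path URL-encoded, slashes included:
    # deploy/dev.yaml -> deploy%2Fdev.yaml
    encoded_path = quote(path, safe="")
    return (f"{base_url}/api/v4/projects/{project_id}"
            f"/repository/files/{encoded_path}/raw?ref={ref}")

url = raw_file_url("https://git.nosinovacao.pt", 547, "deploy/dev.yaml", "dev")
# The fetch itself is just a GET with the token in the PRIVATE-TOKEN header:
req = urllib.request.Request(url, headers={"PRIVATE-TOKEN": "<redacted>"})
print(url)
```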


@trissanen that’s the part of how Spinnaker works that I’ve had the hardest time wrapping my head around.
What we want to achieve is something like this:

  1. Developers commit code to our onprem gitlab.
  2. Every time there’s a commit to dev or master, it will trigger a Jenkins build
  3. Jenkins will build the code and evaluate the quality gates with sonarqube
  4. If everything looks good, the resulting Docker images (there are usually more than one per repo), plus NuGet packages and other artifacts, will be uploaded to our on-prem Artifactory (basically a private Docker registry)
  5. Then Jenkins will trigger the deploy service, this time we want Spinnaker to kick in
  6. Spinnaker shall start a certain pipeline that will read the Docker tags from Jenkins, find the correct k8s deploy manifests in the manifests git repo (not the same repo as the code), and follow the deploy strategy
    6.1 the first step will always be deploying on our onprem DEV k8s cluster. Spinnaker shall then run the smoke and integration tests for those deployed images.
    6.2 if everything looks okay, it’ll then deploy to the two QA/Staging k8s clusters, one in each datacenter. Comprehensive integration tests should run there. There will also be a wait for the general full integration test pipeline, which runs every 4h.
    6.3 If the tests are fine and the full test pipeline reports no problems, it will be deployed to the PROD k8s cluster, on the alpha deployment (that only targets alpha clients). Manual, human testing begins.
    6.4 On demand, it might be promoted to the beta deployment.
    6.5 Then, finally, after careful manual review, it will reach all the targets in a 5%>20%>80%>100% manual approval sequence.
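The Jenkins→Spinnaker handoff in step 5 could be sketched like this, assuming a Spinnaker webhook trigger (a POST to Gate’s /webhooks/webhook/&lt;source&gt; endpoint). The registry path and parameter names are illustrative; the point is that the trigger payload carries the Docker image and the manifest file as artifacts, so the pipeline’s expected-artifact match has something to resolve against:

```python
import json

def build_trigger_payload(image_tag, manifest_ref):
    # Artifacts passed in the trigger are what Spinnaker matches
    # against the pipeline's expected artifacts.
    return {
        "artifacts": [
            {"type": "docker/image",
             "name": "artifactory.example/my-service",  # assumed registry path
             "reference": f"artifactory.example/my-service:{image_tag}"},
            {"type": "gitlab/file",
             "name": "deploy/dev.yaml",
             "reference": manifest_ref},
        ],
        "parameters": {"imageTag": image_tag},
    }

payload = build_trigger_payload(
    "1.2.3",
    "https://git.nosinovacao.pt/api/v4/projects/547"
    "/repository/files/deploy%2Fdev.yaml/raw",
)
print(json.dumps(payload, indent=2))
```

Jenkins would POST this JSON body at the end of its build; the docker/image artifact lets the deploy stages bind the freshly pushed tag, and the gitlab/file artifact is what the expected artifact in the exception earlier was waiting for.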

Now… Having Spinnaker do all this…