Kubernetes labels


#1

We use Elasticsearch for logs and hit a problem collecting Spinnaker logs. The Spinnaker pods use these labels:

sszabo@tor976568e1 [/home/sszabo] $ kubectl get -o yaml pods spin-clouddriver-69768cbb5b-pqthp | yq -y .metadata.labels
app: spin
app.kubernetes.io/managed-by: halyard
app.kubernetes.io/name: clouddriver
app.kubernetes.io/part-of: spinnaker
app.kubernetes.io/version: 1.9.5
cluster: spin-clouddriver
pod-template-hash: '2532476616'

The problem is that the 'app' label collides with the 'app.kubernetes.io/*' labels when the logs reach Elasticsearch:

java.lang.IllegalArgumentException: mapper [kubernetes.labels.app] of different type, current_type [text], merged_type [ObjectMapper]
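For what it's worth, the expansion that trips the mapper can be mimicked locally with jq (a sketch, assuming jq is installed; Elasticsearch expands dotted keys into nested objects on ingest):

```shell
# Sketch (assumes jq is installed): expand dotted label keys into nested
# objects, roughly what the Elasticsearch mapper does on ingest.
echo '{"app.kubernetes.io/name":"clouddriver"}' |
  jq -c 'to_entries
         | map(.key |= split("."))
         | reduce .[] as $e ({}; setpath($e.key; $e.value))'
# -> {"app":{"kubernetes":{"io/name":"clouddriver"}}}

# With the plain "app" label present as well, the same expansion fails:
# "app" is already a string, so it cannot also become an object.
echo '{"app":"spin","app.kubernetes.io/name":"clouddriver"}' |
  jq -c 'to_entries
         | map(.key |= split("."))
         | reduce .[] as $e ({}; setpath($e.key; $e.value))'
```

The second command aborts with an indexing error, which is the same conflict the ES mapper reports as "different type, current_type [text], merged_type [ObjectMapper]".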


#2

If we remove the app label from the Spinnaker pods, will that cause any issues with future upgrades, etc.?

sszabo@tor976568e1 [/home/sszabo] $ kubectl get -o json pods | jq -r '.items[].metadata | .name + ": " + .labels.app'
spin-clouddriver-69768cbb5b-jjdtl: spin
spin-deck-6bf6999d8f-7qbpr: spin
spin-echo-5599c75c86-c9mhm: spin
spin-fiat-69795c79c5-tb2px: spin
spin-front50-67f75b4487-sl2zw: spin
spin-gate-866c86ff5c-fjlk6: spin
spin-igor-5cf98c66d7-svsb6: spin
spin-orca-6578dc8bdb-zqpgp: spin
spin-redis-7479795669-mcmrg: spin
spin-rosco-7bfd8c9b95-lkf5h: spin
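As a sketch of the removal itself (the spinnaker namespace and app=spin selector are assumptions for illustration; note Halyard re-applies its manifests, so the label may come back on the next hal deploy apply):

```shell
# Remove the "app" label from all Spinnaker deployments; the trailing "-"
# after the key deletes the label. Halyard may re-add it on the next
# "hal deploy apply", so treat this as a temporary measure.
kubectl -n spinnaker label deployments -l app=spin app-
```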

#3

Having the same problem.
The app: <appname> label is also set by Prometheus, NGINX, IBM Calico, etc.

I'm up against the same issue shipping logs to Elasticsearch via Fluent Bit.


For a given index, ES might first ingest logs from a pod with a plain app label and map the field as a string; a collision then occurs when logs from a pod with app.kubernetes.io/* labels are ingested, resulting in this error.

It seems we're unable to change the app field in the existing ES index mapping from string to object (an object allows sub-fields), per this https://gist.github.com/nicolashery/6317643 and the official docs:

{
  "error": {
    "root_cause": [
      {
        "type": "remote_transport_exception",
        "reason": "[061bdfc0-67d3-4482-8fca-6b1cd88d21a8][172.30.138.208:9300][indices:admin/mapping/put]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "mapper [kubernetes.labels.app] of different type, current_type [text], merged_type [ObjectMapper]"
  },
  "status": 400
}

I'm going to try creating a new index template with the app field set as an object, then restart Fluent Bit.
I have the luxury of being able to throw away existing logs; however, it looks like it might also be possible to alias over to the new template and update the Fluent Bit (or your log forwarder) config.
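A sketch of what that new template body might look like (the logstash-* index pattern and the flat ES 7-style mapping are assumptions; on ES 6.x the properties need to sit under a doc type, so adjust for your version before PUT-ing it to /_template/<name>):

```json
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "kubernetes": {
        "properties": {
          "labels": {
            "properties": {
              "app": { "type": "object" }
            }
          }
        }
      }
    }
  }
}
```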


#4

Creating the index template with app set to type: object resulted in the app.kubernetes.io/* logs being ingested, but not the logs of pods with plain app: xyz labels.
I'm going to update the manifests for the apps that I can in the meantime.


#5

Don, on the fluent-bit issue, made the good suggestion that we could use the Replace_Dots On option in Fluent Bit's Elasticsearch output to replace dots with underscores.
From memory, Filebeat has a similar option as well.
Might be a workaround in the interim.
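In the Fluent Bit config that would go in the es output section, something like this (the Match pattern and Host value are placeholders for your setup):

```
[OUTPUT]
    Name          es
    Match         kube.*
    Host          elasticsearch
    Port          9200
    Replace_Dots  On
```

With Replace_Dots On, app.kubernetes.io/name is shipped as app_kubernetes_io/name, so it no longer tries to nest under the plain app field.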

Have created an issue on GitHub here: https://github.com/spinnaker/spinnaker/issues/3483#issuecomment-431996407


#6

I was able to rename the label in Logstash:

filter {
    if [kubernetes] {
        if [kubernetes][namespace] == "spinnaker" {
            mutate {
                rename => { "[kubernetes][labels][app]" => "[kubernetes][labels][spinapp]" }
            }
        }
    }
}