If kubectl get pods comes back empty after applying your configuration, you're not alone. This can happen even when kubectl apply reports resources such as ConfigMaps, DaemonSets, and Services as successfully created. Below, we walk through a sample configuration and some troubleshooting steps.

Example Configuration

The following YAML configuration sets up Linkerd as a DaemonSet, along with the ConfigMap it reads and a Service that exposes it:

# Configuration for Linkerd in a DaemonSet
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001

    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25

    usage:
      orgId: linkerd-examples-daemonset

    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv        => /#/io.l5d.k8s/default/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX

    - protocol: http
      label: incoming
      dtab: |
        /srv        => /#/io.l5d.k8s/default/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.0.0
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990
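One failure mode worth ruling out first: the DaemonSet above declares apiVersion: extensions/v1beta1, which Kubernetes stopped serving in v1.16. On newer clusters, kubectl apply rejects the DaemonSet outright, so no pods are ever created (apps/v1, which additionally requires a spec.selector, is needed there). A minimal sketch for checking which API version a manifest declares; the /tmp path and the two-line manifest are stand-ins for your real config file:

```shell
# Stand-in manifest; point this at your real configuration file instead.
cat > /tmp/l5d-check.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: DaemonSet
EOF

# Print the apiVersion declared before each DaemonSet document.
awk '/^apiVersion:/ {v = $2} /^kind: DaemonSet/ {print v}' /tmp/l5d-check.yaml
```

If this prints extensions/v1beta1 and your cluster is v1.16 or newer, the manifest needs to be migrated to apps/v1 before any pods will appear.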

Troubleshooting Steps

  1. Check Resource Status: After applying your configuration, confirm that the DaemonSet, Service, and ConfigMap were actually created and that the DaemonSet reports the expected DESIRED and READY counts:

    kubectl get daemonsets
    kubectl get services
    kubectl get configmaps
  2. Inspect Events and Logs: If no pods exist yet, there are no logs to read; describe the DaemonSet first to see why pods were not created or scheduled, then check container logs once pods appear (this pod has two containers, so -c is required):

    kubectl describe daemonset l5d
    kubectl logs daemonset/l5d -c l5d
  3. Verify Namespace: Ensure that you are operating in the correct namespace. If your resources are deployed in a namespace other than default, specify it in your commands:

    kubectl get pods -n your-namespace
  4. Container Names: Each pod in this DaemonSet runs two containers (l5d and kubectl), so specify which one you want when checking logs:

    kubectl logs <pod-name> -c <container-name>
    kubectl logs <pod-name> -c l5d
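Once pods do show up, the check from step 1 can be narrowed: a small filter over the kubectl get pods listing surfaces anything not in the Running state. In this sketch the kubectl output is faked with a shell function so the filter runs without a cluster; the pod names are made up, and in practice you would pipe the real kubectl get pods output into the awk command:

```shell
# Hypothetical sample output, hard-coded so the example runs without a cluster.
sample_get_pods() {
cat <<'EOF'
NAME        READY   STATUS             RESTARTS   AGE
l5d-7x2kq   1/2     CrashLoopBackOff   4          5m
l5d-9ffbn   2/2     Running            0          5m
EOF
}

# Skip the header row and print any pod whose STATUS is not Running.
sample_get_pods | awk 'NR > 1 && $3 != "Running" {print $1, $3}'
```

Against a live cluster, the equivalent would be piping kubectl get pods into the same awk filter.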

By following these steps, you should be able to diagnose and resolve issues related to pod visibility in your Linkerd setup.