If you followed my previous article on Deploying the Elastic Stack on Google Kubernetes Engine (GKE), you may be asking yourself: how do I actually send data to this newly created cluster? That is a fantastic question, and it is exactly what this post covers. Once again, this post is meant to help readers learn how to deploy capabilities to GKE and is not representative of the requirements of a production environment. We will be using the configurations provided by the Elastic team, with a few modifications; those configurations can be found here: Elastic Beats on Kubernetes. Let's get started!

Elevate Cluster Privileges

Before we begin, we need to elevate our privileges on the cluster so we can create the accounts and roles, and set the permissions, required to deploy the Elastic Beats agents across our newly created cluster.
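
On GKE, the account name is typically the email address of the Google account you authenticated with; if you are unsure, you can look it up with the gcloud CLI:

> gcloud config get-value account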

1. Create a new cluster-admin.yaml file.

> nano cluster-admin.yaml

2. Paste the following template into nano, then save and close nano via ^X. Note: be sure to replace ACCOUNT-NAME in the template with your own account name.

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-admins
subjects:
- kind: User
  # Change your account details below
  name: ACCOUNT-NAME
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

3. Apply the elevated privileges using the following command:

> kubectl apply -f cluster-admin.yaml
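
If you want a quick sanity check that the binding took effect, you can list it and ask Kubernetes whether your account is now allowed to perform any action in any namespace (the second command should return yes):

> kubectl get clusterrolebinding cluster-admins
> kubectl auth can-i '*' '*' --all-namespaces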

Deploying Metricbeat

Now that we have elevated our privileges, we can begin deploying Metricbeat.

1. Create a new metricbeats.yaml file.

> nano metricbeats.yaml

2. Paste the following template into nano, then save and close nano via ^X:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false
    # Change your kibana hostname
    setup.kibana.host: kibana-0.kibana.default.svc.cluster.local:5601
    setup.dashboards.enabled: true
    processors:
      - add_cloud_metadata:
    # Passing in elasticsearch nodes via environment variables
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        #- core
        #- diskio
        #- socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory

    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
        - state_node
        - state_deployment
        - state_replicaset
        - state_statefulset
        - state_pod
        - state_container
      period: 10s
      hosts: ["localhost:10255"]
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
    service: metricbeat
spec:
  selector:
    matchLabels:
      service: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
        service: metricbeat
    spec:
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:6.3.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
          "-system.hostfs=/hostfs",
        ]
        # Environment variables specific to your ES cluster
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-0.es-nodes.default.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 25m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-modules
      # We set an `emptyDir` here to ensure the manifest will deploy correctly.
      # It's recommended to change this to a `hostPath` folder, to ensure internal data
      # files survive pod changes (ie: version upgrade)
      - name: data
        emptyDir: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  # This module requires `kube-state-metrics` up and running under `kube-system` namespace
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
        - event
      period: 10s
      hosts: ["kube-state-metrics:8080"]
---
# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
    service: metricbeat
spec:
  selector:
    matchLabels:
      service: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
        service: metricbeat
    spec:
      serviceAccountName: metricbeat
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:6.3.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
        ]
        # Environment variables specific to your ES cluster
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-0.es-nodes.default.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 25m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-deployment-modules
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: metricbeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    k8s-app: metricbeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - events
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
---
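
Note: as the comment in the metricbeat-deployment-modules ConfigMap indicates, the Deployment portion of this manifest expects kube-state-metrics to be running in the kube-system namespace and reachable as kube-state-metrics:8080. Assuming the service carries its standard name, you can check for it with:

> kubectl get service kube-state-metrics -n kube-system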

3. Apply the Metricbeat template using the following command:

> kubectl apply -f metricbeats.yaml
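
Once applied, you should see one Metricbeat pod per node (from the DaemonSet) plus a single Deployment pod starting up in kube-system; a quick way to verify is via the k8s-app label the manifest sets:

> kubectl get pods -n kube-system -l k8s-app=metricbeat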

Deploying Filebeat

Now let's use Filebeat to fetch container logs from the cluster and send them to our Elasticsearch cluster.

1. Create a new filebeats.yaml file.

> nano filebeats.yaml

2. Paste the following template into nano, then save and close nano via ^X:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    # Change your kibana hostname
    setup.kibana.host: kibana-0.kibana.default.svc.cluster.local:5601
    setup.dashboards.enabled: true
    processors:
      - add_cloud_metadata:
    # Passing in elasticsearch nodes via environment variables
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.3.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        # Environment variables specific to your ES cluster
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-0.es-nodes.default.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 25m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # We set an `emptyDir` here to ensure the manifest will deploy correctly.
      # It's recommended to change this to a `hostPath` folder, to ensure internal data
      # files survive pod changes (ie: version upgrade)
      - name: data
        emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---

3. Apply the Filebeat template using the following command:

> kubectl apply -f filebeats.yaml
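
As before, you can verify that the Filebeat pods are running, and tail their logs if anything looks off (the label selector matches the k8s-app label in the manifest):

> kubectl get pods -n kube-system -l k8s-app=filebeat
> kubectl logs -n kube-system -l k8s-app=filebeat --tail=20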

Wrap Up

Now we have an Elastic Stack deployed to GKE, as well as Beats shipping metric and log data from the Kubernetes cluster. Have fun exploring the cluster and deploying workloads, and remember to clean up once you are done!
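
Cleaning up is simply the reverse of what we deployed: delete the Beats resources with kubectl against the same manifests, and, if you are also done with the cluster from the previous article, tear it down with gcloud (substitute your own CLUSTER-NAME and ZONE):

> kubectl delete -f filebeats.yaml -f metricbeats.yaml -f cluster-admin.yaml
> gcloud container clusters delete CLUSTER-NAME --zone ZONE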