This is a quick guide on how to deploy the Elastic Stack on GKE using Storage Classes, Headless Services, and StatefulSets. I will provide walk-throughs for building some aspects of the cluster through the Google Cloud Shell (command line) as well as the Google Cloud Platform (GCP) console. This cluster is meant to help readers learn how to deploy capabilities to GKE and is not representative of the individual requirements of a production environment.

SSD Storage Class

GKE deployments provide a standard storage class, which does not provide sufficient IOPS to sustain an Elasticsearch node deployment. For this reason, we will create a new SSD storage class to handle the increased IOPS needed for hot data storage. Take a look at the Storage Options documentation for a detailed comparison between pd-standard and pd-ssd.

1. Create a new storage.yaml file.

> nano storage.yaml

2. Paste the following template into nano (be mindful of the zone) and save/close nano via ^X:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: us-central1-a

3. Apply the storage class using the following command:

> kubectl apply -f storage.yaml
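One optional tweak worth considering: adding `allowVolumeExpansion` to the storage class lets you grow the Elasticsearch data volumes later without recreating them (this assumes your Kubernetes version supports volume expansion). A sketch of the same class with expansion enabled:

```yaml
# Same SSD class as above, but with volume expansion enabled so that
# PersistentVolumeClaims created from it can be resized later.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: us-central1-a
allowVolumeExpansion: true
```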

Elastic Stack Services

In order for our Elasticsearch nodes to discover each other, we need to create a service that helps identify nodes within the Kubernetes cluster.

1. Create a new service.yaml file.

> nano service.yaml

2. Paste the following template into nano and save/close via ^X:

# Services will allow our Elasticsearch nodes to discover each other properly
apiVersion: v1
kind: Service
metadata:
  name: es-nodes
  labels:
    service: elasticsearch
spec:
  clusterIP: None
  ports:
  - port: 9200
    name: external
  - port: 9300
    name: internal
  selector:
    service: elasticsearch
---
# Kibana service that will help group scaled instances for future load balancing
apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    service: kibana
spec:
  clusterIP: None
  ports:
  - port: 5601
    name: external
  selector:
    service: kibana

3. Apply the service definition using the following command:

> kubectl apply -f service.yaml
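The headless service (clusterIP: None) is what gives each StatefulSet pod a stable, per-pod DNS record of the form `<pod-name>.<service-name>.<namespace>.svc.cluster.local`. This is where the `elasticsearch-0.es-nodes.default.svc.cluster.local` addresses used later in cluster.yaml come from. A quick sketch of the naming scheme:

```python
# Sketch of the DNS names a headless service assigns to StatefulSet pods.
# Pattern: <statefulset-name>-<ordinal>.<service-name>.<namespace>.svc.cluster.local
def pod_dns_names(statefulset, service, namespace, replicas):
    return [
        f"{statefulset}-{i}.{service}.{namespace}.svc.cluster.local"
        for i in range(replicas)
    ]

# The three Elasticsearch node addresses used for Zen discovery below
names = pod_dns_names("elasticsearch", "es-nodes", "default", 3)
print(",".join(names))
```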

Deploying The Stack

Finally, we can get to the good stuff: actually deploying our Elasticsearch cluster and Kibana! Brace yourself for a large StatefulSet manifest down below, though I promise it will be a fun one!

1. Create a new cluster.yaml file.

> nano cluster.yaml

2. Paste the following template into nano and save/close nano via ^X:

# Elasticsearch node deployment and configuration
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  serviceName: es-nodes
  replicas: 3
  selector:
    matchLabels:
      service: elasticsearch
  template:
    metadata:
      labels:
        service: elasticsearch
    spec:
      terminationGracePeriodSeconds: 300
      # Utilize Init containers to set ulimit, vm-max-map-count, and volume permissions
      initContainers: 
      - name: ulimit
        image: busybox
        command:
        - sh
        - -c
        - ulimit -n 65536
        securityContext:
          privileged: true
      - name: vm-max-map-count
        image: busybox
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      - name: volume-permission
        image: busybox
        command:
        - sh
        - -c
        - chown -R 1000:1000 /usr/share/elasticsearch/data
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      # Containers defined for Elasticsearch nodes
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.3.0
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: tcp
        resources:
          requests:
            memory: 400Mi
          limits:
            memory: 1Gi
        env:
          - name: cluster.name
            value: elasticsearch-cluster
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.zen.ping.unicast.hosts
            # Note that we are using "es-nodes" as defined in our service
            value: "elasticsearch-0.es-nodes.default.svc.cluster.local,elasticsearch-1.es-nodes.default.svc.cluster.local,elasticsearch-2.es-nodes.default.svc.cluster.local"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m -XX:-AssumeMP"
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: ssd
      resources:
        requests:
          storage: 10Gi
---
# Kibana node deployment
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kibana
  labels:
    service: kibana
spec:
  serviceName: kibana
  replicas: 1
  selector:
    matchLabels:
      service: kibana
  template:
    metadata:
      labels:
        service: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:6.3.0
        resources:
          # Kibana needs more CPU upon initialization, hence the Burstable QoS class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_URL
            value: http://elasticsearch-0.es-nodes.default.svc.cluster.local:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP

3. Apply the cluster definition using the following command:

> kubectl apply -f cluster.yaml
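One detail worth calling out in the template above: the 512m heap in ES_JAVA_OPTS is roughly half of the 1Gi container memory limit, in line with Elasticsearch's guidance to give the JVM heap about half of available memory and leave the rest for the OS filesystem cache. A sketch of that sizing rule (the helper name is my own):

```python
# Sketch: size the JVM heap at ~50% of the container memory limit,
# following Elasticsearch's "half of available RAM" heap guidance.
def heap_mb(container_limit_mb, fraction=0.5):
    return int(container_limit_mb * fraction)

limit_mb = 1024           # matches the 1Gi limit in cluster.yaml
heap = heap_mb(limit_mb)  # 512, matching -Xms512m -Xmx512m
print(f"-Xms{heap}m -Xmx{heap}m")
```

If you raise the container memory limit, bump -Xms/-Xmx accordingly; setting them equal avoids heap resizing pauses.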

Set Up Kibana Access

For the sake of this tutorial, we will set up a load balancer to our Kibana instance with an external-facing IP address. We will also configure the load balancer to redirect port 80 to 5601. Since we only spun up a single Kibana instance, we can set up our connectivity relatively easily.

  1. Navigate to Workloads.
  2. Click the workload named kibana.
  3. Scroll down until you see all the managed pods, then click kibana-0.
  4. Now click on the Expose menu option.
  5. Set the target port to 5601 and the Service Type to Load balancer, then click Expose.
  6. Navigate to the Services menu and you should see your newly created load balancer along with a public endpoint (note: it is unsecured).
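If you prefer the command line over the console, the same exposure can be sketched as a LoadBalancer service manifest (the name kibana-lb is my own choice, not something the console steps create), applied with kubectl apply -f:

```yaml
# LoadBalancer service exposing Kibana publicly; port 80 forwards to 5601.
# Selects the Kibana pod via the "service: kibana" label from cluster.yaml.
apiVersion: v1
kind: Service
metadata:
  name: kibana-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 5601
  selector:
    service: kibana
```

Either way, the resulting endpoint is unsecured, so tear it down (or put authentication in front of it) when you are done experimenting.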

Nice job! You now have a three-node Elasticsearch cluster and Kibana running on GKE!