(Elastic monitoring - 2/5) Install ElasticSearch and Kibana to store and visualize monitoring data
Greg Jeanmart
ElasticSearch cluster
As explained in the introduction of this article, to set up a monitoring stack with the Elastic technologies, we first need to deploy ElasticSearch, which will act as the database storing all the data (metrics, logs and traces). As recommended for production, the database will be composed of three scalable nodes connected together into a cluster.
Moreover, we will enable authentication to harden the stack against potential attackers.
1. Set up the ElasticSearch master node
The first node of the cluster we are going to set up is the master, which is responsible for controlling the cluster.
The first k8s object we need is a ConfigMap, which holds a YAML file containing all the settings required to configure the ElasticSearch master node within the cluster and enable security.
Secondly, we will deploy a Service, which defines network access to a set of pods. In the case of the master node, we only need to communicate over port 9300, used for inter-node (transport) communication.
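The ConfigMap and Service described above could look like the following sketch. Note this is an illustrative assumption, not the article's exact manifest: the object names, labels and `elasticsearch.yml` values (the `${…}` placeholders are resolved from environment variables at container start) must be adapted to your cluster.

```yaml
# Hypothetical sketch of the master-node ConfigMap and Service
# (names, labels and settings are assumptions).
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-master-config
  namespace: monitoring
  labels:
    app: elasticsearch
    role: master
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    # Dedicated master: coordinates the cluster, holds no data
    node.master: true
    node.data: false
    node.ingest: false

    # Enable security (authentication) and self-monitoring
    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-master
  namespace: monitoring
  labels:
    app: elasticsearch
    role: master
spec:
  ports:
  - port: 9300          # transport port only; the master serves no HTTP traffic
    name: transport
  selector:
    app: elasticsearch
    role: master
```

Applying these manifests with `kubectl apply -f` produces the output shown below.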
```
configmap/elasticsearch-master-config created
service/elasticsearch-master created
deployment.apps/elasticsearch-master created
```
Check that everything is running with the command:
```shell
$ kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
elasticsearch-master-6f8c975f9-8gdpb   1/1     Running   0          4m18s
```
2. Set up the ElasticSearch data node
The second node of the cluster we are going to set up is the data node, which is responsible for hosting the data and executing queries (CRUD, search, aggregation).
Like the master node, the data node needs a ConfigMap. It looks similar to the master node's but differs slightly (see `node.data: true`).
Finally, the StatefulSet is similar to a Deployment but adds persistent storage: you can identify a volumeClaimTemplates section at the bottom of the file, which creates a 50GB persistent volume for each replica.
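A trimmed sketch of such a StatefulSet is shown below. The image tag, resource names and storage size are assumptions for illustration; the key point is the `volumeClaimTemplates` block, which provisions one PersistentVolumeClaim per replica so data survives pod restarts.

```yaml
# Hypothetical sketch of the data-node StatefulSet
# (image tag, names and sizes are assumptions).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-data
  namespace: monitoring
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: "elasticsearch-data"
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
      role: data
  template:
    metadata:
      labels:
        app: elasticsearch
        role: data
    spec:
      containers:
      - name: elasticsearch-data
        image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        # Mount the claimed volume where ElasticSearch writes its indices
        - name: elasticsearch-data-persistent-storage
          mountPath: /usr/share/elasticsearch/data
  # One 50GB PersistentVolumeClaim is created per replica
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 50Gi
```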
```shell
$ kubectl get pods -n monitoring
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-client-7c55b46d7f-gg9kx   1/1     Running   0          4s
elasticsearch-data-0                    1/1     Running   0          3m26s
elasticsearch-master-9455d4865-42h45    1/1     Running   0          3m40s
```
After a couple of minutes, the nodes should discover each other, and the master node should log the line `Cluster health status changed from [YELLOW] to [GREEN]`.
```shell
$ kubectl logs -f -n monitoring \
    $(kubectl get pods -n monitoring | grep elasticsearch-master | sed -n 1p | awk '{print $1}') \
    | grep "Cluster health status changed from \[YELLOW\] to \[GREEN\]"

{ "type": "server", "timestamp": "2019-08-15T15:09:43,825+0000", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "cluster.uuid": "iWgG2n5WSAC05Hvpeq5m4A", "node.id": "LScYW6eZTQiUgwRDzCvxRQ", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-es-7-2019.08.15][0]] ...])." }
```
4. Generate a password and store in a k8s secret
We enabled the xpack security module to secure our cluster, so we now need to initialise the passwords. Execute the following command, which runs the program bin/elasticsearch-setup-passwords inside the client node container (any node would work), to generate default users and passwords.
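The commands could look like the following sketch, which requires a running cluster. The pod-name pattern and the secret name `elasticsearch-pw-elastic` are assumptions; the secret name simply has to match the `secretKeyRef` that Kibana will use later.

```shell
# Run the bundled setup tool inside the client pod to auto-generate
# passwords for the built-in users (elastic, kibana, ...).
# -b skips the interactive confirmation prompt.
$ kubectl exec -it -n monitoring \
    $(kubectl get pods -n monitoring | grep elasticsearch-client | sed -n 1p | awk '{print $1}') \
    -- bin/elasticsearch-setup-passwords auto -b

# Then store the generated password of the 'elastic' user in a k8s secret
# (secret name is an assumption; keep it consistent with Kibana's secretKeyRef).
$ kubectl create secret generic elasticsearch-pw-elastic \
    -n monitoring \
    --from-literal password=<generated-elastic-password>
```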
The second part of the article consists of deploying Kibana, the data visualization plugin for ElasticSearch, which offers functionality to manage an ElasticSearch cluster and visualise all the data.
In terms of setup in k8s, this is very similar to ElasticSearch: we first use a ConfigMap to provide a configuration file to our Deployment with all the required properties. In particular, this includes the access to ElasticSearch (host, username and password), which is configured through environment variables.
The Service exposes Kibana's default port 5601 within the cluster and uses type NodePort to also expose a port directly on the node's static IP, so we can access it externally.
Finally, the Deployment part describes the container, the environment variables and volumes. For the env var ELASTICSEARCH_PASSWORD, we use secretKeyRef to read the password from the secret.
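Put together, the three objects could look like the sketch below. The names, image tag and ElasticSearch host are assumptions for illustration; the secret name referenced by `secretKeyRef` must match whatever secret the password was stored in earlier.

```yaml
# Hypothetical sketch of the Kibana ConfigMap, Service and Deployment
# (names, image tag and host are assumptions).
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-config
  namespace: monitoring
data:
  kibana.yml: |-
    server.host: 0.0.0.0
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USER}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: monitoring
spec:
  type: NodePort          # also exposes a port on the node's static IP
  ports:
  - port: 5601
    name: webinterface
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.3.0
        ports:
        - containerPort: 5601
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch-client.monitoring.svc.cluster.local:9200"
        - name: ELASTICSEARCH_USER
          value: "elastic"
        # Read the password from the secret created earlier
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        volumeMounts:
        - name: kibana-config
          mountPath: /usr/share/kibana/config/kibana.yml
          subPath: kibana.yml
      volumes:
      - name: kibana-config
        configMap:
          name: kibana-config
```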
```
configmap/kibana-config created
service/kibana created
deployment.apps/kibana created
```
And after a couple of minutes, check the logs for `Status changed from yellow to green`.
```shell
$ kubectl logs -f -n monitoring \
    $(kubectl get pods -n monitoring | grep kibana | sed -n 1p | awk '{print $1}') \
    | grep "Status changed from yellow to green"

{"type":"log","@timestamp":"2019-08-16T08:56:04Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2019-08-16T08:56:13Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
```
Once the logs say “green”, you can access Kibana from your browser.
Run the command `minikube ip` to determine the IP of your node.
```shell
$ minikube ip
10.154.0.2
```
Also run the following command to find which external port Kibana's port 5601 is mapped to.
```shell
$ kubectl get service kibana -n monitoring
NAME     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kibana   NodePort   10.111.154.92   <none>        5601:31158/TCP   41m
```
Log in with the username elastic and the password (generated before and stored in a secret), and you will be redirected to the index page:
Before moving forward, I recommend setting the elastic user aside (it should only be used for cross-service access) and creating a dedicated user to access Kibana. Go to Management > Security > Users and click on Create User:
Finally, go to Stack Monitoring to visualise the health of the cluster.
In conclusion, we now have a ready-to-use ElasticSearch + Kibana stack, which we will use to store and visualize our infrastructure and application data (metrics, logs and traces).