This post continues from Part 1, which introduced ELK and its basic concepts. Here we will look at deploying the ELK stack for a production environment on a set of VMs. These VMs can be part of the same virtual network (VPC, VNet) where the production workloads that need to be monitored with ELK are running. Since we are focusing on a production-level setup, we will configure security using PKI: we will use TLS (SSL certificates) to encrypt the traffic between the ELK components as well as the traffic flowing in and out of the cluster. A certificate signed by your own CA can be used for the same purpose. We will follow the steps below to set up ELK on CentOS VMs. The VMs can be hosted on-premise or on any hyperscaler such as AWS, Azure or GCP.
This post covers only the ELK deployment; the CentOS VM deployment and network configuration of the VMs are not covered here. We need three VMs running CentOS 7.x, with network interfaces configured so that they can reach each other and the workloads to be monitored. One VM is configured as the ELK master node, which stores all the metadata and cluster state information, and the other two CentOS VMs act as data nodes, which store all the Elasticsearch shards and indices used for querying. Once the three CentOS VMs are up and running, we can follow the steps below to complete the ELK setup.
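For reference, these are the hostnames and private IP addresses used throughout this post (adapt them to your own environment):
elkmaster - 172.16.1.15 (master node, also hosts Kibana)
elkdata1 - 172.16.1.16 (data node)
elkdata2 - 172.16.1.17 (data node)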
The following diagram shows the Elasticsearch architecture that we are deploying. In our scenario we will deploy Elasticsearch and Kibana on the same master node. Once both the Kibana and Elasticsearch services are up and running on the master node, we will configure TLS using SSL certificates to encrypt the traffic.
STEP 1: Import the Elasticsearch GPG Key on all the nodes:
Log in to the CentOS VM and import the GPG key in order to download the Elastic artifacts:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
STEP 2: Download and Install RPM manually:
Download the Elasticsearch artifacts (version 7.10) to the local filesystem:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-x86_64.rpm
Install Elasticsearch:
rpm --install elasticsearch-7.10.0-x86_64.rpm
STEP 3: Enable Elasticsearch to start on boot:
Enabling the service creates a symlink so that whenever the operating system is rebooted or restarts abruptly, the Elasticsearch service will start automatically:
systemctl enable elasticsearch
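Optionally, we can confirm that the service is enabled to start on boot:
systemctl is-enabled elasticsearch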
STEP 4: Configure and Start a Multi-Node Cluster with one Master and Two Data nodes:
Provide a name for the cluster and set the seed hosts to the private IP address of the Elasticsearch master. We are not using machine learning capabilities in this deployment.
Log in to the Elasticsearch master VM and make the changes to elasticsearch.yml:
cd /etc/elasticsearch/
vi elasticsearch.yml
The following fields must be edited according to your environment:
cluster.name: elkcluster
node.name: elkmaster
network.host: [_local_, _site_]
discovery.seed_hosts: ["172.16.1.15"]
cluster.initial_master_nodes: ["172.16.1.15"]
network.publish_host:
node.master: true
node.data: false
node.ingest: false
node.ml: false
In the data node configuration, provide the same cluster name and set the seed hosts to the private IP address of the Elasticsearch master:
Log in to the Elasticsearch data VMs and make the changes to elasticsearch.yml:
cd /etc/elasticsearch/
vi elasticsearch.yml
The following fields must be edited according to your environment:
cluster.name: elkcluster
node.name: elkdata1 (use elkdata2 on the second data node)
network.host: [_local_, _site_]
discovery.seed_hosts: ["172.16.1.15"]
cluster.initial_master_nodes: ["172.16.1.15"]
network.publish_host:
node.master: false
node.data: true
node.ingest: true
node.ml: false
Decrease the JVM heap size on the master node, as Kibana will also use memory:
In our architecture both Elasticsearch and Kibana are deployed on the same VM, so we reduce the initial and maximum heap sizes from 1 GB to 768 MB to avoid the performance issues that can otherwise occur with Kibana:
Open the jvm.options file in /etc/elasticsearch/ and edit it as shown below:
vi jvm.options
-Xms768m
-Xmx768m
Start elkmaster:
Let us now start the Elasticsearch service; a successful start shows that Elasticsearch has been installed and is running. Run the same command on the two data nodes as well so that they join the cluster:
systemctl start elasticsearch
Check if the cluster is working:
Now let us try to access the cluster using curl; we should get a JSON response back. We can also query the nodes API, which should list the Elasticsearch nodes:
curl -X GET 'http://localhost:9200'
curl localhost:9200/_cat/nodes/?v
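Optionally, the overall cluster health can also be checked (a standard Elasticsearch endpoint):
curl 'localhost:9200/_cluster/health?pretty'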
STEP 5: Deploy Kibana on the Elasticsearch master node:
Download the Kibana artifacts (version 7.10) to the local filesystem and install them:
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.10.0-x86_64.rpm
rpm --install kibana-7.10.0-x86_64.rpm
Edit kibana.yml in /etc/kibana/ and access Kibana:
We need to provide the private IP address of our Elasticsearch master VM in the Kibana configuration file; this ensures Kibana can communicate with Elasticsearch:
server.port: 5601
server.host: "172.16.1.15"
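Kibana also needs to be running before the dashboard can be reached; assuming the systemd service installed by the Kibana RPM:
systemctl enable kibana
systemctl start kibana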
Access the Kibana dashboard:
http://<Public IP address of the Kibana/Elasticsearch VM>:5601
STEP 6: Add TLS and SSL to our Kibana and Elasticsearch:
A POC environment doesn't need to have encryption enabled. However, for a production deployment we need to secure the traffic flowing in and out of the Elasticsearch cluster as well as the traffic flowing between the nodes.
By default Elasticsearch provides Certificate Authority (CA) capabilities; navigate to the bin directory and create a new CA:
cd /usr/share/elasticsearch/bin/
Create a new CA using the elasticsearch-certutil tool; we are using the passphrase "elastic_ca" to secure access to the CA:
/usr/share/elasticsearch/bin/elasticsearch-certutil ca --out /etc/elasticsearch/ca --pass elastic_ca
Update the DNS names in the hosts file of all the Elasticsearch VMs using their private IP addresses in /etc/hosts:
172.16.1.15 elkmaster.test.com
172.16.1.16 elkdata1.test.com
172.16.1.17 elkdata2.test.com
Configure and generate CA-signed certificates for elkmaster, elkdata1 and elkdata2 using the elasticsearch-certutil tool, passing the CA passphrase "elastic_ca" that was configured during CA creation. The --name values match the DNS names added to /etc/hosts above:
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca /etc/elasticsearch/ca --ca-pass elastic_ca --name elkmaster.test.com --ip 172.16.1.15 --out /etc/elasticsearch/elkmastercert --pass elastic_master_ca
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca /etc/elasticsearch/ca --ca-pass elastic_ca --name elkdata1.test.com --ip 172.16.1.16 --out /etc/elasticsearch/eldata1cert --pass elastic_data1_ca
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca /etc/elasticsearch/ca --ca-pass elastic_ca --name elkdata2.test.com --ip 172.16.1.17 --out /etc/elasticsearch/elkdata2cert --pass elastic_data2_ca
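As a quick sanity check, the generated certificate files should now be present under /etc/elasticsearch/:
ls -l /etc/elasticsearch/elkmastercert /etc/elasticsearch/eldata1cert /etc/elasticsearch/elkdata2cert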
Change ownership from root to the local admin user (in my case azureuser) so that the certificates can be copied to the respective data nodes:
chown azureuser:azureuser eldata1cert
chown azureuser:azureuser elkdata2cert
Switch back to the root user and move the certificates to the tmp directory before copying them to the respective data nodes:
[root@elkmaster ~]# mv /etc/elasticsearch/eldata1cert /tmp/
[root@elkmaster ~]# mv /etc/elasticsearch/elkdata2cert /tmp/
Change back to the local admin user and copy the certificates to the data nodes:
scp /tmp/eldata1cert azureuser@172.16.1.16:/etc/elasticsearch/
scp /tmp/elkdata2cert azureuser@172.16.1.17:/etc/elasticsearch/
On each data node (in /etc/elasticsearch/), let us now change the ownership back to root and the elasticsearch group as shown below:
chown root:elasticsearch eldata1cert
chown root:elasticsearch elkdata2cert
STEP 7 : Secure traffic using TLS:
Provide the elasticsearch group read access to our certificates:
chmod 640 elkmastercert
chmod 640 eldata1cert
chmod 640 elkdata2cert
Add the following xpack parameters for the transport layer in the elasticsearch.yml file on all nodes.
The configuration below enables SSL and its related settings; verification_mode can be either full or certificate (full also verifies the hostname/IP in the certificate). The paths are shown for the master node; on the data nodes point them at that node's own certificate, as shown just after this block:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.transport.ssl.keystore.path: elkmastercert
xpack.security.transport.ssl.truststore.path: elkmastercert
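Following the same pattern, on elkdata1 the two path settings reference that node's own certificate file (and likewise elkdata2cert on elkdata2):
xpack.security.transport.ssl.keystore.path: eldata1cert
xpack.security.transport.ssl.truststore.path: eldata1cert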
Add the passphrases to the Elasticsearch keystore on all the nodes, providing elastic_master_ca/elastic_data1_ca/elastic_data2_ca as the password for the respective node:
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
Restart Elasticsearch on all nodes:
systemctl restart elasticsearch
To set up the built-in user passwords we can use the elasticsearch-setup-passwords tool in interactive mode:
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
elastic : elastic@123
apm_system : apm_system@123
kibana_system : kibana_system@123
logstash_system : logstash_system@123
beats_system : beats_system@123
remote_monitoring_user : remote_monitoring_user@123
We will get the following messages indicating the passwords have been successfully set:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
STEP 8 : Encrypt the client (HTTP) traffic:
Add the xpack parameters for the HTTP layer in the elasticsearch.yml file on all nodes, pointing each node at its own certificate.
On elkmaster:
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: elkmastercert
xpack.security.http.ssl.truststore.path: elkmastercert
On elkdata1:
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: eldata1cert
xpack.security.http.ssl.truststore.path: eldata1cert
On elkdata2:
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: elkdata2cert
xpack.security.http.ssl.truststore.path: elkdata2cert
Add the passphrases to the Elasticsearch keystore on all the nodes, again providing elastic_master_ca/elastic_data1_ca/elastic_data2_ca as the password for the respective node:
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password
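For the new HTTP SSL settings to take effect, the Elasticsearch service most likely needs to be restarted on each node:
systemctl restart elasticsearch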
Edit the kibana.yml file and change the following settings:
elasticsearch.hosts: change from ["http://localhost:9200"] to ["https://localhost:9200"]
elasticsearch.ssl.verificationMode: none
Restart Kibana:
systemctl restart kibana
Check by curling:
curl -u elastic:elastic@123 https://localhost:9200 --insecure
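The earlier node listing can be repeated over HTTPS in the same way, now with authentication:
curl -u elastic:elastic@123 'https://localhost:9200/_cat/nodes?v' --insecure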
This completes the TLS configuration on our Elasticsearch cluster, thereby encrypting all the traffic flowing in and out of the cluster as well as the traffic flowing between its components.