
If you manage multiple servers, or a collection of services spread across many nodes of a company infrastructure, you know the pain of logs scattered across systems. Chasing them down wastes time and drains energy.
One server shows nothing, another rotated logs yesterday, and your app logs are buried somewhere in /var/log/app.
A central logging server solves this problem: all logs are collected, stored, and accessible in one single place.
In this article I'll show briefly how to build one using the ELK Stack + Beats (lightweight agents) on a Linux server.
1. Architecture Overview
The typical flow looks like this:
[ Servers / Apps ] → [ Filebeat / Metricbeat ] → [ Logstash ] → [ Elasticsearch ] → [ Kibana / Grafana (Visualization) ]
- Beats → Lightweight log shippers installed on all machines.
- Logstash → Optional pipeline for parsing, filtering, and enriching logs.
- Elasticsearch → Storage and search engine.
- Kibana / Grafana → Visualization dashboards.
2. Prepare Your Central Logging Server
Requirements:
- Debian 12 (recommended), Ubuntu, Fedora, or RHEL
- At least 4 GB RAM (8+ GB for production ELK)
- Plan enough SSD storage (logs grow fast)
- Open ports: 5044 for Beats, 9200 for Elasticsearch, 5601 for Kibana
Install Prerequisites
# apt update && apt install openjdk-17-jdk wget curl apt-transport-https -y
ELK requires Java; OpenJDK 17 works fine.
3. Install Elasticsearch
# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.11.1-amd64.deb
# dpkg -i elasticsearch-8.11.1-amd64.deb
# systemctl enable elasticsearch
# systemctl start elasticsearch
Check that Elasticsearch is running:
# curl -X GET "localhost:9200/"
You should see the cluster info in JSON format.
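Note: Elasticsearch 8.x ships with security (TLS + authentication) enabled by default, so the plain-HTTP call above may be rejected. In that case use the password of the built-in elastic user, which is printed during package installation:
# curl -k -u elastic https://localhost:9200/
If you missed the password, you can reset it with /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic.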
4. Install Kibana
# wget https://artifacts.elastic.co/downloads/kibana/kibana-8.11.1-amd64.deb
# dpkg -i kibana-8.11.1-amd64.deb
# systemctl enable kibana
# systemctl start kibana
Access Kibana in your browser:
http://<server-ip>:5601
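With security enabled, Kibana 8.x asks for an enrollment token on first access. Generate one on the Elasticsearch host (assuming the default package paths):
# /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
Paste the token into the Kibana setup screen, then log in as the elastic user.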
5. Install Logstash to Process Logs Before Sending Them to Elasticsearch
# wget https://artifacts.elastic.co/downloads/logstash/logstash-8.11.1.deb
# dpkg -i logstash-8.11.1.deb
# systemctl enable logstash
# systemctl start logstash
Logstash lets you filter and structure logs before they reach Elasticsearch. A simple example pipeline:
# vim /etc/logstash/conf.d/syslog.conf
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    # Parse classic syslog lines; the captured fields get syslog_* names
    # so they don't collide with the host/message fields Beats already sends.
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}: %{GREEDYDATA:syslog_message}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "central-logs-%{+YYYY.MM.dd}"
  }
}
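Before restarting, you can check the pipeline syntax:
# /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/syslog.conf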
Then restart Logstash to load the pipeline:
# systemctl restart logstash
6. Install Beats on Client Machines
On each server you want to monitor, first add Elastic's APT repository (Filebeat and Metricbeat aren't in the stock Debian/Ubuntu repos):
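# wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor -o /usr/share/keyrings/elastic-keyring.gpg
# echo "deb [signed-by=/usr/share/keyrings/elastic-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" > /etc/apt/sources.list.d/elastic-8.x.list
# apt update
Then install the agents: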
# apt install filebeat metricbeat -y
Configure Filebeat
Edit config
# vim /etc/filebeat/filebeat.yml
Set the output to your central server, and comment out the default output.elasticsearch section (Beats supports only one active output):
output.logstash:
  hosts: ["<server-ip>:5044"]
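You also need at least one input enabled. A minimal sketch using the filestream input (the paths are illustrative; point it at your own logs):
filebeat.inputs:
  - type: filestream
    id: system-logs
    paths:
      - /var/log/syslog
      - /var/log/auth.log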
Start the agent:
# systemctl enable filebeat
# systemctl start filebeat
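Check that the agent can reach Logstash:
# filebeat test output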
Do the same for Metricbeat if you want metrics like CPU, memory, and disk usage. A minimal sketch:
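# metricbeat modules enable system
# vim /etc/metricbeat/metricbeat.yml    # point output.logstash at <server-ip>:5044, as above
# systemctl enable metricbeat
# systemctl start metricbeat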
7. Create Dashboards in Kibana or Grafana
- In Kibana, use Discover to view logs (you'll need a data view first; see the note after this list).
- Create visualizations for errors, warnings, top endpoints, etc.
- Use Grafana if you want multi-source dashboards, combining logs and metrics.
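To make the logs visible in Discover, create a data view in Kibana (Stack Management → Data Views) with an index pattern such as central-logs-*, matching the index name set in the Logstash pipeline above.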
8. Optional: Secure Your Logging Server
- Enable TLS/SSL in Beats and Elasticsearch.
- Use firewall rules to restrict access (see the example after this list).
- Create dedicated users in Elasticsearch for log access.
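For example, with ufw you might accept Beats traffic only from your own subnet and keep Elasticsearch and Kibana off the public network (the subnet is illustrative):
# ufw allow from 10.0.0.0/24 to any port 5044 proto tcp
# ufw deny 9200/tcp
# ufw deny 5601/tcp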
9. Maintenance Tips
- Index Lifecycle Management → Rotate daily and delete old logs automatically (see the example after this list).
- Monitor disk usage → Logs grow fast. SSDs are better.
- Filter noise → Don’t ship debug logs unless needed.
- Backup Elasticsearch → Especially if logs are critical.
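A minimal ILM sketch that deletes indices 30 days after creation (the policy name and retention period are assumptions; attach it to the central-logs-* indices via an index template, and add -u elastic plus https if security is enabled):
# curl -X PUT "localhost:9200/_ilm/policy/central-logs-policy" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}'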
Summing Up: How It Works
- All logs are centralized → easier troubleshooting.
- Scalable → add new servers, Beats handle shipping automatically.
- Searchable → find errors instantly using Elasticsearch.
- Visual → dashboards in Kibana/Grafana give real-time insight.