First Steps with the ELK Logging Platform

ELK is a complete log collection and visualization solution from Elastic. The name is an acronym for its three core products: Elasticsearch, Logstash, and Kibana.

Install Elasticsearch (162)

Install the RPM package

$ rpm -ivh elasticsearch-7.4.1-x86_64.rpm

Create directories

$ mkdir /service/elk/elasticsearch/data -p
$ mkdir /service/elk/elasticsearch/logs
$ chown -R elk.elk /service/elk/elasticsearch/
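The `chown` above assumes an `elk` user and group already exist. If they don't, a minimal sketch for creating them (the name `elk` is just this guide's convention, not a requirement):

```shell
# Create the elk group and a matching user (skip if they already exist).
$ groupadd elk
$ useradd -g elk -s /bin/bash elk
```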

Edit the configuration

$ vi /etc/elasticsearch/elasticsearch.yml
node.name: es-node
path.data: /service/elk/elasticsearch/data
path.logs: /service/elk/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
bootstrap.memory_lock: false
cluster.initial_master_nodes: ["es-node"]
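Because `network.host` is set to a non-loopback address, Elasticsearch runs its bootstrap checks at startup, and these typically also require a couple of kernel and ulimit settings. A typical sketch (the exact limits may already be satisfied on your host):

```shell
# Elasticsearch refuses to start if vm.max_map_count is below 262144.
$ sysctl -w vm.max_map_count=262144
$ echo 'vm.max_map_count=262144' >> /etc/sysctl.conf

# Raise the open-file limit for the elk user (also checked at bootstrap).
$ echo 'elk soft nofile 65535' >> /etc/security/limits.conf
$ echo 'elk hard nofile 65535' >> /etc/security/limits.conf
```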

Fix permissions (Elasticsearch refuses to run as root)

$ chown -R elk.elk /usr/share/elasticsearch
$ chown -R elk.elk /etc/elasticsearch
$ chown -R elk.elk /etc/sysconfig/elasticsearch

Start the service

$ /usr/share/elasticsearch/bin/elasticsearch &
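Once the JVM has finished booting (this can take half a minute), the REST API should answer on port 9200. A quick sanity check:

```shell
# The root endpoint returns the node name and version.
$ curl -s http://localhost:9200

# Cluster health should report green or yellow for a single node.
$ curl -s 'http://localhost:9200/_cluster/health?pretty'
```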

Install Kibana (162)

Install the RPM package

$ rpm -ivh kibana-7.4.1-x86_64.rpm 

Edit the configuration

$ vi /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://10.0.139.162:9200"]
i18n.locale: "zh-CN"

Start the service

$ /usr/share/kibana/bin/kibana &
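Kibana answers on port 5601 once it has connected to Elasticsearch; a quick check from the same host:

```shell
# A 200 (or 302 redirect to the login/app page) means Kibana is serving.
$ curl -sI http://localhost:5601

# The status API reports the state of each plugin.
$ curl -s http://localhost:5601/api/status
```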

Install ZooKeeper (162)

Extract the tarball

$ tar -xvf apache-zookeeper-3.5.5-bin.tar.gz 
$ mv apache-zookeeper-3.5.5-bin /usr/local/zookeeper
$ chown -R elk.elk /usr/local/zookeeper

Add environment variables

$ echo 'export PATH=$PATH:/usr/local/zookeeper/bin' >> /etc/profile
$ echo 'export ZOOKEEPER_HOME=/usr/local/zookeeper' >> /etc/profile
$ source /etc/profile

Create data directories

$ mkdir /service/elk/zookeeper/data -p
$ mkdir /service/elk/zookeeper/logs -p 
$ chown -R elk.elk /service/elk

Edit the configuration file

$ cat /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/service/elk/zookeeper/data
dataLogDir=/service/elk/zookeeper/logs
clientPort=2181

Create the systemd unit

$ cat /etc/systemd/system/zookeeper.service
[Unit]
Description=zookeeper
After=syslog.target network.target

[Service]
Type=forking
ExecStart=/usr/local/zookeeper/bin/zkServer.sh start
ExecStop=/usr/local/zookeeper/bin/zkServer.sh stop
Restart=always
User=elk
Group=elk

[Install]
WantedBy=multi-user.target

Start the service

$ systemctl start zookeeper
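Since the unit file was just created by hand, systemd needs a `daemon-reload` before it will see it; afterwards the server's mode can be verified:

```shell
# Pick up the newly created unit file and enable it at boot.
$ systemctl daemon-reload
$ systemctl enable zookeeper

# A single node should report "standalone" mode.
$ zkServer.sh status
```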

Install Kafka (162)

Extract the tarball

$ tar -xvf kafka_2.12-2.3.0.tgz
$ mv kafka_2.12-2.3.0 /usr/local/kafka
$ chown -R elk.elk /usr/local/kafka/

Add environment variables

$ echo 'export PATH=$PATH:/usr/local/kafka/bin' >> /etc/profile
$ source /etc/profile

Edit the configuration file

$ cat /usr/local/kafka/config/server.properties
zookeeper.connect=10.0.139.162:2181

Start the service

$ kafka-server-start.sh /usr/local/kafka/config/server.properties &
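Creating the topic that Filebeat will write to is a quick way to confirm the broker is up (Kafka auto-creates topics by default, so this step is optional; the topic name `test` matches the Filebeat config used later):

```shell
# Create the topic Filebeat will publish to.
$ kafka-topics.sh --create --bootstrap-server 10.0.139.162:9092 \
    --replication-factor 1 --partitions 1 --topic test

# Confirm the broker is reachable and the topic exists.
$ kafka-topics.sh --list --bootstrap-server 10.0.139.162:9092
```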

Install Filebeat (163)

Extract the tarball

$ tar -xvf filebeat-7.4.1-linux-x86_64.tar.gz
$ mv filebeat-7.4.1-linux-x86_64 /usr/local/filebeat

Edit the configuration file

$ vi /usr/local/filebeat/filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /service/mysql/data/mysqld.log
#-------------------------- Kafka output ------------------------------
output.kafka:
  enabled: true
  hosts: ["10.0.139.162:9092"]
  topic: test

Tip: the Elasticsearch output section must be commented out, since Filebeat allows only one output at a time.
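Filebeat can validate both the config file and the connection to Kafka before anything is started:

```shell
# Check the YAML for syntax and schema errors.
$ /usr/local/filebeat/filebeat test config -c /usr/local/filebeat/filebeat.yml

# Verify the configured Kafka brokers are reachable from this host.
$ /usr/local/filebeat/filebeat test output -c /usr/local/filebeat/filebeat.yml
```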

Start the service

$ /usr/local/filebeat/filebeat -c /usr/local/filebeat/filebeat.yml &

Test with a Kafka console consumer

$ kafka-console-consumer.sh --bootstrap-server 10.0.139.162:9092 --topic test --from-beginning
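If no MySQL log lines are arriving yet, a console producer can inject a test message into the same topic to exercise the pipeline end to end:

```shell
# Type a line and press Enter; it should appear in the consumer window.
$ kafka-console-producer.sh --broker-list 10.0.139.162:9092 --topic test
```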

Install Logstash (161)

Install the RPM package

$ rpm -ivh logstash-7.4.1.rpm 

Edit the pipeline configuration

$ vi /etc/logstash/logstash-simple.conf
input {
  kafka {
    bootstrap_servers => ["10.0.139.162:9092"]
    group_id => "logstash"
    topics => ["test"]
    decorate_events => true
    consumer_threads => 5
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["10.0.139.162:9200"]
    index => "kafka-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
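Logstash can lint the pipeline file before it is started for real; on success it logs that the configuration is OK and exits:

```shell
# Parse the pipeline and exit without starting it.
$ /usr/share/logstash/bin/logstash -f /etc/logstash/logstash-simple.conf \
    --config.test_and_exit --path.data=/service/elk/logstash/data
```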

Start the service

$ /usr/share/logstash/bin/logstash -f /etc/logstash/logstash-simple.conf --config.reload.automatic --path.data=/service/elk/logstash/data &
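After a few events have flowed through, the daily index from the output block should appear in Elasticsearch:

```shell
# List indices matching the pattern used in the elasticsearch output.
$ curl -s 'http://10.0.139.162:9200/_cat/indices/kafka-*?v'
```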

Configure Kibana (162)

Create an index pattern: create_index

View the log entries: LOG_INFO_IMG

References

1. Logstash Best Practices
2. Collecting MySQL slow query logs with ELK + grok

Licensed under CC BY-NC-SA 4.0