Building on the ELK experiment from the previous blog post, this setup inserts Kafka between Filebeat and Logstash, so the data flow becomes Filebeat → Kafka → Logstash → Elasticsearch → Kibana.
| Hostname | IP address | Main software |
|---|---|---|
| es01 | 192.168.80.101 | ElasticSearch |
| es02 | 192.168.80.102 | ElasticSearch |
| es03 | 192.168.80.103 | ElasticSearch、Kibana |
| nginx01 | 192.168.80.104 | nginx、Logstash |
| NA | 192.168.80.105 | nginx、Filebeat |
| NA | 192.168.80.106 | Zookeeper、Kafka |
| NA | 192.168.80.107 | Zookeeper、Kafka |
| NA | 192.168.80.108 | Zookeeper、Kafka |
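Before wiring the stages together, it can help to confirm that each service port is reachable from the Logstash host. A minimal sketch, assuming a netcat variant (`nc`) is installed:

```bash
# Run from nginx01 (192.168.80.104); each check should report the port as open.
nc -zv 192.168.80.106 9092   # Kafka broker
nc -zv 192.168.80.101 9200   # Elasticsearch
nc -zv 192.168.80.103 5601   # Kibana
```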
On 192.168.80.105, configure Filebeat to ship logs to Kafka:

```bash
cd /usr/local/filebeat
vim filebeat.yml
```

Comment out lines 162 and 164, then add the following starting at line 163:

```yaml
output.kafka:
  enabled: true
  hosts: ["192.168.80.106:9092","192.168.80.107:9092","192.168.80.108:9092"]   # Kafka cluster brokers
  topic: "nginx"                                                               # Kafka topic to publish to
```

Start Filebeat:

```bash
./filebeat -e -c filebeat.yml
```
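To confirm that events are actually landing in the topic, attach Kafka's bundled console consumer on one of the Kafka nodes. A quick check, assuming the Kafka scripts live under `/usr/local/kafka/bin` (adjust to your install path):

```bash
# On any Kafka node (e.g. 192.168.80.106): read the nginx topic from the start.
/usr/local/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 192.168.80.106:9092 \
  --topic nginx --from-beginning
```

Hitting the nginx site on 192.168.80.105 should make new JSON events scroll by; Ctrl+C stops the consumer.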
On nginx01 (192.168.80.104), create a Logstash pipeline that consumes the nginx topic from Kafka and writes to Elasticsearch:

```bash
cd /etc/logstash/conf.d/
vim kafka.conf
```

```conf
input {
    kafka {
        bootstrap_servers => "192.168.80.106:9092,192.168.80.107:9092,192.168.80.108:9092"
        topics => ["nginx"]
        type => "nginx_kafka"
        auto_offset_reset => "latest"
    }
}

#filter {}

output {
    elasticsearch {
        hosts => ["192.168.80.101:9200", "192.168.80.102:9200", "192.168.80.103:9200"]
        index => "nginx_kafka-%{+yyyy.MM.dd}"
    }
}
```

Test the configuration, then run the pipeline:

```bash
logstash -t -f kafka.conf   # test the config syntax and exit
logstash -f kafka.conf      # run the pipeline in the foreground
```
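After Logstash has been running for a moment and some nginx requests have been made, the daily index should exist in Elasticsearch. One way to confirm before touching Kibana:

```bash
# List indices on any ES node and filter for the nginx_kafka index.
curl -s 'http://192.168.80.101:9200/_cat/indices?v' | grep nginx_kafka
```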
In a browser, open http://192.168.80.103:5601 and log in to Kibana. Click [Management], then [Create index pattern], search for [nginx_kafka-*], click [Next step], select [@timestamp] as the time field, and click [Create index pattern]. You can then view the charts and log data.
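If nothing shows up in Kibana, querying Elasticsearch directly helps narrow down whether the problem is in the pipeline or in the index pattern. A minimal sketch:

```bash
# Fetch one document from the nginx_kafka indices to verify data is flowing.
curl -s 'http://192.168.80.101:9200/nginx_kafka-*/_search?size=1&pretty'
```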

