• ELK Single-Node Deployment Pitfalls and Spring Boot Integration


    Deploy a single-node ELK stack and route the logs of all Spring Boot business applications into it. This post records the deployment process in detail, plus the troubleshooting of one issue.

    This diagram shows the relationship among the three components well; the configuration steps below wire these components together pairwise.

    Deployment environment

    Server: CentOS 7.8, 4 cores / 32 GB RAM, IP: 192.188.1.246

    The three ELK packages must be the same version:

    elasticsearch: elasticsearch-7.9.3-x86_64.rpm

    Download link

    logstash: logstash-7.9.3-x86_64.rpm

    Download link

    kibana: kibana-7.9.3-x86_64.rpm

    Download link

    Deployment steps

    (1) Disable SELinux and the firewall, then set the hostname

    Log in as root and run:
    # setenforce 0
    # systemctl stop firewalld && systemctl disable firewalld
    # hostnamectl set-hostname elk-server

    (2) Install the JDK, elasticsearch, logstash, and kibana
    # yum -y install java-1.8.0-openjdk*
    # yum -y install elasticsearch-7.9.3-x86_64.rpm
    # yum -y install kibana-7.9.3-x86_64.rpm
    # yum -y install logstash-7.9.3-x86_64.rpm

    (3) Edit the elasticsearch configuration file

    # vi /etc/elasticsearch/elasticsearch.yml

    cluster.name: elk-server                           (cluster name; anything works for a single-node setup)
    node.name: elk-server                              (this machine's hostname)
    path.data: /home/elk/elasticsearch
    path.logs: /var/log/elasticsearch
    network.host: 192.188.1.246                        (this machine's IP)
    http.port: 9200
    discovery.seed_hosts: ["elk-server"]               (this machine's hostname)
    cluster.initial_master_nodes: ["192.188.1.246"]    (this machine's IP)

    (4) Create the data directory and assign ownership

    # mkdir -p /home/elk/elasticsearch
    # chown -R elasticsearch:elasticsearch /home/elk/elasticsearch

    (5) Start elasticsearch
    # systemctl start elasticsearch
    # systemctl enable elasticsearch
    # systemctl status elasticsearch
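Once the service is running, it is worth verifying the node from any machine that can reach it. A minimal sketch in Python (the IP and port come from this setup; the helper names are mine, and network access to port 9200 is required) queries the cluster health endpoint:

```python
import json
from urllib.request import urlopen

def parse_health(body: bytes) -> str:
    """Extract the cluster status ('green', 'yellow' or 'red') from a /_cluster/health response body."""
    return json.loads(body)["status"]

def check_es(base_url: str = "http://192.188.1.246:9200") -> str:
    """Fetch /_cluster/health from the node configured above (requires network access)."""
    with urlopen(base_url + "/_cluster/health", timeout=5) as resp:
        return parse_health(resp.read())
```

Note that a single-node cluster typically reports `yellow`, because replica shards have nowhere to be allocated; that is normal here.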


    (6) Edit the logstash configuration file
    # vim /etc/logstash/logstash.yml

    node.name: elk-server                  (this machine's hostname)
    path.data: /home/elk/logstash          (logstash data directory)
    pipeline.ordered: auto
    path.config: /etc/logstash/conf.d      (pipeline configuration directory)
    log.level: info
    path.logs: /var/log/logstash
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.hosts: ["http://192.188.1.246:9200"]

    (7) Edit the kibana configuration file
    # vim /etc/kibana/kibana.yml

    server.port: 5601
    server.host: "192.188.1.246"
    server.name: "elk-server"
    elasticsearch.hosts: ["http://192.188.1.246:9200"]    (address kibana uses to reach elasticsearch)
    kibana.index: ".kibana"
    i18n.locale: "zh-CN"

    (8) Configure the logstash input and output

    # vim /etc/logstash/conf.d/apps.conf    (any file name works)

    # input: listen for TCP connections on port 5044
    input {
      tcp {
        host => "192.188.1.246"
        port => 5044
        codec => json_lines
      }
    }
    # output: write events to elasticsearch
    output {
      elasticsearch {
        hosts => ["192.188.1.246:9200"]
        index => "applog"
      }
    }
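Before wiring up any application, the TCP input can be smoke-tested by sending one hand-crafted event in the `json_lines` framing the codec expects: a JSON object followed by a newline. A minimal sketch in Python (host, port, and field names are taken from the configs in this post; the function names are mine, and the server must be reachable):

```python
import json
import socket

def make_json_line(level: str, service: str, message: str) -> str:
    """Build one event in json_lines framing: a JSON object terminated by a newline."""
    event = {"logLevel": level, "serviceName": service, "rest": message}
    return json.dumps(event) + "\n"

def send_test_event(host: str = "192.188.1.246", port: int = 5044) -> None:
    """Open a TCP connection to the Logstash input and send a single test event."""
    line = make_json_line("INFO", "smoke-test", "hello from python")
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("utf-8"))
```

If the pipeline is healthy, the event should appear in the applog index shortly afterwards.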

    (9) Start kibana and logstash
    # systemctl start kibana
    # systemctl enable kibana
    # systemctl status kibana
    # systemctl start logstash
    # systemctl enable logstash
    # systemctl status logstash

    The single-node ELK environment is now up; open http://192.188.1.246:5601 in a browser.

    At this point ES holds no data yet, so an index pattern cannot be created.

    Spring Boot integration

    pom.xml:

    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>5.3</version>
    </dependency>

    Add the logstash appender configuration to resources/logback-spring.xml:

    1. "1.0" encoding="UTF-8"?>
    2. <configuration debug="false" scan="true" scanPeriod="10 seconds">
    3. <include resource="org/springframework/boot/logging/logback/defaults.xml" />
    4. <include resource="org/springframework/boot/logging/logback/console-appender.xml" />
    5. <springProperty scope="context" name="file_basePath" source="logging.file_basePath" defaultValue="./logs"/>
    6. <springProperty scope="context" name="file_prefix" source="logging.file_prefix" defaultValue="application"/>
    7. <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
    8. <append>trueappend>
    9. <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    10. <fileNamePattern>${file_basePath}/${file_prefix}/${file_prefix}-%d{yyyy-MM-dd}.%i.logfileNamePattern>
    11. <maxHistory>30maxHistory>
    12. <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
    13. <maxFileSize>50MBmaxFileSize>
    14. timeBasedFileNamingAndTriggeringPolicy>
    15. rollingPolicy>
    16. <encoder>
    17. <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%npattern>
    18. encoder>
    19. appender>
    20. <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    21. <destination>192.188.1.246:5044destination>
    22. <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    23. <providers>
    24. <timestamp>
    25. <timeZone>UTCtimeZone>
    26. timestamp>
    27. <pattern>
    28. <pattern>
    29. {
    30. "logLevel": "%level",
    31. "serviceName": "${springAppName:-}",
    32. "pid": "${PID:-}",
    33. "thread": "%thread",
    34. "class": "%logger{40}",
    35. "rest": "%message"
    36. }
    37. pattern>
    38. pattern>
    39. providers>
    40. encoder>
    41. appender>
    42. <root level="info">
    43. <appender-ref ref="CONSOLE" />
    44. <appender-ref ref="file" />
    45. <appender-ref ref="LOGSTASH" />
    46. root>
    47. configuration>

    Pitfall

    Added a few log.info() calls to the application and sent some logs, but there was still no data when trying to add an index pattern on the web page.

    Opening http://192.188.1.246:9200/ returned a response, so es itself was fine.

    Opening http://192.188.1.246:9200/_cat/indices?v

    showed that the applog index configured in /etc/logstash/conf.d/apps.conf did not exist.
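When hunting for a missing index, it can help to check the plain-text `_cat/indices?v` output programmatically. A small helper (Python; the function name is mine, and it assumes the default column layout where the index name is the third column):

```python
def index_present(cat_output: str, name: str) -> bool:
    """Return True if `name` appears as an index in plain-text `GET /_cat/indices?v` output.

    With the `?v` header row present, the index name is the third column of each data row.
    """
    for line in cat_output.splitlines()[1:]:  # skip the header row
        cols = line.split()
        if len(cols) >= 3 and cols[2] == name:
            return True
    return False
```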

    Next, checked the logstash-plain.log file under the logstash log directory /var/log/logstash.

    It reported that /home/elk/logstash was not writable, so fix the ownership:

    # chown -R logstash:logstash /home/elk/logstash

    After sending a few more log entries, the index pattern could now be created on the kibana page.

    The logs show up on the Discover page.

    The pipeline is working; what remains is configuring the detailed log-viewing pages.

  • Original article: https://blog.csdn.net/xuruilll/article/details/128192033