[The Complete Guide to Logging] Spring Cloud Sleuth: Using ELK to Collect & Analyze Logs


    TIPS

    This article is based on Spring Cloud Greenwich SR2; in principle it is compatible with all versions of Spring Cloud.

    Application integration

    • Add the dependencies:

      <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-sleuth</artifactId>
      </dependency>
      <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>6.1</version>
      </dependency>

      Note: the logstash-logback-encoder version must be compatible with your Logback version, otherwise the application will fail to start and will not print any logs at all! See https://github.com/logstash/logstash-logback-encoder for the Logback compatibility matrix.

    • Create a configuration file named logback-spring.xml in the resources directory, with the following content:

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

        <springProperty scope="context" name="springAppName" source="spring.application.name"/>
        <!-- Example for logging into the build folder of your project -->
        <property name="LOG_FILE" value="/Users/reno/Desktop/未命名文件夹/elk/logs/${springAppName}"/>

        <!-- You can override this to have a custom pattern -->
        <property name="CONSOLE_LOG_PATTERN"
                  value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

        <!-- Appender to log to console -->
        <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
          <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <!-- Minimum logging level to be presented in the console logs -->
            <level>DEBUG</level>
          </filter>
          <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
          </encoder>
        </appender>

        <!-- Appender to log to file -->
        <appender name="flatfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
          <file>${LOG_FILE}</file>
          <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.gz</fileNamePattern>
            <maxHistory>7</maxHistory>
          </rollingPolicy>
          <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
          </encoder>
        </appender>

        <!-- Appender to log to file in a JSON format -->
        <appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
          <file>${LOG_FILE}.json</file>
          <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}.gz</fileNamePattern>
            <maxHistory>7</maxHistory>
          </rollingPolicy>
          <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
              <timestamp>
                <timeZone>UTC</timeZone>
              </timestamp>
              <pattern>
                <pattern>
                  {
                    "severity": "%level",
                    "service": "${springAppName:-}",
                    "trace": "%X{X-B3-TraceId:-}",
                    "span": "%X{X-B3-SpanId:-}",
                    "parent": "%X{X-B3-ParentSpanId:-}",
                    "exportable": "%X{X-Span-Export:-}",
                    "pid": "${PID:-}",
                    "thread": "%thread",
                    "class": "%logger{40}",
                    "rest": "%message"
                  }
                </pattern>
              </pattern>
            </providers>
          </encoder>
        </appender>

        <root level="INFO">
          <appender-ref ref="console"/>
          <!-- uncomment this to have also JSON logs -->
          <appender-ref ref="logstash"/>
          <!--<appender-ref ref="flatfile"/>-->
        </root>
      </configuration>
    • Create bootstrap.yml and move the following properties from application.yml into it:

      spring:
        application:
          name: user-center

      Because logback-spring.xml above uses variables (such as springAppName), the spring.application.name property must be set in bootstrap.yml rather than application.yml: the bootstrap context is loaded earlier in the startup sequence, so the property is already available when the logging configuration is read; otherwise logback-spring.xml cannot resolve it correctly.
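
      With the dependencies and Logback configuration in place, any SLF4J logging in the application automatically carries Sleuth's trace context (the X-B3-* MDC entries referenced by the JSON encoder pattern above). The following is a minimal sketch of a controller that produces such log lines; the class name, endpoint, and message are hypothetical and not part of the original article:

      import org.slf4j.Logger;
      import org.slf4j.LoggerFactory;
      import org.springframework.web.bind.annotation.GetMapping;
      import org.springframework.web.bind.annotation.PathVariable;
      import org.springframework.web.bind.annotation.RestController;

      // Hypothetical controller: any SLF4J logging inside a Sleuth-instrumented app behaves the same way.
      @RestController
      public class UserLogDemoController {

          private static final Logger log = LoggerFactory.getLogger(UserLogDemoController.class);

          @GetMapping("/users/{id}")
          public String findById(@PathVariable Long id) {
              // Sleuth puts X-B3-TraceId / X-B3-SpanId into the MDC for the current request,
              // so this line ends up in the JSON log file with "trace", "span", etc. populated.
              log.info("Looking up user {}", id);
              return "user-" + id;
          }
      }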

    Testing

    • Start the application.

    • Logs are written to the /Users/reno/Desktop/未命名文件夹/elk/logs/ directory, in a file named user-center.json, with content similar to the following:

      1. {"@timestamp":"2019-08-29T02:38:42.468Z","severity":"DEBUG","service":"microservice-provider-user","trace":"5cf9479e966fb5ec","span":"5cf9479e966fb5ec","parent":"","exportable":"false","pid":"13144","thread":"http-nio-8000-exec-1","class":"o.s.w.s.m.m.a.RequestResponseBodyMethodProcessor","rest":"Using 'application/json;q=0.8', given [text/html, application/xhtml+xml, image/webp, image/apng, application/signed-exchange;v=b3, application/xml;q=0.9, */*;q=0.8] and supported [application/json, application/*+json, application/json, application/*+json]"}
      2. {"@timestamp":"2019-08-29T02:38:42.469Z","severity":"DEBUG","service":"microservice-provider-user","trace":"5cf9479e966fb5ec","span":"5cf9479e966fb5ec","parent":"","exportable":"false","pid":"13144","thread":"http-nio-8000-exec-1","class":"o.s.w.s.m.m.a.RequestResponseBodyMethodProcessor","rest":"Writing [Optional[User(id=1, username=account1, name=张三, age=20, balance=100.00)]]"}
      3. {"@timestamp":"2019-08-29T02:38:42.491Z","severity":"DEBUG","service":"microservice-provider-user","trace":"5cf9479e966fb5ec","span":"5cf9479e966fb5ec","parent":"","exportable":"false","pid":"13144","thread":"http-nio-8000-exec-1","class":"o.s.o.j.s.OpenEntityManagerInViewInterceptor","rest":"Closing JPA EntityManager in OpenEntityManagerInViewInterceptor"}
      4. {"@timestamp":"2019-08-29T02:38:42.492Z","severity":"DEBUG","service":"microservice-provider-user","trace":"5cf9479e966fb5ec","span":"5cf9479e966fb5ec","parent":"","exportable":"false","pid":"13144","thread":"http-nio-8000-exec-1","class":"o.s.web.servlet.DispatcherServlet","rest":"Completed 200 OK"}
      5. {"@timestamp":"2019-08-29T02:38:58.141Z","severity":"ERROR","service":"microservice-provider-user","trace":"","span":"","parent":"","exportable":"","pid":"13144","thread":"ThreadPoolTaskScheduler-1","class":"o.s.c.alibaba.nacos.discovery.NacosWatch","rest":"Error watching Nacos Service change"}

      Next, all we need is for Logstash to collect this JSON file, and we will be able to search the logs in Kibana.

    Setting up ELK

    For simplicity, this article uses Docker to set up ELK. Other ways of setting it up are easy to find online; they are not complicated, just more time-consuming.

    • Create a docker-compose.yml file with the following content:

      version: '3'
      services:
        elasticsearch:
          image: elasticsearch:7.3.1
          environment:
            discovery.type: single-node
          ports:
            - "9200:9200"
            - "9300:9300"
        logstash:
          image: logstash:7.3.1
          command: logstash -f /etc/logstash/conf.d/logstash.conf
          volumes:
            # Mount the Logstash configuration file
            - ./config:/etc/logstash/conf.d
            - /Users/reno/Desktop/未命名文件夹/elk/logs/:/opt/build/
          ports:
            - "5000:5000"
        kibana:
          image: kibana:7.3.1
          environment:
            - ELASTICSEARCH_URL=http://elasticsearch:9200
          ports:
            - "5601:5601"

      Note that /Users/reno/Desktop/未命名文件夹/elk/logs/ above must be changed to the directory your application writes its log files to.

    • In the directory containing docker-compose.yml, create config/logstash.conf with the following content:

      input {
        file {
          codec => json
          path => "/opt/build/*.json"  # Change to the JSON log file(s) written by your project
        }
      }
      filter {
        grok {
          match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:severity}\s+\[%{DATA:service},%{DATA:trace},%{DATA:span},%{DATA:exportable}\]\s+%{DATA:pid}\s+---\s+\[%{DATA:thread}\]\s+%{DATA:class}\s+:\s+%{GREEDYDATA:rest}" }
        }
      }
      output {
        elasticsearch {
          hosts => "elasticsearch:9200"  # Change to your Elasticsearch address
        }
      }
    • Start ELK:

      docker-compose up
      

    Testing Sleuth & ELK

    • Call your microservice's APIs so that it produces some logs (if only a few logs are generated, you can set the log level of the org.springframework package to debug, e.g. logging.level.org.springframework: debug).

    • Open http://localhost:5601 (the Kibana address). You will see an interface similar to the screenshots; configure Kibana as shown there (essentially, create an index pattern matching the indices written by Logstash).

    • Enter your search criteria and you can analyze the logs:

    How it works

    The principle is fairly simple:

    • Have the Sleuth-instrumented application print its logs in JSON format;
    • In the Logstash configuration, collect the JSON logs, parse them (grok) and store them in Elasticsearch;
    • Use Kibana to visualize and analyze the logs.
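
    If you want to verify programmatically that the log documents have actually reached Elasticsearch (rather than going through Kibana), a small query with the Elasticsearch high-level REST client can be used. This is only a sketch and not part of the original setup; it assumes the elasticsearch-rest-high-level-client 7.x dependency is available, that Logstash used its default logstash-* index naming, and the field names produced by the JSON encoder above:

      import org.apache.http.HttpHost;
      import org.elasticsearch.action.search.SearchRequest;
      import org.elasticsearch.action.search.SearchResponse;
      import org.elasticsearch.client.RequestOptions;
      import org.elasticsearch.client.RestClient;
      import org.elasticsearch.client.RestHighLevelClient;
      import org.elasticsearch.index.query.QueryBuilders;
      import org.elasticsearch.search.builder.SearchSourceBuilder;

      public class LogSearchDemo {

          public static void main(String[] args) throws Exception {
              // Assumes Elasticsearch is reachable on localhost:9200, as exposed by the docker-compose file.
              try (RestHighLevelClient client = new RestHighLevelClient(
                      RestClient.builder(new HttpHost("localhost", 9200, "http")))) {

                  // Fetch the log documents that belong to one trace, using the "trace" field
                  // written by the JSON encoder in logback-spring.xml. The trace id is an example value.
                  SearchRequest request = new SearchRequest("logstash-*")
                          .source(new SearchSourceBuilder()
                                  .query(QueryBuilders.termQuery("trace.keyword", "5cf9479e966fb5ec"))
                                  .size(50));

                  SearchResponse response = client.search(request, RequestOptions.DEFAULT);
                  response.getHits().forEach(hit -> System.out.println(hit.getSourceAsString()));
              }
          }
      }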

     

  • Original article: https://blog.csdn.net/wufaqidong1/article/details/128130939