• Distributed Systems: The ELK Logging Platform


    What problem does ELK solve?

    When a project we have deployed to production runs into trouble (at a small or mid-sized company), we may need to pull log files off the server to locate and analyze the problem. In larger or more complex distributed scenarios, however, that approach quickly falls short: we urgently need centralized log management that collects and aggregates the logs from every server. ELK arose to meet this need. Through a set of open-source frameworks it provides a complete solution for collecting, storing, analyzing, and visualizing the logs from every node in one place.

    • Note: ELK is not limited to log analysis. It can collect and analyze any kind of data (for example, in an earlier project we used ES to measure how actively users visited the system over different periods). Log collection and analysis is simply its most representative use case, not its only one.

    Concepts

    ELK is the acronym of three open-source frameworks: Elasticsearch (stores and retrieves data), Logstash (collects, transforms, and filters data), and Kibana (visualizes data). Today the stack is usually called the Elastic Stack (ELK plus Beats).

    Single-node log system architecture

    Elasticsearch (ES, port 9200)

    • An open-source distributed search engine providing data collection, analysis, and storage
    • ES is a document-oriented store: it can hold entire objects as documents, and it can index, search, sort, and filter those documents. This way of modeling data differs completely from a traditional relational database and is one reason ES can run complex full-text searches. A document is roughly analogous to a row in an RDBMS
    • ES uses JSON as its document serialization format, converting every document to JSON for storage; JSON has become the standard format of the NoSQL world
    • ES builds full-text retrieval on Lucene. The underlying inverted index records, for each term, where that term occurs across one or more documents [the inverted index is the fundamental reason for ES's high retrieval performance]
    • The ES-head plugin (port 9100)

      • Check whether imported data has generated its indices correctly
      • Delete data

      • Browse data

    • Index: an index is a container for documents. Early versions of ES also had a type inside an index (the index resembled an RDBMS database and the type a table), but types were phased out in later versions, so an index now plays both roles

    • Query DSL: expressed in JSON and spoken over the RESTful API, it covers full-text search, range queries, boolean queries, aggregations, and other search needs
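The inverted index mentioned above is easy to picture with a small, self-contained sketch (plain Java, nothing ES-specific, documents and terms invented for illustration): each term maps to the IDs of the documents that contain it, so a term lookup becomes a single map access instead of a scan over every document.

```java
import java.util.*;

// A toy inverted index: each term maps to the sorted set of document IDs containing it.
public class InvertedIndexDemo {

    static Map<String, Set<Integer>> build(List<String> docs) {
        Map<String, Set<Integer>> index = new HashMap<>();
        for (int id = 0; id < docs.size(); id++) {
            for (String term : docs.get(id).toLowerCase().split("\\s+")) {
                index.computeIfAbsent(term, k -> new TreeSet<>()).add(id);
            }
        }
        return index;
    }

    public static void main(String[] args) {
        Map<String, Set<Integer>> index = build(List.of(
                "elk collects and stores logs",   // doc 0
                "kibana visualizes logs",         // doc 1
                "elasticsearch stores documents"  // doc 2
        ));
        // Looking up a term is one map access, not a scan over all documents
        System.out.println(index.get("logs"));   // [0, 1]
        System.out.println(index.get("stores")); // [0, 2]
    }
}
```

Real Lucene indexes of course store far more (positions, frequencies, compressed postings lists), but the lookup principle is the same.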

    Logstash (port 5044)

    • ELK's data-flow engine: it collects data in different formats from different sources (files, data stores, MQs), filters it, and outputs it to ES
    • Filebeat is a lightweight log collection agent with a small resource footprint; it gathers logs on each server and ships them to Logstash (the officially recommended setup)
    • In the logback + RabbitMQ integration mode, logstash.conf configures an input that reads logs from the queue bound to the RabbitMQ exchange -> an optional filter that parses and processes them -> an output that writes the logs to the target ES
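That input -> filter -> output pipeline can be sketched roughly as follows. The broker address, credentials, and exchange name are taken from the configuration shown later in this article; the queue name and index pattern are hypothetical placeholders, not the project's actual values:

```conf
input {
  rabbitmq {
    host     => "10.225.225.225"
    port     => 5672
    user     => "admin"
    password => "admin"
    exchange => "ex_common_application_Log"  # fanout exchange declared by the logback AmqpAppender
    queue    => "q_application_log"          # hypothetical queue bound to that exchange
    durable  => true
    codec    => "json"                       # the appender emits JSON events
  }
}
filter {
  # optional: parse/normalize fields here
}
output {
  elasticsearch {
    hosts => ["10.225.225.225:9200"]
    index => "user-visit-%{+YYYY.MM.dd}"     # illustrative; matches the INDEX_PREFIX used in the search code
  }
}
```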

    Kibana (port 5601)

    • Analyze and visualize data: explore the data in ES, build charts and dashboards, and surface findings with gauges, maps, and other visualizations
    • Search, observe, and protect data: add search boxes to apps and websites, analyze logs and metrics, and uncover security vulnerabilities
    • Manage, monitor, and secure the Elastic Stack: monitor and manage the health of the ES cluster, Kibana, and the rest of the Elastic Stack, and control user access to features and data
    • Core features in everyday use
      • Discover: browse the data in an ES index, with filter conditions to narrow down to the data of interest
      • Dev Tools: write queries in the syntax ES supports, for ad-hoc ops/dev queries and for testing the query conditions used in code
      • Management: index management and index patterns
        • Index management: view an index's health, status, primary and replica shards, and other details
        • Index patterns: match one or more indices whose names follow a given pattern, making it easy to view and analyze the target indices in Discover
      • Monitoring: view the ES cluster's version, uptime, node status, and index status (in our project, a single-node ES/Kibana deployment)
      • Visualize: create bar charts, pie charts, tag (word) clouds, and other custom visualizations as needed
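As an example of a Dev Tools query, spot-checking recent log documents might look like the following sketch (the `user-visit-*` index pattern and `@timestamp` field follow the project code in this article; the time range is illustrative):

```
GET /user-visit-*/_search
{
  "query": {
    "range": { "@timestamp": { "gte": "now-7d/d", "lte": "now" } }
  },
  "size": 10,
  "sort": [ { "@timestamp": "desc" } ]
}
```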

    Hands-on project

    In our company's project we collect logs through a logback -> rabbitmq -> elk pipeline, achieving centralized log management. On top of that, ES searches feed a system dashboard that visualizes how actively users access the system over different periods.

    Note: logback is a logging framework (log4j is another), while slf4j is a logging facade interface.

    Core implementation steps

    1. Add the Logback and Elasticsearch dependencies in Maven
       <dependency>
           <groupId>net.logstash.logback</groupId>
           <artifactId>logstash-logback-encoder</artifactId>
           <version>5.1</version>
       </dependency>
       <dependency>
           <groupId>net.logstash.log4j</groupId>
           <artifactId>jsonevent-layout</artifactId>
           <version>1.7</version>
       </dependency>
       <dependency>
           <groupId>org.elasticsearch</groupId>
           <artifactId>elasticsearch</artifactId>
           <version>6.3.1</version>
       </dependency>
       <dependency>
           <groupId>org.elasticsearch.client</groupId>
           <artifactId>transport</artifactId>
           <version>6.3.1</version>
       </dependency>
       <dependency>
           <groupId>org.elasticsearch.plugin</groupId>
           <artifactId>transport-netty4-client</artifactId>
           <version>6.3.1</version>
       </dependency>
    2. Define the logback-prod.xml configuration file
       # First, configure the log-related properties in application.yml
       logging:
         config: classpath:logback-prod.xml # logback config file; not needed for local development
         file: logs/${logback.log.file}     # file the logs are written to
       # Message-broker settings used when logback ships logs via RabbitMQ
       logback:
         log:
           path: "./logs/"
           file: logback_amqp.log
         amqp:
           host: 10.225.225.225
           port: 5672
           username: admin
           password: admin
       <?xml version="1.0" encoding="UTF-8"?>
       <configuration scan="true" scanPeriod="60 seconds" debug="false">
           <include resource="org/springframework/boot/logging/logback/base.xml" />
           <contextName>logback</contextName>
           <springProperty scope="context" name="log.path" source="logback.log.path" />
           <springProperty scope="context" name="log.file" source="logback.log.file" />
           <springProperty scope="context" name="logback.amqp.host" source="logback.amqp.host"/>
           <springProperty scope="context" name="logback.amqp.port" source="logback.amqp.port"/>
           <springProperty scope="context" name="logback.amqp.username" source="logback.amqp.username"/>
           <springProperty scope="context" name="logback.amqp.password" source="logback.amqp.password"/>
           <appender name="stash-amqp" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
               <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
                   <providers>
                       <pattern>
                           <pattern>
                               {
                               "time": "%date{ISO8601}",
                               "thread": "%thread",
                               "level": "%level",
                               "class": "%logger{60}",
                               "message": "%message",
                               "application": "application"
                               }
                           </pattern>
                       </pattern>
                   </providers>
               </encoder>
               <host>${logback.amqp.host}</host>
               <port>${logback.amqp.port}</port>
               <username>${logback.amqp.username}</username>
               <password>${logback.amqp.password}</password>
               <declareExchange>true</declareExchange>
               <exchangeType>fanout</exchangeType>
               <exchangeName>ex_common_application_Log</exchangeName>
               <generateId>true</generateId>
               <charset>UTF-8</charset>
               <durable>true</durable>
               <deliveryMode>PERSISTENT</deliveryMode>
           </appender>
           <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
               <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
                   <pattern>%d{HH:mm:ss.SSS} %contextName [%thread] %-5level %logger{36} - %msg%n</pattern>
               </encoder>
           </appender>
           <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
               <file>${log.path}/${log.file}</file>
               <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
                   <fileNamePattern>${log.path}/%d{yyyy/MM}/${log.file}.%i.zip</fileNamePattern>
                   <MaxHistory>30</MaxHistory>
                   <totalSizeCap>5GB</totalSizeCap>
                   <maxFileSize>300MB</maxFileSize>
               </rollingPolicy>
               <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
                   <pattern>%d{HH:mm:ss.SSS} %contextName [%thread] %-5level %logger{36} - %msg%n</pattern>
               </encoder>
           </appender>
           <root level="info">
               <appender-ref ref="file" />
               <appender-ref ref="stash-amqp" />
           </root>
       </configuration>
    3. Integrate Elasticsearch with Spring Boot
       # elasticsearch settings in the application's yml file
       elasticsearch:
         protocol: http
         hostList: 10.225.225.225:9200 # elasticsearch cluster (a single node here)
         connectTimeout: 5000
         socketTimeout: 5000
         connectionRequestTimeout: 5000
         maxConnectNum: 10
         maxConnectPerRoute: 10
         username: # empty username
         password: # empty password

       The Elasticsearch configuration class

       package com.bierce;
       import org.apache.commons.lang3.StringUtils;
       import org.apache.http.HttpHost;
       import org.apache.http.auth.AuthScope;
       import org.apache.http.auth.UsernamePasswordCredentials;
       import org.apache.http.client.CredentialsProvider;
       import org.apache.http.client.config.RequestConfig.Builder;
       import org.apache.http.impl.client.BasicCredentialsProvider;
       import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
       import org.elasticsearch.client.RestClient;
       import org.elasticsearch.client.RestClientBuilder;
       import org.elasticsearch.client.RestClientBuilder.HttpClientConfigCallback;
       import org.elasticsearch.client.RestHighLevelClient;
       import org.elasticsearch.client.sniff.ElasticsearchHostsSniffer;
       import org.elasticsearch.client.sniff.HostsSniffer;
       import org.elasticsearch.client.sniff.SniffOnFailureListener;
       import org.elasticsearch.client.sniff.Sniffer;
       import org.springframework.beans.factory.annotation.Value;
       import org.springframework.context.annotation.Bean;
       import org.springframework.context.annotation.Configuration;
       /**
        * @ClassName: ElasticSearchConfiguration
        * @Description: ES configuration class
        */
       @Configuration
       public class ElasticSearchConfiguration {
           @Value("${elasticsearch.protocol}") // HTTP-based protocol
           private String protocol;
           @Value("${elasticsearch.hostList}") // cluster addresses, comma-separated when there are several
           private String hostList;
           @Value("${elasticsearch.connectTimeout}") // connect timeout
           private int connectTimeout;
           @Value("${elasticsearch.socketTimeout}") // socket timeout
           private int socketTimeout;
           @Value("${elasticsearch.connectionRequestTimeout}") // timeout for obtaining a connection from the pool
           private int connectionRequestTimeout;
           @Value("${elasticsearch.maxConnectNum}") // maximum total connections
           private int maxConnectNum;
           @Value("${elasticsearch.maxConnectPerRoute}") // maximum connections per route
           private int maxConnectPerRoute;
           @Value("${elasticsearch.username:}")
           private String username;
           @Value("${elasticsearch.password:}")
           private String password;
           // Configure the RestHighLevelClient.
           // When the Spring container shuts down, RestHighLevelClient#close must run to clean up,
           // hence destroyMethod = "close".
           @Bean(destroyMethod = "close")
           public RestHighLevelClient restHighLevelClient() {
               String[] split = hostList.split(",");
               HttpHost[] httpHostArray = new HttpHost[split.length];
               SniffOnFailureListener sniffOnFailureListener = new SniffOnFailureListener();
               // Split each cluster address into ip and port and collect them into the array
               for (int i = 0; i < split.length; i++) {
                   String hostName = split[i];
                   httpHostArray[i] = new HttpHost(hostName.split(":")[0], Integer.parseInt(hostName.split(":")[1]), protocol);
               }
               // Build the client and attach a failure listener to the RestClient instance
               RestClientBuilder builder = RestClient.builder(httpHostArray).setFailureListener(sniffOnFailureListener);
               // Async connection timeout settings
               builder.setRequestConfigCallback(new RestClientBuilder.RequestConfigCallback() {
                   @Override
                   public Builder customizeRequestConfig(Builder requestConfigBuilder) {
                       requestConfigBuilder.setConnectTimeout(connectTimeout);
                       requestConfigBuilder.setSocketTimeout(socketTimeout);
                       requestConfigBuilder.setConnectionRequestTimeout(connectionRequestTimeout);
                       return requestConfigBuilder;
                   }
               });
               // Connection authentication
               CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
               if (StringUtils.isNotBlank(username) && StringUtils.isNotBlank(password)) {
                   credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(username, password));
               }
               // Async connection-pool settings
               builder.setHttpClientConfigCallback(new HttpClientConfigCallback() {
                   @Override
                   public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                       httpClientBuilder.setMaxConnTotal(maxConnectNum);
                       httpClientBuilder.setMaxConnPerRoute(maxConnectPerRoute);
                       // Apply the credentials, if any
                       if (StringUtils.isNotBlank(username) && StringUtils.isNotBlank(password)) {
                           httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
                       }
                       return httpClientBuilder;
                   }
               });
               RestHighLevelClient restHighLevelClient = new RestHighLevelClient(builder);
               RestClient restClient = restHighLevelClient.getLowLevelClient();
               HostsSniffer hostsSniffer = new ElasticsearchHostsSniffer(restClient,
                       ElasticsearchHostsSniffer.DEFAULT_SNIFF_REQUEST_TIMEOUT,
                       ElasticsearchHostsSniffer.Scheme.HTTP);
               /* Sniffing on failure not only refreshes the node list after every failure, it also
                * schedules an extra, unplanned sniffing round: by default one minute after the
                * failure, on the assumption that the node will recover and we want to learn that
                * as early as possible. As noted above, that delay can be customized with
                * setSniffAfterFailureDelayMillis when building the Sniffer instance. Note that
                * this setting has no effect unless the failure listener is enabled.
                */
               Sniffer sniffer = Sniffer.builder(restClient)
                       .setSniffAfterFailureDelayMillis(30000)
                       .setHostsSniffer(hostsSniffer)
                       .build();
               // Hand the sniffer to the failure listener. The sniffer must stay open for the
               // client's lifetime; close it together with the client on shutdown.
               sniffOnFailureListener.setSniffer(sniffer);
               return restHighLevelClient;
           }
       }
    4. Search the relevant data through the ES API
       package com.bierce;
       /**
        * @ClassName: UserVisitInfo
        * @Description: Holds a user's system-visit information
        */
       public class UserVisitInfo {
           private String dayOfWeek; // day of the week
           private Long docCount;    // number of visits
           public UserVisitInfo() {
           }
           public UserVisitInfo(String dayOfWeek, Long docCount) {
               super();
               this.dayOfWeek = dayOfWeek;
               this.docCount = docCount;
           }
           public String getDayOfWeek() {
               return dayOfWeek;
           }
           public void setDayOfWeek(String dayOfWeek) {
               this.dayOfWeek = dayOfWeek;
           }
           public Long getDocCount() {
               return docCount;
           }
           public void setDocCount(Long docCount) {
               this.docCount = docCount;
           }
           @Override
           public String toString() {
               return "UserVisitInfo [dayOfWeek=" + dayOfWeek + ", docCount=" + docCount + "]";
           }
       }
       package com.bierce;
       import java.util.ArrayList;
       import java.util.List;
       import com.bierce.UserVisitInfo;
       import org.elasticsearch.action.search.SearchResponse;
       import org.elasticsearch.client.RestHighLevelClient;
       import org.elasticsearch.index.query.QueryBuilder;
       import org.elasticsearch.index.query.QueryBuilders;
       import org.elasticsearch.script.Script;
       import org.elasticsearch.search.aggregations.AggregationBuilders;
       import org.elasticsearch.search.aggregations.Aggregations;
       import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
       import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregationBuilder;
       import org.elasticsearch.search.builder.SearchSourceBuilder;
       import org.springframework.beans.factory.annotation.Autowired;
       import org.springframework.stereotype.Service;
       /**
        * @ClassName: VisitUserCountSearchTemplate
        * @Description: Measures user activity over different periods
        */
       @Service
       public class VisitUserCountSearchTemplate {
           private static final String INDEX_PREFIX = "user-visit-";
           @Autowired
           private RestHighLevelClient restHighLevelClient;
           /**
            * @Title: getUserActivityInfo
            * @Description: Fetches how actively users visited the system
            * @param startDate
            * @param endDate
            * @return
            */
           public List<UserVisitInfo> getUserActivityInfo(String startDate, String endDate) {
               SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
               searchSourceBuilder.size(0); // we only need the aggregation buckets, not the hits
               // Aggregation-query parameters
               String aggregationName = "timeslice";
               String rangeField = "@timestamp";
               String termField = "keyword";
               Script script = new Script("doc['@timestamp'].value.dayOfWeek");
               String[] ignoredAppCodes = {"AMQP", "Test"};
               QueryBuilder timeQueryBuilder = QueryBuilders.boolQuery()
                       .must(QueryBuilders.rangeQuery(rangeField).gte(startDate).lte(endDate))
                       .mustNot(QueryBuilders.termsQuery(termField, ignoredAppCodes))
                       .filter(QueryBuilders.existsQuery(termField));
               // Bucket the matching documents by day of the week
               HistogramAggregationBuilder dayOfWeekAggregationBuilder = AggregationBuilders.histogram(aggregationName)
                       .script(script).interval(1).extendedBounds(1, 7);
               searchSourceBuilder.query(timeQueryBuilder);
               searchSourceBuilder.aggregation(dayOfWeekAggregationBuilder);
               // ElasticsearchUtils is a project helper that executes the search against the given index pattern
               SearchResponse searchResponse = ElasticsearchUtils.buildSearchSource(INDEX_PREFIX + "*", searchSourceBuilder, restHighLevelClient);
               List<UserVisitInfo> userActivityInfoList = new ArrayList<>();
               Aggregations aggregations = searchResponse.getAggregations();
               Histogram dayOfWeekHistogram = aggregations.get(aggregationName);
               List<? extends Histogram.Bucket> buckets = dayOfWeekHistogram.getBuckets();
               for (Histogram.Bucket bucket : buckets) {
                   String dayOfWeek = bucket.getKeyAsString();
                   long docCount = bucket.getDocCount();
                   userActivityInfoList.add(new UserVisitInfo(dayOfWeek, docCount));
               }
               return userActivityInfoList;
           }
       }
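    For reference, the search this template builds corresponds roughly to the following query DSL, a sketch reconstructed by hand from the Java code that can be pasted into Kibana Dev Tools (the dates stand in for the startDate/endDate parameters):

```
GET /user-visit-*/_search
{
  "size": 0,
  "query": {
    "bool": {
      "must":     [ { "range":  { "@timestamp": { "gte": "2024-01-01", "lte": "2024-01-07" } } } ],
      "must_not": [ { "terms":  { "keyword": ["AMQP", "Test"] } } ],
      "filter":   [ { "exists": { "field": "keyword" } } ]
    }
  },
  "aggs": {
    "timeslice": {
      "histogram": {
        "script": "doc['@timestamp'].value.dayOfWeek",
        "interval": 1,
        "extended_bounds": { "min": 1, "max": 7 }
      }
    }
  }
}
```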
    5. Return the data to the front end for page rendering. The end result is a dashboard where the user's activity over different periods can be filtered and displayed by condition.
  • Original article: https://blog.csdn.net/qq_34020761/article/details/139716096