Loki does not do full-text indexing of logs. By storing compressed, unstructured logs and indexing only metadata, Loki is simpler and cheaper to operate. It indexes and groups log streams using the same labels as Prometheus, which makes logs more efficient to scale and operate. It is especially well suited to storing Kubernetes Pod logs: metadata such as Pod labels is automatically scraped and indexed.
A quick comparison of log collection stacks
| Name | Components installed | Advantages |
| --- | --- | --- |
| ELK/EFK | elasticsearch, logstash, kibana, filebeat, kafka/redis | Supports custom grok regex parsing of complex log content; dashboards offer rich visualizations |
| Loki | grafana, loki, promtail | Small resource footprint; native grafana support; fast queries |
- Official site: https://grafana.com/oss/loki/
- Documentation: https://grafana.com/docs/grafana/latest/features/datasources/loki/
- Git repository: https://github.com/grafana/loki/blob/master/docs/README.md
- Downloads: Releases · grafana/loki (github.com)
A brief introduction to Loki: Grafana Loki is a log aggregation tool, and it is the core of a fully featured logging stack. Loki is a data store optimized for holding log data efficiently. Its efficient indexing of log data is what sets Loki apart: unlike other logging systems, the Loki index is built from labels, and the raw log messages themselves are not indexed.
Install Loki:

```shell
yum install -y https://github.com/grafana/loki/releases/download/v2.9.8/loki-2.9.8.x86_64.rpm
```

Configuration file walkthrough (`/etc/loki/config.yml`):

```yaml
auth_enabled: false

server:
  http_listen_port: 3100   # HTTP listen port
  grpc_listen_port: 9096   # gRPC listen port

common:
  instance_addr: 192.168.186.100   # change to your own IP, or localhost
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks   # chunk storage directory; when a chunk reaches its
                                           # log limit or expires, its data is labeled and stored
      rules_directory: /tmp/loki/rules     # rules directory
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:            # query settings
  results_cache:        # results cache
    cache:
      embedded_cache:   # if left unconfigured Loki only prints a hint, so this is optional
        enabled: true
        max_size_mb: 100

schema_config:          # index configuration
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_  # index prefix
        period: 24h     # index period

ruler:
  alertmanager_url: http://localhost:9093

# By default, Loki will send anonymous, but uniquely-identifiable usage and configuration
# analytics to Grafana Labs. These statistics are sent to https://stats.grafana.org/
#
# Statistics help us better understand how Loki is used, and they show us performance
# levels for most users. This helps us prioritize features and documentation.
# For more information on what's sent, look at
# https://github.com/grafana/loki/blob/main/pkg/usagestats/stats.go
# Refer to the buildReport method to see what goes into a report.
#
# If you would like to disable reporting, uncomment the following lines:
#analytics:
#  reporting_enabled: false
```

Start the service:

```shell
systemctl enable --now loki
```
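Before wiring up promtail, it can be worth confirming that Loki's HTTP API answers. A minimal sketch, assuming Loki listens on the default port 3100; the test payload below is made up:

```shell
# Build a push payload for Loki's HTTP API; timestamps are nanosecond epoch strings.
ts=$(date +%s%N)
payload=$(printf '{"streams":[{"stream":{"job":"test"},"values":[["%s","hello loki"]]}]}' "$ts")
echo "$payload"

# Readiness check and push (uncomment once Loki is reachable):
# curl -s http://localhost:3100/ready
# curl -s -H 'Content-Type: application/json' -X POST \
#   --data "$payload" http://localhost:3100/loki/api/v1/push
```

If the push succeeds, the test line becomes queryable under the `{job="test"}` stream.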
Install promtail:

```shell
yum install -y https://github.com/grafana/loki/releases/download/v2.9.8/promtail-2.9.8.x86_64.rpm
```

Configuration file walkthrough (`/etc/promtail/config.yml`):

```yaml
# This minimal config scrape only single log file.
# Primarily used in rpm/deb packaging where promtail service can be started during system init process.
# And too much scraping during init process can overload the complete system.
# https://github.com/grafana/loki/issues/11398

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml   # records how far each log file has been read, so that
                                  # promtail resumes from the saved position after a restart

clients:
  - url: http://192.168.186.100:3100/loki/api/v1/push   # Loki API endpoint to push log streams to

scrape_configs:        # where to find log files and which labels to attach
  - job_name: system   # job name
    static_configs:    # static targets
      - targets:
          - localhost
        labels:
          job: varlogs   # sub-job name, usually named after the project
          # NOTE: Need to be modified to scrape any additional logs of the system.
          __path__: /var/log/messages   # log file(s) to read; wildcards such as /*log or /**/*.log are allowed
      - targets:
          - localhost
        labels:
          job: securelogs
          # NOTE: Need to be modified to scrape any additional logs of the system.
          __path__: /var/log/secure   # a different log file path
```

Grant the promtail user read access to the log files, then start the service:

```shell
setfacl -m u:promtail:r /var/log/secure
setfacl -m u:promtail:r /var/log/messages

systemctl enable --now promtail
# check promtail targets at http://IP:9080/targets
```

Install and start Grafana:

```shell
yum install -y https://dl.grafana.com/enterprise/release/grafana-enterprise-10.0.2-1.x86_64.rpm
systemctl enable --now grafana-server
```
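Instead of clicking through the UI, the Loki data source can also be registered through Grafana's HTTP API. A sketch, assuming the default admin/admin credentials and the addresses used in this setup:

```shell
# JSON body for creating a Loki data source (name/IP follow this article's setup).
ds='{"name":"Loki","type":"loki","url":"http://192.168.186.100:3100","access":"proxy"}'
echo "$ds" | python3 -m json.tool >/dev/null && echo "payload ok"

# Uncomment once Grafana is reachable:
# curl -s -u admin:admin -H 'Content-Type: application/json' \
#   -X POST --data "$ds" http://localhost:3000/api/datasources
```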
Visit Grafana at http://IP:3000 (default username and password: admin), then adjust the language and timezone.



LogQL label matching operators:

- `=`: exactly equal
- `!=`: not equal
- `=~`: regex matches
- `!~`: regex does not match

Filter expression examples:

```
{job="mysql"} |= "error"
{name="kafka"} |~ "tsdb-ops.*io:2003"
{instance=~"kafka-[23]",name="kafka"} != "kafka.server:type=ReplicaManager"
```

Multiple filters can be chained:

```
{job="mysql"} |= "error" != "timeout"
```

Currently supported line filter operators:

- `|=`: line contains the string
- `!=`: line does not contain the string
- `|~`: line matches the regular expression
- `!~`: line does not match the regular expression
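Chained line filters behave like piped greps, which helps when translating a grep habit into LogQL. For example, `{job="mysql"} |= "error" != "timeout"` keeps lines containing "error" but not "timeout", roughly equivalent to:

```shell
# Simulate |= "error" != "timeout" on a few sample lines (sample data is made up).
printf 'disk error\nerror: timeout\nall ok\n' | grep 'error' | grep -v 'timeout'
# -> disk error
```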
To make the logs easier to display in a Grafana dashboard, change the nginx log format to JSON:
```nginx
log_format json escape=json '{'
    '"remote_addr": "$remote_addr", '
    '"request_uri": "$request_uri", '
    '"request_length": "$request_length", '
    '"request_time": "$request_time", '
    '"request_method": "$request_method", '
    '"status": "$status", '
    '"body_bytes_sent": "$body_bytes_sent", '
    '"http_referer": "$http_referer", '
    '"http_user_agent": "$http_user_agent", '
    '"http_x_forwarded_for": "$http_x_forwarded_for", '
    '"http_host": "$http_host", '
    '"server_name": "$server_name", '
    '"upstream": "$upstream_addr", '
    '"upstream_response_time": "$upstream_response_time", '
    '"upstream_status": "$upstream_status", '
    #'"geoip_country_code": "$geoip2_data_country_code", '
    #'"geoip_country_name": "$geoip2_data_country_name", '
    #'"geoip_city_name": "$geoip2_data_city_name"'
    '}';
access_log /var/log/nginx/json_access.log json;
```
| Parameter | Description |
| --- | --- |
| remote_addr | Client IP address |
| request_uri | URI requested by the client |
| request_length | Length of the request |
| request_time | Request processing time |
| request_method | Request method (GET, POST, etc.) |
| status | HTTP response status code |
| body_bytes_sent | Bytes sent to the client |
| http_referer | Referer request header |
| http_user_agent | Client's User-Agent header |
| http_x_forwarded_for | X-Forwarded-For header (real client IP) |
| http_host | Host request header |
| server_name | Server name |
| upstream | Upstream server address |
| upstream_response_time | Upstream response time |
| upstream_status | Upstream HTTP status code |
| geoip_country_code | GeoIP country code (commented out) |
| geoip_country_name | GeoIP country name (commented out) |
| geoip_city_name | GeoIP city name (commented out) |
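Because `escape=json` is used, each access-log line should be one valid JSON object. A quick way to sanity-check a line (the sample field values below are made up):

```shell
# Verify that a sample line in the format above parses as JSON.
line='{"remote_addr": "10.0.0.1", "request_method": "GET", "request_uri": "/index.html", "status": "200"}'
echo "$line" | python3 -m json.tool >/dev/null && echo "valid json"
# Against a live server: tail -n1 /var/log/nginx/json_access.log | python3 -m json.tool
```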
Install promtail on the nginx node (k8s-node02):

```shell
yum install -y https://github.com/grafana/loki/releases/download/v2.9.8/promtail-2.9.8.x86_64.rpm
```

Its `/etc/promtail/config.yml`:

```yaml
# This minimal config scrape only single log file.
# Primarily used in rpm/deb packaging where promtail service can be started during system init process.
# And too much scraping during init process can overload the complete system.
# https://github.com/grafana/loki/issues/11398

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://192.168.186.100:3100/loki/api/v1/push

scrape_configs:
  - job_name: nginx
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginxlogs
          host: 192.168.186.100
          # NOTE: Need to be modified to scrape any additional logs of the system.
          __path__: /var/log/nginx/*.log
```

Note the log directory permissions, then restart:

```shell
setfacl -R -m u:promtail:rx /var/log/nginx/
systemctl restart promtail
```
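With JSON-formatted logs in place, promtail can also parse fields at scrape time via a `pipeline_stages` block. A sketch layered onto the config above; the choice of `status` and `request_method` as labels is illustrative only (keep label cardinality low in practice):

```yaml
scrape_configs:
  - job_name: nginx
    pipeline_stages:
      - json:
          expressions:    # extract fields from the JSON log line
            status: status
            method: request_method
      - labels:           # promote the extracted fields to Loki labels
          status:
          method:
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginxlogs
          host: 192.168.186.100
          __path__: /var/log/nginx/*.log
```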