[root@host3 ~]# vim 04-nginx-to-es.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log*
  tags: ["host3","nginx","access"]     # tag the events so they can be matched at the output stage
- type: log
  paths:
    - /var/log/nginx/error.log*
  tags: ["host3","nginx","error"]
  fields:
    hostname: host3.test.com
    service: http
  fields_under_root: true              # promote the keys under fields to the top level of the event
output.elasticsearch:
  hosts: ["http://192.168.19.101:9200","http://192.168.19.102:9200","http://192.168.19.103:9200"]
  indices:
    - index: "host3-nginx-access-%{+yyyy.MM.dd}"
      when.contains:
        tags: "access"
    - index: "host3-nginx-error-%{+yyyy.MM.dd}"
      when.contains:
        tags: "error"
# Disable index lifecycle management; if it is enabled, the indices settings above are ignored
setup.ilm.enabled: false
# Name of the index template
setup.template.name: "host3-nginx"
# Pattern the index template matches
setup.template.pattern: "host3-nginx-*"
# Whether to overwrite an existing index template
setup.template.overwrite: false
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 1
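After restarting filebeat and generating some traffic, the conditional routing can be confirmed by listing the matching indices; _cat/indices is a standard Elasticsearch API, queried here against the cluster addresses used throughout this example:
[root@host3 ~]# curl -s 'http://192.168.19.101:9200/_cat/indices/host3-nginx-*?v'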
With this approach, the logs coming out of nginx are already split into fields; filebeat only has to pick them up, extract the JSON, and send it to Elasticsearch.
First, modify the input side by adjusting nginx's log output format:
[root@host3 ~]# vim /etc/nginx/nginx.conf
http {
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    log_format filebeat_json '{"@timestamp":"$time_iso8601",'
                             '"host":"$server_addr",'
                             '"clientip":"$remote_addr",'
                             '"size":$body_bytes_sent,'
                             '"responsetime":$request_time,'
                             '"upstreamtime":"$upstream_response_time",'
                             '"upstreamhost":"$upstream_addr",'
                             '"http_host":"$host",'
                             '"uri":"$uri",'
                             '"domain":"$host",'
                             '"xff":"$http_x_forwarded_for",'
                             '"referer":"$http_referer",'
                             '"tcp_xff":"$proxy_protocol_addr",'
                             '"http_user_agent":"$http_user_agent",'
                             '"status":"$status"}';

    #access_log  /var/log/nginx/access.log  main;
    access_log  /var/log/nginx/access.log  filebeat_json;
    ...
}
Test whether the new log format is valid:
[root@host3 ~]# tail -n1 /var/log/nginx/access.log
{"@timestap":"2022-09-12T17:09:39+08:00","host":"192.168.19.103","clientip":"192.168.19.1","size":0,"responstime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.19.103","uri":"/index.html","domain":"192.168.19.103","xff":"-","referer":"-","tcp_xff":"-","http_user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36","status":"304"}
The configuration file from the example above would work, but the collected lines would still be dumped wholesale into the message field, which defeats the purpose of reformatting. We therefore need to break the message up into individual key-value pairs before it is stored in Elasticsearch, so that any field can be queried on its own in Kibana.
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log*
  tags: ["host3","nginx","access"]
  json.keys_under_root: true     # lift the decoded JSON keys to the top level of the event
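Two further JSON options of the log input pair well with keys_under_root; these are standard filebeat options, added here as an optional sketch rather than part of the original file:
  json.overwrite_keys: true    # decoded keys overwrite fields filebeat added itself (e.g. host) on conflict
  json.add_error_key: true     # attach an error.message field instead of silently failing on malformed lines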
View in Kibana:
Alternatively, a filebeat module can split the logs into JSON without modifying nginx's native log format.
Where the modules are stored:
[root@host3 ~]# ls /etc/filebeat/modules.d/
activemq.yml.disabled haproxy.yml.disabled osquery.yml.disabled
apache.yml.disabled ibmmq.yml.disabled panw.yml.disabled
auditd.yml.disabled icinga.yml.disabled pensando.yml.disabled
awsfargate.yml.disabled iis.yml.disabled postgresql.yml.disabled
aws.yml.disabled imperva.yml.disabled proofpoint.yml.disabled
Modify the configuration file:
# Remove all of the previous input content
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml   # path to the module config files; path.config is filebeat's configuration directory
  reload.enabled: true                    # enable hot reloading of module configs; the default is false
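Running filebeat in the foreground with -e (log to stderr) is a quick way to confirm that the module configuration loads cleanly:
[root@host3 ~]# filebeat -e -c ~/05-nginx-to-es.yml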
List all available modules:
[root@host3 ~]# filebeat -c ~/05-nginx-to-es.yml modules list
Enabled:
Disabled:
activemq
apache
auditd
aws
...
Enable the nginx module:
[root@host3 ~]# filebeat -c ~/05-nginx-to-es.yml modules enable nginx
Enabled nginx
[root@host3 ~]# filebeat -c ~/05-nginx-to-es.yml modules list
Enabled:
nginx
Once a module is enabled, its corresponding configuration file needs to be filled in:
[root@host3 ~]# vim /etc/filebeat/modules.d/nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/access.log*"]
  # Error logs
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]
View in Kibana:
The same mechanism works for tomcat. Enable the tomcat module:
[root@host3 ~]# filebeat -c ~/06-tomcat-to-es.yml modules enable tomcat
Enabled tomcat
[root@host3 ~]# vim /etc/filebeat/modules.d/tomcat.yml
- module: tomcat
  log:
    enabled: true
    # Set which input to use between udp (default), tcp or file.
    var.input: file
    #var.syslog_host: localhost
    #var.syslog_port: 9501
    # Set paths for the log files when file input is used.
    var.paths:
      - /var/log/tomcat/*.txt
Alternatively, adjust tomcat's native access log to emit JSON:
[root@host3 ~]# vim /etc/tomcat/server.xml
<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true">
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log." suffix=".txt"
pattern="{"requestTime":"%t","clientIP":"%h","threadID":"%I","protocol":"%H","requestMethod":"%r","requestStatus":"%s","sendBytes":"%b","queryString":"%q","responseTime":"%Dms","partner":"%{Referer}i","agentVersion":"%{User-Agent}i"}"
/>
[root@host3 ~]# tail -n1 /var/log/tomcat/localhost_access_log2022-09-13.txt
{"requestTime":"[13/Sep/2022:15:13:07 +0800]","clientIP":"192.168.19.102","threadID":"http-bio-8080-exec-1","protocol":"HTTP/1.1","requestMethod":"GET / HTTP/1.1","requestStatus":"404","sendBytes":"-","queryString":"","responseTime":"0ms","partner":"-","agentVersion":"curl/7.29.0"}
View in Kibana:
Field reference (AccessLogValve pattern codes):
%a - remote IP address of the client
%A - local server IP address
%b - bytes sent, excluding HTTP headers; shown as '-' if zero
%B - bytes sent, excluding HTTP headers
%h - remote host name, or the IP address if resolveHosts is false
%H - request protocol, e.g. HTTP/1.1
%l - remote logical username from identd (always returns '-')
%m - request method (GET, POST, ...)
%p - local port on which the request was received
%q - query string, prefixed with '?' if one exists; for aaa.jsp?bbb=ccc this is ?bbb=ccc
%r - first line of the request (method and request URI)
%s - HTTP status code of the response
%S - user session ID (a new one is generated per session)
%t - date and time of the request
%u - remote user that was authenticated, or '-' if none
%U - requested URL path, e.g. /rightmainima/leftbott4.swf
%v - local server name, e.g. localhost
%D - time taken to process the request, in milliseconds
%T - time taken to process the request, in seconds
Tomcat error logs often carry a large amount of information spread across multiple lines, such as a stack trace. When those lines are collected one by one and shown as separate events in Kibana, the log becomes hard to read. Take the following error log:
13-Sep-2022 10:15:15 PM org.apache.tomcat.util.digester.Digester fatalError
SEVERE: Parse Fatal Error at line 142 column 13: The end-tag for element type "Engine" must end with a '>' delimiter.
org.xml.sax.SAXParseException; systemId: file:/usr/share/tomcat/conf/server.xml; lineNumber: 142; columnNumber: 13; The end-tag for element type "Engine" must end with a '>' delimiter.
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:203)
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:400)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:327)
at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1472)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1755)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2967)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1576)
at org.apache.catalina.startup.Catalina.load(Catalina.java:616)
at org.apache.catalina.startup.Catalina.start(Catalina.java:681)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:294)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:428)
We therefore need multiline matching to merge a log entry that spans multiple lines back into a single event:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/tomcat/catalina.*.log
  tags: ["host3","tomcat","error"]
  multiline.type: pattern
  multiline.pattern: '^\d{2}'    # a new entry starts with two digits (the day in the timestamp)
  multiline.negate: true         # lines that do NOT match the pattern ...
  multiline.match: after         # ... are appended to the preceding matching line
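With this pattern, the "at ..." lines of the stack trace do not start with two digits, so they are glued onto the preceding timestamped line and the whole trace becomes one event. The syntax can be sanity-checked before restarting with filebeat's built-in config test (assuming the snippet above was saved under a name such as ~/07-tomcat-to-es.yml, following this article's numbering):
[root@host3 ~]# filebeat test config -c ~/07-tomcat-to-es.yml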
The filestream input reads lines from active log files. It is the successor to the log input and brings a number of improvements over it:
filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /var/log/nginx/access.log*
  parsers:
    - ndjson:                    # decode JSON-formatted log lines
        keys_under_root: true
        message_key: msg
    - multiline:                 # multiline matching; the syntax mirrors the log input's
        type: pattern
        pattern: '^\d{2}'
        negate: true
        match: after
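One caveat not shown above: newer filebeat releases expect every filestream input to carry a unique, stable id, which is used to track reading offsets across restarts; for example:
- type: filestream
  id: nginx-access     # any string works, as long as it is unique per input and never changes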
Filebeat can also act as a TCP listener. In this mode, logs produced on multiple servers are sent to a single server, where filebeat collects them and forwards them to Elasticsearch:
filebeat.inputs:
- type: tcp
  max_message_size: 10MiB    # maximum size of a received message; the default is 20MiB
  host: "0.0.0.0:9000"       # address and port to listen on
Besides printing to the console and sending to Elasticsearch, filebeat also supports a file output, which stores the collected logs on local disk.
filebeat.inputs:
- type: tcp
  max_message_size: 20MiB
  host: "0.0.0.0:9000"
output.file:
  path: "/tmp/filebeat"      # the directory is created automatically
  filename: filebeat
  rotate_every_kb: 20000     # rotation size in KB; the default is 10240 (10MB)
  number_of_files: 10        # number of rotated files to keep; the default is 7
  permissions: 0644          # file permissions; the default is 0600
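After sending a few test lines, the result can be inspected directly on disk; rotated files get a numeric suffix, though the exact naming depends on the filebeat version:
[root@host3 ~]# ls -l /tmp/filebeat/
[root@host3 ~]# tail -n1 /tmp/filebeat/filebeat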
output.redis:
  hosts: ["192.168.19.101:6379"]
  password: "bruce123"     # redis password
  key: "filebeat-test"     # key the events are pushed to
  db: 5                    # redis database number
  timeout: 5               # connection timeout in seconds
The resulting data in redis is a list.
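This can be verified with redis-cli using the credentials from the config above; LLEN and LRANGE are standard redis list commands:
[root@host3 ~]# redis-cli -h 192.168.19.101 -a bruce123 -n 5 LLEN filebeat-test
[root@host3 ~]# redis-cli -h 192.168.19.101 -a bruce123 -n 5 LRANGE filebeat-test 0 0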