[Cloud Native | Kubernetes Series] K8s Log Collection


    Log Collection

    The goal of this lab is to ship the access log and catalina log produced by a Tomcat pod in K8s to Kafka via filebeat, forward them from Kafka into Elasticsearch with logstash, and finally display the logs in Kibana.

    This lab involves quite a few servers; a kubeadm-based cluster could get by with fewer nodes.

    No.  Hostname        IP Address       Role
    1    k8s-master-01   192.168.31.101   k8s master node
    2    k8s-master-02   192.168.31.102   k8s master node
    3    k8s-master-03   192.168.31.103   k8s master node
    4    k8s-node-01     192.168.31.111   k8s node
    5    k8s-node-02     192.168.31.112   k8s node
    6    k8s-node-03     192.168.31.113   k8s node
    7    k8s-harbor      192.168.31.104   Harbor registry
    8    etcd-1          192.168.31.106   etcd node
    9    etcd-2          192.168.31.107   etcd node
    10   etcd-3          192.168.31.108   etcd node
    11   es-1            192.168.31.41    Elasticsearch node + Kibana
    12   es-2            192.168.31.42    Elasticsearch node
    13   es-3            192.168.31.43    Elasticsearch node
    14   logstash        192.168.31.126   logstash node
    15   zookeeper-1     192.168.31.121   Kafka + ZooKeeper node
    16   zookeeper-2     192.168.31.122   Kafka + ZooKeeper node
    17   zookeeper-3     192.168.31.123   Kafka + ZooKeeper node
    18   k8s-haprox-1    192.168.31.109   k8s HAProxy node
    19   k8s-haprox-2    192.168.31.110   k8s HAProxy node

    1. Preparation

    ZooKeeper deployment:
    https://blog.csdn.net/qq_29974229/article/details/126477955?spm=1001.2014.3001.5501
    Kafka deployment:
    https://blog.csdn.net/qq_29974229/article/details/126477994?spm=1001.2014.3001.5502
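
    Filebeat will publish to the topic wework-tomcat-app1. With default broker settings Kafka auto-creates the topic on first write; if auto-creation is disabled on your brokers, create it up front. A minimal sketch (the Kafka install path and the replication factor of 2 are assumptions; the partition count of 3 matches what the logstash consumer reports later):

    ## run on any kafka/zookeeper node
    /opt/kafka/bin/kafka-topics.sh --bootstrap-server 192.168.31.121:9092 \
      --create --topic wework-tomcat-app1 --partitions 3 --replication-factor 2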

    1.1 Locate the log file paths

    Run on k8s-master-01 or another master server.

    Enter the Tomcat pod whose logs we want to collect (the pod's configuration is covered in Chapter 10 of https://blog.csdn.net/qq_29974229/article/details/126250939?spm=1001.2014.3001.5502) and confirm exactly where the access log and catalina log are written.
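
    For example, with kubectl exec (the pod name will differ in your cluster):

    kubectl -n wework exec -it wework-tomcat-app1-deployment-d7f8488b8-s8qdx -- bash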

    [root@wework-tomcat-app1-deployment-d7f8488b8-s8qdx /]# ll /apps/tomcat/logs/catalina.out
    -rw-rw-r-- 1 nginx nginx 7864 Aug 22 09:27 /apps/tomcat/logs/catalina.out
    [root@wework-tomcat-app1-deployment-d7f8488b8-s8qdx /]# ll /apps/tomcat/logs/localhost_access_log.*.txt
    -rw-rw-r-- 1 nginx nginx 0 Aug 22 09:27 /apps/tomcat/logs/localhost_access_log.2022-08-22.txt
    [root@wework-tomcat-app1-deployment-d7f8488b8-s8qdx /]# cat /apps/tomcat/logs/localhost_access_log.*.txt
    192.168.31.111 - - [22/Aug/2022:09:36:24 +0800] "GET / HTTP/1.1" 404 1078
    192.168.31.111 - - [22/Aug/2022:09:36:27 +0800] "GET / HTTP/1.1" 404 1078
    172.100.76.128 - - [22/Aug/2022:09:38:53 +0800] "GET / HTTP/1.1" 404 1078
    172.100.76.128 - - [22/Aug/2022:09:38:53 +0800] "GET /favicon.ico HTTP/1.1" 404 1078
    

    1.2 Update the image

    Write the access log and catalina log paths we just confirmed into filebeat.yml, which the filebeat inside the image uses for forwarding:
    catalina entries are tagged with type: tomcat-catalina
    access log entries are tagged with type: tomcat-accesslog
    output goes to Kafka at 192.168.31.121:9092
    the topic is wework-tomcat-app1
    filebeat.yml is as follows (a quick validation sketch follows the config):

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /apps/tomcat/logs/catalina.out
      fields:
        type: tomcat-catalina
    - type: log
      enabled: true
      paths:
        - /apps/tomcat/logs/localhost_access_log.*.txt 
      fields:
        type: tomcat-accesslog
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    setup.template.settings:
      index.number_of_shards: 1
    setup.kibana:
    
    output.kafka:
      hosts: ["192.168.31.121:9092"]
      required_acks: 1
      topic: "wework-tomcat-app1"
      compression: gzip
      max_message_bytes: 1000000
    
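    Before baking filebeat.yml into the image, filebeat can validate the file and test connectivity to the Kafka output. A quick sanity check, assuming filebeat 7.x is available on the host where you edit the config:

    ## exits non-zero on syntax errors
    filebeat test config -c filebeat.yml
    ## attempts a real connection to 192.168.31.121:9092
    filebeat test output -c filebeat.yml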

    The Dockerfile adds filebeat.yml to the image, and run_tomcat.sh starts filebeat from a shell, which is what ultimately gives us log collection and forwarding inside the container.

    FROM harbor.intra.com/pub-images/tomcat-base:v8.5.43
    
    ADD catalina.sh /apps/tomcat/bin/catalina.sh
    ADD server.xml /apps/tomcat/conf/server.xml
    #ADD myapp/* /data/tomcat/webapps/myapp/
    ADD app1.tar.gz /data/tomcat/webapps/myapp/
    ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
    ADD filebeat.yml /etc/filebeat/filebeat.yml 
    RUN chown  -R nginx.nginx /data/ /apps/
    #ADD filebeat-7.5.1-x86_64.rpm /tmp/
    #RUN cd /tmp && yum localinstall -y filebeat-7.5.1-x86_64.rpm
    
    EXPOSE 8080 8443
    
    CMD ["/apps/tomcat/bin/run_tomcat.sh"]
    

    run_tomcat.sh

    root@k8s-master-01:/opt/k8s-data/dockerfile/web/wework/tomcat-app1# cat run_tomcat.sh 
    #!/bin/bash
    # start filebeat in the background to collect and forward the Tomcat logs
    /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
    # run Tomcat as the nginx user
    su - nginx -c "/apps/tomcat/bin/catalina.sh start"
    # keep a foreground process alive so the container does not exit
    tail -f /etc/hosts
    

    The directory now contains the following files:

    root@k8s-master-01:/opt/k8s-data/dockerfile/web/wework/tomcat-app1# ll
    total 23584
    drwxr-xr-x  2 root root     4096 Aug 22 16:13 ./
    drwxr-xr-x 10 root root     4096 Aug 12 14:21 ../
    -rw-r--r--  1 root root      143 Aug  8 15:29 app1.tar.gz
    -rwxr-xr-x  1 root root      145 Aug  8 15:32 build-command.sh*
    -rwxr-xr-x  1 root root    23611 Jun 22  2021 catalina.sh*
    -rw-r--r--  1 root root      531 Aug 22 14:59 Dockerfile
    -rw-r--r--  1 root root 24086235 Jun 22  2021 filebeat-7.5.1-x86_64.rpm
    -rw-r--r--  1 root root      727 Aug 22 16:13 filebeat.yml
    -rwxr-xr-x  1 root root      371 Aug 22 14:58 run_tomcat.sh*
    -rw-r--r--  1 root root     6462 Oct 10  2021 server.xml
    

    Rebuild the image and tag the new build v2; rolling it out then only requires bumping the version in the deployment yaml.
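
    build-command.sh itself is not shown in this post; judging from the output below, it is most likely the usual build-and-push wrapper, roughly:

    #!/bin/bash
    ## hypothetical reconstruction -- the real script may differ
    TAG=$1
    docker build -t harbor.intra.com/wework/tomcat-app1:${TAG} .
    docker push harbor.intra.com/wework/tomcat-app1:${TAG}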

    root@k8s-master-01:/opt/k8s-data/dockerfile/web/wework/tomcat-app1# ./build-command.sh v2
    Sending build context to Docker daemon  24.13MB
    Step 1/9 : FROM harbor.intra.com/pub-images/tomcat-base:v8.5.43
     ---> 8ea246a48b19
    Step 2/9 : ADD catalina.sh /apps/tomcat/bin/catalina.sh
     ---> Using cache
     ---> cea5baadac4d
    Step 3/9 : ADD server.xml /apps/tomcat/conf/server.xml
     ---> Using cache
     ---> 58f377ffd9bb
    Step 4/9 : ADD app1.tar.gz /data/tomcat/webapps/myapp/
     ---> Using cache
     ---> 22022b6ad43b
    Step 5/9 : ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
     ---> 510136ee16a5
    Step 6/9 : ADD filebeat.yml /etc/filebeat/filebeat.yml
     ---> 367ef4b0d006
    Step 7/9 : RUN chown  -R nginx.nginx /data/ /apps/
     ---> Running in 17ac97ba2364
    Removing intermediate container 17ac97ba2364
     ---> 0d48b4ae2e4f
    Step 8/9 : EXPOSE 8080 8443
     ---> Running in 9ca69be946a5
    Removing intermediate container 9ca69be946a5
     ---> 2d118e7b8eee
    Step 9/9 : CMD ["/apps/tomcat/bin/run_tomcat.sh"]
     ---> Running in e4c5ea6f6abf
    Removing intermediate container e4c5ea6f6abf
     ---> 56f4aa923c24
    Successfully built 56f4aa923c24
    Successfully tagged harbor.intra.com/wework/tomcat-app1:v2
    The push refers to repository [harbor.intra.com/wework/tomcat-app1]
    10bfa2a51096: Pushed 
    027a7241542a: Pushed 
    580e043b3292: Pushed 
    14f65bcfbf17: Layer already exists 
    524d0b6013b3: Layer already exists 
    e03b1f42acaa: Layer already exists 
    dd8f6a0cdeaa: Layer already exists 
    3447904f79c4: Layer already exists 
    7adc429e9dda: Layer already exists 
    aadaa9679cb8: Layer already exists 
    fc305a4ba468: Layer already exists 
    ab93afc6a659: Layer already exists 
    d7f831641e18: Layer already exists 
    f4b52134c525: Layer already exists 
    0533300cca03: Layer already exists 
    30a12549c4a3: Layer already exists 
    ce1fb445c72c: Layer already exists 
    174f56854903: Layer already exists 
    v2: digest: sha256:d40be883f8d82991ab340a183dd5560deba9099822423e371d56c53ae04e5a29 size: 4086
    root@k8s-master-01:/opt/k8s-data/dockerfile/web/wework/tomcat-app1# docker images |grep tomcat-app1
    harbor.intra.com/wework/tomcat-app1                      v2              56f4aa923c24   7 seconds ago   1.53GB
    harbor.intra.com/wework/tomcat-app1                      v1              87152ed32f8c   2 weeks ago     1.53GB
    

    Start a container from the image to check that it comes up and connects cleanly:

    docker run -it --rm harbor.intra.com/wework/tomcat-app1:v2
    

    1.3 Install Elasticsearch

    The deb packages are installed directly here.

    ## install on all 3 es nodes
    dpkg -i elasticsearch-7.12.1-amd64.deb
    ## install on es-1 only
    dpkg -i kibana-7.6.2-amd64.deb
    mkdir /elasticsearch/{logs,data} -p
    chown elasticsearch.elasticsearch -R /elasticsearch
    

    Elasticsearch config file: /etc/elasticsearch/elasticsearch.yml.
    Note that node.name must be different on each of the 3 servers.
    Keep path.data and path.logs out of the default /tmp, to avoid losing data to an accidental cleanup.

    cluster.name: pana-elk-cluster1
    ## node.name must be unique across the 3 servers
    node.name: es1
    ## data persistence directories
    path.data: /elasticsearch/data
    path.logs: /elasticsearch/logs
    ## this node's own IP; set per server
    network.host: 192.168.31.41
    http.port: 9200
    ## list all 3 cluster members
    discovery.seed_hosts: ["192.168.31.41", "192.168.31.42", "192.168.31.43"]
    ## list all 3 cluster members
    cluster.initial_master_nodes: ["192.168.31.41", "192.168.31.42", "192.168.31.43"]
    ## number of nodes in the cluster / 2 + 1
    gateway.recover_after_nodes: 2
    action.destructive_requires_name: true
    

    Kibana config file: /etc/kibana/kibana.yml.
    Point elasticsearch.hosts at the cluster on port 9200.
    Set i18n.locale to zh-CN if you want the Chinese UI.

    server.port: 5601
    server.host: "192.168.31.41"
    elasticsearch.hosts: ["http://192.168.31.41:9200"]
    i18n.locale: "zh-CN"
    

    Start Elasticsearch and Kibana. Kibana is slow to come up, so you can get on with other configuration in the meantime.

    systemctl restart elasticsearch.service
    systemctl start kibana
    
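    Once Elasticsearch is up, the cluster health API is a quick way to confirm that all three nodes have joined; any node will answer:

    ## expect "status" : "green" and "number_of_nodes" : 3
    curl -s http://192.168.31.41:9200/_cluster/health?pretty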

    1.4 Install logstash

    Install on the logstash server:

    dpkg -i logstash-7.12.1-amd64.deb
    

    2. Log Collection

    2.1 Redeploy the Tomcat service

    Run on a master server.

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        app: wework-tomcat-app1-deployment-label
      name: wework-tomcat-app1-deployment
      namespace: wework
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: wework-tomcat-app1-selector
      template:
        metadata:
          labels:
            app: wework-tomcat-app1-selector
        spec:
          containers:
          - name: wework-tomcat-app1-container
            image: harbor.intra.com/wework/tomcat-app1:v2
            ports:
            - containerPort: 8080
              protocol: TCP
              name: http
            env:
            - name: "password"
              value: "123456"
            - name: "age"
              value: "18"
            resources:
              limits:
                cpu: 1
                memory: "512Mi"
              requests:
                cpu: 500m
                memory: "512Mi"
            volumeMounts:
            - name: wework-images
              mountPath: /usr/local/nginx/html/webapp/images
              readOnly: false
            - name: wework-static
              mountPath: /usr/local/nginx/html/webapp/static
              readOnly: false
          volumes:
          - name: wework-images
            nfs:
              server: 192.168.31.109
              path: /data/k8s/wework/images
          - name: wework-static
            nfs:
              server: 192.168.31.104
              path: /data/k8s/wework/static
    ---
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        app: wework-tomcat-app1-service-label
      name: wework-tomcat-app1-service
      namespace: wework
    spec:
      type: NodePort
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: 8080
        nodePort: 30092
      selector:
        app: wework-tomcat-app1-selector
    

    Redeploy Tomcat with the v2 image we just built:

    root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl apply -f tomcat-app1.yaml 
    deployment.apps/wework-tomcat-app1-deployment configured
    service/wework-tomcat-app1-service unchanged
    
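    Optionally, watch the rollout finish before poking at the pod:

    kubectl -n wework rollout status deployment/wework-tomcat-app1-deployment
    kubectl -n wework get pods -o wide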

    Check that filebeat started properly inside the tomcat-app1 container:

    [root@wework-tomcat-app1-deployment-5b776b7f4c-b7k86 /]# ps -ef |grep filebeat
    root          7      1  0 16:17 ?        00:00:00 /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
    root        103     83  0 16:20 pts/0    00:00:00 grep --color=auto filebeat
    [root@wework-tomcat-app1-deployment-5b776b7f4c-b7k86 /]# tail -f /apps/tomcat/logs/localhost_access_log.*.txt
    172.100.76.128 - - [22/Aug/2022:17:09:09 +0800] "GET /myapp/ HTTP/1.1" 200 23
    172.100.76.128 - - [22/Aug/2022:17:09:12 +0800] "GET /myapp/ HTTP/1.1" 200 23
    172.100.76.128 - - [22/Aug/2022:17:09:15 +0800] "GET /myapp/ HTTP/1.1" 200 23
    172.100.76.128 - - [22/Aug/2022:17:09:18 +0800] "GET /myapp/ HTTP/1.1" 200 23
    172.100.76.128 - - [22/Aug/2022:17:09:21 +0800] "GET /myapp/ HTTP/1.1" 200 23
    172.100.76.128 - - [22/Aug/2022:17:09:24 +0800] "GET /myapp/ HTTP/1.1" 200 23
    172.100.76.128 - - [22/Aug/2022:17:09:27 +0800] "GET /myapp/ HTTP/1.1" 200 23
    172.100.76.128 - - [22/Aug/2022:17:09:30 +0800] "GET /myapp/ HTTP/1.1" 200 23
    172.100.76.128 - - [22/Aug/2022:17:09:33 +0800] "GET /myapp/ HTTP/1.1" 200 23
    172.100.76.128 - - [22/Aug/2022:17:09:36 +0800] "GET /myapp/ HTTP/1.1" 200 23
    172.100.76.128 - - [22/Aug/2022:17:09:39 +0800] "GET /myapp/ HTTP/1.1" 200 23
    172.100.76.128 - - [22/Aug/2022:17:09:42 +0800] "GET /myapp/ HTTP/1.1" 200 23
    

    At this point the data should be visible from a Kafka client. If nothing shows up, check the image configuration and whether the logs are actually being generated inside the container.
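
    For example, with the console consumer that ships with Kafka (the install path is an assumption):

    ## each filebeat event arrives as one JSON document
    /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.31.121:9092 \
      --topic wework-tomcat-app1 --from-beginning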


    2.2 Configure logstash

    On the logstash server.

    Edit the logstash pipeline config:

    vi /etc/logstash/conf.d/kafka-to-es.conf
    

    The input reads from Kafka; the output writes to the Elasticsearch cluster.

    input {
      kafka {
        bootstrap_servers => "192.168.31.121:9092,192.168.31.122:9092,192.168.31.123:9092"
        topics => ["wework-tomcat-app1"]
        codec => "json"
      }
    }
    
    output {
      if [fields][type] == "tomcat-accesslog" {
        elasticsearch {
          hosts => ["192.168.31.41:9200","192.168.31.42:9200","192.168.31.43:9200"]
          index => "wework-tomcat-app1-accesslog-%{+YYYY.MM.dd}"
        }
      }
    
      if [fields][type] == "tomcat-catalina" {
        elasticsearch {
          hosts => ["192.168.31.41:9200","192.168.31.42:9200","192.168.31.43:9200"]
          index => "wework-tomcat-app1-catalinalog-%{+YYYY.MM.dd}"
        }
      }
    }
    
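    Before starting the service, logstash can dry-run the pipeline config. This only checks syntax, so it will not catch wrong addresses like the mix-up described next, but it rules out typos early:

    /usr/share/logstash/bin/logstash --config.test_and_exit \
      -f /etc/logstash/conf.d/kafka-to-es.conf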

    The first time around I had the Kafka and ES clusters swapped and kept getting the errors below; once the addresses were corrected everything was fine. Normally the log should contain only INFO messages.
    Log location: /var/log/logstash/logstash-plain.log

    [2022-08-23T00:37:29,160][WARN ][org.apache.kafka.clients.NetworkClient][main][c03144fe3d097cca98814f7486eb4f3a0283b8964384dc70c6da2db816938794] [Consumer clientId=logstash-0, groupId=logstash] Connection to node -1 (/192.168.31.41:9092) could not be established. Broker may not be available.
    
    [2022-08-23T00:40:43,188][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://192.168.31.121:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://192.168.31.121:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
    

    2.3 Start logstash

    root@logstash:~# systemctl restart logstash
    

    Confirm it started successfully:

    root@logstash:~# tail -f /var/log/logstash/logstash-plain.log
    [2022-08-23T00:47:26,008][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator][main][262898fd5d4fc97f22b1ff57075d8840de87647f79e44b3e30251b3afecb47b1] [Consumer clientId=logstash-0, groupId=logstash] Found no committed offset for partition wework-tomcat-app1-0
    [2022-08-23T00:47:26,010][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator][main][262898fd5d4fc97f22b1ff57075d8840de87647f79e44b3e30251b3afecb47b1] [Consumer clientId=logstash-0, groupId=logstash] Found no committed offset for partition wework-tomcat-app1-2
    [2022-08-23T00:47:26,011][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator][main][262898fd5d4fc97f22b1ff57075d8840de87647f79e44b3e30251b3afecb47b1] [Consumer clientId=logstash-0, groupId=logstash] Found no committed offset for partition wework-tomcat-app1-1
    [2022-08-23T00:47:26,040][INFO ][org.apache.kafka.clients.consumer.internals.SubscriptionState][main][262898fd5d4fc97f22b1ff57075d8840de87647f79e44b3e30251b3afecb47b1] [Consumer clientId=logstash-0, groupId=logstash] Resetting offset for partition wework-tomcat-app1-1 to offset 529.
    [2022-08-23T00:47:26,060][INFO ][org.apache.kafka.clients.consumer.internals.SubscriptionState][main][262898fd5d4fc97f22b1ff57075d8840de87647f79e44b3e30251b3afecb47b1] [Consumer clientId=logstash-0, groupId=logstash] Resetting offset for partition wework-tomcat-app1-2 to offset 447.
    [2022-08-23T00:47:26,063][INFO ][org.apache.kafka.clients.consumer.internals.SubscriptionState][main][262898fd5d4fc97f22b1ff57075d8840de87647f79e44b3e30251b3afecb47b1] [Consumer clientId=logstash-0, groupId=logstash] Resetting offset for partition wework-tomcat-app1-0 to offset 447.
    

    Open es-head and confirm the data has arrived.
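
    If es-head is not handy, the _cat API gives the same confirmation:

    ## expect one index per log type per day, e.g. wework-tomcat-app1-accesslog-2022.08.22
    curl -s 'http://192.168.31.41:9200/_cat/indices/wework-tomcat-app1-*?v'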

    3. Import the Indices into Kibana

    Create index patterns for the accesslog and catalina indices in turn.

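    The index patterns can also be created without clicking through the UI, via Kibana's saved-objects API (a sketch for this Kibana 7.x; the @timestamp time field is what logstash stamps on each event, not something shown in the post):

    ## repeat with wework-tomcat-app1-catalinalog-* for the second pattern
    curl -X POST 'http://192.168.31.41:5601/api/saved_objects/index-pattern' \
      -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
      -d '{"attributes":{"title":"wework-tomcat-app1-accesslog-*","timeFieldName":"@timestamp"}}'
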
    This completes the K8s log collection setup.
