【Cloud Native | Kubernetes Series】-- Envoy xDS Subscription over gRPC


    1. Subscription over gRPC

    LDS configuration format

    dynamic_resources:
      lds_config:
        resource_api_version: ... # resource API version; V3
        api_config_source:
          api_type: ... # must be set explicitly to one of REST, GRPC, or DELTA_GRPC
          transport_api_version: ... # API version used by the xDS transport protocol; V3
          rate_limit_settings: {...} # optional rate limiting
          grpc_services: # one or more gRPC service sources
          - envoy_grpc: # Envoy's built-in gRPC client; envoy_grpc and google_grpc are mutually exclusive
              cluster_name: ... # name of the cluster hosting the gRPC management server
            google_grpc: # Google's C++ gRPC client
            timeout: # timeout
    

    2. Dynamic configuration over gRPC

    2.1 docker-compose.yaml

    Six services:

    • envoy: the front proxy, at 172.31.15.2
    • webserver01: the first backend service
    • webserver01-sidecar: sidecar proxy for the first backend service, at 172.31.15.11
    • webserver02: the second backend service
    • webserver02-sidecar: sidecar proxy for the second backend service, at 172.31.15.12
    • xdsserver: the xDS management server, at 172.31.15.5

    The xdsserver service loads its configuration from ./resources, i.e. the contents of resources/config.yaml.

    version: '3.3'
    
    services:
      envoy:
        image: envoyproxy/envoy-alpine:v1.21.5
        environment:
          - ENVOY_UID=0
          - ENVOY_GID=0
        volumes:
        - ./front-envoy.yaml:/etc/envoy/envoy.yaml
        networks:
          envoymesh:
            ipv4_address: 172.31.15.2
            aliases:
            - front-proxy
        depends_on:
        - webserver01
        - webserver02
        - xdsserver
    
      webserver01:
        image: ikubernetes/demoapp:v1.0
        environment:
          - PORT=8080
          - HOST=127.0.0.1
        network_mode: "service:webserver01-sidecar"
        depends_on:
        - webserver01-sidecar
    
      webserver01-sidecar:
        image: envoyproxy/envoy-alpine:v1.21.5
        volumes:
        - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
        hostname: webserver01
        networks:
          envoymesh:
            ipv4_address: 172.31.15.11
            aliases:
            - webserver01-sidecar
    
      webserver02:
        image: ikubernetes/demoapp:v1.0
        environment:
          - PORT=8080
          - HOST=127.0.0.1
        network_mode: "service:webserver02-sidecar"
        depends_on:
        - webserver02-sidecar
    
      webserver02-sidecar:
        image: envoyproxy/envoy-alpine:v1.21.5
        volumes:
        - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
        hostname: webserver02
        networks:
          envoymesh:
            ipv4_address: 172.31.15.12
            aliases:
            - webserver02-sidecar
    
      xdsserver:
        image: ikubernetes/envoy-xds-server:v0.1
        environment:
          - SERVER_PORT=18000
          - NODE_ID=envoy_front_proxy  # must match the node id in front-envoy.yaml
          - RESOURCES_FILE=/etc/envoy-xds-server/config/config.yaml
        volumes:
        - ./resources:/etc/envoy-xds-server/config/  # mount the local ./resources directory into the container
        networks:
          envoymesh:
            ipv4_address: 172.31.15.5
            aliases:
            - xdsserver
            - xds-service
        expose:
        - "18000"
    
    networks:
      envoymesh:
        driver: bridge
        ipam:
          config:
            - subnet: 172.31.15.0/24
    

    2.2 front-envoy.yaml

    There are two clusters in total:

    1. LDS is loaded over gRPC to dynamically discover Listeners, VirtualHosts, and Routes;
      CDS is loaded over gRPC to dynamically discover Clusters and Endpoints.
    2. STRICT_DNS resolution is used to discover all IPs behind the xdsserver hostname.

    node:
      id: envoy_front_proxy
      cluster: webcluster
    
    admin:
      profile_path: /tmp/envoy.prof
      access_log_path: /tmp/admin_access.log
      address:
        socket_address:
           address: 0.0.0.0
           port_value: 9901
    
    dynamic_resources:
      lds_config:
        resource_api_version: V3
        api_config_source:
          api_type: GRPC
          transport_api_version: V3
          grpc_services:
          - envoy_grpc:
              cluster_name: xds_cluster
    
      cds_config:
        resource_api_version: V3
        api_config_source:
          api_type: GRPC
          transport_api_version: V3
          grpc_services:
          - envoy_grpc:
              cluster_name: xds_cluster
    
    static_resources:
      clusters:
      - name: xds_cluster
        connect_timeout: 0.25s
        type: STRICT_DNS
        # The extension_protocol_options field is used to provide extension-specific protocol options for upstream connections. 
        typed_extension_protocol_options:
          envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
            "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
            explicit_http_config:
              http2_protocol_options: {}
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: xds_cluster
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: xdsserver
                    port_value: 18000
    

    2.3 resources/config.yaml

    Defines the Listener and Endpoint resources. To avoid a half-edited file triggering a config push,
    edit an intermediate file first, then sync it over the live config file.

    # cat resources/config.yaml-v1
    # cat resources/config.yaml-v1 > resources/config.yaml
    name: myconfig
    spec:
      listeners:
      - name: listener_http
        address: 0.0.0.0
        port: 80
        routes:
        - name: local_route
          prefix: /
          clusters:
          - webcluster
      clusters:
      - name: webcluster
        endpoints:
        - address: 172.31.15.11
          port: 80
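    The edit-then-sync pattern described above can be made robust in code. The following is a minimal sketch (the helper name is hypothetical, not part of the xDS server): write the full new config to a temporary file in the same directory, then atomically rename it over the live file, so a file watcher never observes a partially written config.

```python
import os
import tempfile

def replace_config_atomically(live_path: str, new_content: str) -> None:
    """Write new_content to a temp file in the same directory, then
    os.replace() it over live_path. A rename within one filesystem is
    atomic on POSIX, so a watcher sees either the old or the new file,
    never a half-written one."""
    directory = os.path.dirname(os.path.abspath(live_path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as tmp:
            tmp.write(new_content)
        os.replace(tmp_path, live_path)  # atomic swap
    except BaseException:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)  # clean up the temp file on failure
        raise
```

    This is the same idea as the `cat config.yaml-v1 > config.yaml` step above, but immune to a crash mid-write.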
    

    2.4 Testing

    2.4.1 State after startup

    At this point there are two clusters: xds_cluster, discovered statically from the config file, and webcluster, discovered dynamically via xds_cluster.
    The webcluster cluster has only one endpoint, 172.31.15.11:80,
    so requests to 172.31.15.2 are answered only by 172.31.15.11.

    # docker-compose up
    ## check cluster state
    root@k8s-node-1:~# curl 172.31.15.2:9901/clusters
    xds_cluster::observability_name::xds_cluster
    xds_cluster::default_priority::max_connections::1024
    xds_cluster::default_priority::max_pending_requests::1024
    xds_cluster::default_priority::max_requests::1024
    xds_cluster::default_priority::max_retries::3
    xds_cluster::high_priority::max_connections::1024
    xds_cluster::high_priority::max_pending_requests::1024
    xds_cluster::high_priority::max_requests::1024
    xds_cluster::high_priority::max_retries::3
    xds_cluster::added_via_api::false
    xds_cluster::172.31.15.5:18000::cx_active::1
    xds_cluster::172.31.15.5:18000::cx_connect_fail::0
    xds_cluster::172.31.15.5:18000::cx_total::1
    xds_cluster::172.31.15.5:18000::rq_active::4
    xds_cluster::172.31.15.5:18000::rq_error::0
    xds_cluster::172.31.15.5:18000::rq_success::0
    xds_cluster::172.31.15.5:18000::rq_timeout::0
    xds_cluster::172.31.15.5:18000::rq_total::4
    xds_cluster::172.31.15.5:18000::hostname::xdsserver
    xds_cluster::172.31.15.5:18000::health_flags::healthy
    xds_cluster::172.31.15.5:18000::weight::1
    xds_cluster::172.31.15.5:18000::region::
    xds_cluster::172.31.15.5:18000::zone::
    xds_cluster::172.31.15.5:18000::sub_zone::
    xds_cluster::172.31.15.5:18000::canary::false
    xds_cluster::172.31.15.5:18000::priority::0
    xds_cluster::172.31.15.5:18000::success_rate::-1.0
    xds_cluster::172.31.15.5:18000::local_origin_success_rate::-1.0
    webcluster::observability_name::webcluster
    webcluster::default_priority::max_connections::1024
    webcluster::default_priority::max_pending_requests::1024
    webcluster::default_priority::max_requests::1024
    webcluster::default_priority::max_retries::3
    webcluster::high_priority::max_connections::1024
    webcluster::high_priority::max_pending_requests::1024
    webcluster::high_priority::max_requests::1024
    webcluster::high_priority::max_retries::3
    webcluster::added_via_api::true
    webcluster::172.31.15.11:80::cx_active::4
    webcluster::172.31.15.11:80::cx_connect_fail::0
    webcluster::172.31.15.11:80::cx_total::4
    webcluster::172.31.15.11:80::rq_active::0
    webcluster::172.31.15.11:80::rq_error::0
    webcluster::172.31.15.11:80::rq_success::9
    webcluster::172.31.15.11:80::rq_timeout::0
    webcluster::172.31.15.11:80::rq_total::9
    webcluster::172.31.15.11:80::hostname::
    webcluster::172.31.15.11:80::health_flags::healthy
    webcluster::172.31.15.11:80::weight::1
    webcluster::172.31.15.11:80::region::
    webcluster::172.31.15.11:80::zone::
    webcluster::172.31.15.11:80::sub_zone::
    webcluster::172.31.15.11:80::canary::false
    webcluster::172.31.15.11:80::priority::0
    webcluster::172.31.15.11:80::success_rate::-1.0
    webcluster::172.31.15.11:80::local_origin_success_rate::-1.0
    root@k8s-node-1:~# curl 172.31.15.2
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
    root@k8s-node-1:~# curl 172.31.15.2
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
    root@k8s-node-1:~# curl 172.31.15.2
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
    root@k8s-node-1:~# curl 172.31.15.2
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
    root@k8s-node-1:~# curl -s 172.31.15.2:9901/config_dump | jq '.configs[1].dynamic_active_clusters'
    [
      {
        "version_info": "411",
        "cluster": {
          "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
          "name": "webcluster",
          "type": "EDS",
          "eds_cluster_config": {
            "eds_config": {
              "api_config_source": {
                "api_type": "GRPC",
                "grpc_services": [
                  {
                    "envoy_grpc": {
                      "cluster_name": "xds_cluster"
                    }
                  }
                ],
                "set_node_on_first_message_only": true,
                "transport_api_version": "V3"
              },
              "resource_api_version": "V3"
            }
          },
          "connect_timeout": "5s",
          "dns_lookup_family": "V4_ONLY"
        },
        "last_updated": "2022-09-26T12:24:44.285Z"
      }
    ]
    root@k8s-node-1:~# curl -s 172.31.15.2:9901/config_dump?resource=dynamic_listeners| jq '.configs[0].active_state.listener.address'
    {
      "socket_address": {
        "address": "0.0.0.0",
        "port_value": 80
      }
    }
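    The /clusters admin output above is a flat list of `::`-separated fields. As a hedged sketch (the function name is made up for illustration; a real parser would need to handle more record shapes, e.g. circuit-breaker lines such as `default_priority` also have four fields and show up as pseudo-endpoints), the per-endpoint stat lines can be parsed like this:

```python
def parse_endpoint_stats(text: str) -> dict:
    """Parse lines like 'webcluster::172.31.15.11:80::rq_total::9'
    into {cluster: {endpoint: {stat: value}}}. Lines with a different
    number of fields (e.g. 'webcluster::added_via_api::true') are skipped."""
    stats: dict = {}
    for line in text.splitlines():
        parts = line.strip().split("::")
        if len(parts) != 4:
            continue
        cluster, endpoint, stat, value = parts
        stats.setdefault(cluster, {}).setdefault(endpoint, {})[stat] = value
    return stats
```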
    

    2.4.2 Modify resources/config.yaml

    Modify resources/config.yaml: change the listener port from 80 to 8081 and add 172.31.15.12 to the webcluster endpoints.

    name: myconfig
    spec:
      listeners:
      - name: listener_http
        address: 0.0.0.0
        port: 8081
        routes:
        - name: local_route
          prefix: /
          clusters:
          - webcluster
      clusters:
      - name: webcluster
        endpoints:
        - address: 172.31.15.11
          port: 80
        - address: 172.31.15.12
          port: 80
    

    No restart is needed; check the cluster state again and access the listener on its new port.

    root@k8s-node-1:~# curl -s 172.31.15.2:9901/listeners
    listener_http::0.0.0.0:8081
    root@k8s-node-1:~# curl -s 172.31.15.2:9901/clusters
    webcluster::observability_name::webcluster
    webcluster::default_priority::max_connections::1024
    webcluster::default_priority::max_pending_requests::1024
    webcluster::default_priority::max_requests::1024
    webcluster::default_priority::max_retries::3
    webcluster::high_priority::max_connections::1024
    webcluster::high_priority::max_pending_requests::1024
    webcluster::high_priority::max_requests::1024
    webcluster::high_priority::max_retries::3
    webcluster::added_via_api::true
    webcluster::172.31.15.11:80::cx_active::0
    webcluster::172.31.15.11:80::cx_connect_fail::0
    webcluster::172.31.15.11:80::cx_total::0
    webcluster::172.31.15.11:80::rq_active::0
    webcluster::172.31.15.11:80::rq_error::0
    webcluster::172.31.15.11:80::rq_success::0
    webcluster::172.31.15.11:80::rq_timeout::0
    webcluster::172.31.15.11:80::rq_total::0
    webcluster::172.31.15.11:80::hostname::
    webcluster::172.31.15.11:80::health_flags::healthy
    webcluster::172.31.15.11:80::weight::1
    webcluster::172.31.15.11:80::region::
    webcluster::172.31.15.11:80::zone::
    webcluster::172.31.15.11:80::sub_zone::
    webcluster::172.31.15.11:80::canary::false
    webcluster::172.31.15.11:80::priority::0
    webcluster::172.31.15.11:80::success_rate::-1.0
    webcluster::172.31.15.11:80::local_origin_success_rate::-1.0
    webcluster::172.31.15.12:80::cx_active::0
    webcluster::172.31.15.12:80::cx_connect_fail::0
    webcluster::172.31.15.12:80::cx_total::0
    webcluster::172.31.15.12:80::rq_active::0
    webcluster::172.31.15.12:80::rq_error::0
    webcluster::172.31.15.12:80::rq_success::0
    webcluster::172.31.15.12:80::rq_timeout::0
    webcluster::172.31.15.12:80::rq_total::0
    webcluster::172.31.15.12:80::hostname::
    webcluster::172.31.15.12:80::health_flags::healthy
    webcluster::172.31.15.12:80::weight::1
    webcluster::172.31.15.12:80::region::
    webcluster::172.31.15.12:80::zone::
    webcluster::172.31.15.12:80::sub_zone::
    webcluster::172.31.15.12:80::canary::false
    webcluster::172.31.15.12:80::priority::0
    webcluster::172.31.15.12:80::success_rate::-1.0
    webcluster::172.31.15.12:80::local_origin_success_rate::-1.0
    xds_cluster::observability_name::xds_cluster
    xds_cluster::default_priority::max_connections::1024
    xds_cluster::default_priority::max_pending_requests::1024
    xds_cluster::default_priority::max_requests::1024
    xds_cluster::default_priority::max_retries::3
    xds_cluster::high_priority::max_connections::1024
    xds_cluster::high_priority::max_pending_requests::1024
    xds_cluster::high_priority::max_requests::1024
    xds_cluster::high_priority::max_retries::3
    xds_cluster::added_via_api::false
    xds_cluster::172.31.15.5:18000::cx_active::1
    xds_cluster::172.31.15.5:18000::cx_connect_fail::0
    xds_cluster::172.31.15.5:18000::cx_total::1
    xds_cluster::172.31.15.5:18000::rq_active::4
    xds_cluster::172.31.15.5:18000::rq_error::0
    xds_cluster::172.31.15.5:18000::rq_success::0
    xds_cluster::172.31.15.5:18000::rq_timeout::0
    xds_cluster::172.31.15.5:18000::rq_total::4
    xds_cluster::172.31.15.5:18000::hostname::xdsserver
    xds_cluster::172.31.15.5:18000::health_flags::healthy
    xds_cluster::172.31.15.5:18000::weight::1
    xds_cluster::172.31.15.5:18000::region::
    xds_cluster::172.31.15.5:18000::zone::
    xds_cluster::172.31.15.5:18000::sub_zone::
    xds_cluster::172.31.15.5:18000::canary::false
    xds_cluster::172.31.15.5:18000::priority::0
    xds_cluster::172.31.15.5:18000::success_rate::-1.0
    xds_cluster::172.31.15.5:18000::local_origin_success_rate::-1.0
    root@k8s-node-1:~# curl 172.31.15.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
    root@k8s-node-1:~# curl 172.31.15.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.15.12!
    root@k8s-node-1:~# curl 172.31.15.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.15.12!
    root@k8s-node-1:~# curl 172.31.15.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.15.11!
    root@k8s-node-1:~# curl 172.31.15.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.15.12!
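    The interleaving above is webcluster's ROUND_ROBIN policy at work. Conceptually it behaves as below (an illustrative sketch, not Envoy's implementation; Envoy keeps per-worker-thread load-balancer state, so the order observed across separate connections need not strictly alternate):

```python
from itertools import cycle

# The two endpoints of webcluster after the update above.
endpoints = ["172.31.15.11:80", "172.31.15.12:80"]

_picker = cycle(endpoints)  # round-robin: each pick advances to the next endpoint

def pick_endpoint() -> str:
    """Return the next endpoint in round-robin order."""
    return next(_picker)
```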
    

    3. Dynamic configuration over ADS (gRPC)

    ADS exists to avoid configuration being discarded because of ordering problems when multiple resource types are distributed separately: it allows a single management server to deliver all API updates over a single gRPC stream.
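    Illustrative only (not Envoy or management-server code; the names are invented for the sketch): because all resource types travel on one stream, the server can enforce "make before break" ordering, i.e. push referenced resources (clusters, endpoints) before the resources that refer to them (listeners, routes). With separate CDS/LDS streams this ordering cannot be guaranteed.

```python
# Order in which an ADS server typically pushes resource types so that
# anything referenced already exists when the referrer arrives.
ADS_TYPE_ORDER = ("cluster", "endpoint", "listener", "route")

def sequence_updates(updates: list) -> list:
    """Sort pending resource updates by type according to ADS_TYPE_ORDER.
    sorted() is stable, so updates of the same type keep their order."""
    rank = {t: i for i, t in enumerate(ADS_TYPE_ORDER)}
    return sorted(updates, key=lambda u: rank[u["type"]])
```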

    3.1 docker-compose.yaml

    Six services:

    • envoy: the front proxy, at 172.31.16.2
    • webserver01: the first backend service
    • webserver01-sidecar: sidecar proxy for the first backend service, at 172.31.16.11
    • webserver02: the second backend service
    • webserver02-sidecar: sidecar proxy for the second backend service, at 172.31.16.12
    • xdsserver: the xDS management server, at 172.31.16.5

    The xdsserver service loads its configuration from ./resources, i.e. the contents of resources/config.yaml.

    version: '3.3'
    
    services:
      envoy:
        image: envoyproxy/envoy-alpine:v1.21.5
        environment:
          - ENVOY_UID=0
          - ENVOY_GID=0
        volumes:
        - ./front-envoy.yaml:/etc/envoy/envoy.yaml
        networks:
          envoymesh:
            ipv4_address: 172.31.16.2
            aliases:
            - front-proxy
        depends_on:
        - webserver01
        - webserver02
        - xdsserver
    
      webserver01:
        image: ikubernetes/demoapp:v1.0
        environment:
          - PORT=8080
          - HOST=127.0.0.1
        network_mode: "service:webserver01-sidecar"
        depends_on:
        - webserver01-sidecar
    
      webserver01-sidecar:
        image: envoyproxy/envoy-alpine:v1.21.5
        volumes:
        - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
        hostname: webserver01
        networks:
          envoymesh:
            ipv4_address: 172.31.16.11
            aliases:
            - webserver01-sidecar
    
      webserver02:
        image: ikubernetes/demoapp:v1.0
        environment:
          - PORT=8080
          - HOST=127.0.0.1
        network_mode: "service:webserver02-sidecar"
        depends_on:
        - webserver02-sidecar
    
      webserver02-sidecar:
        image: envoyproxy/envoy-alpine:v1.21.5
        volumes:
        - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
        hostname: webserver02
        networks:
          envoymesh:
            ipv4_address: 172.31.16.12
            aliases:
            - webserver02-sidecar
    
      xdsserver:
        image: ikubernetes/envoy-xds-server:v0.1
        environment:
          - SERVER_PORT=18000
          - NODE_ID=envoy_front_proxy
          - RESOURCES_FILE=/etc/envoy-xds-server/config/config.yaml
        volumes:
        - ./resources:/etc/envoy-xds-server/config/
        networks:
          envoymesh:
            ipv4_address: 172.31.16.5
            aliases:
            - xdsserver
            - xds-service
        expose:
        - "18000"
    
    networks:
      envoymesh:
        driver: bridge
        ipam:
          config:
            - subnet: 172.31.16.0/24
    

    3.2 front-envoy.yaml

    There are two clusters in total:

    1. ADS over gRPC loads CDS and LDS on a single stream, avoiding resources being discarded because of delivery ordering.

      envoy_grpc points at the xds_cluster cluster; currently xds_cluster has a single container, 172.31.16.5
      cds_config is set to fetch its configuration via ads
      lds_config is set to fetch its configuration via ads

    2. STRICT_DNS resolution is used to discover all IPs behind the xdsserver hostname.

    node:
      id: envoy_front_proxy
      cluster: webcluster
    
    admin:
      profile_path: /tmp/envoy.prof
      access_log_path: /tmp/admin_access.log
      address:
        socket_address:
           address: 0.0.0.0
           port_value: 9901
    
    dynamic_resources:
      ads_config:
        api_type: GRPC
        transport_api_version: V3
        grpc_services:
        - envoy_grpc:
            cluster_name: xds_cluster
        set_node_on_first_message_only: true
      cds_config:
        resource_api_version: V3
        ads: {}
      lds_config:
        resource_api_version: V3
        ads: {}
    
    static_resources:
      clusters:
      - name: xds_cluster
        connect_timeout: 0.25s
        type: STRICT_DNS
        # The extension_protocol_options field is used to provide extension-specific protocol options for upstream connections. 
        typed_extension_protocol_options:
          envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
            "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
            explicit_http_config:
              http2_protocol_options: {}
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: xds_cluster
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: xdsserver
                    port_value: 18000
    

    3.3 config.yaml

    Defines the listeners and clusters.

    name: myconfig
    spec:
      listeners:
      - name: listener_http
        address: 0.0.0.0
        port: 80
        routes:
        - name: local_route
          prefix: /
          clusters:
          - webcluster
      clusters:
      - name: webcluster
        endpoints:
        - address: 172.31.16.11
          port: 80
    

    3.4 Testing

    3.4.1 State after startup

    Start docker-compose, then test:

    1. Requests to envoy are routed to the webcluster cluster; since webcluster has only one endpoint, only 172.31.16.11 responds.

    2. The listener runs on 172.31.16.2, listening externally on 0.0.0.0:80.

    3. There are two clusters:

      1. xds_cluster has a single server, 172.31.16.5

      2. webcluster has a single server, 172.31.16.11

    # docker-compose up
    ## access test
    root@k8s-node-1:~# curl 172.31.16.2
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.16.11!
    root@k8s-node-1:~# curl 172.31.16.2
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.16.11!
    root@k8s-node-1:~# curl 172.31.16.2
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.16.11!
    root@k8s-node-1:~# curl 172.31.16.2:9901/listeners
    listener_http::0.0.0.0:80
    root@k8s-node-1:~# curl 172.31.16.2:9901/clusters
    xds_cluster::observability_name::xds_cluster
    xds_cluster::default_priority::max_connections::1024
    xds_cluster::default_priority::max_pending_requests::1024
    xds_cluster::default_priority::max_requests::1024
    xds_cluster::default_priority::max_retries::3
    xds_cluster::high_priority::max_connections::1024
    xds_cluster::high_priority::max_pending_requests::1024
    xds_cluster::high_priority::max_requests::1024
    xds_cluster::high_priority::max_retries::3
    xds_cluster::added_via_api::false
    xds_cluster::172.31.16.5:18000::cx_active::1
    xds_cluster::172.31.16.5:18000::cx_connect_fail::0
    xds_cluster::172.31.16.5:18000::cx_total::1
    xds_cluster::172.31.16.5:18000::rq_active::3
    xds_cluster::172.31.16.5:18000::rq_error::0
    xds_cluster::172.31.16.5:18000::rq_success::0
    xds_cluster::172.31.16.5:18000::rq_timeout::0
    xds_cluster::172.31.16.5:18000::rq_total::3
    xds_cluster::172.31.16.5:18000::hostname::xdsserver
    xds_cluster::172.31.16.5:18000::health_flags::healthy
    xds_cluster::172.31.16.5:18000::weight::1
    xds_cluster::172.31.16.5:18000::region::
    xds_cluster::172.31.16.5:18000::zone::
    xds_cluster::172.31.16.5:18000::sub_zone::
    xds_cluster::172.31.16.5:18000::canary::false
    xds_cluster::172.31.16.5:18000::priority::0
    xds_cluster::172.31.16.5:18000::success_rate::-1.0
    xds_cluster::172.31.16.5:18000::local_origin_success_rate::-1.0
    webcluster::observability_name::webcluster
    webcluster::default_priority::max_connections::1024
    webcluster::default_priority::max_pending_requests::1024
    webcluster::default_priority::max_requests::1024
    webcluster::default_priority::max_retries::3
    webcluster::high_priority::max_connections::1024
    webcluster::high_priority::max_pending_requests::1024
    webcluster::high_priority::max_requests::1024
    webcluster::high_priority::max_retries::3
    webcluster::added_via_api::true
    webcluster::172.31.16.11:80::cx_active::3
    webcluster::172.31.16.11:80::cx_connect_fail::0
    webcluster::172.31.16.11:80::cx_total::3
    webcluster::172.31.16.11:80::rq_active::0
    webcluster::172.31.16.11:80::rq_error::0
    webcluster::172.31.16.11:80::rq_success::5
    webcluster::172.31.16.11:80::rq_timeout::0
    webcluster::172.31.16.11:80::rq_total::5
    webcluster::172.31.16.11:80::hostname::
    webcluster::172.31.16.11:80::health_flags::healthy
    webcluster::172.31.16.11:80::weight::1
    webcluster::172.31.16.11:80::region::
    webcluster::172.31.16.11:80::zone::
    webcluster::172.31.16.11:80::sub_zone::
    webcluster::172.31.16.11:80::canary::false
    webcluster::172.31.16.11:80::priority::0
    webcluster::172.31.16.11:80::success_rate::-1.0
    webcluster::172.31.16.11:80::local_origin_success_rate::-1.0
    

    3.4.2 Modify the listener and endpoints

    To avoid an accidental sync while editing, copy the config out, make the changes in the new file, then sync it back over the original to trigger the update.

    The changes are as follows:

    1. Change the listener port from 80 to 8081
    2. Append endpoint 172.31.16.12
    # docker exec -it adsgrpc_xdsserver_1 sh
    ### edit the config: change the listener port from 80 to 8081, append endpoint 172.31.16.12
    / # cd /etc/envoy-xds-server/config/
    /etc/envoy-xds-server/config # cat config.yaml
    name: myconfig
    spec:
      listeners:
      - name: listener_http
        address: 0.0.0.0
        port: 80
        routes:
        - name: local_route
          prefix: /
          clusters:
          - webcluster
      clusters:
      - name: webcluster
        endpoints:
        - address: 172.31.16.11
          port: 80
    /etc/envoy-xds-server/config # cp config.yaml config2.yaml
    /etc/envoy-xds-server/config # vi config2.yaml 
    /etc/envoy-xds-server/config # cat config2.yaml 
    name: myconfig
    spec:
      listeners:
      - name: listener_http
        address: 0.0.0.0
        port: 8081
        routes:
        - name: local_route
          prefix: /
          clusters:
          - webcluster
      clusters:
      - name: webcluster
        endpoints:
        - address: 172.31.16.11
          port: 80
        - address: 172.31.16.12
          port: 80
    ### sync the config
    /etc/envoy-xds-server/config # cat config2.yaml > config.yaml
    /etc/envoy-xds-server/config # cat config.yaml
    name: myconfig
    spec:
      listeners:
      - name: listener_http
        address: 0.0.0.0
        port: 8081
        routes:
        - name: local_route
          prefix: /
          clusters:
          - webcluster
      clusters:
      - name: webcluster
        endpoints:
        - address: 172.31.16.11
          port: 80
        - address: 172.31.16.12
          port: 80
    

    Test again:

    1. The listening port has changed to 8081
    2. webcluster now includes the additional endpoint 172.31.16.12
    3. Requests to 172.31.16.2:8081 are balanced round-robin across the two endpoints
    root@k8s-node-1:~# curl 172.31.16.2:9901/listeners
    listener_http::0.0.0.0:8081
    root@k8s-node-1:~# curl 172.31.16.2:9901/clusters
    xds_cluster::observability_name::xds_cluster
    xds_cluster::default_priority::max_connections::1024
    xds_cluster::default_priority::max_pending_requests::1024
    xds_cluster::default_priority::max_requests::1024
    xds_cluster::default_priority::max_retries::3
    xds_cluster::high_priority::max_connections::1024
    xds_cluster::high_priority::max_pending_requests::1024
    xds_cluster::high_priority::max_requests::1024
    xds_cluster::high_priority::max_retries::3
    xds_cluster::added_via_api::false
    xds_cluster::172.31.16.5:18000::cx_active::1
    xds_cluster::172.31.16.5:18000::cx_connect_fail::0
    xds_cluster::172.31.16.5:18000::cx_total::1
    xds_cluster::172.31.16.5:18000::rq_active::3
    xds_cluster::172.31.16.5:18000::rq_error::0
    xds_cluster::172.31.16.5:18000::rq_success::0
    xds_cluster::172.31.16.5:18000::rq_timeout::0
    xds_cluster::172.31.16.5:18000::rq_total::3
    xds_cluster::172.31.16.5:18000::hostname::xdsserver
    xds_cluster::172.31.16.5:18000::health_flags::healthy
    xds_cluster::172.31.16.5:18000::weight::1
    xds_cluster::172.31.16.5:18000::region::
    xds_cluster::172.31.16.5:18000::zone::
    xds_cluster::172.31.16.5:18000::sub_zone::
    xds_cluster::172.31.16.5:18000::canary::false
    xds_cluster::172.31.16.5:18000::priority::0
    xds_cluster::172.31.16.5:18000::success_rate::-1.0
    xds_cluster::172.31.16.5:18000::local_origin_success_rate::-1.0
    webcluster::observability_name::webcluster
    webcluster::default_priority::max_connections::1024
    webcluster::default_priority::max_pending_requests::1024
    webcluster::default_priority::max_requests::1024
    webcluster::default_priority::max_retries::3
    webcluster::high_priority::max_connections::1024
    webcluster::high_priority::max_pending_requests::1024
    webcluster::high_priority::max_requests::1024
    webcluster::high_priority::max_retries::3
    webcluster::added_via_api::true
    webcluster::172.31.16.11:80::cx_active::3
    webcluster::172.31.16.11:80::cx_connect_fail::0
    webcluster::172.31.16.11:80::cx_total::3
    webcluster::172.31.16.11:80::rq_active::0
    webcluster::172.31.16.11:80::rq_error::0
    webcluster::172.31.16.11:80::rq_success::5
    webcluster::172.31.16.11:80::rq_timeout::0
    webcluster::172.31.16.11:80::rq_total::5
    webcluster::172.31.16.11:80::hostname::
    webcluster::172.31.16.11:80::health_flags::healthy
    webcluster::172.31.16.11:80::weight::1
    webcluster::172.31.16.11:80::region::
    webcluster::172.31.16.11:80::zone::
    webcluster::172.31.16.11:80::sub_zone::
    webcluster::172.31.16.11:80::canary::false
    webcluster::172.31.16.11:80::priority::0
    webcluster::172.31.16.11:80::success_rate::-1.0
    webcluster::172.31.16.11:80::local_origin_success_rate::-1.0
    webcluster::172.31.16.12:80::cx_active::0
    webcluster::172.31.16.12:80::cx_connect_fail::0
    webcluster::172.31.16.12:80::cx_total::0
    webcluster::172.31.16.12:80::rq_active::0
    webcluster::172.31.16.12:80::rq_error::0
    webcluster::172.31.16.12:80::rq_success::0
    webcluster::172.31.16.12:80::rq_timeout::0
    webcluster::172.31.16.12:80::rq_total::0
    webcluster::172.31.16.12:80::hostname::
    webcluster::172.31.16.12:80::health_flags::healthy
    webcluster::172.31.16.12:80::weight::1
    webcluster::172.31.16.12:80::region::
    webcluster::172.31.16.12:80::zone::
    webcluster::172.31.16.12:80::sub_zone::
    webcluster::172.31.16.12:80::canary::false
    webcluster::172.31.16.12:80::priority::0
    webcluster::172.31.16.12:80::success_rate::-1.0
    webcluster::172.31.16.12:80::local_origin_success_rate::-1.0
    root@k8s-node-1:~# curl 172.31.16.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.16.11!
    root@k8s-node-1:~# curl 172.31.16.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
    root@k8s-node-1:~# curl 172.31.16.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
    root@k8s-node-1:~# curl 172.31.16.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
    root@k8s-node-1:~# curl 172.31.16.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver01, ServerIP: 172.31.16.11!
    

    3.4.3 Adjust the endpoints again

    Remove endpoint 172.31.16.11.

    /etc/envoy-xds-server/config # vi config2.yaml 
    /etc/envoy-xds-server/config # cat config2.yaml 
    name: myconfig
    spec:
      listeners:
      - name: listener_http
        address: 0.0.0.0
        port: 8081
        routes:
        - name: local_route
          prefix: /
          clusters:
          - webcluster
      clusters:
      - name: webcluster
        endpoints:
        - address: 172.31.16.12
          port: 80
    /etc/envoy-xds-server/config # cat config2.yaml > config.yaml
    

    Access test:

    1. The listeners are unchanged, since the listener config was not modified
    2. webcluster has only 172.31.16.12 left
    3. Requests to the listener are dispatched only to 172.31.16.12
    root@k8s-node-1:~# curl 172.31.16.2:9901/listeners
    listener_http::0.0.0.0:8081
    root@k8s-node-1:~# curl 172.31.16.2:9901/clusters
    xds_cluster::observability_name::xds_cluster
    xds_cluster::default_priority::max_connections::1024
    xds_cluster::default_priority::max_pending_requests::1024
    xds_cluster::default_priority::max_requests::1024
    xds_cluster::default_priority::max_retries::3
    xds_cluster::high_priority::max_connections::1024
    xds_cluster::high_priority::max_pending_requests::1024
    xds_cluster::high_priority::max_requests::1024
    xds_cluster::high_priority::max_retries::3
    xds_cluster::added_via_api::false
    xds_cluster::172.31.16.5:18000::cx_active::1
    xds_cluster::172.31.16.5:18000::cx_connect_fail::0
    xds_cluster::172.31.16.5:18000::cx_total::1
    xds_cluster::172.31.16.5:18000::rq_active::3
    xds_cluster::172.31.16.5:18000::rq_error::0
    xds_cluster::172.31.16.5:18000::rq_success::0
    xds_cluster::172.31.16.5:18000::rq_timeout::0
    xds_cluster::172.31.16.5:18000::rq_total::3
    xds_cluster::172.31.16.5:18000::hostname::xdsserver
    xds_cluster::172.31.16.5:18000::health_flags::healthy
    xds_cluster::172.31.16.5:18000::weight::1
    xds_cluster::172.31.16.5:18000::region::
    xds_cluster::172.31.16.5:18000::zone::
    xds_cluster::172.31.16.5:18000::sub_zone::
    xds_cluster::172.31.16.5:18000::canary::false
    xds_cluster::172.31.16.5:18000::priority::0
    xds_cluster::172.31.16.5:18000::success_rate::-1.0
    xds_cluster::172.31.16.5:18000::local_origin_success_rate::-1.0
    webcluster::observability_name::webcluster
    webcluster::default_priority::max_connections::1024
    webcluster::default_priority::max_pending_requests::1024
    webcluster::default_priority::max_requests::1024
    webcluster::default_priority::max_retries::3
    webcluster::high_priority::max_connections::1024
    webcluster::high_priority::max_pending_requests::1024
    webcluster::high_priority::max_requests::1024
    webcluster::high_priority::max_retries::3
    webcluster::added_via_api::true
    webcluster::172.31.16.12:80::cx_active::3
    webcluster::172.31.16.12:80::cx_connect_fail::0
    webcluster::172.31.16.12:80::cx_total::3
    webcluster::172.31.16.12:80::rq_active::0
    webcluster::172.31.16.12:80::rq_error::0
    webcluster::172.31.16.12:80::rq_success::3
    webcluster::172.31.16.12:80::rq_timeout::0
    webcluster::172.31.16.12:80::rq_total::3
    webcluster::172.31.16.12:80::hostname::
    webcluster::172.31.16.12:80::health_flags::healthy
    webcluster::172.31.16.12:80::weight::1
    webcluster::172.31.16.12:80::region::
    webcluster::172.31.16.12:80::zone::
    webcluster::172.31.16.12:80::sub_zone::
    webcluster::172.31.16.12:80::canary::false
    webcluster::172.31.16.12:80::priority::0
    webcluster::172.31.16.12:80::success_rate::-1.0
    webcluster::172.31.16.12:80::local_origin_success_rate::-1.0
    root@k8s-node-1:~# curl 172.31.16.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
    root@k8s-node-1:~# curl 172.31.16.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
    root@k8s-node-1:~# curl 172.31.16.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
    root@k8s-node-1:~# curl 172.31.16.2:8081
    iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: webserver02, ServerIP: 172.31.16.12!
    
  • Original article: https://blog.csdn.net/qq_29974229/article/details/127101186