• 11. Kubernetes Core Concepts - Service


    Contents

    1. Overview
    2. Endpoints
    3. Service Manifest
    4. Service Types
        4.1 ClusterIP
        4.2 NodePort
        4.3 LoadBalancer
        4.4 ExternalName
    5. Using Services
        5.1 ClusterIP
            5.1.1 Define the Pod manifest
            5.1.2 Create the Pods
            5.1.3 Define the Service manifest
            5.1.4 Create the Service
        5.2 NodePort
            5.2.1 Access from inside the cluster via ClusterIP + service port
            5.2.2 Access from outside the cluster via NodeIP + nodePort
            5.2.3 Access from inside the cluster via Pod IP + container port
        5.3 ExternalName
    6. Service Proxy Modes
        6.1 userspace proxy mode
        6.2 iptables proxy mode
        6.3 IPVS proxy mode
            6.3.1 Load the ipvs kernel modules
            6.3.2 Edit the configuration
            6.3.3 Delete the existing kube-proxy Pods
            6.3.4 Inspect the IPVS rules
    7. Headless Services
        7.1 Create the Pods
        7.2 Create the Service
        7.3 Enter a container and look up the service domain name


    1. Overview

    In Kubernetes, a Pod is the carrier of an application, and the application can be reached through the Pod's IP address. But when a Pod crashes and is recreated, its IP address and other state may change. In other words, Pod IPs are not stable, which makes it impractical to access a service directly by Pod IP.

    To solve this, Kubernetes provides the Service resource. A Service aggregates the Pods that provide the same service and exposes a single entry address; requests to that address reach the Pods behind the Service and are load-balanced across the backend containers.

    A Service mainly shields clients from Pod churn and provides a unified entry point:

    • 1. It keeps track of the Pods that provide the same service so they can always be found (service discovery)
    • 2. It defines an access policy for a group of Pods (load balancing)

    A Service usually selects a group of Pods via a label selector, as shown in the following figure:

    A Service is really just an abstraction; the component that actually forwards traffic is the kube-proxy process, a network proxy that runs on every node in the cluster. When a Service is created, the API Server first writes the Service's information to etcd; kube-proxy watches the API Server, picks up the Service information, and generates the corresponding forwarding rules.
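    The watch-and-reconcile behavior described above can be illustrated with a toy control loop. This is purely a sketch of the idea, not kube-proxy's real implementation: the event names mirror the Kubernetes watch API, and the rule table stands in for iptables/IPVS rules.

```python
# Toy version of the kube-proxy loop described above: apply watched
# Service events to a local forwarding-rule table.
rules = {}  # ClusterIP -> list of backend "ip:port" strings

def handle_event(event, cluster_ip, backends=None):
    """Apply one watched Service event to the rule table."""
    if event in ("ADDED", "MODIFIED"):
        rules[cluster_ip] = backends
    elif event == "DELETED":
        rules.pop(cluster_ip, None)

handle_event("ADDED", "10.109.113.53", ["192.168.0.6:80", "192.168.1.3:80"])
print(rules)  # {'10.109.113.53': ['192.168.0.6:80', '192.168.1.3:80']}
```

    The real kube-proxy does the same thing at a high level: it keeps a local rule set consistent with the Service and Endpoints objects it observes from the API Server.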

    2. Endpoints

    Endpoints is a Kubernetes resource object, stored in etcd, that records the access addresses of all Pods backing a Service; it is generated from the label selector in the Service's definition.

    A Service fronts a group of Pods, and those Pods are exposed through Endpoints, the collection of endpoints that actually implement the service. In other words, the link between a Service and its Pods is the Endpoints object.

    When a Service with a label selector is created, the Endpoints controller automatically creates an Endpoints object with the same name as the Service. You can think of Endpoints as an intermediate object holding the Service-to-Pod mapping; it lists both the Pods that are currently ready to serve and those that are not.

    For example:

    $ kubectl get pod -o wide --show-labels
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
    nginx-7f456874f4-c7224 1/1 Running 0 20m 192.168.1.3 node01 app=nginx,pod-template-hash=7f456874f4
    nginx-7f456874f4-v8pnz 1/1 Running 0 20m 192.168.0.6 controlplane app=nginx,pod-template-hash=7f456874f4
    $ kubectl get svc -o wide
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
    kubernetes ClusterIP 10.96.0.1 443/TCP 14d
    nginx ClusterIP 10.109.113.53 80/TCP 12m app=nginx
    $ kubectl get ep
    NAME ENDPOINTS AGE
    kubernetes 172.30.1.2:6443 14d
    nginx 192.168.0.6:80,192.168.1.3:80 12m
    # inspect the Endpoints manifest
    $ kubectl get ep nginx -o yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
      annotations:
        endpoints.kubernetes.io/last-change-trigger-time: "2023-01-06T06:33:26Z"
      creationTimestamp: "2023-01-06T06:33:27Z"
      name: nginx                  # same name as the Service
      namespace: default
      resourceVersion: "2212"
      uid: 5c1d6639-c27a-4ab5-8832-64c9d618a209
    subsets:
    - addresses:                   # Pods that can serve traffic; a notReadyAddresses list would hold Pods that cannot
      - ip: 192.168.0.6            # Pod IP
        nodeName: controlplane     # node the Pod was scheduled to
        targetRef:
          kind: Pod
          name: nginx-7f456874f4-v8pnz
          namespace: default
          uid: b248209d-a854-48e8-8162-1abd4e1b498f
      - ip: 192.168.1.3            # Pod IP
        nodeName: node01           # node the Pod was scheduled to
        targetRef:
          kind: Pod
          name: nginx-7f456874f4-c7224
          namespace: default
          uid: f45e4aa2-ddfd-48cd-a7d1-5bfb28d0f31f
      ports:
      - port: 80
        protocol: TCP

    The relationship between Service, Endpoints, and Pods is roughly as shown below:

    3. Service Manifest

    A Service definition in YAML format: kubectl get svc nginx -o yaml

    apiVersion: v1                  # API version
    kind: Service                   # resource type, always Service
    metadata:                       # metadata
      creationTimestamp: "2022-12-29T08:46:28Z"
      labels:                      # custom labels
        app: nginx                 # label app=nginx
      name: nginx                  # Service name
      namespace: default           # Service's namespace, defaults to default
      resourceVersion: "2527"
      uid: 1770ab42-bd33-4455-a190-5753e8eac460
    spec:
      clusterIP: 10.102.82.102     # virtual IP of the Service; auto-assigned when spec.type=ClusterIP unless specified manually; must be specified when spec.type=LoadBalancer
      clusterIPs:
      - 10.102.82.102
      externalTrafficPolicy: Cluster
      internalTrafficPolicy: Cluster
      ipFamilies:
      - IPv4
      ipFamilyPolicy: SingleStack
      ports:                       # ports the Service exposes
      - nodePort: 30314            # port mapped on each node when spec.type=NodePort
        port: 80                   # port the Service listens on
        protocol: TCP              # TCP or UDP, defaults to TCP
        targetPort: 80             # port on the backend Pods to forward to
      selector:                    # label selector; Pods with these labels are managed by the Service
        app: nginx                 # select Pods labeled app=nginx
      sessionAffinity: None        # session affinity; ClientIP routes requests from the same source IP to the same backend Pod; defaults to None
      type: NodePort               # Service type, i.e. how the service is accessed; one of ClusterIP (default), NodePort, LoadBalancer, ExternalName
    status:
      loadBalancer: {}             # external load balancer address when spec.type=LoadBalancer (public cloud environments)

    4. Service Types

    A Service in Kubernetes has the following four common types:

    4.1 ClusterIP

    The default type. The Service is automatically assigned a virtual IP (VIP) that is reachable only from inside the cluster.

    4.2 NodePort

    Opens a port on every node to expose the service, so it can be reached from outside the cluster; a stable internal ClusterIP is also assigned.

    Access address: <any node's IP>:<nodePort>

    Default nodePort range: 30000-32767
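    A minimal NodePort manifest might look like this (a sketch: the nodePort value 30080 is an arbitrary choice from the range above, and omitting the field lets Kubernetes pick one automatically):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx          # Pods labeled app=nginx back this Service
  ports:
  - port: 80            # Service (ClusterIP) port
    targetPort: 80      # container port on the Pods
    nodePort: 30080     # optional; must fall in 30000-32767
```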

    4.3 LoadBalancer

    Like NodePort, a port is opened on every node to expose the service. In addition, Kubernetes asks the underlying cloud platform (e.g. Alibaba Cloud, Tencent Cloud, AWS) for a load balancer and adds each node ([NodeIP]:[NodePort]) as a backend. This mode requires external cloud support and is intended for public cloud environments.

    4.4 ExternalName

    An ExternalName Service brings a service outside the cluster into it: the externalName field specifies an external address, and accessing this Service from inside the cluster reaches that external service.

    5. Using Services

    5.1 ClusterIP

    5.1.1 Define the Pod manifest

    vim clusterip-pod.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
              protocol: TCP

    5.1.2 Create the Pods

    $ kubectl apply -f clusterip-pod.yaml
    deployment.apps/nginx created
    $ kubectl get pod -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    nginx-7f456874f4-c7224 1/1 Running 0 13s 192.168.1.3 node01
    nginx-7f456874f4-v8pnz 1/1 Running 0 13s 192.168.0.6 controlplane
    # send a request from any node
    $ curl 192.168.1.3
    Welcome to nginx!
    ...                          # default nginx welcome page, trimmed
    # send a request from any node
    $ curl 192.168.0.6
    Welcome to nginx!
    ...                          # default nginx welcome page, trimmed

    Both Pods serve the same default nginx page, so to make testing easier we enter each Pod and change its index page:

    $ kubectl get pod -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    nginx-7f456874f4-c7224 1/1 Running 0 2m39s 192.168.1.3 node01
    nginx-7f456874f4-v8pnz 1/1 Running 0 2m39s 192.168.0.6 controlplane
    $ kubectl exec -it nginx-7f456874f4-c7224 /bin/sh
    kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
    # echo "hello, this request is from 192.168.1.3" > /usr/share/nginx/html/index.html
    # cat /usr/share/nginx/html/index.html
    hello, this request is from 192.168.1.3
    # exit
    $ kubectl exec -it nginx-7f456874f4-v8pnz /bin/sh
    kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
    # echo "hello, this request is from 192.168.0.6" > /usr/share/nginx/html/index.html
    # cat /usr/share/nginx/html/index.html
    hello, this request is from 192.168.0.6
    # exit
    # send requests again
    $ curl 192.168.1.3
    hello, this request is from 192.168.1.3
    $ curl 192.168.0.6
    hello, this request is from 192.168.0.6

    5.1.3 Define the Service manifest

    vim clusterip-svc.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      selector:
        app: nginx
      type: ClusterIP
      # clusterIP: 10.109.113.53   # Service IP; auto-assigned if omitted
      ports:
      - port: 80            # Service port
        targetPort: 80      # target Pod port

    5.1.4 Create the Service

    $ kubectl apply -f clusterip-svc.yaml
    service/nginx created
    $ kubectl get svc -o wide
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
    kubernetes ClusterIP 10.96.0.1 443/TCP 14d
    nginx ClusterIP 10.109.113.53 80/TCP 15s app=nginx
    $ kubectl get pod -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    nginx-7f456874f4-c7224 1/1 Running 0 11m 192.168.1.3 node01
    nginx-7f456874f4-v8pnz 1/1 Running 0 11m 192.168.0.6 controlplane
    # describe the svc; Endpoints lists the real backend Pod addresses
    $ kubectl describe svc nginx
    Name: nginx
    Namespace: default
    Labels:
    Annotations:
    Selector: app=nginx
    Type: ClusterIP
    IP Family Policy: SingleStack
    IP Families: IPv4
    IP: 10.109.113.53
    IPs: 10.109.113.53
    Port: 80/TCP
    TargetPort: 80/TCP
    Endpoints: 192.168.0.6:80,192.168.1.3:80 # the real backend Pods (Pod IP + target port)
    Session Affinity: None
    Events:

    As shown above, the assigned ClusterIP is 10.109.113.53, and the service can be accessed through it:

    $ curl 10.109.113.53:80
    hello, this request is from 192.168.0.6
    $ curl 10.109.113.53:80
    hello, this request is from 192.168.1.3

    Two requests are load-balanced to two different Pods: this is the load-balancing function a Service provides.
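    The alternating behavior above can be sketched as a round-robin rotation over the Service's endpoints. This is an illustration, not kube-proxy's actual code; IPVS's default rr scheduler behaves this way, while the iptables mode (covered in section 6) actually picks a backend at random.

```python
from itertools import cycle

# Endpoints behind the nginx Service shown above
endpoints = ["192.168.0.6:80", "192.168.1.3:80"]
backend = cycle(endpoints)  # round-robin iterator over the endpoints

for _ in range(4):
    print(next(backend))  # alternates between the two endpoints
```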

    5.2 NodePort

    $ kubectl create deployment nginx --image=nginx
    # expose the port with Service type NodePort so the service is reachable from outside the cluster
    $ kubectl expose deployment nginx --port=80 --type=NodePort
    service/nginx exposed
    $ kubectl get pod -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    nginx-748c667d99-5mf49 1/1 Running 0 6m13s 192.168.1.6 node01
    $ kubectl get svc -o wide
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
    kubernetes ClusterIP 10.96.0.1 443/TCP 14d
    nginx NodePort 10.98.224.65 80:31853/TCP 6m13s app=nginx

    The nginx Service was assigned ClusterIP 10.98.224.65 with service port 80, and Kubernetes picked a random nodePort, 31853. The nginx service can now be reached in the following ways:

    5.2.1 Access from inside the cluster via ClusterIP + service port

    # ClusterIP: 10.98.224.65, service port: 80
    $ curl 10.98.224.65:80
    Welcome to nginx!
    ...                          # default nginx welcome page, trimmed

    5.2.2 Access from outside the cluster via NodeIP + nodePort

    # node IP: 192.168.1.33, nodePort: 31853
    $ curl 192.168.1.33:31853
    Welcome to nginx!
    ...                          # default nginx welcome page, trimmed

    5.2.3 Access from inside the cluster via Pod IP + container port

    $ kubectl get pod -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    nginx-748c667d99-5mf49 1/1 Running 0 11m 192.168.1.6 node01
    $ kubectl get svc -o wide
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
    kubernetes ClusterIP 10.96.0.1 443/TCP 14d
    nginx NodePort 10.98.224.65 80:31853/TCP 11m app=nginx
    $ kubectl get ep
    NAME ENDPOINTS AGE
    kubernetes 172.30.1.2:6443 14d
    nginx 192.168.1.6:80 11m
    # Pod IP: 192.168.1.6, container port: 80
    $ curl 192.168.1.6:80
    Welcome to nginx!
    ...                          # default nginx welcome page, trimmed

    5.3 ExternalName

    Create an ExternalName Service:

    vim externalname-service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: externalname-service
    spec:
      type: ExternalName           # Service type is ExternalName
      externalName: www.baidu.com

    $ kubectl apply -f externalname-service.yaml
    service/externalname-service created
    $ kubectl get svc -o wide
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
    externalname-service ExternalName www.baidu.com 7s
    kubernetes ClusterIP 10.96.0.1 443/TCP 17d
    $ kubectl describe svc externalname-service
    Name: externalname-service
    Namespace: default
    Labels:
    Annotations:
    Selector:
    Type: ExternalName
    IP Families:
    IP:
    IPs:
    External Name: www.baidu.com
    Session Affinity: None
    Events:
    # DNS lookup of the service name
    controlplane $ dig @10.96.0.10 externalname-service.default.svc.cluster.local
    ; <<>> DiG 9.16.1-Ubuntu <<>> @10.96.0.10 externalname-service.default.svc.cluster.local
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; WARNING: .local is reserved for Multicast DNS
    ;; You are currently testing what happens when an mDNS query is leaked to DNS
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31598
    ;; flags: qr aa rd; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
    ;; WARNING: recursion requested but not available
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ; COOKIE: 085781a8a4291b6d (echoed)
    ;; QUESTION SECTION:
    ;externalname-service.default.svc.cluster.local. IN A
    ;; ANSWER SECTION:
    externalname-service.default.svc.cluster.local. 7 IN CNAME www.baidu.com.
    www.baidu.com. 7 IN CNAME www.a.shifen.com.
    www.a.shifen.com. 7 IN CNAME www.wshifen.com.
    www.wshifen.com. 7 IN A 103.235.46.40
    ;; Query time: 4 msec
    ;; SERVER: 10.96.0.10#53(10.96.0.10)
    ;; WHEN: Mon Jan 09 09:43:29 UTC 2023
    ;; MSG SIZE rcvd: 279

    6. Service Proxy Modes

    6.1 userspace proxy mode

    In this mode kube-proxy opens a listening port for every Service. Requests sent to the ClusterIP are redirected by iptables to that listening port, and kube-proxy forwards each request to a Pod according to its load-balancing algorithm.

    kube-proxy thus acts as a layer-4 load balancer. Because it runs in user space, forwarding adds extra data copies between kernel space and user space; the mode is stable but very inefficient.

    6.2 iptables proxy mode

    In iptables mode, kube-proxy creates iptables rules for each backend Pod, and requests sent to the ClusterIP are forwarded straight to a Pod by the kernel.

    Here kube-proxy no longer acts as a load balancer itself; it only maintains the forwarding rules. This mode is more efficient than userspace mode, but it cannot apply flexible load-balancing policies and cannot retry when a backend Pod is unavailable.
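    The iptables rules kube-proxy generates use the statistic match in random mode: with n endpoints, rule i matches with probability 1/(n-i), which works out to a uniform random choice. A sketch of that selection logic (an illustration only; the endpoint addresses are placeholders):

```python
import random

def iptables_select(endpoints, rng=random):
    """Mimic iptables statistic-mode-random rules: endpoint i is
    matched with probability 1/(n-i); the last rule always matches.
    Each endpoint ends up chosen with probability 1/n."""
    n = len(endpoints)
    for i, ep in enumerate(endpoints):
        if rng.random() < 1.0 / (n - i):
            return ep
    return endpoints[-1]  # unreachable: the last probability is 1

rng = random.Random(42)
counts = {"192.168.0.6:80": 0, "192.168.1.3:80": 0}
for _ in range(10_000):
    counts[iptables_select(list(counts), rng)] += 1
print(counts)  # roughly 5000 / 5000
```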

    6.3 IPVS proxy mode

    IPVS mode is similar to iptables mode: kube-proxy creates IPVS forwarding rules as Pods change. Compared with iptables, IPVS also runs in the kernel but synchronizes proxy rules with better performance, delivers higher network throughput, scales better for large clusters, and offers many load-balancing algorithms.

    Note: when kube-proxy starts in IPVS mode, it checks that the IPVS kernel modules are available. If they are not detected, kube-proxy falls back to iptables mode.

    The following steps switch the proxy mode to IPVS.

    6.3.1 Load the ipvs kernel modules

    modprobe ip_vs
    modprobe ip_vs_rr
    modprobe ip_vs_wrr
    modprobe ip_vs_sh
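    The modules loaded above correspond to IPVS schedulers: ip_vs_rr is round-robin, ip_vs_wrr weighted round-robin, and ip_vs_sh source hashing, which pins each client IP to one backend. A rough sketch of the source-hashing idea (an illustration; real IPVS uses its own in-kernel hash table, not MD5):

```python
import hashlib

def source_hash(client_ip, backends):
    # Hash the client address and map it onto a backend, so the
    # same client IP always reaches the same backend.
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return backends[h % len(backends)]

backends = ["192.168.0.6:80", "192.168.1.3:80"]
print(source_hash("10.0.0.7", backends))  # stable for this client IP
```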

    6.3.2 Edit the configuration

    This actually means editing kube-proxy's ConfigMap:

    $ kubectl get cm -n kube-system | grep kube-proxy
    NAME DATA AGE
    kube-proxy 2 14d
    $ kubectl edit cm kube-proxy -n kube-system
    configmap/kube-proxy edited

    Change the value of mode to ipvs, as shown in the following figure:

    6.3.3 Delete the existing kube-proxy Pods

    $ kubectl get pod -A | grep kube-proxy
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system kube-proxy-xnz4r 1/1 Running 0 14d
    kube-system kube-proxy-zbxrb 1/1 Running 0 14d
    $ kubectl delete pod kube-proxy-xnz4r -n kube-system
    pod "kube-proxy-xnz4r" deleted
    $ kubectl delete pod kube-proxy-zbxrb -n kube-system
    pod "kube-proxy-zbxrb" deleted

    After deletion, Kubernetes automatically restarts the kube-proxy Pods:

    $ kubectl get pod -A | grep kube-proxy
    kube-system kube-proxy-hvnmt 1/1 Running 0 6m35s
    kube-system kube-proxy-zp6z5 1/1 Running 0 6m20s

    6.3.4 Inspect the IPVS rules

    $ ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port Forward Weight ActiveConn InActConn
    TCP 10.96.0.1:443 rr
      -> 172.30.1.2:6443 Masq 1 1 0
    TCP 10.96.0.10:53 rr
      -> 192.168.0.5:53 Masq 1 0 0
      -> 192.168.1.2:53 Masq 1 0 0
    TCP 10.96.0.10:9153 rr
      -> 192.168.0.5:9153 Masq 1 0 0
      -> 192.168.1.2:9153 Masq 1 0 0
    TCP 10.99.58.167:80 rr
      -> 192.168.0.6:80 Masq 1 0 0
      -> 192.168.1.3:80 Masq 1 0 0
    UDP 10.96.0.10:53 rr
      -> 192.168.0.5:53 Masq 1 0 0
      -> 192.168.1.2:53 Masq 1 0 0

    As shown above, a set of IPVS forwarding rules has been generated.

    7. Headless Services

    In some scenarios, developers do not want the load balancing a Service provides and prefer to control the balancing policy themselves. For these cases Kubernetes offers the headless Service: it is not assigned a ClusterIP, so it can only be reached by querying the Service's domain name.
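    Since a headless Service has no ClusterIP, the DNS query returns the Pod IPs directly and the client applies whatever policy it likes. A minimal client-side sketch (the Pod IPs below are placeholders standing in for the A records a real lookup would return):

```python
import random

# A records returned for the headless service name (placeholders)
pod_ips = ["192.168.0.6", "192.168.1.3"]

def pick_backend(addrs, rng=random):
    # The custom policy goes here; random choice is the simplest.
    return rng.choice(addrs)

print(pick_backend(pod_ips))
```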

    7.1 Create the Pods

    vim headliness-pod.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
              protocol: TCP

    $ kubectl apply -f headliness-pod.yaml
    deployment.apps/nginx created
    $ kubectl get pod -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    nginx-7f456874f4-225j6 1/1 Running 0 16s 192.168.1.3 node01
    nginx-7f456874f4-r7csn 1/1 Running 0 16s 192.168.0.6 controlplane

    7.2 Create the Service

    vim headliness-service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: headliness-service
    spec:
      selector:
        app: nginx
      clusterIP: None       # setting clusterIP to None creates a headless Service
      type: ClusterIP
      ports:
      - port: 80            # Service port
        targetPort: 80      # Pod port

    $ kubectl get pod -o wide --show-labels
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
    nginx-7f456874f4-225j6 1/1 Running 0 103s 192.168.1.3 node01 app=nginx,pod-template-hash=7f456874f4
    nginx-7f456874f4-r7csn 1/1 Running 0 103s 192.168.0.6 controlplane app=nginx,pod-template-hash=7f456874f4
    controlplane $ kubectl apply -f headliness-service.yaml
    service/headliness-service created
    controlplane $ kubectl get svc -o wide --show-labels
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR LABELS
    headliness-service ClusterIP None 80/TCP 10s app=nginx
    kubernetes ClusterIP 10.96.0.1 443/TCP 17d component=apiserver,provider=kubernetes
    # describe the service
    $ kubectl describe service headliness-service
    Name: headliness-service
    Namespace: default
    Labels:
    Annotations:
    Selector: app=nginx
    Type: ClusterIP
    IP Family Policy: SingleStack
    IP Families: IPv4
    IP: None
    IPs: None
    Port: 80/TCP
    TargetPort: 80/TCP
    Endpoints: 192.168.0.6:80,192.168.1.3:80 # Pods able to serve traffic
    Session Affinity: None
    Events:

    7.3 Enter a container and look up the service domain name

    # list the Pods
    $ kubectl get pod -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    nginx-7f456874f4-225j6 1/1 Running 0 4m37s 192.168.1.3 node01
    nginx-7f456874f4-r7csn 1/1 Running 0 4m37s 192.168.0.6 controlplane
    # enter a Pod
    $ kubectl exec nginx-7f456874f4-r7csn -it -- bin/bash
    # inspect DNS configuration
    root@nginx-7f456874f4-r7csn:/# cat /etc/resolv.conf
    search default.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10
    options ndots:5
    # the service resolves by domain name; the default form is <service-name>.<namespace>.svc.cluster.local
    # inside the Pod we can curl it
    root@nginx-7f456874f4-r7csn:/# curl headliness-service.default.svc.cluster.local
    Welcome to nginx!
    ...                          # default nginx welcome page, trimmed
    root@nginx-7f456874f4-r7csn:/# exit
    exit
    # query the DNS server directly
    $ apt install bind9-utils
    $ dig @10.96.0.10 headliness-service.default.svc.cluster.local
    ; <<>> DiG 9.16.1-Ubuntu <<>> @10.96.0.10 headliness-service.default.svc.cluster.local
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; WARNING: .local is reserved for Multicast DNS
    ;; You are currently testing what happens when an mDNS query is leaked to DNS
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59994
    ;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
    ;; WARNING: recursion requested but not available
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ; COOKIE: b633b51a4ca3a653 (echoed)
    ;; QUESTION SECTION:
    ;headliness-service.default.svc.cluster.local. IN A
    ;; ANSWER SECTION:
    headliness-service.default.svc.cluster.local. 30 IN A 192.168.1.3
    headliness-service.default.svc.cluster.local. 30 IN A 192.168.0.6
    ;; Query time: 0 msec
    ;; SERVER: 10.96.0.10#53(10.96.0.10)
    ;; WHEN: Mon Jan 09 09:
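    The resolv.conf shown above (a search list plus options ndots:5) also explains why the short name resolves inside the Pod: any name with fewer than five dots is tried against each search domain first. A sketch of that expansion, assuming glibc-style ordering:

```python
# Search list and ndots from the Pod's /etc/resolv.conf above
SEARCH = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]
NDOTS = 5

def candidates(name):
    """Return the lookup candidates the resolver would try, in order."""
    if name.endswith("."):
        return [name]                       # already fully qualified
    if name.count(".") >= NDOTS:
        return [name] + [f"{name}.{d}" for d in SEARCH]
    return [f"{name}.{d}" for d in SEARCH] + [name]

print(candidates("headliness-service"))
# the first candidate is headliness-service.default.svc.cluster.local
```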

  • Source: https://blog.csdn.net/Weixiaohuai/article/details/133036880