k8s(3)


    Table of Contents

    I. The Three K8S Networks

    flannel's three modes:

    On node01:

    Calico's three modes:

    How do flannel and calico differ?

    II. CoreDNS

    On all worker nodes:

    On master01:

    DNS resolution test:


    I. The Three K8S Networks

    Node network      nodeIP       Nodes communicate with one another via the IPs of their physical NICs
    Pod network       podIP        Pods communicate with one another via their Pod IPs
    Service network   clusterIP    Within the K8S cluster, a Service's clusterIP proxies and forwards traffic to its backing Pods

    Differences between VLAN and VXLAN:
    1) Purpose: VLAN is mainly used to logically partition broadcast domains on switches, and can work with STP (Spanning Tree Protocol) to block redundant ports, preventing loops and broadcast storms.
       VXLAN encapsulates Ethernet frames in UDP packets and carries them across Layer 3 networks, building a virtual "large Layer 2" network.
    2) VXLAN supports far more Layer 2 segments: up to 2^24 VNIs, versus at most 2^12 (4096) VLAN IDs, of which 4094 are usable (2 are reserved).
    3) VXLAN prevents MAC table exhaustion on physical switches: with VLAN, every MAC address must be learned into the switch's MAC table; VXLAN uses tunneling, so inner MAC addresses need not be recorded on the physical switches.
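
    To make the VXLAN encapsulation concrete, here is a minimal iproute2 sketch of creating a standalone VXLAN device by hand (the device name, VNI 100, peer IP, and eth0 are illustrative, not taken from this cluster):

    # VNI 100, tunneling over eth0 to a peer node; 4789 is the IANA VXLAN port
    # (the Linux kernel default, which flannel uses, is 8472)
    ip link add vxlan100 type vxlan id 100 remote 192.168.233.20 dstport 4789 dev eth0
    ip addr add 172.16.0.1/24 dev vxlan100
    ip link set vxlan100 up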

    flannel's three modes:


    UDP      The earliest mode and the worst-performing; packet encapsulation/decapsulation is done by the flanneld userspace process
    VXLAN    flannel's default and recommended mode, with better performance than UDP; frame encapsulation/decapsulation is done in the kernel, and it is simple to configure and easy to use
    HOST-GW  The best-performing mode, but its configuration is complex and it cannot cross subnets

    How flannel's UDP mode works:
    1) The original packet leaves the source host's Pod container, reaches the cni0 bridge interface, and is forwarded by cni0 to the flannel0 virtual interface
    2) The flanneld service process listens on the flannel0 interface and wraps each original packet in a UDP message
    3) Using the routing table it maintains in etcd, flanneld looks up the nodeIP of the node hosting the target Pod, wraps the UDP message with nodeIP and MAC headers, and sends it out through the physical NIC to the target node
    4) The UDP message reaches the target node's flanneld process on port 8285 and is decapsulated; local routing rules then deliver it through the flannel0 interface to the cni0 bridge, and cni0 passes it to the target Pod container
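
    A quick way to verify the UDP backend on a node (a sketch; flannel0 and UDP port 8285 are the flannel defaults described above):

    ip addr show flannel0    # the TUN device created by the UDP backend
    ss -lnpu | grep 8285     # flanneld listening on UDP port 8285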

    On node01:

    Upload cni-plugins-linux-amd64-v1.3.0.tgz, flannel.tar, and flannel-cni-plugin.tar to the /opt directory:

    docker load -i flannel-cni-plugin.tar

    docker load -i flannel.tar

    tar xf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin
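
    If /opt/cni/bin does not already exist on the node, create it before extracting (a small precaution; the path is the CNI plugin directory used by the command above):

    mkdir -p /opt/cni/bin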

    Copy the kube-flannel.yml file to the /opt/k8s directory and deploy the CNI network:

    scp kube-flannel.yml 192.168.233.10:/opt/k8s/

    On master01:

    kubectl apply -f kube-flannel.yml 

    kubectl get pods -n kube-flannel

    kubectl get pods -n kube-flannel -o wide

    Check the node IPs:
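
    A standard way to list them (the INTERNAL-IP column shows each node's IP):

    kubectl get nodes -o wide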

    Check the network mode used by the manifest on the master (see the Backend.Type note after the listing):

    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        k8s-app: flannel
        pod-security.kubernetes.io/enforce: privileged
      name: kube-flannel
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: flannel
      name: flannel
      namespace: kube-flannel
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: flannel
      name: flannel
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes/status
      verbs:
      - patch
    - apiGroups:
      - networking.k8s.io
      resources:
      - clustercidrs
      verbs:
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: flannel
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-flannel
    ---
    apiVersion: v1
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    kind: ConfigMap
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
      name: kube-flannel-cfg
      namespace: kube-flannel
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
      name: kube-flannel-ds
      namespace: kube-flannel
    spec:
      selector:
        matchLabels:
          app: flannel
          k8s-app: flannel
      template:
        metadata:
          labels:
            app: flannel
            k8s-app: flannel
            tier: node
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                    - linux
          containers:
          - args:
            - --ip-masq
            - --kube-subnet-mgr
            command:
            - /opt/bin/flanneld
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: EVENT_QUEUE_DEPTH
              value: "5000"
            image: docker.io/flannel/flannel:v0.21.5
            name: kube-flannel
            resources:
              requests:
                cpu: 100m
                memory: 50Mi
            securityContext:
              capabilities:
                add:
                - NET_ADMIN
                - NET_RAW
              privileged: false
            volumeMounts:
            - mountPath: /run/flannel
              name: run
            - mountPath: /etc/kube-flannel/
              name: flannel-cfg
            - mountPath: /run/xtables.lock
              name: xtables-lock
          hostNetwork: true
          initContainers:
          - args:
            - -f
            - /flannel
            - /opt/cni/bin/flannel
            command:
            - cp
            image: docker.io/flannel/flannel-cni-plugin:v1.1.2
            name: install-cni-plugin
            volumeMounts:
            - mountPath: /opt/cni/bin
              name: cni-plugin
          - args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            command:
            - cp
            image: docker.io/flannel/flannel:v0.21.5
            name: install-cni
            volumeMounts:
            - mountPath: /etc/cni/net.d
              name: cni
            - mountPath: /etc/kube-flannel/
              name: flannel-cfg
          priorityClassName: system-node-critical
          serviceAccountName: flannel
          tolerations:
          - effect: NoSchedule
            operator: Exists
          volumes:
          - hostPath:
              path: /run/flannel
            name: run
          - hostPath:
              path: /opt/cni/bin
            name: cni-plugin
          - hostPath:
              path: /etc/cni/net.d
            name: cni
          - configMap:
              name: kube-flannel-cfg
            name: flannel-cfg
          - hostPath:
              path: /run/xtables.lock
              type: FileOrCreate
            name: xtables-lock
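
    The mode is set by the Backend.Type field in net-conf.json above ("vxlan" here). One way to confirm it on a running cluster (a sketch using standard kubectl; the ConfigMap name comes from the manifest):

    kubectl get configmap kube-flannel-cfg -n kube-flannel -o yaml | grep -A 3 Backend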


    How flannel's VXLAN mode works:

    1) The original frame leaves the source host's Pod container, reaches the cni0 bridge interface, and is forwarded by cni0 to the flannel.1 virtual interface
    2) After receiving the frame, the flannel.1 interface adds a VXLAN header, and the kernel wraps the original frame in a UDP message
    3) Using the routing table maintained in etcd, the nodeIP of the node hosting the target Pod is looked up; nodeIP and MAC headers are added outside the UDP message, which is then sent through the physical NIC to the target node
    4) The UDP message reaches the target node's flannel.1 interface on port 8472 and is decapsulated in the kernel; local routing rules then deliver it to the cni0 bridge, and cni0 passes it to the target Pod container
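
    To observe the VXLAN mode on a node (a sketch; eth0 stands in for the node's physical NIC):

    ip -d link show flannel.1            # prints the VXLAN details, e.g. "vxlan id 1 ... dstport 8472"
    tcpdump -i eth0 -nn udp port 8472    # captures the encapsulated traffic between nodes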

    Calico's three modes:

    How Calico's IPIP mode works:

    1) The original packet leaves the source host's Pod container through a veth pair device to the tunl0 interface, where the kernel's IPIP driver encapsulates it in an IP packet of the node network
    2) Following the routing rules maintained by Felix, the packet is sent through the physical NIC to the target node
    3) When the IP packet arrives at the target node's tunl0 interface, the kernel's IPIP driver decapsulates it to recover the original packet, and local routing rules deliver it through a veth pair device to the target Pod container
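
    To inspect the IPIP setup on a node (a sketch using standard iproute2):

    ip -d link show tunl0    # the IPIP tunnel device used by calico
    ip route | grep tunl0    # per-destination routes via tunl0, installed by Felix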

    How Calico's BGP mode works (essentially, Pod-to-Pod communication is implemented purely with routing rules):
    Every Pod container has a veth pair device, one end inside the container and the other in the host network namespace, along with a corresponding route.
    These routes are configured and maintained by Felix, and the BIRD component distributes the routing information to other nodes via the BGP dynamic routing protocol.
    1) The original packet leaves the source host's Pod container through its veth pair device into the host network namespace
    2) Following the routing rules maintained by Felix, the packet is sent through the physical NIC to the target node
    3) After the target node receives the packet, local routing rules deliver it through a veth pair device to the target Pod container
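
    If the calicoctl CLI is installed (an assumption; it is not deployed in this walkthrough), the BGP peering state can be checked with it, and the routes distributed by BIRD are visible in the kernel routing table:

    calicoctl node status    # shows BGP peers and their session state
    ip route | grep bird     # routes learned via BGP are tagged "proto bird"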

    How do flannel and calico differ?

    flannel: UDP, VXLAN, HOST-GW
    Default network: 10.244.0.0/16
    VXLAN mode is usually chosen; it is an overlay network that transports data through an IP tunnel, which costs some performance.
    Flannel is mature, has few dependencies, is easy to install, and is simple to configure and manage, but it cannot express complex network policies.

    calico: IPIP, BGP, mixed mode (CrossSubnet)
    Default network: 192.168.0.0/16
    IPIP mode can cross subnets, but the extra encapsulation/decapsulation on the data path costs some performance.
    BGP mode treats each node as a router: Felix and BIRD maintain and distribute the routing rules, so forwarding happens directly via the BGP routing protocol with no extra encapsulation/decapsulation, giving better performance; however, it only works within a single subnet and cannot cross subnets.
    Calico does not use a cni0 bridge; it sends packets directly to the target host via routing rules, so performance is high. It also offers much richer network policy configuration and management, making it more fully featured but more complex to maintain.

    So for smaller K8S clusters with simple networking requirements, flannel is a good choice of CNI plugin. For larger clusters that need more network policy configuration, consider the better-performing and more fully featured calico or cilium.

    II. CoreDNS

    CoreDNS is the default implementation of in-cluster DNS in K8S; it provides DNS resolution for Pods within the cluster.
    It resolves a Service resource name to the corresponding clusterIP.
    It resolves the name of a Pod created by a StatefulSet controller to the corresponding podIP.
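
    For example, from inside any Pod, Service names follow the pattern <service>.<namespace>.svc.cluster.local (the service and namespace names below are hypothetical):

    nslookup my-svc.my-namespace.svc.cluster.local                  # Service name -> clusterIP
    nslookup web-0.my-headless-svc.my-namespace.svc.cluster.local   # StatefulSet Pod name -> podIP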

    On all worker nodes:

    Upload coredns.tar to the /opt directory:

    docker load -i coredns.tar

    On master01:

    Upload the coredns.yaml file to the /opt/k8s directory and deploy CoreDNS:

    # __MACHINE_GENERATED_WARNING__
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
        addonmanager.kubernetes.io/mode: Reconcile
      name: system:coredns
    rules:
    - apiGroups:
      - ""
      resources:
      - endpoints
      - services
      - pods
      - namespaces
      verbs:
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - get
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
        addonmanager.kubernetes.io/mode: EnsureExists
      name: system:coredns
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:coredns
    subjects:
    - kind: ServiceAccount
      name: coredns
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        addonmanager.kubernetes.io/mode: EnsureExists
    data:
      Corefile: |
        .:53 {
            errors
            health {
                lameduck 5s
            }
            ready
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                fallthrough in-addr.arpa ip6.arpa
                ttl 30
            }
            prometheus :9153
            forward . /etc/resolv.conf {
                max_concurrent 1000
            }
            cache 30
            loop
            reload
            loadbalance
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/name: "CoreDNS"
    spec:
      # replicas: not specified here:
      # 1. In order to make Addon Manager do not reconcile this replicas parameter.
      # 2. Default is 1.
      # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
      selector:
        matchLabels:
          k8s-app: kube-dns
      template:
        metadata:
          labels:
            k8s-app: kube-dns
        spec:
          securityContext:
            seccompProfile:
              type: RuntimeDefault
          priorityClassName: system-cluster-critical
          serviceAccountName: coredns
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                    - key: k8s-app
                      operator: In
                      values: ["kube-dns"]
                  topologyKey: kubernetes.io/hostname
          tolerations:
          - key: "CriticalAddonsOnly"
            operator: "Exists"
          nodeSelector:
            kubernetes.io/os: linux
          containers:
          - name: coredns
            image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
            imagePullPolicy: IfNotPresent
            resources:
              limits:
                memory: 170Mi
              requests:
                cpu: 100m
                memory: 70Mi
            args: [ "-conf", "/etc/coredns/Corefile" ]
            volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
              readOnly: true
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
            readinessProbe:
              httpGet:
                path: /ready
                port: 8181
                scheme: HTTP
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                add:
                - NET_BIND_SERVICE
                drop:
                - all
              readOnlyRootFilesystem: true
          dnsPolicy: Default
          volumes:
          - name: config-volume
            configMap:
              name: coredns
              items:
              - key: Corefile
                path: Corefile
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns
      namespace: kube-system
      annotations:
        prometheus.io/port: "9153"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/name: "CoreDNS"
    spec:
      selector:
        k8s-app: kube-dns
      clusterIP: 10.0.0.2
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
      - name: metrics
        port: 9153
        protocol: TCP

    kubectl apply -f coredns.yaml

    kubectl get pods -n kube-system 

    DNS resolution test:

    kubectl run -it --rm dns-test --image=busybox:1.28.4 sh

    kubectl get svc
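
    Inside the busybox Pod's shell, a typical resolution check (a sketch; the kubernetes Service always exists in the default namespace, and the answer should come from the kube-dns clusterIP 10.0.0.2 defined above):

    nslookup kubernetes.default.svc.cluster.local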

  Original article: https://blog.csdn.net/2302_79748698/article/details/136236177