Istio in Practice (6) - Istio Deployment


    Methods for deploying Istio

    1. Istioctl
    2. Istio Operator
    3. Helm (a rough sketch of this approach follows below; the rest of this article uses istioctl)
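
    For reference, a Helm-based install roughly follows the sketch below. This is a hedged outline only and is not used later in this article; the repository URL and chart names (base, istiod, gateway) follow the upstream Istio Helm charts and may differ between versions.

        # helm repo add istio https://istio-release.storage.googleapis.com/charts
        # helm repo update
        # kubectl create namespace istio-system
        # helm install istio-base istio/base -n istio-system          # CRDs and cluster-wide resources
        # helm install istiod istio/istiod -n istio-system --wait     # control plane
        # helm install istio-ingress istio/gateway -n istio-system    # optional ingress gateway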

    4.2 Deploying with istioctl

        # mkdir /apps
        # cd /apps
        # curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.13.3 TARGET_ARCH=x86_64 sh -
        # ln -sf /apps/istio-1.13.3 /apps/istio
        # ln -sf /apps/istio/bin/istioctl /usr/bin/
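
    Before running the installation it can help to sanity-check the downloaded client and the target cluster. The commands below are a suggested check only; their output is omitted here.

        # istioctl version
        # istioctl x precheck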

    4.3 Viewing the built-in configuration profiles

        # istioctl profile list
        Istio configuration profiles:
            default
            demo
            empty
            external
            minimal
            openshift
            preview
            remote
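
    To see exactly what a given profile enables before installing it, the profile can be dumped or compared against another one (a hedged example; the output depends on the Istio version):

        # istioctl profile dump demo
        # istioctl profile diff default demo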

    4.4 Deploying Istio into the cluster

        # istioctl install -s profile=demo -y
        ✔ Istio core installed
        ✔ Istiod installed
        ✔ Ingress gateways installed
        ✔ Egress gateways installed
        ✔ Installation complete
        Making this installation the default for injection and validation.
        Thank you for installing Istio 1.13. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/pzWZpAvMVBecaQ9h9
        # kubectl get pods -n istio-system
        NAME READY STATUS RESTARTS AGE
        istio-egressgateway-5c597cdb77-48bj8 1/1 Running 0 95s
        istio-ingressgateway-8d7d49b55-bmbx5 1/1 Running 0 95s
        istiod-54c54679d7-78f2d 1/1 Running 0 119s
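
    Optionally, the installation can be verified against the recorded IstioOperator state (a hedged suggestion; the exact checks vary by version):

        # istioctl verify-install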

    4.5 Installing the addons

        # kubectl apply -f samples/addons/
        serviceaccount/grafana created
        configmap/grafana created
        service/grafana created
        deployment.apps/grafana created
        configmap/istio-grafana-dashboards created
        configmap/istio-services-grafana-dashboards created
        deployment.apps/jaeger created
        service/tracing created
        service/zipkin created
        service/jaeger-collector created
        serviceaccount/kiali created
        configmap/kiali created
        clusterrole.rbac.authorization.k8s.io/kiali-viewer created
        clusterrole.rbac.authorization.k8s.io/kiali created
        clusterrolebinding.rbac.authorization.k8s.io/kiali created
        role.rbac.authorization.k8s.io/kiali-controlplane created
        rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
        service/kiali created
        deployment.apps/kiali created
        serviceaccount/prometheus created
        configmap/prometheus created
        clusterrole.rbac.authorization.k8s.io/prometheus configured
        clusterrolebinding.rbac.authorization.k8s.io/prometheus configured
        service/prometheus created
        deployment.apps/prometheus created
        # kubectl get svc -n istio-system
        NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
        grafana ClusterIP 10.200.62.191 <none> 3000/TCP 70s
        istio-egressgateway ClusterIP 10.200.29.127 <none> 80/TCP,443/TCP 8m58s
        istio-ingressgateway LoadBalancer 10.200.239.76 <pending> 15021:49708/TCP,80:42116/TCP,443:61970/TCP,31400:56204/TCP,15443:45720/TCP 8m58s
        istiod ClusterIP 10.200.249.97 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 9m23s
        jaeger-collector ClusterIP 10.200.4.90 <none> 14268/TCP,14250/TCP,9411/TCP 70s
        kiali ClusterIP 10.200.132.85 <none> 20001/TCP,9090/TCP 69s
        prometheus ClusterIP 10.200.99.200 <none> 9090/TCP 69s
        tracing ClusterIP 10.200.160.255 <none> 80/TCP,16685/TCP 70s
        zipkin ClusterIP 10.200.92.91 <none> 9411/TCP 70s
        # kubectl get pods -n istio-system
        NAME READY STATUS RESTARTS AGE
        grafana-6c5dc6df7c-96cwc 1/1 Running 0 2m31s
        istio-egressgateway-5c597cdb77-48bj8 1/1 Running 0 10m
        istio-ingressgateway-8d7d49b55-bmbx5 1/1 Running 0 10m
        istiod-54c54679d7-78f2d 1/1 Running 0 10m
        jaeger-9dd685668-hmg9c 1/1 Running 0 2m31s
        kiali-699f98c497-wjtwl 1/1 Running 0 2m30s
        prometheus-699b7cc575-5d7gk 2/2 Running 0 2m30s

    4.6 Creating a namespace

        root@k8s-master-01:/apps/istio# kubectl create ns hr
        namespace/hr created
        root@k8s-master-01:/apps/istio# kubectl label ns hr istio-injection=enabled
        namespace/hr labeled
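
    A quick way to confirm which namespaces have automatic sidecar injection enabled is to print the label as a column:

        # kubectl get ns -L istio-injection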

    4.7 Deploying a test image

    Start a test pod; you can see that a sidecar container is automatically injected into it:

        root@k8s-master-01:/apps/istio# kubectl apply -f samples/sleep/sleep.yaml -n hr
        serviceaccount/sleep created
        service/sleep created
        deployment.apps/sleep created
        root@k8s-master-01:/apps/istio# kubectl get pods -n hr
        NAME READY STATUS RESTARTS AGE
        sleep-557747455f-k8qzc 2/2 Running 0 32s
        root@k8s-master-01:/apps/istio# kubectl get pods -n hr sleep-557747455f-k8qzc -o yaml
        ... (output truncated)
          containerStatuses:
          - containerID: docker://4f376791fe667e23910a512710839e82913df2191fc8fc288e9bfd441e9f1b23
            image: istio/proxyv2:1.13.3
            imageID: docker-pullable://istio/proxyv2@sha256:e8986efce46a7e1fcaf837134f453ea2b5e0750a464d0f2405502f8ddf0e2cd2
            lastState: {}
            name: istio-proxy
            ready: true
            restartCount: 0
            started: true
            state:
              running:
                startedAt: "2022-10-11T09:05:13Z"
        root@k8s-master-01:/apps/istio# istioctl ps
        NAME CLUSTER CDS LDS EDS RDS ISTIOD VERSION
        istio-egressgateway-5c597cdb77-48bj8.istio-system Kubernetes SYNCED SYNCED SYNCED NOT SENT istiod-54c54679d7-78f2d 1.13.3
        istio-ingressgateway-8d7d49b55-bmbx5.istio-system Kubernetes SYNCED SYNCED SYNCED NOT SENT istiod-54c54679d7-78f2d 1.13.3
        sleep-557747455f-k8qzc.hr Kubernetes SYNCED SYNCED SYNCED SYNCED istiod-54c54679d7-78f2d 1.13.3

        root@k8s-master-01:/apps/istio# istioctl pc listener sleep-557747455f-k8qzc.hr
        ADDRESS PORT MATCH DESTINATION
        10.200.0.2 53 ALL Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
        0.0.0.0 80 Trans: raw_buffer; App: http/1.1,h2c Route: 80
        0.0.0.0 80 ALL PassthroughCluster
        10.200.103.146 80 Trans: raw_buffer; App: http/1.1,h2c Route: tomcat-service.default.svc.cluster.local:80
        10.200.103.146 80 ALL Cluster: outbound|80||tomcat-service.default.svc.cluster.local
        10.200.0.1 443 ALL Cluster: outbound|443||kubernetes.default.svc.cluster.local
        10.200.133.113 443 Trans: raw_buffer; App: http/1.1,h2c Route: kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443
        10.200.133.113 443 ALL Cluster: outbound|443||kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
        10.200.176.11 443 ALL Cluster: outbound|443||ingress-nginx-controller-admission.ingress-nginx.svc.cluster.local
        10.200.239.76 443 ALL Cluster: outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
        10.200.249.97 443 ALL Cluster: outbound|443||istiod.istio-system.svc.cluster.local
        10.200.29.127 443 ALL Cluster: outbound|443||istio-egressgateway.istio-system.svc.cluster.local
        10.200.91.80 443 ALL Cluster: outbound|443||metrics-server.kube-system.svc.cluster.local
        10.200.62.191 3000 Trans: raw_buffer; App: http/1.1,h2c Route: grafana.istio-system.svc.cluster.local:3000
        10.200.62.191 3000 ALL Cluster: outbound|3000||grafana.istio-system.svc.cluster.local
        192.168.31.101 4194 Trans: raw_buffer; App: http/1.1,h2c Route: kubelet.kube-system.svc.cluster.local:4194
        192.168.31.101 4194 ALL Cluster: outbound|4194||kubelet.kube-system.svc.cluster.local
        192.168.31.102 4194 Trans: raw_buffer; App: http/1.1,h2c Route: kubelet.kube-system.svc.cluster.local:4194
        192.168.31.102 4194 ALL Cluster: outbound|4194||kubelet.kube-system.svc.cluster.local
        192.168.31.103 4194 Trans: raw_buffer; App: http/1.1,h2c Route: kubelet.kube-system.svc.cluster.local:4194
        192.168.31.103 4194 ALL Cluster: outbound|4194||kubelet.kube-system.svc.cluster.local
        192.168.31.111 4194 Trans: raw_buffer; App: http/1.1,h2c Route: kubelet.kube-system.svc.cluster.local:4194
        192.168.31.111 4194 ALL Cluster: outbound|4194||kubelet.kube-system.svc.cluster.local
        192.168.31.112 4194 Trans: raw_buffer; App: http/1.1,h2c Route: kubelet.kube-system.svc.cluster.local:4194
        192.168.31.112 4194 ALL Cluster: outbound|4194||kubelet.kube-system.svc.cluster.local
        192.168.31.113 4194 Trans: raw_buffer; App: http/1.1,h2c Route: kubelet.kube-system.svc.cluster.local:4194
        192.168.31.113 4194 ALL Cluster: outbound|4194||kubelet.kube-system.svc.cluster.local
        192.168.31.114 4194 Trans: raw_buffer; App: http/1.1,h2c Route: kubelet.kube-system.svc.cluster.local:4194
        192.168.31.114 4194 ALL Cluster: outbound|4194||kubelet.kube-system.svc.cluster.local
        10.200.41.11 8000 Trans: raw_buffer; App: http/1.1,h2c Route: dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local:8000
        10.200.41.11 8000 ALL Cluster: outbound|8000||dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local
        0.0.0.0 8080 Trans: raw_buffer; App: http/1.1,h2c Route: 8080
        0.0.0.0 8080 ALL PassthroughCluster
        10.200.98.90 8080 Trans: raw_buffer; App: http/1.1,h2c Route: kube-state-metrics.kube-system.svc.cluster.local:8080
        10.200.98.90 8080 ALL Cluster: outbound|8080||kube-state-metrics.kube-system.svc.cluster.local
        0.0.0.0 9090 Trans: raw_buffer; App: http/1.1,h2c Route: 9090
        0.0.0.0 9090 ALL PassthroughCluster
        10.200.241.145 9090 Trans: raw_buffer; App: http/1.1,h2c Route: prometheus.monitoring.svc.cluster.local:9090
        10.200.241.145 9090 ALL Cluster: outbound|9090||prometheus.monitoring.svc.cluster.local
        0.0.0.0 9100 Trans: raw_buffer; App: http/1.1,h2c Route: 9100
        0.0.0.0 9100 ALL PassthroughCluster
        10.200.0.2 9153 Trans: raw_buffer; App: http/1.1,h2c Route: kube-dns.kube-system.svc.cluster.local:9153
        10.200.0.2 9153 ALL Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
        0.0.0.0 9411 Trans: raw_buffer; App: http/1.1,h2c Route: 9411
        0.0.0.0 9411 ALL PassthroughCluster
        192.168.31.101 10250 ALL Cluster: outbound|10250||kubelet.kube-system.svc.cluster.local
        192.168.31.102 10250 ALL Cluster: outbound|10250||kubelet.kube-system.svc.cluster.local
        192.168.31.103 10250 ALL Cluster: outbound|10250||kubelet.kube-system.svc.cluster.local
        192.168.31.111 10250 ALL Cluster: outbound|10250||kubelet.kube-system.svc.cluster.local
        192.168.31.112 10250 ALL Cluster: outbound|10250||kubelet.kube-system.svc.cluster.local
        192.168.31.113 10250 ALL Cluster: outbound|10250||kubelet.kube-system.svc.cluster.local
        192.168.31.114 10250 ALL Cluster: outbound|10250||kubelet.kube-system.svc.cluster.local
        0.0.0.0 10255 Trans: raw_buffer; App: http/1.1,h2c Route: 10255
        0.0.0.0 10255 ALL PassthroughCluster
        10.200.4.90 14250 Trans: raw_buffer; App: http/1.1,h2c Route: jaeger-collector.istio-system.svc.cluster.local:14250
        10.200.4.90 14250 ALL Cluster: outbound|14250||jaeger-collector.istio-system.svc.cluster.local
        10.200.4.90 14268 Trans: raw_buffer; App: http/1.1,h2c Route: jaeger-collector.istio-system.svc.cluster.local:14268
        10.200.4.90 14268 ALL Cluster: outbound|14268||jaeger-collector.istio-system.svc.cluster.local
        0.0.0.0 15001 ALL PassthroughCluster
        0.0.0.0 15001 Addr: *:15001 Non-HTTP/Non-TCP
        0.0.0.0 15006 Addr: *:15006 Non-HTTP/Non-TCP
        0.0.0.0 15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
        0.0.0.0 15006 Trans: raw_buffer; App: http/1.1,h2c; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
        0.0.0.0 15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
        0.0.0.0 15006 Trans: raw_buffer; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
        0.0.0.0 15006 Trans: tls; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
        0.0.0.0 15006 Trans: tls; App: istio,istio-peer-exchange,istio-http/1.0,istio-http/1.1,istio-h2; Addr: *:80 Cluster: inbound|80||
        0.0.0.0 15006 Trans: raw_buffer; Addr: *:80 Cluster: inbound|80||
        0.0.0.0 15010 Trans: raw_buffer; App: http/1.1,h2c Route: 15010
        0.0.0.0 15010 ALL PassthroughCluster
        10.200.249.97 15012 ALL Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
        0.0.0.0 15014 Trans: raw_buffer; App: http/1.1,h2c Route: 15014
        0.0.0.0 15014 ALL PassthroughCluster
        0.0.0.0 15021 ALL Inline Route: /healthz/ready*
        10.200.239.76 15021 Trans: raw_buffer; App: http/1.1,h2c Route: istio-ingressgateway.istio-system.svc.cluster.local:15021
        10.200.239.76 15021 ALL Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
        0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
        10.200.239.76 15443 ALL Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
        0.0.0.0 16685 Trans: raw_buffer; App: http/1.1,h2c Route: 16685
        0.0.0.0 16685 ALL PassthroughCluster
        0.0.0.0 20001 Trans: raw_buffer; App: http/1.1,h2c Route: 20001
        0.0.0.0 20001 ALL PassthroughCluster
        10.200.239.76 31400 ALL Cluster: outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
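
    The full listener dump is long; istioctl can filter it by address or port, or emit JSON for closer inspection (a hedged example reusing the same sleep pod):

        # istioctl pc listener sleep-557747455f-k8qzc.hr --port 80
        # istioctl pc listener sleep-557747455f-k8qzc.hr --address 0.0.0.0 --port 15006 -o json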

    View the endpoints:

        root@k8s-master-01:~# istioctl pc ep sleep-557747455f-k8qzc.hr
        ENDPOINT STATUS OUTLIER CHECK CLUSTER
        10.200.92.91:9411 HEALTHY OK zipkin
        127.0.0.1:15000 HEALTHY OK prometheus_stats
        127.0.0.1:15020 HEALTHY OK agent
        172.100.109.65:8443 HEALTHY OK outbound|443||kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
        172.100.109.66:8080 HEALTHY OK outbound|8080||kube-state-metrics.kube-system.svc.cluster.local
        172.100.109.73:9090 HEALTHY OK outbound|9090||prometheus.istio-system.svc.cluster.local
        172.100.109.74:9411 HEALTHY OK outbound|9411||jaeger-collector.istio-system.svc.cluster.local
        172.100.109.74:9411 HEALTHY OK outbound|9411||zipkin.istio-system.svc.cluster.local
        172.100.109.74:14250 HEALTHY OK outbound|14250||jaeger-collector.istio-system.svc.cluster.local
        172.100.109.74:14268 HEALTHY OK outbound|14268||jaeger-collector.istio-system.svc.cluster.local
        172.100.109.74:16685 HEALTHY OK outbound|16685||tracing.istio-system.svc.cluster.local
        172.100.109.74:16686 HEALTHY OK outbound|80||tracing.istio-system.svc.cluster.local
        172.100.109.82:8080 HEALTHY OK outbound|8080||ehelp-tomcat-service.ehelp.svc.cluster.local
        172.100.109.87:4443 HEALTHY OK outbound|443||metrics-server.kube-system.svc.cluster.local
        172.100.140.74:8080 HEALTHY OK outbound|8080||ehelp-tomcat-service.ehelp.svc.cluster.local
        172.100.183.184:15010 HEALTHY OK outbound|15010||istiod.istio-system.svc.cluster.local
        172.100.183.184:15012 HEALTHY OK outbound|15012||istiod.istio-system.svc.cluster.local
        172.100.183.184:15014 HEALTHY OK outbound|15014||istiod.istio-system.svc.cluster.local
        172.100.183.184:15017 HEALTHY OK outbound|443||istiod.istio-system.svc.cluster.local
        172.100.183.185:8080 HEALTHY OK outbound|80||istio-ingressgateway.istio-system.svc.cluster.local
        172.100.183.185:8443 HEALTHY OK outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
        172.100.183.185:15021 HEALTHY OK outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
        172.100.183.185:15443 HEALTHY OK outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
        172.100.183.185:31400 HEALTHY OK outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
        172.100.183.186:8080 HEALTHY OK outbound|80||istio-egressgateway.istio-system.svc.cluster.local
        172.100.183.186:8443 HEALTHY OK outbound|443||istio-egressgateway.istio-system.svc.cluster.local
        172.100.183.187:3000 HEALTHY OK outbound|3000||grafana.istio-system.svc.cluster.local
        172.100.183.188:9090 HEALTHY OK outbound|9090||kiali.istio-system.svc.cluster.local
        172.100.183.188:20001 HEALTHY OK outbound|20001||kiali.istio-system.svc.cluster.local
        172.100.183.189:80 HEALTHY OK outbound|80||sleep.hr.svc.cluster.local
        172.100.76.163:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
        172.100.76.163:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
        172.100.76.165:8000 HEALTHY OK outbound|8000||dashboard-metrics-scraper.kubernetes-dashboard.svc.cluster.local
        172.100.76.167:9090 HEALTHY OK outbound|9090||prometheus.monitoring.svc.cluster.local
        192.168.31.101:6443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
        192.168.31.101:8443 HEALTHY OK outbound|443||ingress-nginx-controller-admission.ingress-nginx.svc.cluster.local
        192.168.31.101:9100 HEALTHY OK outbound|9100||node-exporter.monitoring.svc.cluster.local
        192.168.31.102:6443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
        192.168.31.102:8443 HEALTHY OK outbound|443||ingress-nginx-controller-admission.ingress-nginx.svc.cluster.local
        192.168.31.102:9100 HEALTHY OK outbound|9100||node-exporter.monitoring.svc.cluster.local
        192.168.31.103:6443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
        192.168.31.103:8443 HEALTHY OK outbound|443||ingress-nginx-controller-admission.ingress-nginx.svc.cluster.local
        192.168.31.103:9100 HEALTHY OK outbound|9100||node-exporter.monitoring.svc.cluster.local
        192.168.31.111:8443 HEALTHY OK outbound|443||ingress-nginx-controller-admission.ingress-nginx.svc.cluster.local
        192.168.31.111:9100 HEALTHY OK outbound|9100||node-exporter.monitoring.svc.cluster.local
        192.168.31.112:8443 HEALTHY OK outbound|443||ingress-nginx-controller-admission.ingress-nginx.svc.cluster.local
        192.168.31.112:9100 HEALTHY OK outbound|9100||node-exporter.monitoring.svc.cluster.local
        192.168.31.113:8443 HEALTHY OK outbound|443||ingress-nginx-controller-admission.ingress-nginx.svc.cluster.local
        192.168.31.113:9100 HEALTHY OK outbound|9100||node-exporter.monitoring.svc.cluster.local
        192.168.31.114:8443 HEALTHY OK outbound|443||ingress-nginx-controller-admission.ingress-nginx.svc.cluster.local
        192.168.31.114:9100 HEALTHY OK outbound|9100||node-exporter.monitoring.svc.cluster.local
        unix://./etc/istio/proxy/SDS HEALTHY OK sds-grpc
        unix://./etc/istio/proxy/XDS HEALTHY OK xds-grpc
        root@k8s-master-01:~# kubectl get pods -o wide -n hr
        NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
        sleep-557747455f-k8qzc 2/2 Running 0 15h 172.100.183.189 192.168.31.103 <none> <none>

    4.8 Accessing services through the mesh

    Exec into the pod and access kiali and grafana through the mesh:

        root@k8s-master-01:~# kubectl exec -it sleep-557747455f-k8qzc -n hr sh
        kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
        / $ curl kiali.istio-system:20001
        <a href="/kiali/">Found</a>.
        ## access grafana
        / $ curl grafana.istio-system:3000
        <!doctype html><html lang="en"><head><meta charset="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"/><meta name="viewport" content="width=device-width"/><meta name="theme-color" content="#000"/><title>Grafana</title><base href="/"/><link rel="preload" href="public/fonts/roboto/RxZJdnzeo3R5zSexge8UUVtXRa8TVwTICgirnJhmVJw.woff2" as="font" crossorigin/><link rel="icon" type="image/png" href="public/img/fav32.png"/><link rel="apple-touch-icon" sizes="180x180" href="public/img/apple-touch-icon.png"/><link rel="mask-icon" href="public/img/grafana_mask_icon.svg" color="#F05A28"/><link rel="stylesheet" href="public/build/grafana.dark.fab5d6bbd438adca1160.css"/><script nonce="">performance.mark('frontend_boot_css_time_seconds');</script><meta name="apple-mobile-web-app-capable" content="yes"/><meta name="apple-mobile-web-app-status-bar-style" content="black"/><meta name="msapplication-TileColor" content="#2b5797"/><
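
    As an aside, for quick ad-hoc access istioctl can also port-forward these dashboards to the local machine without any Gateway configuration (hedged; the command blocks and binds to localhost on the host running istioctl):

        # istioctl dashboard kiali
        # istioctl dashboard grafana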

    5. Service discovery inside the mesh

    5.1 deployment.yaml

        ---
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          labels:
            app: demoappv10
            version: v1.0
          name: demoappv10
        spec:
          progressDeadlineSeconds: 600
          replicas: 3
          selector:
            matchLabels:
              app: demoapp
              version: v1.0
          template:
            metadata:
              labels:
                app: demoapp
                version: v1.0
            spec:
              containers:
              - image: ikubernetes/demoapp:v1.0
                imagePullPolicy: IfNotPresent
                name: demoapp
                env:
                - name: "PORT"
                  value: "8080"
                ports:
                - containerPort: 8080
                  name: web
                  protocol: TCP
                resources:
                  limits:
                    cpu: 50m
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: demoappv10
        spec:
          ports:
          - name: http
            port: 8080
            protocol: TCP
            targetPort: 8080
          selector:
            app: demoapp
            version: v1.0
          type: ClusterIP
        ---

        # kubectl apply -f deploy-demoapp.yaml
        deployment.apps/demoappv10 created
        service/demoappv10 created
        # kubectl get pod -n hr -o wide
        NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
        demoappv10-6ff964cbff-bnlgx 2/2 Running 0 4m25s 172.100.183.190 192.168.31.103 <none> <none>
        demoappv10-6ff964cbff-ccsnc 2/2 Running 0 4m25s 172.100.76.166 192.168.31.113 <none> <none>
        demoappv10-6ff964cbff-vp7gw 2/2 Running 0 4m25s 172.100.109.77 192.168.31.111 <none> <none>
        sleep-557747455f-k8qzc 2/2 Running 0 15h 172.100.183.189 192.168.31.103 <none> <none>
        # kubectl get svc -n hr -o wide
        NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
        demoappv10 ClusterIP 10.200.156.253 <none> 8080/TCP 2m55s app=demoapp,version=v1.0
        sleep ClusterIP 10.200.210.162 <none> 80/TCP 15h app=sleep

    At this point the sidecar's routes already include demoapp's port 8080:

        # istioctl pc route sleep-557747455f-k8qzc.hr|grep demo
        8080 demoappv10, demoappv10.hr + 1 more... /*
        # istioctl pc cluster sleep-557747455f-k8qzc.hr|grep demo
        demoappv10.hr.svc.cluster.local 8080 - outbound EDS
        ## the backend endpoints have also been discovered
        # istioctl pc ep sleep-557747455f-k8qzc.hr|grep demo
        172.100.109.77:8080 HEALTHY OK outbound|8080||demoappv10.hr.svc.cluster.local
        172.100.183.190:8080 HEALTHY OK outbound|8080||demoappv10.hr.svc.cluster.local
        172.100.76.166:8080 HEALTHY OK outbound|8080||demoappv10.hr.svc.cluster.local
    5.2 Testing access to demoappv10 through the mesh from the sleep pod

        / $ curl demoappv10:8080
        iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-6ff964cbff-bnlgx, ServerIP: 172.100.183.190!
        / $ curl demoappv10:8080
        iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-6ff964cbff-vp7gw, ServerIP: 172.100.109.77!
        / $ curl demoappv10:8080
        iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-6ff964cbff-vp7gw, ServerIP: 172.100.109.77!
        / $ curl demoappv10:8080
        iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-6ff964cbff-ccsnc, ServerIP: 172.100.76.166!
        / $ curl demoappv10:8080
        iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-6ff964cbff-bnlgx, ServerIP: 172.100.183.190!
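
    The responses above rotate across the three demoapp pods. A small loop makes the distribution easier to eyeball (a hedged sketch, assuming the busybox utilities shipped in the sleep image):

        / $ for i in $(seq 1 30); do curl -s demoappv10:8080 | grep -o 'ServerName: [^,]*'; done | sort | uniq -c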

    6. Exposing kiali externally by adding a dedicated port on the ingress gateway

    To expose a service inside the mesh to external clients, the following resources need to be configured:

    1. Gateway
    2. VirtualService
    3. DestinationRule

    6.1 Gateway

        apiVersion: networking.istio.io/v1beta1
        kind: Gateway
        metadata:
          name: kiali-gateway
          namespace: istio-system
        spec:
          selector:
            app: istio-ingressgateway
          servers:
          - port:
              number: 20001
              name: http-kiali
              protocol: HTTP
            hosts:
            - "kiali.intra.com"
        ---

    6.2 VirtualService

        apiVersion: networking.istio.io/v1beta1
        kind: VirtualService
        metadata:
          name: kiali-virtualservice
          namespace: istio-system
        spec:
          hosts:
          - "kiali.intra.com"
          gateways:
          - kiali-gateway
          http:
          - match:
            - port: 20001
            route:
            - destination:
                host: kiali
                port:
                  number: 20001
        ---

    6.3 DestinationRule

        apiVersion: networking.istio.io/v1beta1
        kind: DestinationRule
        metadata:
          name: kiali
          namespace: istio-system
        spec:
          host: kiali
          trafficPolicy:
            tls:
              mode: DISABLE
        ---

    6.4 Deploying the kiali gateway resources

        # kubectl apply -f .
        destinationrule.networking.istio.io/kiali created
        gateway.networking.istio.io/kiali-gateway created
        virtualservice.networking.istio.io/kiali-virtualservice created
        root@k8s-master-01:/apps/istio-in-practise/Traffic-Management-Basics/kiali# kubectl get svc -n istio-system
        NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
        grafana ClusterIP 10.200.62.191 <none> 3000/TCP 17h
        istio-egressgateway ClusterIP 10.200.29.127 <none> 80/TCP,443/TCP 17h
        istio-ingressgateway LoadBalancer 10.200.239.76 <pending> 15021:49708/TCP,80:42116/TCP,443:61970/TCP,31400:56204/TCP,15443:45720/TCP 17h
        istiod ClusterIP 10.200.249.97 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 17h
        jaeger-collector ClusterIP 10.200.4.90 <none> 14268/TCP,14250/TCP,9411/TCP 17h
        kiali ClusterIP 10.200.132.85 <none> 20001/TCP,9090/TCP 17h
        prometheus ClusterIP 10.200.99.200 <none> 9090/TCP 17h
        tracing ClusterIP 10.200.160.255 <none> 80/TCP,16685/TCP 17h
        zipkin ClusterIP 10.200.92.91 <none> 9411/TCP 17h
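
    Before relying on the new routing objects, istioctl analyze can flag common misconfigurations, for example a VirtualService that references a non-existent gateway or host (a hedged suggestion):

        # istioctl analyze -n istio-system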

    6.5 Checking the listeners

    You can see that the listener for kiali has been created on the ingress gateway:

        # kubectl get pods -n istio-system
        NAME READY STATUS RESTARTS AGE
        grafana-6c5dc6df7c-96cwc 1/1 Running 0 20h
        istio-egressgateway-5c597cdb77-48bj8 1/1 Running 0 21h
        istio-ingressgateway-8d7d49b55-bmbx5 1/1 Running 0 21h
        istiod-54c54679d7-78f2d 1/1 Running 0 21h
        jaeger-9dd685668-hmg9c 1/1 Running 0 20h
        kiali-699f98c497-wjtwl 1/1 Running 0 20h
        prometheus-699b7cc575-5d7gk 2/2 Running 0 20h
        # istioctl pc listener istio-ingressgateway-8d7d49b55-bmbx5.istio-system
        ADDRESS PORT MATCH DESTINATION
        0.0.0.0 15021 ALL Inline Route: /healthz/ready*
        0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
        0.0.0.0 20001 ALL Route: http.20001

    6.6 Adding an external address to the ingress gateway Service

        # kubectl edit svc istio-ingressgateway -n istio-system
        ### append under spec:
        spec:
          allocateLoadBalancerNodePorts: true
          clusterIP: 10.200.239.76
          clusterIPs:
          - 10.200.239.76
          ## append the following two lines
          externalIPs:
          - 192.168.31.163
        ### append the following entry under ports:
          - name: kiali
            nodePort: 43230
            port: 20001
            protocol: TCP
            targetPort: 20001
        # kubectl get svc -n istio-system |grep ingress
        istio-ingressgateway LoadBalancer 10.200.239.76 192.168.31.163 15021:49708/TCP,80:42116/TCP,443:61970/TCP,31400:56204/TCP,15443:45720/TCP,20001:43230/TCP 21h
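
    The same change can also be applied non-interactively with kubectl patch, which is easier to script than kubectl edit. The sketch below is a hedged equivalent of the manual edit above; it relies on the default strategic merge patch, which merges the ports list by port number instead of replacing it, and omits nodePort so Kubernetes picks one.

        # kubectl patch svc istio-ingressgateway -n istio-system -p \
          '{"spec":{"externalIPs":["192.168.31.163"],"ports":[{"name":"kiali","port":20001,"protocol":"TCP","targetPort":20001}]}}'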

    Now map the kiali domain name to the gateway address in the local hosts file and test access from a browser.

    6.7 Walking through the access flow

    1. The browser accesses port 20001 on the ingress gateway Service

        # kubectl get svc -n istio-system |grep istio-ingressgateway
        istio-ingressgateway LoadBalancer 10.200.239.76 192.168.31.163 15021:49708/TCP,80:42116/TCP,443:61970/TCP,31400:56204/TCP,15443:45720/TCP,20001:43230/TCP 22h

    2. The ingress gateway Service forwards the traffic to port 20001 on the ingress gateway pod

        # istioctl pc listener istio-ingressgateway-8d7d49b55-bmbx5.istio-system|grep 20001
        0.0.0.0 20001 ALL Route: http.20001
    3. The ingress gateway then forwards the traffic to the kiali Service, according to the configuration in kiali-virtualservice.yaml

        # cat kiali-virtualservice.yaml
        apiVersion: networking.istio.io/v1beta1
        kind: VirtualService
        metadata:
          name: kiali-virtualservice
          namespace: istio-system
        spec:
          hosts:
          - "kiali.intra.com"
          gateways:
          - kiali-gateway
          http:
          - match:
            - port: 20001
            route:
            - destination:
                host: kiali
                port:
                  number: 20001
        ---
        # kubectl get svc -n istio-system |grep kiali
        kiali ClusterIP 10.200.132.85 <none> 20001/TCP,9090/TCP 22h

    4. The kiali Service forwards the traffic to its backend endpoints, and the pod behind the endpoint serves the response

        # kubectl get ep kiali -n istio-system
        NAME ENDPOINTS AGE
        kiali 172.100.183.188:9090,172.100.183.188:20001 22h
        # kubectl get pods -n istio-system -o wide|grep kiali
        kiali-699f98c497-wjtwl 1/1 Running 0 22h 172.100.183.188 192.168.31.103 <none> <none>

    7. Exposing kiali externally on port 80

    7.1 Gateway

        apiVersion: networking.istio.io/v1beta1
        kind: Gateway
        metadata:
          name: kiali-gateway
          namespace: istio-system
        spec:
          selector:
            app: istio-ingressgateway
          servers:
          - port:
              number: 80
              name: http-kiali
              protocol: HTTP
            hosts:
            - "kiali.intra.com"
        ---

    7.2 VirtualService

        apiVersion: networking.istio.io/v1beta1
        kind: VirtualService
        metadata:
          name: kiali-virtualservice
          namespace: istio-system
        spec:
          hosts:
          - "kiali.intra.com"
          gateways:
          - kiali-gateway
          http:
          - match:
            - uri:
                prefix: /
            route:
            - destination:
                host: kiali
                port:
                  number: 20001
        ---

    7.3 DestinationRule

        apiVersion: networking.istio.io/v1beta1
        kind: DestinationRule
        metadata:
          name: kiali
          namespace: istio-system
        spec:
          host: kiali
          trafficPolicy:
            tls:
              mode: DISABLE
        ---

    Now kiali can be reached directly at http://kiali.intra.com/.
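
    If kiali.intra.com is not yet resolvable, the same routing can be exercised by sending the Host header directly to the ingress gateway's external IP (a hedged example reusing the addresses from this environment):

        # curl -s -o /dev/null -w "%{http_code}\n" -H "Host: kiali.intra.com" http://192.168.31.163/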

    8. Uninstalling Istio

    8.1 Uninstalling the addons

        # kubectl delete -f samples/addons/
        serviceaccount "grafana" deleted
        configmap "grafana" deleted
        service "grafana" deleted
        deployment.apps "grafana" deleted
        configmap "istio-grafana-dashboards" deleted
        configmap "istio-services-grafana-dashboards" deleted
        deployment.apps "jaeger" deleted
        service "tracing" deleted
        service "zipkin" deleted
        service "jaeger-collector" deleted
        serviceaccount "kiali" deleted
        configmap "kiali" deleted
        clusterrole.rbac.authorization.k8s.io "kiali-viewer" deleted
        clusterrole.rbac.authorization.k8s.io "kiali" deleted
        clusterrolebinding.rbac.authorization.k8s.io "kiali" deleted
        role.rbac.authorization.k8s.io "kiali-controlplane" deleted
        rolebinding.rbac.authorization.k8s.io "kiali-controlplane" deleted
        service "kiali" deleted
        deployment.apps "kiali" deleted
        serviceaccount "prometheus" deleted
        configmap "prometheus" deleted
        clusterrole.rbac.authorization.k8s.io "prometheus" deleted
        clusterrolebinding.rbac.authorization.k8s.io "prometheus" deleted
        service "prometheus" deleted
        deployment.apps "prometheus" deleted

    At this point only the two data-plane gateways and the control-plane istiod remain:

        # kubectl get pods -n istio-system
        NAME READY STATUS RESTARTS AGE
        istio-egressgateway-5c597cdb77-48bj8 1/1 Running 0 40h
        istio-ingressgateway-8d7d49b55-bmbx5 1/1 Running 0 40h
        istiod-54c54679d7-78f2d 1/1 Running 0 40h

    8.2 Uninstalling the service mesh

        # istioctl x uninstall --purge -y
        All Istio resources will be pruned from the cluster
        Removed IstioOperator:istio-system:installed-state.
        Removed PodDisruptionBudget:istio-system:istio-egressgateway.
        Removed PodDisruptionBudget:istio-system:istio-ingressgateway.
        Removed PodDisruptionBudget:istio-system:istiod.
        Removed Deployment:istio-system:istio-egressgateway.
        Removed Deployment:istio-system:istio-ingressgateway.
        Removed Deployment:istio-system:istiod.
        Removed Service:istio-system:istio-egressgateway.
        Removed Service:istio-system:istio-ingressgateway.
        Removed Service:istio-system:istiod.
        Removed ConfigMap:istio-system:istio.
        Removed ConfigMap:istio-system:istio-sidecar-injector.
        Removed Pod:istio-system:istio-egressgateway-5c597cdb77-48bj8.
        Removed Pod:istio-system:istio-ingressgateway-8d7d49b55-bmbx5.
        Removed Pod:istio-system:istiod-54c54679d7-78f2d.
        Removed ServiceAccount:istio-system:istio-egressgateway-service-account.
        Removed ServiceAccount:istio-system:istio-ingressgateway-service-account.
        Removed ServiceAccount:istio-system:istio-reader-service-account.
        Removed ServiceAccount:istio-system:istiod.
        Removed ServiceAccount:istio-system:istiod-service-account.
        Removed RoleBinding:istio-system:istio-egressgateway-sds.
        Removed RoleBinding:istio-system:istio-ingressgateway-sds.
        Removed RoleBinding:istio-system:istiod.
        Removed RoleBinding:istio-system:istiod-istio-system.
        Removed Role:istio-system:istio-egressgateway-sds.
        Removed Role:istio-system:istio-ingressgateway-sds.
        Removed Role:istio-system:istiod.
        Removed Role:istio-system:istiod-istio-system.
        Removed EnvoyFilter:istio-system:stats-filter-1.11.
        Removed EnvoyFilter:istio-system:stats-filter-1.12.
        Removed EnvoyFilter:istio-system:stats-filter-1.13.
        Removed EnvoyFilter:istio-system:tcp-stats-filter-1.11.
        Removed EnvoyFilter:istio-system:tcp-stats-filter-1.12.
        Removed EnvoyFilter:istio-system:tcp-stats-filter-1.13.
        Removed MutatingWebhookConfiguration::istio-revision-tag-default.
        Removed MutatingWebhookConfiguration::istio-sidecar-injector.
        Removed ValidatingWebhookConfiguration::istio-validator-istio-system.
        Removed ValidatingWebhookConfiguration::istiod-default-validator.
        Removed ClusterRole::istio-reader-clusterrole-istio-system.
        Removed ClusterRole::istio-reader-istio-system.
        Removed ClusterRole::istiod-clusterrole-istio-system.
        Removed ClusterRole::istiod-gateway-controller-istio-system.
        Removed ClusterRole::istiod-istio-system.
        Removed ClusterRoleBinding::istio-reader-clusterrole-istio-system.
        Removed ClusterRoleBinding::istio-reader-istio-system.
        Removed ClusterRoleBinding::istiod-clusterrole-istio-system.
        Removed ClusterRoleBinding::istiod-gateway-controller-istio-system.
        Removed ClusterRoleBinding::istiod-istio-system.
        Removed CustomResourceDefinition::authorizationpolicies.security.istio.io.
        Removed CustomResourceDefinition::destinationrules.networking.istio.io.
        Removed CustomResourceDefinition::envoyfilters.networking.istio.io.
        Removed CustomResourceDefinition::gateways.networking.istio.io.
        Removed CustomResourceDefinition::istiooperators.install.istio.io.
        Removed CustomResourceDefinition::peerauthentications.security.istio.io.
        Removed CustomResourceDefinition::proxyconfigs.networking.istio.io.
        Removed CustomResourceDefinition::requestauthentications.security.istio.io.
        Removed CustomResourceDefinition::serviceentries.networking.istio.io.
        Removed CustomResourceDefinition::sidecars.networking.istio.io.
        Removed CustomResourceDefinition::telemetries.telemetry.istio.io.
        Removed CustomResourceDefinition::virtualservices.networking.istio.io.
        Removed CustomResourceDefinition::wasmplugins.extensions.istio.io.
        Removed CustomResourceDefinition::workloadentries.networking.istio.io.
        Removed CustomResourceDefinition::workloadgroups.networking.istio.io.
        Uninstall complete

        # kubectl get pods -n istio-system
        No resources found in istio-system namespace.
        # kubectl get svc -n istio-system
        No resources found in istio-system namespace.

    8.3 Deleting the istio-system namespace

        # kubectl delete ns istio-system
        namespace "istio-system" deleted
        # kubectl get ns istio-system
        Error from server (NotFound): namespaces "istio-system" not found

    Other namespaces may still contain residual sidecars that were injected by the mesh; these can be cleaned up separately (a sketch follows below) and are not demonstrated here one by one.
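
    A hedged sketch of that cleanup, using the hr namespace from this article as an example: remove the injection label and restart the workloads so their pods are recreated without the sidecar.

        # kubectl label ns hr istio-injection-
        # kubectl rollout restart deployment -n hr
        # kubectl get pods -n hr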

    Original article: https://blog.csdn.net/qq_19734597/article/details/134036280