• upgrade k8s (by quqi99)


    Author: Zhang Hua  Published: 2023-11-17
    Copyright: this article may be reproduced freely, but any reproduction must credit the original source and author with a hyperlink and keep this copyright notice (http://blog.csdn.net/quqi99)

    This post only collects some theory on upgrading k8s from around the web; the steps below have not been tested in practice.

    Theory - upgrade k8s from 1.20.6 to 1.20.15 by kubeadm

    Reference: 云原生Kubernetes:K8S集群版本升级(v1.20.6 - v1.20.15) - https://blog.csdn.net/cronaldo91/article/details/133789264

    1, check versions
    kubectl get nodes
    kubectl version
    kubeadm version
    kubectl get componentstatuses
    kubectl get deployments --all-namespaces
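    Before touching any binaries, it can help to sanity-check that the jump stays within kubeadm's supported skew: patch upgrades within one minor version (as in 1.20.6 to 1.20.15) are fine, but minor versions must not be skipped. A minimal sketch; the hard-coded version strings are illustrative and would normally come from `kubeadm version -o short`:

    ```shell
    # Sketch: check that an upgrade stays within kubeadm's supported skew
    # (same minor version, or at most one minor version ahead).
    skew_ok() {
        # $1 = current version, $2 = target version, e.g. "1.20.6" "1.20.15"
        cur_minor=$(echo "$1" | cut -d. -f2)
        tgt_minor=$(echo "$2" | cut -d. -f2)
        [ "$tgt_minor" -eq "$cur_minor" ] || [ "$tgt_minor" -eq $((cur_minor + 1)) ]
    }

    # In a real run the current version would come from: kubeadm version -o short
    skew_ok "1.20.6" "1.20.15" && echo "patch upgrade, OK"
    skew_ok "1.20.6" "1.26.0"  || echo "refused: skips minor versions"
    ```

    The same check is what makes the 1.21-to-1.26 path later in this post go one minor version at a time.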
    
    2, upgrade kubeadm
    apt install kubeadm=1.20.15*
    kubeadm version
    
    3, upgrade master1
    kubeadm upgrade plan
    kubeadm upgrade apply v1.20.15
    # in offline mode, load the control-plane images first (tags must match the target version)
    docker image load -i kube-apiserver:v1.20.15.tar
    docker image load -i kube-scheduler:v1.20.15.tar
    docker image load -i kube-controller-manager:v1.20.15.tar
    docker image load -i kube-proxy:v1.20.15.tar
    docker image list
    
    4, upgrade master2, but please use 'kubeadm upgrade node' instead of 'kubeadm upgrade apply'
    apt install kubeadm=1.20.15*
    kubeadm version
    kubeadm upgrade node
    
    5, upgrade kubelet and kubectl on master1
    kubectl drain master1 --ignore-daemonsets
    apt install kubelet=1.20.15* kubectl=1.20.15*
    systemctl daemon-reload && systemctl restart kubelet
    kubectl uncordon master1
    kubectl get nodes
    
    6, upgrade kubelet and kubectl on master2
    kubectl drain master2 --ignore-daemonsets
    apt install kubelet=1.20.15* kubectl=1.20.15*
    systemctl daemon-reload && systemctl restart kubelet
    kubectl uncordon master2
    kubectl get nodes
    
    7, upgrade worker
    apt install kubeadm=1.20.15*
    kubeadm version
    kubeadm upgrade node
    kubectl drain worker1 --ignore-daemonsets --delete-emptydir-data
    apt install kubelet=1.20.15* kubectl=1.20.15*
    systemctl daemon-reload && systemctl restart kubelet
    kubectl uncordon worker1
    kubectl get nodes
    
    8, verify the cluster
    kubectl get nodes
    kubeadm certs check-expiration
    kubectl get pods -n kube-system
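    When verifying, the main thing to confirm is that every node is Ready and that no node was missed (all kubelets report the target version). A small sketch that parses `kubectl get nodes` output from stdin; the sample input below is made up for illustration:

    ```shell
    # Sketch: given `kubectl get nodes` output on stdin, fail if any node
    # is not Ready or still runs a kubelet other than the target version.
    check_nodes() {
        awk -v v="$1" 'NR > 1 {                  # skip the header line
            if ($2 != "Ready" || $5 != v) { bad = 1; print "BAD:", $1, $2, $5 }
        } END { exit bad }'
    }

    # Example with fabricated input (normally: kubectl get nodes | check_nodes v1.20.15)
    check_nodes v1.20.15 <<'EOF'
    NAME      STATUS   ROLES                  AGE   VERSION
    master1   Ready    control-plane,master   30d   v1.20.15
    worker1   Ready    <none>                 30d   v1.20.15
    EOF
    echo "exit=$?"
    ```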
    

    Theory - Upgrade k8s from 1.21 to 1.26 by charm

    Reference: https://ubuntu.com/kubernetes/docs/1.26/upgrading
    Upgrade path: 1.21 --> 1.22 --> 1.23 --> 1.24 --> 1.25 --> 1.26

    1, backup db
    juju run-action etcd/leader snapshot --wait
    juju scp etcd/0:/home/ubuntu/etcd-snapshots/etcd-snapshot-bak.tar.gz .
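    Before continuing, it is prudent to confirm the copied snapshot archive is actually readable; a corrupt backup is only discovered when it is needed. A minimal sketch (the filename matches the scp above; `tar -tzf` only lists the archive without extracting it):

    ```shell
    # Sketch: refuse to continue the upgrade if the etcd snapshot archive
    # is missing or not a valid gzip tarball.
    verify_backup() {
        [ -f "$1" ] && tar -tzf "$1" > /dev/null 2>&1
    }

    if verify_backup etcd-snapshot-bak.tar.gz; then
        echo "backup looks sane, continuing"
    else
        echo "backup missing or corrupt, do not start the upgrade" >&2
    fi
    ```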
    
    2, upgrade containerd; please wait for the units to return to the "active" and "idle" state
    juju upgrade-charm containerd
    watch juju status containerd
    # to fix a unit stuck in the 'blocked' and 'idle' state, re-run its update-status hook
    juju run --unit <unit> 'hooks/update-status'
    
    3, upgrade etcd
    juju upgrade-charm etcd
    watch juju status etcd
    
    4, upgrade the additional charms
    juju upgrade-charm easyrsa
    juju upgrade-charm calico
    juju upgrade-charm hacluster-kubernetes-master
    juju upgrade-charm hacluster-dashboard
    juju upgrade-charm hacluster-keystone
    juju upgrade-charm hacluster-vault
    juju upgrade-charm telegraf
    juju upgrade-charm public-policy-routing
    juju upgrade-charm landscape-client
    juju upgrade-charm filebeat
    juju upgrade-charm ntp
    juju upgrade-charm nfs-client
    juju upgrade-charm nrpe-container
    juju upgrade-charm nrpe-host
    juju upgrade-charm prometheus-ceph-exporter
    
    5, upgrade k8s master from 1.21 to 1.22
    juju upgrade-charm kubernetes-master
    juju config kubernetes-master channel=1.22/stable
    juju run-action kubernetes-master/0 upgrade
    juju run-action kubernetes-master/1 upgrade
    juju run-action kubernetes-master/2 upgrade
    
    6, upgrade k8s worker from 1.21 to 1.22
    juju upgrade-charm kubernetes-worker
    juju config kubernetes-worker channel=1.22/stable
    juju run-action kubernetes-worker/0 upgrade
    juju run-action kubernetes-worker/1 upgrade
    juju run-action kubernetes-worker/2 upgrade
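    Running the per-unit upgrade actions one by one is error-prone when there are many units, and the same pattern repeats at every minor version. A dry-run sketch that only prints the commands it would issue (the application name and unit count mirror the steps above; unit numbers 0..n-1 are an assumption about the deployment):

    ```shell
    # Sketch: generate the per-unit upgrade actions for a juju application.
    # Prints the commands instead of executing them (drop the echo to run them).
    upgrade_units() {
        app="$1"; count="$2"
        i=0
        while [ "$i" -lt "$count" ]; do
            echo juju run-action "$app/$i" upgrade
            i=$((i + 1))
        done
    }

    upgrade_units kubernetes-worker 3
    ```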
    
    7, when upgrading charmed k8s to 1.24: the charms have relocated from the Juju charm store to Charmhub, which means that upgrading each charm requires the use of --switch during the upgrade.
    juju upgrade-charm containerd --switch ch:containerd --channel 1.24/stable
    juju upgrade-charm easyrsa --switch ch:easyrsa --channel 1.24/stable
    juju upgrade-charm calico --switch ch:calico --channel 1.24/stable
    juju upgrade-charm hacluster-kubernetes-master --switch ch:hacluster --channel latest/stable
    juju upgrade-charm hacluster-dashboard --switch ch:hacluster --channel latest/stable
    juju upgrade-charm hacluster-keystone --switch ch:hacluster --channel latest/stable
    juju upgrade-charm hacluster-vault --switch ch:hacluster --channel latest/stable
    juju upgrade-charm telegraf --switch ch:telegraf --channel latest/stable
    juju upgrade-charm public-policy-routing
    juju upgrade-charm landscape-client
    juju upgrade-charm filebeat --switch ch:filebeat --channel latest/stable
    juju upgrade-charm ntp --switch ch:ntp --channel latest/stable
    juju upgrade-charm nfs-client
    juju upgrade-charm nrpe-container --switch ch:nrpe --channel latest/stable
    juju upgrade-charm nrpe-host --switch ch:nrpe --channel latest/stable
    juju upgrade-charm prometheus-ceph-exporter --switch ch:prometheus-ceph-exporter --channel latest/stable
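    Since most of the --switch invocations above differ only in the application name, charm name, and channel, they can be driven from a small table. A dry-run sketch that prints each command instead of executing it (the sample rows mirror part of the list above; drop the echo inside the function to actually run them):

    ```shell
    # Sketch: emit the Charmhub --switch upgrade command for each line of an
    # "app charm channel" table (several hacluster apps share one charm).
    gen_switch() {
        while read -r app charm channel; do
            echo juju upgrade-charm "$app" --switch "ch:$charm" --channel "$channel"
        done
    }

    gen_switch <<'EOF'
    containerd containerd 1.24/stable
    easyrsa easyrsa 1.24/stable
    calico calico 1.24/stable
    hacluster-kubernetes-master hacluster latest/stable
    hacluster-vault hacluster latest/stable
    EOF
    ```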
    
    8, for the charm 1.24, kubernetes-master is renamed to kubernetes-control-plane as well
    juju upgrade-charm kubernetes-master --switch ch:kubernetes-control-plane --channel 1.24/stable
    juju config kubernetes-master channel=1.24/stable
    juju run-action kubernetes-master/0 upgrade
    juju run-action kubernetes-master/1 upgrade
    juju run-action kubernetes-master/2 upgrade
    # for the message "ceph-storage relation deprecated, use ceph-client instead" 
    juju remove-relation kubernetes-master:ceph-storage ceph-mon
    juju upgrade-charm kubernetes-worker --switch ch:kubernetes-worker --channel 1.24/stable
    juju run-action kubernetes-worker/0 upgrade
    juju run-action kubernetes-worker/1 upgrade
    juju run-action kubernetes-worker/2 upgrade
    
    9, charm 1.26 may need to use 'juju refresh' as well
    juju upgrade-charm kubernetes-master
    juju config kubernetes-master channel=1.26/stable
    juju run-action kubernetes-master/0 upgrade
    juju run-action kubernetes-master/1 upgrade
    juju run-action kubernetes-master/2 upgrade
    juju refresh kubernetes-master --channel 1.26/stable
    

    Customer issue

    The customer was only upgrading from 1.21 to 1.22, but when running 'juju upgrade-charm kubernetes-master' they found that Charmhub no longer carries 1.22 (1.22 is too old and has been removed; only 1.23 through 1.29 remain, which can be checked with 'juju info --series focal kubernetes-worker'), and a k8s upgrade cannot jump straight from 1.21 to 1.23.
    So what to do without 1.22? Ask the product team to add 1.22 back? Alternatively, can a local charm be used as a stepping stone (https://github.com/charmed-kubernetes/charm-kubernetes-worker/releases/tag/1.22%2Bck2)? The method below can build a 1.22 charm, but whether one can upgrade from 1.21 to a local 1.22 and then to 1.23 on Charmhub still needs to be tested.

    # pip error: markers 'python_version < "3.8"' don't match your environment
    # Ignoring Jinja2: markers 'python_version >= "3.0" and python_version <= "3.4"'
    # ppa:deadsnakes/ppa only has >=python3.5 as well, so have to use xenial instead
    #juju add-machine --series jammy --constraints "mem=16G cores=8 root-disk=100G" -n 2
    juju add-machine --series xenial -n 1
    juju ssh 0
    # but xenial is also using python3.5, and it said:
    DEPRECATION: Python 3.5 reached the end of its life on September 13th, 2020. Please upgrade your Python as Python 3.5 is no longer maintained. pip 21.0 will drop support for Python 3.5 in January 2021. pip 21.0 will remove support for this functionality.
    
    So, continuing on this xenial machine, Python 3.2 was compiled from source, after which the charm build succeeded:
    sudo apt-get install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev wget -y
    wget https://www.python.org/ftp/python/3.2.6/Python-3.2.6.tgz
    tar -xf Python-3.2.6.tgz
    cd Python-3.2.6/
    ./configure --enable-optimizations
    make -j$(nproc)
    sudo make altinstall
    python3.2 --version
    alias python=python3.2
    alias python3=python3.2
    sudo apt install build-essential -y
    sudo apt install python3-pip python3-dev python3-nose python3-mock -y
    cd $CHARM_LAYERS_DIR/..
    charm build --debug ./layers/kubernetes-worker/
    cd /home/ubuntu/charms/layers/builds/kubernetes-worker
    zip -rq ../kubernetes-worker.charm .
    
    sudo snap install charm --classic
    mkdir -p /home/ubuntu/charms
    mkdir -p ~/charms/{layers,interfaces}
    export JUJU_REPOSITORY=/home/ubuntu/charms
    export CHARM_INTERFACES_DIR=$JUJU_REPOSITORY/interfaces
    export CHARM_LAYERS_DIR=$JUJU_REPOSITORY/layers
    export CHARM_BUILD_DIR=$JUJU_REPOSITORY/layers/builds
    cd $CHARM_LAYERS_DIR
    git clone https://github.com/charmed-kubernetes/charm-kubernetes-worker.git kubernetes-worker
    cd kubernetes-worker && git checkout -b 1.22+ck2 1.22+ck2
    sudo apt install python3-virtualenv tox -y
    cd .. && charm build --debug layers/kubernetes-worker/
    #cd ${JUJU_REPOSITORY}/layers/builds/kubernetes-worker && tox -e func
    cd /home/ubuntu/charms/layers/builds/kubernetes-worker
    zip -rq ../kubernetes-worker.charm .
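    A built charm directory is only usable if the build actually produced the files a charm must contain, so a quick guard before zipping can catch a failed build early. A minimal sketch; the path in the commented example matches the build output directory used above:

    ```shell
    # Sketch: only package the built charm directory if it looks complete.
    package_charm() {
        dir="$1"
        if [ -f "$dir/metadata.yaml" ]; then
            echo "build looks complete, packaging $dir"
            # (cd "$dir" && zip -rq ../kubernetes-worker.charm .)
        else
            echo "metadata.yaml missing in $dir, build failed?" >&2
            return 1
        fi
    }

    # Example: package_charm /home/ubuntu/charms/layers/builds/kubernetes-worker
    ```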
    

    Another issue: in 1.24 the charm kubernetes-master was renamed to kubernetes-control-plane. The revisions currently on Charmhub start from 1.23 and are already kubernetes-control-plane, so 'juju info --series focal kubernetes-control-plane' shows the charm's information while 'juju info --series focal kubernetes-master' does not.

    Test environment setup

    We need to set up an environment as close to the customer's as possible; the customer is still using the old revision-768 charm from the charm store shown below.

      kubernetes-worker:
        charm: cs:~containers/kubernetes-worker-768
        channel: stable
    

    The test tooling currently generates bundles that default to latest/stable from Charmhub:

        charm: ch:kubernetes-control-plane
        channel: latest/stable
    

    My initial idea was to use "--use-stable-charms --charmstore --k8s-channel stable --revision-info ./juju_export_bundle.txt" to turn ch: back into cs: when generating the bundle:

    ./generate-bundle.sh -s focal --name k8s --num-control-planes 2 --num-workers 2 --calico --use-stable-charms --charmstore --k8s-channel stable --revision-info ./juju_export_bundle.txt
    

    However, because the customer provided the output of juju export-bundle rather than juju status, the --revision-info option above does not work, and the command produces:

    cs:~containers/kubernetes-worker
    channel: latest/stable
    

    So my plan was:

    • Manually edit b/k8s/kubernetes.yaml to change only the k8s master and k8s worker to the same revisions as the customer's. Also, stable revision 768 no longer exists in the charm store, and since the charm store website has been shut down there is no way to look up which slightly newer revision exists. Revisions can be probed one by one by incrementing the number in 'juju deploy --series focal cs:~containers/kubernetes-worker-770 test', which eventually turned up 770. So it finally becomes: cs:~containers/kubernetes-worker-770
    • Keep the other charms at the latest stable revision in the charm store, so those upgrades can be skipped during the upgrade test and the effort focused solely on the customer's problematic k8s worker. So it finally becomes: cs:~containers/kubernetes-master-1008

    After editing b/k8s/kubernetes.yaml, run './generate-bundle.sh --name k8s --replay --run' to finish setting up the test environment.

    ./generate-bundle.sh --name k8s --replay --run
    watch -c juju status --color                                                    
    sudo snap install kubectl --classic                                             
    juju ssh kubernetes-control-plane/leader -- cat config > ~/.kube/config         
    source <(kubectl completion bash)                                               
    kubectl completion bash |sudo tee /etc/bash_completion.d/kubectl                
    kubectl cluster-info
    
  • Original article: https://blog.csdn.net/quqi99/article/details/134456281