• Manually Installing OpenStack on Ubuntu 20.04


    The official documentation

    When I first opened the official docs, the sheer volume was dazzling and I had no idea where to start.
    Following the usual instinct of looking for installation packages first, after a long detour through the site I finally settled on starting here:
    OpenStack packages for Ubuntu

    Since I had already worked through various all-in-one single-node OpenStack installs, this time I only actually install the Controller. If I can eventually ping the router, I will call it a success; everything else is a minor detail.

    CentOS 7 single-node, single-NIC RDO install of OpenStack
    Installing DevStack on Ubuntu 20.04
    Single-node MicroStack on Ubuntu 20.04

    I. My environment (prerequisites)

    3 Hyper-V virtual machines

    1. Controller (Ubuntu 20.04)
      Management NIC: 192.168.0.125
      Provider (external) NIC: 203.0.113.125

    2. Compute1 (Ubuntu 20.04)
      Management NIC: 192.168.0.127
      Provider (external) NIC: 203.0.113.127

    3. Simulated gateway (optional)
      Work-LAN NIC: 192.168.0.109
      Simulated public gateway: 203.0.113.1

      Simulating a gateway on Ubuntu is actually quite simple:
      Configuring a gateway server with ufw route on Ubuntu 18.04

    II. About NIC IPs and roles

    1. Repeated failed attempts: no matter what I tried, I could not ping the router's address (203.0.113.XX)

    2. Out of habit I assumed the Networking configuration was wrong, so I went around in circles in
      "Install and configure for Ubuntu", and even spent a long time on the OVN Install Documentation

    3. Bored and out of ideas, I went back to re-read the docs
      Quoting the original text:

      **Management on 10.0.0.0/24 with gateway 10.0.0.1

      This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.

      Provider on 203.0.113.0/24 with gateway 203.0.113.1

      This network requires a gateway to provide Internet access to instances in your OpenStack environment.**

      Suddenly it all made sense! The problem was the gateway!

    4. In earlier releases this "Provider" network was called "Public". I only half understood it at first; putting the two terms together makes it easier to grasp

    5. While learning, we have no real public address range and gateway to use, so we simulate a gateway for 203.0.113.0/24

    6. If you do not want a dedicated gateway VM, you can add a third NIC to the Controller to simulate one

    In short: to eventually ping the router and the instances' floating IPs, the gateway must be ready from the very beginning.
    Both the Controller and Compute1 servers must be able to ping 203.0.113.1
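    As a sketch of what the simulated-gateway VM needs (my own summary, not from the guide; I assume eth0 faces the work LAN and eth1 holds 203.0.113.1/24 — adjust the interface names to your setup), the essentials are IP forwarding plus NAT:

    ```shell
    # Minimal simulated gateway, run as root on the gateway VM.
    # Assumption: eth0 = work LAN, eth1 = 203.0.113.1/24 (simulated public side).

    # 1. Enable IPv4 forwarding persistently
    echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
    sysctl -p

    # 2. Masquerade traffic from the simulated public subnet out through the LAN NIC
    iptables -t nat -A POSTROUTING -s 203.0.113.0/24 -o eth0 -j MASQUERADE

    # 3. Allow forwarding between the two interfaces
    iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
    ```

    With this in place, any host on 203.0.113.0/24 that uses 203.0.113.1 as its gateway can reach the outside world through the work LAN.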

    III. Environment preparation

    Environment

    Although the latest release for Ubuntu 20.04 LTS is OpenStack Yoga,
    when I actually ran the command, "yoga" was not in the list offered, so I went with Xena instead:

    # add-apt-repository cloud-archive:yoga
    
    'yoga': not a valid cloud-archive name.
    Must be one of ['folsom', 'folsom-proposed', 'grizzly', 'grizzly-proposed', 'havana', 'havana-proposed', 'icehouse', 'icehouse-proposed', 'juno', 'juno-proposed', 'kilo', 'kilo-proposed', 'liberty', 'liberty-proposed', 'mitaka', 'mitaka-proposed', 'newton', 'newton-proposed', 'ocata', 'ocata-proposed', 'pike', 'pike-proposed', 'queens', 'queens-proposed', 'rocky', 'rocky-proposed', 'stein', 'stein-proposed', 'tools', 'tools-proposed', 'train', 'train-proposed', 'ussuri', 'ussuri-proposed', 'victoria', 'victoria-proposed', 'wallaby', 'wallaby-proposed', 'xena', 'xena-proposed']
    

    While learning, set all the passwords to the same value, e.g. secret
    For "Host networking", substitute your own IPs as described above

    Take a snapshot when this is done!
    Take a snapshot when this is done!
    Take a snapshot when this is done!

    I recommend running su and working as root; prefixing every single command with sudo gets tedious
    Finish installing the Controller completely on its own before touching Compute1
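    For the host-networking step, here is a sketch of what the Controller's two NICs might look like in netplan (the file name and interface names are assumptions; match them to your own system). Note that, as in the official guide, the provider NIC gets no IP address here — Neutron later attaches it to a bridge that carries 203.0.113.125:

    ```shell
    # Hypothetical /etc/netplan/00-installer-config.yaml on the Controller.
    # eth0 = management (192.168.0.125), eth1 = provider (left without an IP).
    # Apply afterwards with: netplan apply
    cat > /etc/netplan/00-installer-config.yaml <<'EOF'
    network:
      version: 2
      ethernets:
        eth0:
          addresses: [192.168.0.125/24]
          routes:
            - to: default
              via: 192.168.0.1
          nameservers:
            addresses: [114.114.114.114, 8.8.4.4]
        eth1:
          dhcp4: false
    EOF
    ```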

    IV. Installing the OpenStack services on the Controller

    Install OpenStack services
    As shown in the screenshot, I suggest installing everything up to (but not including) "Networking service – neutron installation for Xena", then
    take another snapshot!
    take another snapshot!
    take another snapshot!
    (screenshot omitted)
    After finishing "Networking service – neutron installation for Xena",
    take another snapshot!
    take another snapshot!
    take another snapshot!

    Because the Controller is installed completely on its own, the verification step here will be missing the compute1 entry and show only 4 rows

    There is also a network bridge filters setting that needs configuring

    vim /etc/sysctl.conf

    Add:

    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    

    The br_netfilter module must be loaded first

    modprobe br_netfilter
    

    Apply the settings:

    sysctl -p /etc/sysctl.conf

    	net.bridge.bridge-nf-call-iptables = 1
    	net.bridge.bridge-nf-call-ip6tables = 1
    
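    A modprobe done by hand does not survive a reboot. One way (my own addition, not from the guide) to make the module load persistently on Ubuntu 20.04 is a modules-load.d drop-in:

    ```shell
    # Persist br_netfilter across reboots (run as root)
    echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf

    # Verify it is loaded now and the bridge sysctls resolve
    modprobe br_netfilter
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables
    ```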

    root@controller:/home/dhbm# openstack network agent list

    +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
    | ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
    +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
    | 326c6f4c-370a-4f06-abfb-6e7fa1bc6c67 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
    | 633c6363-6a63-4ac9-a589-41d28a1508ae | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
    | 9a8ead8b-2173-44f2-a102-8c1d4b4098da | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
    | fce10fb1-f9a7-4f5e-862b-51955d52b149 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
    +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
    

    Do not rush into "Dashboard – horizon installation for Xena" yet;
    skip ahead to the next step: Launch an instance

    V. Launching an instance

    Launch an instance

    If you are worried that instances will not be able to ping the outside world later, you can change the DNS 8.8.4.4 to a domestic resolver such as 114.114.114.114

    1. Create the provider (external) network and subnet
      Provider network

      Copying the 3 commands:

       $ . admin-openrc

       $ openstack network create --share --external \
         --provider-physical-network provider \
         --provider-network-type flat provider

       $ openstack subnet create --network provider \
         --allocation-pool start=203.0.113.101,end=203.0.113.250 \
         --dns-nameserver 8.8.4.4 --gateway 203.0.113.1 \
         --subnet-range 203.0.113.0/24 provider
      
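      A quick sanity check after creating the provider network (my own extra step, not in the guide) is to confirm the subnet really got the intended gateway and allocation pool:

      ```shell
      # Assumes admin-openrc has been sourced
      openstack network list
      openstack subnet show provider -c cidr -c gateway_ip -c allocation_pools
      ```

      The gateway shown here must be the 203.0.113.1 we prepared earlier, or the router will never reach the outside.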
    2. Create the self-service (private) network and subnet
      Self-service network

      Copying the 3 commands:

       $ . demo-openrc
       
       $ openstack network create selfservice
       
       $ openstack subnet create --network selfservice \
         --dns-nameserver 8.8.4.4 --gateway 172.16.1.1 \
         --subnet-range 172.16.1.0/24 selfservice
      
    3. Create the router
      Copying the 4 commands:

       $ . demo-openrc
       
       $ openstack router create router
       
       $ openstack router add subnet router selfservice
       
       $ openstack router set router --external-gateway provider
      
    4. Verify network connectivity

      1). root@controller:/home/dhbm# source admin-openrc

      2). root@controller:/home/dhbm# ip netns

       qrouter-f6a53f5b-104a-4840-bd67-db6b5a51d6dd (id: 2)
       qdhcp-1c17dcdc-ab9a-4324-8dc0-e5a21515323d (id: 0)
       qdhcp-dcda4686-434c-409c-8de4-134eafdbe939 (id: 1)
      

      3). root@controller:/home/dhbm# openstack port list --router router

       +--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
       | ID                                   | Name | MAC Address       | Fixed IP Addresses                                                           | Status |
       +--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
       | 32127872-6154-44d0-b52d-5408fe659528 |      | fa:16:3e:14:b6:32 | ip_address='172.16.1.1', subnet_id='c0a15e76-3694-4b61-9101-71a9aed1e7b0'    | ACTIVE |
       | fe315fd4-9014-43f0-a64f-472758ac305f |      | fa:16:3e:c5:a5:e9 | ip_address='203.0.113.198', subnet_id='eac59d6f-a533-424d-bb22-8bc504acf773' | ACTIVE |
       +--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
      

      4). root@controller:/home/dhbm# ping 203.0.113.198

       PING 203.0.113.198 (203.0.113.198) 56(84) bytes of data.
       64 bytes from 203.0.113.198: icmp_seq=1 ttl=64 time=0.090 ms
       64 bytes from 203.0.113.198: icmp_seq=2 ttl=64 time=0.065 ms
       64 bytes from 203.0.113.198: icmp_seq=3 ttl=64 time=0.076 ms
       ......
      
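      Beyond pinging the router's external address from the host, it can also be useful (my own extra check, not part of the guide) to test outbound connectivity from inside the router's network namespace, since that is the path instance traffic actually takes:

      ```shell
      # Ping the simulated public gateway from inside the router namespace.
      # The qrouter-... ID comes from the `ip netns` output above.
      ip netns exec qrouter-f6a53f5b-104a-4840-bd67-db6b5a51d6dd ping -c 3 203.0.113.1
      ```

      If this ping fails while the host-side ping succeeds, the problem is inside Neutron rather than in the gateway setup.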
    5. The undo button (delete the 2 networks and the router above, in reverse order)

      openstack router remove subnet router selfservice
      openstack router delete router
      openstack subnet delete selfservice
      openstack network delete selfservice
      openstack subnet delete provider
      openstack network delete provider
      

      No need to continue further at this point; once the Dashboard is up in the next step, do the rest of the learning and operating from there

    VI. Installing openstack-dashboard

    install and configure the dashboard

    The Ubuntu part is here:
    Install and configure components

    1. There is an error here

       Enable the Identity API version 3:
       
       OPENSTACK_KEYSTONE_URL = "http://%s/identity/v3" % OPENSTACK_HOST
      

      In the Yoga release there is a note about this:

        Note

       In case your keystone run at 5000 port then you would mentioned keystone port here as well i.e. OPENSTACK_KEYSTONE_URL = "http://%s:5000/identity/v3" % OPENSTACK_HOST
      
    2. Two places need changing: SESSION_ENGINE and OPENSTACK_KEYSTONE_URL

      root@controller:/home/dhbm# vim /etc/openstack-dashboard/local_settings.py

       # If you use ``tox -e runserver`` for developments,then configure
       # SESSION_ENGINE to django.contrib.sessions.backends.signed_cookies
       # as shown below:
       #SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
       # add by wzh 20221108 use cookie 
       # SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
       SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
       
       CACHES = {
           'default': {
                'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                'LOCATION': 'controller:11211',
           }
       }
      
       ......
       
       # OPENSTACK_HOST = "127.0.0.1"
       # add by wzh 20221108
       OPENSTACK_HOST = "controller"
       # OPENSTACK_HOST = "192.168.0.125"
       # OPENSTACK_KEYSTONE_URL = "http://%s/identity/v3" % OPENSTACK_HOST
       OPENSTACK_KEYSTONE_URL = "http://%s:5000/identity/v3" % OPENSTACK_HOST
       
       ......
      
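      After editing local_settings.py, the web server has to pick up the changes; per the horizon install guide this is just a reload of Apache:

      ```shell
      # Reload the web server so horizon picks up the edited settings
      service apache2 reload

      # Optional quick check that the dashboard answers (assumes the /horizon path)
      curl -s -o /dev/null -w '%{http_code}\n' http://controller/horizon/
      ```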
      3. Log in at http://192.168.0.125/horizon/auth/login/?next=/horizon/project/

      View the networks:
      (screenshot omitted)

      View the projects:
      (screenshot omitted)
      Log out of Admin and switch to myproject (i.e. the demo project)

    (screenshot omitted)

    VII. Now for the compute node: compute1

    The compute node apparently only needs 3 things installed: chrony, the Compute service, and the Networking service

    1. Add the apt source
      OpenStack packages for Ubuntu

    2. Install the chrony time-sync service
      Other nodes reference the controller node for clock synchronization

    3. Install the Compute service on the compute node
      Install and configure a compute node

      Go to the Ubuntu part:
      Install and configure a compute node for Ubuntu

      Add the node, then verify on the Controller:

      root@controller:/home/dhbm# openstack compute service list --service nova-compute

       +----+--------------+------------+------+---------+-------+----------------------------+
       | ID | Binary       | Host       | Zone | Status  | State | Updated At                 |
       +----+--------------+------------+------+---------+-------+----------------------------+
       |  5 | nova-compute | controller | nova | enabled | up    | 2022-11-20T06:19:13.000000 |
       | 10 | nova-compute | compute1   | nova | enabled | up    | 2022-11-20T06:19:20.000000 |
       +----+--------------+------------+------+---------+-------+----------------------------+
      

      If compute1 is not listed, run discover_hosts again:

      root@controller:/home/dhbm# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

       Found 2 cell mappings.
       Skipping cell0 since it does not contain hosts.
       Getting computes from cell 'cell1': acff5521-7a28-4a1d-bf04-6354dc5884eb
       Checking host mapping for compute host 'controller': 7b591dfb-abe9-4403-95f8-577cc5609e85
       Creating host mapping for compute host 'controller': 7b591dfb-abe9-4403-95f8-577cc5609e85
       Checking host mapping for compute host 'compute1': c1d2f250-617a-4b9a-9d56-a88a8378a730
       Creating host mapping for compute host 'compute1': c1d2f250-617a-4b9a-9d56-a88a8378a730
       Found 2 unmapped computes in cell: acff5521-7a28-4a1d-bf04-6354dc5884eb
      
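      Instead of re-running discover_hosts by hand every time a compute node is added, nova can do it periodically via the documented discover_hosts_in_cells_interval option (in seconds). A sketch of the change on the controller:

      ```shell
      # Optional (not in the walkthrough): have the scheduler discover new compute
      # hosts automatically every 300 seconds. Add to /etc/nova/nova.conf on the
      # controller:
      #
      #   [scheduler]
      #   discover_hosts_in_cells_interval = 300
      #
      # then restart the scheduler:
      service nova-scheduler restart
      ```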
    4. Install the Networking service on the compute node

      Install and configure compute node
      Choose the same option as on the Controller: Networking Option 2: Self-service networks

      Re-verify the network agents; this time 5 rows are listed, the extra one being compute1's Linux bridge agent

      root@controller:/home/dhbm# openstack network agent list

       +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
       | ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
       +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
       | 326c6f4c-370a-4f06-abfb-6e7fa1bc6c67 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
       | 4b594d85-cc54-45b1-8264-2c692827d7ca | Linux bridge agent | compute1   | None              | XXX   | UP    | neutron-linuxbridge-agent |
       | 633c6363-6a63-4ac9-a589-41d28a1508ae | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
       | 9a8ead8b-2173-44f2-a102-8c1d4b4098da | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
       | fce10fb1-f9a7-4f5e-862b-51955d52b149 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
       +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
      
    5. Have a look in the Dashboard

    (screenshot omitted)

    The remaining parts are omitted here

    Postscript

    Recording the IP situation after the OpenStack services are up, as a comparison baseline for studying OVN later

    Controller

    root@controller:/home/dhbm# ip a

    1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0:  mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:15:5d:5a:a6:70 brd ff:ff:ff:ff:ff:ff
        inet 192.168.0.125/24 brd 192.168.0.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::215:5dff:fe5a:a670/64 scope link 
           valid_lft forever preferred_lft forever
    3: eth1:  mtu 1500 qdisc mq master brq1c17dcdc-ab state UP group default qlen 1000
        link/ether 00:15:5d:5a:a6:71 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::215:5dff:fe5a:a671/64 scope link 
           valid_lft forever preferred_lft forever
    4: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether 52:54:00:bf:5f:f9 brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
           valid_lft forever preferred_lft forever
    5: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
        link/ether 52:54:00:bf:5f:f9 brd ff:ff:ff:ff:ff:ff
    8: tap1d7845ef-f0@if2:  mtu 1450 qdisc noqueue master brqdcda4686-43 state UP group default qlen 1000
        link/ether 6e:ad:58:d4:3f:d7 brd ff:ff:ff:ff:ff:ff link-netns qdhcp-dcda4686-434c-409c-8de4-134eafdbe939
    9: tape0f651a9-88@if2:  mtu 1500 qdisc noqueue master brq1c17dcdc-ab state UP group default qlen 1000
        link/ether 06:a4:91:d3:aa:78 brd ff:ff:ff:ff:ff:ff link-netns qdhcp-1c17dcdc-ab9a-4324-8dc0-e5a21515323d
    10: brq1c17dcdc-ab:  mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 00:15:5d:5a:a6:71 brd ff:ff:ff:ff:ff:ff
        inet 203.0.113.125/24 brd 203.0.113.255 scope global brq1c17dcdc-ab
           valid_lft forever preferred_lft forever
        inet6 fe80::1819:f8ff:fe25:2dd6/64 scope link 
           valid_lft forever preferred_lft forever
    11: vxlan-415:  mtu 1450 qdisc noqueue master brqdcda4686-43 state UNKNOWN group default qlen 1000
        link/ether 86:2f:41:52:bc:9d brd ff:ff:ff:ff:ff:ff
    12: brqdcda4686-43:  mtu 1450 qdisc noqueue state UP group default qlen 1000
        link/ether 6e:ad:58:d4:3f:d7 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::1082:4dff:fe3e:9e5a/64 scope link 
           valid_lft forever preferred_lft forever
    13: tap32127872-61@if2:  mtu 1450 qdisc noqueue master brqdcda4686-43 state UP group default qlen 1000
        link/ether 9e:6e:29:64:50:82 brd ff:ff:ff:ff:ff:ff link-netns qrouter-f6a53f5b-104a-4840-bd67-db6b5a51d6dd
    14: tapfe315fd4-90@if3:  mtu 1500 qdisc noqueue master brq1c17dcdc-ab state UP group default qlen 1000
        link/ether f6:47:a5:37:b4:2a brd ff:ff:ff:ff:ff:ff link-netns qrouter-f6a53f5b-104a-4840-bd67-db6b5a51d6dd
    

    Compute node (compute1)

    root@compute1:/home/dhbm# ip a

    1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0:  mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:15:5d:5a:a6:81 brd ff:ff:ff:ff:ff:ff
        inet 192.168.0.127/24 brd 192.168.0.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::215:5dff:fe5a:a681/64 scope link 
           valid_lft forever preferred_lft forever
    3: eth1:  mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:15:5d:5a:a6:82 brd ff:ff:ff:ff:ff:ff
        inet 203.0.113.127/24 brd 203.0.113.255 scope global eth1
           valid_lft forever preferred_lft forever
        inet6 fe80::215:5dff:fe5a:a682/64 scope link 
           valid_lft forever preferred_lft forever
    4: ovs-system:  mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether 3e:3d:be:8b:a4:a3 brd ff:ff:ff:ff:ff:ff
    5: br-int:  mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether 4e:2b:f8:3f:e7:16 brd ff:ff:ff:ff:ff:ff
    6: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether 52:54:00:1b:4c:40 brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
           valid_lft forever preferred_lft forever
    7: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
        link/ether 52:54:00:1b:4c:40 brd ff:ff:ff:ff:ff:ff
    
  • Original article: https://blog.csdn.net/u010953609/article/details/127933944