• Errors encountered with OpenStack


    This was more or less my first time setting this thing up, and the problems I hit were strange ones. I'm recording them here; some were worked around rather than solved (I had no idea where to start on them), so treat this as a reference only. The release is Queens and the OS is Kylin V10.


    Checking node registration

    Reference: "Adding a new compute node that never registers into cells" (itachi-uchiha's blog, CSDN)

    Fix:

    The cause was that the compute node had not joined successfully, so go to the compute node and check the relevant logs.

    The log showed the following error (highlighted in a red box in the original screenshot):

    I don't yet know why it looks for this path, or why the path was never created; manually mkdir the path.

    After that, the controller side can see the node.
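    If the path exists but the node still is not registered, re-running cell discovery on the controller is the usual way to pick it up. A sketch of the stock cell_v2 workflow (these commands are not from the original notes):

```shell
# On the controller: map any compute hosts not yet assigned to a cell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

# Verify the node is now visible
nova-manage cell_v2 list_hosts
openstack compute service list --service nova-compute
```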


    failed to compute_task_build_instances: No valid host was found. The cause:

    Check whether the compute node has enough resources; if not, reduce the resources requested for the instance.

    Reference: "openstack instance creation reports: no valid host found, not enough hosts, and clicking the instance returns 500" (Cvv_菜包's blog, CSDN)
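    Before shrinking the flavor, it helps to confirm what the hypervisors actually have free. A sketch (stock commands; nothing here is specific to my setup):

```shell
# Aggregate free/used resources across hypervisors
openstack hypervisor stats show

# Compare against the flavor being requested
openstack hypervisor list
openstack flavor list
```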


    On the controller, openstack volume service list checks the storage service status; it reported the following error:

    And the cinder log was as follows; it looked like an authentication failure:

    First check whether the cinder password in /etc/cinder/cinder.conf is correct (mine was wrong at first; even after fixing it to the correct cinder password, it still failed).

    If that checks out and no other errors turn up, the last resort is to regenerate the cinder database.
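    For reference, the password has to be right in more than one place in /etc/cinder/cinder.conf; a minimal sketch of the sections involved (CINDER_DBPASS and CINDER_PASS are placeholders, and "controller" stands for the controller's hostname):

```ini
[database]
# must match the cinder database user's password
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
# must match the keystone "cinder" user's password
username = cinder
password = CINDER_PASS
```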


    chown -R nova:nova /usr/lib/python2.7/site-packages/instances/

    [root@compute site-packages]# ls -al /usr/lib/python2.7/site-packages/instances/

    total 40

    drwxr-xr-x   3 nova nova    40  Sep 26 10:44 .

    drwxr-xr-x 397 root root 20480  Sep 26 10:03 ..

    -rw-r--r--   1 nova nova    30  Sep 26 10:44 compute_nodes

    drwxr-xr-x   2 nova nova    40  Sep 26 10:44 locks

    [root@compute site-packages]#


    [root@cinder ~]# tail -f /var/log/cinder/volume.log

     ImageCopyFailure: Failed to copy image to volume: qemu-img: error while writing sector 2207744: No space left on device

    On the controller, delete unneeded volumes from openstack volume list to free space.
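    The "No space left on device" from qemu-img means the backing store for cinder volumes is full, so the cleanup amounts to deleting volumes that are no longer needed. A sketch (the volume ID is a placeholder):

```shell
# On the controller: find and remove unneeded volumes
openstack volume list
openstack volume delete <volume-id>

# On the cinder node: confirm free space in the volume group
vgs cinder-volumes
```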


    API display issue when accessing it

    The services listed above correspond to this.


    [root@compute ~]# tail -f /var/log/nova/*.log

    2022-10-28 09:09:55.062 1595945 INFO nova.virt.libvirt.driver [req-28a720fb-7929-43d2-9aa5-2c8d9fc44c40 0d807080c140419d8402722f55d28780 3988ee5d3b9041dfb7e8ffbb18ee17a2 - default default] [instance: f7a1971c-0894-46b5-ba83-4c8fb59dcefe] Creating image

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager [req-28a720fb-7929-43d2-9aa5-2c8d9fc44c40 0d807080c140419d8402722f55d28780 3988ee5d3b9041dfb7e8ffbb18ee17a2 - default default] Instance failed network setup after 1 attempt(s): PortBindingFailed: Binding port 5c84287d-5a46-4e9d-a70d-2912ed8ca063 failed; check the logs for more details.

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager Traceback (most recent call last):

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1436, in _allocate_network_async

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager     bind_host_id=bind_host_id)

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 989, in allocate_for_instance

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager     bind_host_id, available_macs, requested_ports_dict)

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1122, in _update_ports_for_instance

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager     vif.destroy()

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager     self.force_reraise()

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager     six.reraise(self.type_, self.value, self.tb)

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1092, in _update_ports_for_instance

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager     port_client, instance, port_id, port_req_body)

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 470, in _update_port

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager     _ensure_no_port_binding_failure(port)

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 215, in _ensure_no_port_binding_failure

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager     raise exception.PortBindingFailed(port_id=port['id'])

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager PortBindingFailed: Binding port 5c84287d-5a46-4e9d-a70d-2912ed8ca063 failed; check the logs for more details.

    2022-10-28 09:09:58.218 1595945 ERROR nova.compute.manager

    2022-10-28 09:09:58.221 1595945 ERROR nova.compute.manager [req-28a720fb-7929-43d2-9aa5-2c8d9fc44c40 0d807080c140419d8402722f55d28780 3988ee5d3b9041dfb7e8ffbb18ee17a2 - default default] [instance: f7a1971c-0894-46b5-ba83-4c8fb59dcefe] Instance failed to spawn: PortBindingFailed: Binding port 5c84287d-5a46-4e9d-a70d-2912ed8ca063 failed; check the logs for more details.

    [root@compute neutronv2]# systemctl restart openstack-nova-compute.service


    Compute node errors

    [root@compute ~]# journalctl -xefu  openstack-nova-compute.service

    9月 27 10:36:45 compute nova-compute[362647]: 2022-09-27 10:36:45.032 362647 ERROR root [req-19ed90ea-c891-4e31-9b11-facd9168a7ba 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] Original exception being dropped: ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py", line 510, in connect_volume\n    return self._connect_single_volume(connection_properties)\n', '  File "/usr/lib/python2.7/site-packages/os_brick/utils.py", line 61, in _wrapper\n    return r.call(f, *args, **kwargs)\n', '  File "/usr/lib/python2.7/site-packages/retrying.py", line 247, in call\n    return attempt.get(self._wrap_exception)\n', '  File "/usr/lib/python2.7/site-packages/retrying.py", line 285, in get\n    reraise(self.value[0], self.value[1], self.value[2])\n', '  File "/usr/lib/python2.7/site-packages/retrying.py", line 241, in call\n    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)\n', '  File "/usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py", line 565, in _connect_single_volume\n    self._connect_vol(self.device_scan_attempts, props, data)\n', '  File "/usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py", line 620, in _connect_vol\n    session, manual_scan = self._connect_to_iscsi_portal(props)\n', '  File "/usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py", line 1009, in _connect_to_iscsi_portal\n    \'--op\', \'new\'))\n', '  File "/usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py", line 962, in _run_iscsiadm\n    delay_on_retry=delay_on_retry)\n', '  File "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in _execute\n    result = self.__execute(*args, **kwargs)\n', '  File "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line 169, in execute\n    return execute_root(*cmd, **kwargs)\n', '  File 
"/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 207, in _wrap\n    return self.channel.remote_call(name, args, kwargs)\n', '  File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in remote_call\n    raise exc_type(*result[2])\n', "ProcessExecutionError: Unexpected error while running command.\nCommand: iscsiadm -m node -T iqn.2010-10.org.openstack:volume-309aa2dc-f83b-41fe-9cd8-dbd22a583b03 -p 192.168.100.134:3260 --interface default --op new\nExit code: 6\nStdout: u''\nStderr: u'iscsiadm: Cannot rename /etc/iscsi/nodes/iqn.2010-10.org.openstack:volume-309aa2dc-f83b-41fe-9cd8-dbd22a583b03/192.168.100.134,3260 -> /etc/iscsi/nodes/iqn.2010-10.org.openstack:volume-309aa2dc-f83b-41fe-9cd8-dbd22a583b03/192.168.100.134,3260,-1/default\\n\\niscsiadm: Error while adding record: encountered iSCSI database failure\\n'\n"]: IndexError: list index out of range

    9月 27 10:36:45 compute nova-compute[362647]: 2022-09-27 10:36:45.035 362647 ERROR nova.compute.manager [req-19ed90ea-c891-4e31-9b11-facd9168a7ba 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] [instance: 112a3d6f-5d73-4a2f-974e-a4af16eca890] Instance failed to spawn: IndexError: list index out of range

    9月 27 10:36:45 compute nova-compute[362647]: 2022-09-27 10:36:45.035 362647 ERROR nova.compute.manager [instance: 112a3d6f-5d73-4a2f-974e-a4af16eca890] Traceback (most recent call last):

    iSCSI errors

    Analysis: check whether the various iSCSI commands work on the storage node.

    [root@cinder ~]# iscsiadm -m discovery -t st -p 192.168.100.134

    iscsiadm: cannot make connection to 192.168.100.134: Connection refused

    iscsiadm: cannot make connection to 192.168.100.134: Connection refused

    iscsiadm: cannot make connection to 192.168.100.134: Connection refused

    iscsiadm: cannot make connection to 192.168.100.134: Connection refused

    iscsiadm: cannot make connection to 192.168.100.134: Connection refused

    iscsiadm: cannot make connection to 192.168.100.134: Connection refused

    iscsiadm: connection login retries (reopen_max) 5 exceeded

    iscsiadm: Could not perform SendTargets discovery: iSCSI PDU timed out

    [root@cinder ~]# tgtadm --mode target --op show

    tgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected

    [root@cinder ~]# tgtadm

    tgtadm: specify the operation type

    [root@cinder ~]# tgtadm --mode

    tgtadm: option '--mode' requires an argument

    Try `tgtadm --help' for more information.

    [root@cinder ~]# tgtadm --mode  target

    tgtadm: specify the operation type

    [root@cinder ~]# tgtadm --mode  target --op

    tgtadm: option '--op' requires an argument

    Try `tgtadm --help' for more information.

    [root@cinder ~]# tgtadm --mode  target --op show

    tgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected

    [root@cinder ~]# iptables-save > 1.txt

    [root@cinder ~]#

    [root@cinder ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

    [root@cinder ~]#

    [root@cinder ~]# tgtadm --mode  target --op show

    tgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected

    [root@cinder ~]# systemctl status tgtd

    ● tgtd.service - tgtd iSCSI target daemon

       Loaded: loaded (/usr/lib/systemd/system/tgtd.service; disabled; vendor preset: dis>

       Active: inactive (dead)

    [root@cinder ~]# systemctl start  tgtd

    [root@cinder ~]# systemctl status tgtd

    ● tgtd.service - tgtd iSCSI target daemon

       Loaded: loaded (/usr/lib/systemd/system/tgtd.service; disabled; vendor preset: dis>

       Active: active (running) since Tue 2022-09-27 12:23:28 CST; 1s ago

      Process: 455937 ExecStartPost=/bin/sleep 5 (code=exited, status=0/SUCCESS)

      Process: 456066 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State >

      Process: 456067 ExecStartPost=/usr/sbin/tgt-admin -e -c $TGTD_CONFIG (code=exited, >

      Process: 456072 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State >

     Main PID: 455936 (tgtd)

        Tasks: 2

       Memory: 3.5M

       CGroup: /system.slice/tgtd.service

               └─455936 /usr/sbin/tgtd -f

    9月 27 12:23:23 cinder systemd[1]: Starting tgtd iSCSI target daemon...

    9月 27 12:23:23 cinder tgtd[455936]: tgtd: iser_ib_init(3431) Failed to initialize RD>

    9月 27 12:23:23 cinder tgtd[455936]: tgtd: work_timer_start(146) use timer_fd based s>

    9月 27 12:23:23 cinder tgtd[455936]: tgtd: bs_init(393) use pthread notification

    9月 27 12:23:28 cinder systemd[1]: Started tgtd iSCSI target daemon.

    [root@cinder ~]# tgtadm --mode  target --op show

    [root@cinder ~]#

    [root@cinder ~]# iscsiadm -m discovery -t st -p 192.168.100.134

    [root@cinder ~]#

    [root@cinder ~]#

    [root@cinder ~]# iscsiadm -m discovery -t st -p 192.168.100.134:3260

    [root@cinder ~]#

    Reference: "Adding iSCSI shared storage to OpenStack" (独笔孤行's blog, CSDN)

    Note: the cinder node's IP is 192.168.100.134

    But: the cinder node still reported an error saying port 3260 was already in use...?

    The final fix: upgrade the open-iscsi package (rpm -Uvh below).

    [root@compute ~]# iscsiadm -m node -T iqn.2010-10.org.openstack:volume-88e9a96a-6b57-4b62-a279-f04213a1c130 -p 192.168.100.134:3260 --interface default --op new

    New iSCSI node [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.100.134,3260,-1 iqn.2010-10.org.openstack:volume-88e9a96a-6b57-4b62-a279-f04213a1c130] added

    [root@compute ~]#

    [root@compute ~]#

    [root@compute ~]#

    [root@compute ~]# rpm -Uvh http://10.1.123.238/kojifiles/packages/open-iscsi/2.1.1/11.p01.ky10/aarch64/open-iscsi-2.1.1-11.p01.ky10.aarch64.rpm http://10.1.123.238/kojifiles/packages/open-iscsi/2.1.1/11.p01.ky10/aarch64/open-iscsi-help-2.1.1-11.p01.ky10.aarch64.rpm

    [root@compute ~]# ssh cinder

    root@cinder's password:

    Web console: https://cinder:9090/ or https://192.168.100.134:9090/

    Last login: Tue Sep 27 17:14:21 2022 from 192.168.100.157

    [root@cinder ~]# lsof -i :3260

    [root@cinder ~]#


    9月 28 09:16:52 compute nova-compute[86360]: 2022-09-28 09:16:52.831 86360 INFO nova.virt.libvirt.driver [req-2401507c-8bc7-4ce0-8b05-9bec080a4307 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] CPU mode "host-passthrough" was chosen. Live migration can break unless all compute nodes have identical cpus. AArch64 does not support other modes.

    9月 28 09:16:52 compute nova-compute[86360]: 2022-09-28 09:16:52.833 86360 WARNING nova.virt.libvirt.driver [req-2401507c-8bc7-4ce0-8b05-9bec080a4307 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] uefi support is without some kind of functional testing and therefore considered experimental.

    9月 28 09:16:52 compute nova-compute[86360]: 2022-09-28 09:16:52.834 86360 INFO os_brick.initiator.connectors.iscsi [req-2401507c-8bc7-4ce0-8b05-9bec080a4307 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] Trying to connect to iSCSI portal 192.168.100.134:3260

    9月 28 09:16:53 compute nova-compute[86360]: 2022-09-28 09:16:53.086 86360 WARNING os_brick.initiator.connectors.iscsi [req-2401507c-8bc7-4ce0-8b05-9bec080a4307 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] Couldn't find iscsi sessions because iscsiadm err: iscsiadm: No active sessions.

    9月 28 09:16:53 compute nova-compute[86360]: 2022-09-28 09:16:53.087 86360 INFO os_brick.initiator.connectors.iscsi [req-2401507c-8bc7-4ce0-8b05-9bec080a4307 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] Trying to 4444444444444444444444444444444

    9月 28 09:16:53 compute nova-compute[86360]: 2022-09-28 09:16:53.133 86360 INFO os_brick.initiator.connectors.iscsi [req-2401507c-8bc7-4ce0-8b05-9bec080a4307 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] Trying to 6666666666666666666666666666

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [req-2401507c-8bc7-4ce0-8b05-9bec080a4307 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac] Instance failed to spawn: InternalError: Unexpected vif_type=binding_failed

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac] Traceback (most recent call last):

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2274, in _build_resources

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]     yield resources

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2054, in _build_and_run_instance

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]     block_device_info=block_device_info)

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3165, in spawn

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]     mdevs=mdevs)

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5458, in _get_guest_xml

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]     context, mdevs)

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5255, in _get_guest_config

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]     flavor, virt_type, self._host)

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 591, in get_config

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]     _("Unexpected vif_type=%s") % vif_type)

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac] InternalError: Unexpected vif_type=binding_failed

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.297 86360 ERROR nova.compute.manager [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac]

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.299 86360 INFO nova.compute.manager [req-2401507c-8bc7-4ce0-8b05-9bec080a4307 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac] Terminating instance

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.313 86360 INFO nova.virt.libvirt.driver [-] [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac] Instance destroyed successfully.

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.588 86360 INFO nova.virt.libvirt.driver [req-2401507c-8bc7-4ce0-8b05-9bec080a4307 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac] Deleting instance files /usr/lib/python2.7/site-packages/instances/504fc9ac-6461-4eb3-81c7-fdc2053088ac_del

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.589 86360 INFO nova.virt.libvirt.driver [req-2401507c-8bc7-4ce0-8b05-9bec080a4307 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac] Deletion of /usr/lib/python2.7/site-packages/instances/504fc9ac-6461-4eb3-81c7-fdc2053088ac_del complete

    9月 28 09:16:57 compute nova-compute[86360]: 2022-09-28 09:16:57.812 86360 INFO nova.compute.manager [req-2401507c-8bc7-4ce0-8b05-9bec080a4307 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] [instance: 504fc9ac-6461-4eb3-81c7-fdc2053088ac] Took 0.50 seconds to destroy the instance on the hypervisor.

    ^C

    [root@compute ~]# systemctl restart openstack-nova-compute.service

    [root@compute ~]#


    Compute node openstack-nova-compute log error:

    -privsep_context vif_plug_linux_bridge.privsep.vif_plug --privsep_sock_path /tmp/tmpmrTFPn/privsep.sock

    9月 28 12:05:49 compute nova-compute[100181]: 2022-09-28 12:05:49.239 100181 INFO oslo.privsep.daemon [req-98be9e22-2aeb-4d61-9a55-28793f31ff2a 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] Spawned new privsep daemon via rootwrap

    9月 28 12:05:49 compute nova-compute[100181]: 2022-09-28 12:05:48.687 100810 INFO oslo.privsep.daemon [-] privsep daemon starting

    9月 28 12:05:49 compute nova-compute[100181]: 2022-09-28 12:05:48.752 100810 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0

    9月 28 12:05:49 compute nova-compute[100181]: 2022-09-28 12:05:48.777 100810 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN/CAP_NET_ADMIN/none

    9月 28 12:05:49 compute nova-compute[100181]: 2022-09-28 12:05:48.778 100810 INFO oslo.privsep.daemon [-] privsep daemon running as pid 100810

    9月 28 12:05:49 compute nova-compute[100181]: 2022-09-28 12:05:49.544 100181 INFO os_vif [req-98be9e22-2aeb-4d61-9a55-28793f31ff2a 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:a2:54:ad,bridge_name='brq596914c8-72',has_traffic_filtering=True,id=20500ebd-6549-4247-ba85-47a7a82b5e37,network=Network(596914c8-7233-4c33-afed-169b10caa3d4),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap20500ebd-65')

    9月 28 12:05:49 compute nova-compute[100181]: Traceback (most recent call last):

    9月 28 12:05:49 compute nova-compute[100181]:   File "/usr/lib64/python2.7/logging/__init__.py", line 868, in emit

    9月 28 12:05:49 compute nova-compute[100181]:     msg = self.format(record)

    9月 28 12:05:49 compute nova-compute[100181]:   File "/usr/lib/python2.7/site-packages/oslo_log/handlers.py", line 168, in format

    9月 28 12:05:49 compute nova-compute[100181]:     return logging.StreamHandler.format(self, record) + record.reset_color

    9月 28 12:05:49 compute nova-compute[100181]:   File "/usr/lib64/python2.7/logging/__init__.py", line 741, in format

    9月 28 12:05:49 compute nova-compute[100181]:     return fmt.format(record)

    9月 28 12:05:49 compute nova-compute[100181]:   File "/usr/lib/python2.7/site-packages/oslo_log/formatters.py", line 496, in format

    9月 28 12:05:49 compute nova-compute[100181]:     return logging.Formatter.format(self, record)

    9月 28 12:05:49 compute nova-compute[100181]:   File "/usr/lib64/python2.7/logging/__init__.py", line 476, in format

    9月 28 12:05:49 compute nova-compute[100181]:     raise e

    9月 28 12:05:49 compute nova-compute[100181]: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 14: ordinal not in range(128)

    9月 28 12:05:49 compute nova-compute[100181]: Logged from file guest.py, line 129

    9月 28 12:05:49 compute nova-compute[100181]: Traceback (most recent call last):

    9月 28 12:05:49 compute nova-compute[100181]:   File "/usr/lib64/python2.7/logging/__init__.py", line 868, in emit

    9月 28 12:05:49 compute nova-compute[100181]:     msg = self.format(record)

    9月 28 12:05:49 compute nova-compute[100181]:   File "/usr/lib/python2.7/site-packages/oslo_log/handlers.py", line 168, in format

    9月 28 12:05:49 compute nova-compute[100181]:     return logging.StreamHandler.format(self, record) + record.reset_color

    9月 28 12:05:49 compute nova-compute[100181]:   File "/usr/lib64/python2.7/logging/__init__.py", line 741, in format

    9月 28 12:05:49 compute nova-compute[100181]:     return fmt.format(record)

    9月 28 12:05:49 compute nova-compute[100181]:   File "/usr/lib/python2.7/site-packages/oslo_log/formatters.py", line 496, in format

    9月 28 12:05:49 compute nova-compute[100181]:     return logging.Formatter.format(self, record)

    9月 28 12:05:49 compute nova-compute[100181]:   File "/usr/lib64/python2.7/logging/__init__.py", line 476, in format

    9月 28 12:05:49 compute nova-compute[100181]:     raise e

    9月 28 12:05:49 compute nova-compute[100181]: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 14: ordinal not in range(128)

    9月 28 12:05:49 compute nova-compute[100181]: Logged from file driver.py, line 5670

    Fix:

    [root@compute site-packages]# vim /usr/lib64/python2.7/logging/__init__.py +741

    [root@compute site-packages]# vim /usr/lib/python2.7/site-packages/oslo_log/handlers.py +168

    [root@compute site-packages]# vim /usr/lib/python2.7/site-packages/oslo_log/handlers.py +168

    [root@compute site-packages]# vim /usr/lib/python2.7/site-packages/oslo_log/formatters.py

    Add the following two lines near the top of each of the files above, right after "import sys" (this is a Python 2-only workaround):

    reload(sys)

    sys.setdefaultencoding('utf-8')

    Then restart: [root@compute ~]# systemctl restart openstack-nova-compute.service
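    The edit to the files above is just inserting those two lines after "import sys". A demonstration of the sed form of that edit on a scratch file (not the real oslo_log sources; sys.setdefaultencoding only exists on Python 2):

```shell
# Work on a scratch copy instead of the real file
src=$(mktemp)
printf 'import logging\nimport sys\n' > "$src"

# Insert the two workaround lines right after "import sys"
sed -i "/^import sys$/a reload(sys)" "$src"
sed -i "/^reload(sys)$/a sys.setdefaultencoding('utf-8')" "$src"

cat "$src"
```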


    Controller node network error log:

    Compute node openstack-nova-compute error:

    d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] Couldn't find iscsi sessions because iscsiadm err: iscsiadm: No active sessions.

    9月 27 17:31:36 compute nova-compute[15398]: 2022-09-27 17:31:36.242 15398 ERROR nova.compute.manager [req-250f27e3-8517-4dfb-8b84-74bc900975c9 627ba90c1e3d4d139090d39179b81181 e43ae063bfad4c79b6bde8fe31449f4c - default default] [instance: e69580c6-11b9-477f-ac2a-64f08ee9cf7d] Instance failed to spawn: InternalError: Unexpected vif_type=binding_failed

    9月 27 17:31:36 compute nova-compute[15398]: 2022-09-27 17:31:36.242 15398 ERROR nova.compute.manager [instance: e69580c6-11b9-477f-ac2a-64f08ee9cf7d] Traceback (most recent call last):

    9月 27 17:31:36 compute nova-compute[15398]: 2022-09-27 17:31:36.242 15398 ERROR nova.compute.manager [instance: e69580c6-11b9-477f-ac2a-64f08ee9cf7d]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2274, in _build_resources

    9月 27 17:31:36 compute nova-compute[15398]: 2022-09-27 17:31:36.242 15398 ERROR nova.compute.manager [instance: e69580c6-11b9-477f-ac2a-64f08ee9cf7d]     yield resources

    9月 27 17:31:36 compute nova-compute[15398]: 2022-09-27 17:31:36.242 15398 ERROR nova.compute.manager [instance: e69580c6-11b9-477f-ac2a-64f08ee9cf7d]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2054, in _build_and_run_instance

    9月 27 17:31:36 compute nova-compute[15398]: 2022-09-27 17:31:36.242 15398 ERROR nova.compute.manager [instance: e69580c6-11b9-477f-ac2a-64f08ee9cf7d]     block_device_info=block_device_info)

    9月 27 17:31:36 compute nova-compute[15398]: 2022-09-27 17:31:36.242 15398 ERROR nova.compute.manager [instance: e69580c6-11b9-477f-ac2a-64f08ee9cf7d]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3165, in spawn

    9月 27 17:31:36 compute nova-compute[15398]: 2022-09-27 17:31:36.242 15398 ERROR nova.compute.manager [instance: e69580c6-11b9-477f-ac2a-64f08ee9cf7d]     mdevs=mdevs)

    9月 27 17:31:36 compute nova-compute[15398]: 2022-09-27 17:31:3

    For that network error, check:

    the [neutron] section of the nova configuration

    transport_url in neutron's configuration file

    In my setup, .157 is the compute node. Since rabbitmq is deployed only on the controller, transport_url can only point to the controller's IP; the value shown in the screenshot above was wrong.

    After changing it to the controller's IP, the network error above went away.
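    For reference, the line in question; a sketch with a placeholder password (RABBIT_PASS). The point is that the host part names the node actually running rabbitmq, i.e. the controller:

```ini
# /etc/neutron/neutron.conf (and likewise transport_url in nova.conf)
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
```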


    [root@compute nova]# journalctl -xefu  neutron-linuxbridge-agent.service

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]: Traceback (most recent call last):

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/bin/neutron-linuxbridge-agent", line 6, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     from neutron.cmd.eventlet.plugins.linuxbridge_neutron_agent import main

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/lib/python2.7/site-packages/neutron/cmd/eventlet/plugins/linuxbridge_neutron_agent.py", line 15, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     import \

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", line 35, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     from neutron.api.rpc.handlers import securitygroups_rpc as sg_rpc

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/lib/python2.7/site-packages/neutron/api/rpc/handlers/securitygroups_rpc.py", line 24, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     from neutron.api.rpc.handlers import resources_rpc

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py", line 24, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     from neutron.api.rpc.callbacks.consumer import registry as cons_registry

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/lib/python2.7/site-packages/neutron/api/rpc/callbacks/consumer/registry.py", line 15, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     from neutron.api.rpc.callbacks import resource_manager

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/lib/python2.7/site-packages/neutron/api/rpc/callbacks/resource_manager.py", line 21, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     from neutron.api.rpc.callbacks import resources

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/lib/python2.7/site-packages/neutron/api/rpc/callbacks/resources.py", line 15, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     from neutron.objects import network

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/lib/python2.7/site-packages/neutron/objects/network.py", line 21, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     from neutron.db.models import segment as segment_model

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/lib/python2.7/site-packages/neutron/db/models/segment.py", line 24, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     from neutron.extensions import segment

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/lib/python2.7/site-packages/neutron/extensions/segment.py", line 26, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     from neutron.api import extensions

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/lib/python2.7/site-packages/neutron/api/extensions.py", line 32, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     from neutron.plugins.common import constants as const

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:   File "/usr/lib/python2.7/site-packages/neutron/plugins/common/constants.py", line 28, in

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]:     'router': constants.L3,

    10月 28 11:13:07 compute neutron-linuxbridge-agent[2066496]: AttributeError: 'module' object has no attribute 'L3'

    10月 28 11:13:07 compute systemd[1]: neutron-linuxbridge-agent.service: Main process exited, code=exited, status=1/FAILURE

    -- Subject: Unit process exited

    -- Defined-By: systemd

    -- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel

    Comment out the offending line ('router': constants.L3) in /usr/lib/python2.7/site-packages/neutron/plugins/common/constants.py.

    [root@compute nova]# systemctl restart  neutron-linuxbridge-agent.service
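    What "comment it out" means concretely: the traceback ends at the 'router': constants.L3, line, and prefixing that line with # is the workaround. A demonstration of the edit on a scratch copy (not the real constants.py):

```shell
# Scratch copy standing in for neutron/plugins/common/constants.py
f=$(mktemp)
printf "    'router': constants.L3,\n" > "$f"

# Comment out the line that references the missing constants.L3
sed -i "s/^\( *'router': constants\.L3,\)/#\1/" "$f"

cat "$f"
```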


    Cinder node error

    [root@cinder ~]# pvcreate /dev/sdb

      Device /dev/sdb excluded by a filter.

    [root@cinder ~]#

    Comment out the filter in /etc/lvm/lvm.conf, then retry:

    [root@cinder ~]# pvcreate /dev/sdb

      Physical volume "/dev/sdb" successfully created.
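    The rejecting filter lives in the devices section of /etc/lvm/lvm.conf. A sketch of what to look for (the device names are illustrative; adapt to your disks):

```
# /etc/lvm/lvm.conf, devices section (illustrative)
devices {
    # A restrictive filter like this rejects /dev/sdb:
    # filter = [ "a|/dev/sda|", "r|.*|" ]
    # Either comment the filter out entirely, or accept the new disk explicitly:
    filter = [ "a|/dev/sda|", "a|/dev/sdb|", "r|.*|" ]
}
```

    After editing, rerun pvcreate /dev/sdb.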


    2022-10-09 09:00:17.246 4655 ERROR nova.compute.manager [instance: ca7f45ce-94d2-4bc0-a895-e44bd34d0cbc] BuildAbortException: Build of instance ca7f45ce-94d2-4bc0-a895-e44bd34d0cbc aborted: Volume 8a6b9e55-bb98-4613-8f33-2a3c8dea865e did not finish being created even after we waited 191 seconds or 61 attempts. And its status is creating.

    On the compute node, nova.conf has a parameter that controls volume-creation retries: block_device_allocate_retries. Raising it extends how long Nova waits for the volume.

    Its default is 60, which matches the "61 attempts" in the failure message above (the initial check plus 60 retries). Setting it higher, e.g. 180, stops Nova from timing out before the volume finishes creating, which resolves this problem.

    Set the parameter, then restart the compute node services:

    openstack-config --set /etc/nova/nova.conf DEFAULT block_device_allocate_retries 180

    systemctl restart libvirtd.service openstack-nova-compute.service
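    The "191 seconds / 61 attempts" figures fit together because of a companion option, block_device_allocate_retries_interval, whose default (if I recall the Queens defaults correctly) is 3 seconds between checks: roughly 61 checks x 3 s ≈ 191 s. A sketch of the resulting nova.conf section:

```
[DEFAULT]
# Number of times to recheck the volume before giving up (default 60)
block_device_allocate_retries = 180
# Seconds between rechecks (default 3)
block_device_allocate_retries_interval = 3
```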


    Error 1:

    Starting openstack-nova-compute.service on the compute node fails, with the following in the error log:

    Fix:

    Run the following on the controller node:

    [root@openstack-node01 ~]# netstat -ntpl|grep 5672

    tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      695942/beam.smp    

    tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      695942/beam.smp    

    tcp6       0      0 :::5672                 :::*                    LISTEN      695942/beam.smp    

    [root@openstack-node01 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

    Setting permissions for user "openstack" in vhost "/" ...

    Error:

    {:no_such_user, "openstack"}

    [root@openstack-node01 ~]# rabbitmqctl add_user openstack openstack

    Adding user "openstack" ...

    [root@openstack-node01 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

    Setting permissions for user "openstack" in vhost "/" ...

    [root@openstack-node01 ~]#

    Then restart the service on the compute node.
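    For the compute node to reach this broker, its nova.conf must carry matching RabbitMQ credentials. A sketch (the hostname and password here are examples; use your own):

```
[DEFAULT]
transport_url = rabbit://openstack:openstack@openstack-node01
```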



    Stuck at the Horizon login page, never reaching the project dashboard

    Check the /etc/openstack-dashboard/local_settings file, then restart the services:

    systemctl daemon-reload

    systemctl restart httpd.service

    systemctl restart memcached.service

    That resolves it.
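    The login loop is commonly a session/cache problem. An illustrative excerpt of the settings worth checking in local_settings (the memcached host below is an assumption; point it at wherever memcached actually runs):

```
# /etc/openstack-dashboard/local_settings (illustrative excerpt)
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'openstack-node01:11211',
    },
}
```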

    Additional notes:

    References:

    OpenStack Queens版搭建详解_CodeStarNote的博客-CSDN博客

    Section 3.5, installing the Horizon service

    -------------------------------------------------------------------------------------

    Deleting the network shown by openstack network list and recreating it fixed this:

    [root@openstack-node01 ~]# openstack network list
    [root@openstack-node01 ~]#
    [root@openstack-node01 ~]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | UP                                   |
    | availability_zone_hints   |                                      |
    | availability_zones        |                                      |
    | created_at                | 2022-09-22T07:50:35Z                 |
    | description               |                                      |
    | dns_domain                | None                                 |
    | id                        | 5c8c50dc-2b40-48f9-b68b-0d1a41dd086d |
    | ipv4_address_scope        | None                                 |
    | ipv6_address_scope        | None                                 |
    | is_default                | None                                 |
    | is_vlan_transparent       | None                                 |
    | mtu                       | 1500                                 |
    | name                      | provider                             |
    | port_security_enabled     | True                                 |
    | project_id                | 430459b728f94b2898b80d0c1987ad7a     |
    | provider:network_type     | flat                                 |
    | provider:physical_network | provider                             |
    | provider:segmentation_id  | None                                 |
    | qos_policy_id             | None                                 |
    | revision_number           | 4                                    |
    | router:external           | External                             |
    | segments                  | None                                 |
    | shared                    | True                                 |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tags                      |                                      |
    | updated_at                | 2022-09-22T07:50:35Z                 |
    +---------------------------+--------------------------------------+
    [root@openstack-node01 ~]# openstack network list
    +--------------------------------------+----------+---------+
    | ID                                   | Name     | Subnets |
    +--------------------------------------+----------+---------+
    | 5c8c50dc-2b40-48f9-b68b-0d1a41dd086d | provider |         |
    +--------------------------------------+----------+---------+
    [root@openstack-node01 ~]# openstack subnet create --network provider --allocation-pool start=10.1.1.120,end=10.1.1.150 --dns-nameserver 202.96.128.86 --gateway 10.1.1.2 --subn et-range 10.1.1.0/24 provider-subnet
    usage: openstack subnet create [-h] [-f {json,shell,table,value,yaml}] ... (full usage text omitted)
    openstack subnet create: error: ambiguous option: --subn could match --subnet-range, --subnet-pool

    (a line wrap had split --subnet-range into "--subn et-range"; retyping the option on one line works)

    [root@openstack-node01 ~]# openstack subnet create --network provider --allocation-pool start=10.1.1.120,end=10.1.1.150 --dns-nameserver 202.96.128.86 --gateway 10.1.1.2 --subnet-range 10.1.1.0/24 provider-subnet
    +-------------------+--------------------------------------+
    | Field             | Value                                |
    +-------------------+--------------------------------------+
    | allocation_pools  | 10.1.1.120-10.1.1.150                |
    | cidr              | 10.1.1.0/24                          |
    | created_at        | 2022-09-22T07:52:30Z                 |
    | description       |                                      |
    | dns_nameservers   | 202.96.128.86                        |
    | enable_dhcp       | True                                 |
    | gateway_ip        | 10.1.1.2                             |
    | host_routes       |                                      |
    | id                | e64788c3-7767-4ab1-a9e3-b33fab0d820c |
    | ip_version        | 4                                    |
    | ipv6_address_mode | None                                 |
    | ipv6_ra_mode      | None                                 |
    | name              | provider-subnet                      |
    | network_id        | 5c8c50dc-2b40-48f9-b68b-0d1a41dd086d |
    | project_id        | 430459b728f94b2898b80d0c1987ad7a     |
    | revision_number   | 0                                    |
    | segment_id        | None                                 |
    | service_types     |                                      |
    | subnetpool_id     | None                                 |
    | tags              |                                      |
    | updated_at        | 2022-09-22T07:52:30Z                 |
    +-------------------+--------------------------------------+
    [root@openstack-node01 ~]# openstack network list
    +--------------------------------------+----------+--------------------------------------+
    | ID                                   | Name     | Subnets                              |
    +--------------------------------------+----------+--------------------------------------+
    | 5c8c50dc-2b40-48f9-b68b-0d1a41dd086d | provider | e64788c3-7767-4ab1-a9e3-b33fab0d820c |
    +--------------------------------------+----------+--------------------------------------+
    [root@openstack-node01 ~]#

    [root@openstack-node01 ~]# openstack volume service list

    Unable to establish connection to http://openstack-node01:8776/v2/430459b728f94b2898b80d0c1987ad7a/os-services: HTTPConnectionPool(host='openstack-node01', port=8776): Max retries exceeded with url: /v2/430459b728f94b2898b80d0c1987ad7a/os-services (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] \xe6\x8b\x92\xe7\xbb\x9d\xe8\xbf\x9e\xe6\x8e\xa5',))

    [root@openstack-node01 ~]# lsof  -i :8776

    (no output: nothing is listening on port 8776, i.e. cinder-api is not running)

    [root@openstack-node01 ~]# cinder service-list

    ERROR: Unable to establish connection to http://openstack-node01:8776/v3/430459b728f94b2898b80d0c1987ad7a/os-services: HTTPConnectionPool(host='openstack-node01', port=8776): Max retries exceeded with url: /v3/430459b728f94b2898b80d0c1987ad7a/os-services (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))

    [root@openstack-node01 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder

    2022-09-22 16:57:56.266 279201 INFO migrate.versioning.api [-] 84 -> 85...

    2022-09-22 16:57:57.063 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.064 279201 INFO migrate.versioning.api [-] 85 -> 86...

    2022-09-22 16:57:57.106 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.107 279201 INFO migrate.versioning.api [-] 86 -> 87...

    2022-09-22 16:57:57.136 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.137 279201 INFO migrate.versioning.api [-] 87 -> 88...

    2022-09-22 16:57:57.174 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.174 279201 INFO migrate.versioning.api [-] 88 -> 89...

    2022-09-22 16:57:57.193 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.193 279201 INFO migrate.versioning.api [-] 89 -> 90...

    2022-09-22 16:57:57.222 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.222 279201 INFO migrate.versioning.api [-] 90 -> 91...

    2022-09-22 16:57:57.267 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.267 279201 INFO migrate.versioning.api [-] 91 -> 92...

    2022-09-22 16:57:57.276 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.276 279201 INFO migrate.versioning.api [-] 92 -> 93...

    2022-09-22 16:57:57.283 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.284 279201 INFO migrate.versioning.api [-] 93 -> 94...

    2022-09-22 16:57:57.291 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.291 279201 INFO migrate.versioning.api [-] 94 -> 95...

    2022-09-22 16:57:57.299 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.299 279201 INFO migrate.versioning.api [-] 95 -> 96...

    2022-09-22 16:57:57.306 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.307 279201 INFO migrate.versioning.api [-] 96 -> 97...

    2022-09-22 16:57:57.341 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.342 279201 INFO migrate.versioning.api [-] 97 -> 98...

    2022-09-22 16:57:57.372 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.372 279201 INFO migrate.versioning.api [-] 98 -> 99...

    2022-09-22 16:57:57.426 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.428 279201 INFO migrate.versioning.api [-] 99 -> 100...

    2022-09-22 16:57:57.437 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding attachment_specs_attachment_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.445 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding cgsnapshots_consistencygroup_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.450 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding group_snapshots_group_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.454 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding group_type_specs_group_type_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.458 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding group_volume_type_mapping_group_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.462 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding group_volume_type_mapping_volume_type_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.466 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding quality_of_service_specs_specs_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.470 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding reservations_allocated_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.474 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding reservations_usage_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.478 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding snapshot_metadata_snapshot_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.483 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding snapshots_cgsnapshot_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.487 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding snapshots_group_snapshot_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.492 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding snapshots_volume_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.496 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding transfers_volume_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.500 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_admin_metadata_volume_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.504 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_attachment_volume_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.508 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_glance_metadata_snapshot_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.512 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_glance_metadata_volume_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.516 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_metadata_volume_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.520 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_type_extra_specs_volume_type_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.524 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_types_qos_specs_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.530 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding volumes_consistencygroup_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.535 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding volumes_group_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.539 279201 INFO 100_add_foreign_key_indexes [-] Skipped adding workers_service_id_idx because an equivalent index already exists.

    2022-09-22 16:57:57.546 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.546 279201 INFO migrate.versioning.api [-] 100 -> 101...

    2022-09-22 16:57:57.588 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.588 279201 INFO migrate.versioning.api [-] 101 -> 102...

    2022-09-22 16:57:57.609 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.610 279201 INFO migrate.versioning.api [-] 102 -> 103...

    2022-09-22 16:57:57.637 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.637 279201 INFO migrate.versioning.api [-] 103 -> 104...

    2022-09-22 16:57:57.675 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.675 279201 INFO migrate.versioning.api [-] 104 -> 105...

    2022-09-22 16:57:57.709 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.710 279201 INFO migrate.versioning.api [-] 105 -> 106...

    2022-09-22 16:57:57.718 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.718 279201 INFO migrate.versioning.api [-] 106 -> 107...

    2022-09-22 16:57:57.726 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.726 279201 INFO migrate.versioning.api [-] 107 -> 108...

    2022-09-22 16:57:57.734 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.735 279201 INFO migrate.versioning.api [-] 108 -> 109...

    2022-09-22 16:57:57.743 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.744 279201 INFO migrate.versioning.api [-] 109 -> 110...

    2022-09-22 16:57:57.753 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.753 279201 INFO migrate.versioning.api [-] 110 -> 111...

    2022-09-22 16:57:57.783 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.784 279201 INFO migrate.versioning.api [-] 111 -> 112...

    2022-09-22 16:57:57.818 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.818 279201 INFO migrate.versioning.api [-] 112 -> 113...

    2022-09-22 16:57:57.869 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.869 279201 INFO migrate.versioning.api [-] 113 -> 114...

    2022-09-22 16:57:57.950 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.950 279201 INFO migrate.versioning.api [-] 114 -> 115...

    2022-09-22 16:57:57.996 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:57.997 279201 INFO migrate.versioning.api [-] 115 -> 116...

    2022-09-22 16:57:58.041 279201 INFO migrate.versioning.api [-] done

    2022-09-22 16:57:58.042 279201 INFO migrate.versioning.api [-] 116 -> 117...

    2022-09-22 16:57:58.064 279201 INFO migrate.versioning.api [-] done

    [root@openstack-node01 ~]#

    [root@openstack-node01 ~]# cinder service-list

    +------------------+------------------+------+---------+-------+----------------------------+-----------------+

    | Binary           | Host             | Zone | Status  | State | Updated_at                 | Disabled Reason |

    +------------------+------------------+------+---------+-------+----------------------------+-----------------+

    | cinder-scheduler | openstack-node01 | nova | enabled | up    | 2022-09-22T08:58:00.000000 | -               |

    +------------------+------------------+------+---------+-------+----------------------------+-----------------+

    [root@openstack-node01 ~]#

    Error as follows:

    [root@controller ~]# /usr/bin/glance-api

    ERROR: Unable to locate paste config file for glance-api.

    Reference:

    glance无法正常启动_weixin_34375054的博客-CSDN博客
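    One possible cause is that glance-api cannot locate glance-api-paste.ini. Pointing at the packaged paste file explicitly in /etc/glance/glance-api.conf may help (the path below is the usual RDO packaging location; verify it exists on your system):

```
[paste_deploy]
config_file = /usr/share/glance/glance-api-dist-paste.ini
```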


    virt-manager errors

    not all arguments converted during string

    and virt-manager itself reports:

    RuntimeError

    Installing the edk2-aarch64 package (it provides the AArch64 UEFI firmware) fixes it; then restart libvirtd:

    systemctl restart libvirtd


    Useful commands

    openstack user list    (list users)

    Resetting the database:

    systemctl stop mariadb.service

    rm -rf  /var/lib/mysql/*

    systemctl start mariadb.service

    mysql_secure_installation

    If any step goes wrong, the data has to be wiped and the setup redone from scratch. For example:

    an earlier command here had been written incorrectly,

    so the only option was to recreate the data.


    Error:

    [root@openstack-node01 qimei]# openstack --os-auth-url http://172.25.130.209:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue

    Password:

    The request you have made requires authentication. (HTTP 401) (Request-ID: req-5c284a89-ee02-4581-a68e-8756887d8aab)



    Send the dashboard's log output to dedicated files under the httpd log directory (added in the dashboard's Apache configuration):

    ErrorLog /var/log/httpd/openstack_dashboard-error.log

    CustomLog /var/log/httpd/openstack_dashboard-access.log combined

    Error as follows:

    [root@openstack-node02 nova]# nova-compute

    /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported

      exception.NotSupportedWarning

    2022-09-24 15:57:35.427 221765 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge

    2022-09-24 15:57:35.479 221765 INFO oslo_service.periodic_task [-] Skipping periodic task _schedinfo_global_quota_autoscale because its interval is negative

    2022-09-24 15:57:35.480 221765 WARNING oslo_config.cfg [-] Option "use_neutron" from group "DEFAULT" is deprecated for removal (

    nova-network is deprecated, as are any related configuration options.

    ).  Its value may be silently ignored in the future.

    2022-09-24 15:57:35.592 221765 INFO nova.virt.driver [req-5cf7101b-7616-4aff-9359-c17c70cb2c9c - - - - -] xxxxxxxxxxxLoading compute driver 'None'

    2022-09-24 15:57:35.593 221765 ERROR nova.virt.driver [req-5cf7101b-7616-4aff-9359-c17c70cb2c9c - - - - -] Compute driver option required, but not specified

    [root@openstack-node02 nova]#

    Strange: we did set this in nova.conf, yet it was not picked up.

    This is how it is supposed to be resolved by default.

    With no better option, hard-code it for now in:

    /usr/lib/python2.7/site-packages/nova/virt/driver.py
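    For reference, the nova.conf setting that should have been picked up (the standard value for the libvirt driver; hard-coding driver.py is only a stopgap):

```
[DEFAULT]
compute_driver = libvirt.LibvirtDriver
```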


    Error as follows:

    [root@controller python2.7]# openstack user create --domain default --password-prompt cinder

    __init__() got an unexpected keyword argument 'collect_timing'

    Every OpenStack command fails with this same error.

    Fix:

    Since the keyword argument is not recognized, comment it out (in the installed client library source) for now.

    [root@controller etc]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

    __init__() got an unexpected keyword argument 'status_code_retries'

    Fix: the same approach, comment out the unrecognized keyword argument.


    [root@compute nova]# tail -f /var/log/nova/nova-compute.log

    2022-10-28 12:01:51.907 2067549 ERROR nova.compute.manager Command: /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/libvirt/images/kylin10.0-3.qcow2 --force-share

    2022-10-28 12:01:51.907 2067549 ERROR nova.compute.manager Exit code: 1

    2022-10-28 12:01:51.907 2067549 ERROR nova.compute.manager Stdout: u''

    2022-10-28 12:01:51.907 2067549 ERROR nova.compute.manager Stderr: u"qemu-img: Could not open '/var/lib/libvirt/images/kylin10.0-3.qcow2': Could not open '/var/lib/libvirt/images/kylin10.0-3.qcow2': Permission denied\n"

    2022-10-28 12:01:51.907 2067549 ERROR nova.compute.manager

    2022-10-28 12:01:56.765 2067549 WARNING nova.virt.libvirt.driver [req-d687e247-51c4-463a-8727-25b87086e759 0d807080c140419d8402722f55d28780 3988ee5d3b9041dfb7e8ffbb18ee17a2 - default default] [instance: a063a80e-69d6-4098-b4b4-d7552632269a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names

    Fix:

    [root@compute nova]# chown -R nova:nova /var/lib/libvirt/images/
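    If the ownership keeps being reset, the user and group that libvirt runs QEMU as may also matter. Illustrative /etc/libvirt/qemu.conf knobs (defaults vary by distribution; treat this as a sketch, not a prescription):

```
# /etc/libvirt/qemu.conf (illustrative)
user = "nova"
group = "nova"
dynamic_ownership = 1
```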


    Installing a VM from an ISO: no disk is found during installation

    Fix:

    Create a volume

    and attach it to the instance.

    The ISO installer cannot see the root disk but can see the ephemeral disk, so specify an ephemeral disk when creating the flavor.
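    A sketch of creating such a flavor with the standard CLI (the name and sizes are examples):

```
# 2 vCPUs, 4 GB RAM, no root disk, 20 GB ephemeral disk visible to the ISO installer
openstack flavor create --vcpus 2 --ram 4096 --disk 0 --ephemeral 20 m1.iso-install
```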


    uefi not supported

    [root@compute /]# vim /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py +138


    Neutron errors on the compute and controller nodes:

    [root@controller ~]# tail -f /var/log/neutron/linuxbridge-agent.log

    2022-10-10 09:42:09.315 2957945 ERROR neutron     result = f(*args, **kwargs)

    2022-10-10 09:42:09.315 2957945 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 472, in set_rpc_timeout

    2022-10-10 09:42:09.315 2957945 ERROR neutron     self.state_rpc):

    2022-10-10 09:42:09.315 2957945 ERROR neutron AttributeError: 'CommonAgentLoop' object has no attribute 'state_rpc'

    2022-10-10 09:42:09.315 2957945 ERROR neutron

    2022-10-10 09:42:14.107 2957984 INFO neutron.common.config [-] Logging enabled!

    2022-10-10 09:42:14.107 2957984 INFO neutron.common.config [-] /usr/bin/neutron-linuxbridge-agent version 12.1.1

    2022-10-10 09:42:14.109 2957984 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Interface mappings: {'provider': 'eth0'}

    2022-10-10 09:42:14.110 2957984 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Bridge mappings: {}

    2022-10-10 09:42:14.127 2957984 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Agent initialized successfully, now running...

    2022-10-10 09:42:16.357 2957984 ERROR neutron.agent.linux.utils [req-1abc5096-c0b0-4c48-8f78-a6031a4e263b - - - - -] Rootwrap error running command: ['iptables-save', '-t', 'raw']: Exception: Failed to spawn rootwrap process.

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service [req-1abc5096-c0b0-4c48-8f78-a6031a4e263b - - - - -] Error starting thread.: Exception: Failed to spawn rootwrap process.

    stderr:

    We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things:

        #1) Respect the privacy of others.

        #2) Think before you type.

        #3) With great power comes great responsibility.

    sudo: no tty present and no askpass program specified

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service Traceback (most recent call last):

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 731, in run_service

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     service.start()

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 158, in wrapper

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     result = f(*args, **kwargs)

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 85, in start

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     self.setup_rpc()

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 158, in wrapper

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     result = f(*args, **kwargs)

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 152, in setup_rpc

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     self.context, self.sg_plugin_rpc, defer_refresh_firewall=True)

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 60, in __init__

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     self.init_firewall(defer_refresh_firewall, integration_bridge)

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 85, in init_firewall

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     self.firewall = firewall_class()

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_firewall.py", line 88, in __init__

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     zone_per_port=self.CONNTRACK_ZONE_PER_PORT)

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     return f(*args, **kwargs)

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_conntrack.py", line 58, in get_conntrack

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     execute, namespace, zone_per_port)

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_conntrack.py", line 75, in __init__

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     self._populate_initial_zone_map()

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_conntrack.py", line 182, in _populate_initial_zone_map

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     rules = self.get_rules_for_table_func('raw')

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py", line 481, in get_rules_for_table

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     return self.execute(args, run_as_root=True).split('\n')

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 122, in execute

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     execute_rootwrap_daemon(cmd, process_input, addl_env))

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 109, in execute_rootwrap_daemon

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     LOG.error("Rootwrap error running command: %s", cmd)

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     self.force_reraise()

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     six.reraise(self.type_, self.value, self.tb)

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 106, in execute_rootwrap_daemon

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     return client.execute(cmd, process_input)

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/oslo_rootwrap/client.py", line 148, in execute

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     self._ensure_initialized()

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/oslo_rootwrap/client.py", line 115, in _ensure_initialized

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     self._initialize()

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/oslo_rootwrap/client.py", line 85, in _initialize

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     (stderr,))

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service Exception: Failed to spawn rootwrap process.

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service stderr:

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service We trust you have received the usual lecture from the local System Administrator.

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service It usually boils down to these three things:

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     #1) Respect the privacy of others.

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     #2) Think before you type.

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service     #3) With great power comes great responsibility.

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service sudo: no tty present and no askpass program specified

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service

    2022-10-10 09:42:16.360 2957984 ERROR oslo_service.service

    2022-10-10 09:42:16.368 2957984 INFO neutron.plugins.ml2.drivers.agent._common_agent [-] Stopping Linux bridge agent agent.

    2022-10-10 09:42:16.371 2957984 CRITICAL neutron [-] Unhandled error: AttributeError: 'CommonAgentLoop' object has no attribute 'state_rpc'

    2022-10-10 09:42:16.371 2957984 ERROR neutron Traceback (most recent call last):

    2022-10-10 09:42:16.371 2957984 ERROR neutron   File "/usr/bin/neutron-linuxbridge-agent", line 10, in

    2022-10-10 09:42:16.371 2957984 ERROR neutron     sys.exit(main())

    2022-10-10 09:42:16.371 2957984 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/cmd/eventlet/plugins/linuxbridge_neutron_agent.py", line 21, in main

    2022-10-10 09:42:16.371 2957984 ERROR neutron     agent_main.main()

    2022-10-10 09:42:16.371 2957984 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", line 1018, in main

    2022-10-10 09:42:16.371 2957984 ERROR neutron     launcher.wait()

    2022-10-10 09:42:16.371 2957984 ERROR neutron   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 329, in wait

    2022-10-10 09:42:16.371 2957984 ERROR neutron     status, signo = self._wait_for_exit_or_signal()

    2022-10-10 09:42:16.371 2957984 ERROR neutron   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 314, in _wait_for_exit_or_signal

    2022-10-10 09:42:16.371 2957984 ERROR neutron     self.stop()

    2022-10-10 09:42:16.371 2957984 ERROR neutron   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 229, in stop

    2022-10-10 09:42:16.371 2957984 ERROR neutron     self.services.stop()

    2022-10-10 09:42:16.371 2957984 ERROR neutron   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 697, in stop

    2022-10-10 09:42:16.371 2957984 ERROR neutron     service.stop()

    2022-10-10 09:42:16.371 2957984 ERROR neutron   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 158, in wrapper

    2022-10-10 09:42:16.371 2957984 ERROR neutron     result = f(*args, **kwargs)

    2022-10-10 09:42:16.371 2957984 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 116, in stop

    2022-10-10 09:42:16.371 2957984 ERROR neutron     self.set_rpc_timeout(self.quitting_rpc_timeout)

    2022-10-10 09:42:16.371 2957984 ERROR neutron   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 158, in wrapper

    2022-10-10 09:42:16.371 2957984 ERROR neutron     result = f(*args, **kwargs)

    2022-10-10 09:42:16.371 2957984 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 472, in set_rpc_timeout

    2022-10-10 09:42:16.371 2957984 ERROR neutron     self.state_rpc):

    2022-10-10 09:42:16.371 2957984 ERROR neutron AttributeError: 'CommonAgentLoop' object has no attribute 'state_rpc'

    2022-10-10 09:42:16.371 2957984 ERROR neutron

    Solution:

    The real failure is the "sudo: no tty present and no askpass program specified" line above: the agent cannot spawn the rootwrap helper through non-interactive sudo, so the rootwrap daemon never starts. (The later AttributeError about 'state_rpc' is only a secondary error raised while the agent shuts down.) Make sure the neutron sudoers entry allowing passwordless rootwrap is in place and that requiretty is not enforced for the neutron user, then restart the agents:

    [root@compute neutron]# systemctl restart neutron-linuxbridge-agent.service

    [root@controller ~]# systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
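    To confirm the diagnosis before restarting, you can reproduce by hand what the agent does internally. This is a sketch, assuming your distro ships the standard neutron rootwrap sudoers file (the file path and its exact contents may differ on Kylin V10):

    ```shell
    # Spawn rootwrap through non-interactive sudo, exactly as the agent does.
    # If this prints "sudo: no tty present and no askpass program specified",
    # the sudoers entry for neutron is missing or requiretty is enforced.
    sudo -u neutron sudo -n /usr/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

    # A working setup typically has an entry like this (often in
    # /etc/sudoers.d/neutron -- check your package's actual file):
    #   Defaults:neutron !requiretty
    #   neutron ALL = (root) NOPASSWD: /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf *, /usr/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
    ```

    Edit sudoers only through visudo (e.g. `visudo -f /etc/sudoers.d/neutron`), since a syntax error there locks out sudo entirely.
    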


    Supporting the Sunway (申威) architecture

    Oct 10 10:06:25 compute nova-compute[2285722]: 2022-10-10 10:06:25.508 2285722 ERROR nova.compute.manager [instance: e39786d4-a4a4-4159-8fbe-13ecc559c33c] libvirtError: unsupported configuration: this QEMU does not support 'cirrus' video device

    Oct 10 10:06:25 compute nova-compute[2285722]: 2022-10-10 10:06:25.508 2285722 ERROR nova.compute.manager [instance: e39786d4-a4a4-4159-8fbe-13ecc559c33c]

    Files to modify (to register the new architecture and select a video model this QEMU supports):

    /usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py

    /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py

    /usr/lib/python2.7/site-packages/nova/virt/arch.py

    /usr/lib/python2.7/site-packages/nova/objects/fields.py

    /usr/lib/python2.7/site-packages/nova/virt/hyperv/constants.py
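    Patching the nova source tree works, but the changes are lost on every package upgrade. As an alternative sketch for the video-device part of the error (assuming your nova/libvirt builds honor the standard `hw_video_model` image property and that this QEMU provides a virtio video device), the model can often be overridden per-image without touching code:

    ```shell
    # Ask nova to emulate a virtio video device instead of the default
    # 'cirrus', which this QEMU build does not support.
    openstack image set --property hw_video_model=virtio <image-id>

    # Instances booted from this image afterwards should get
    # <video><model type='virtio'/></video> in their libvirt domain XML.
    ```

    The property only affects instances created after it is set; existing instances keep their old libvirt XML until rebuilt.
    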

  • Original post: https://blog.csdn.net/m0_49023005/article/details/127777300