Contents
- Dashboard evacuation fails with "Unable to evacuate host…": how to handle it
  - Error description (instance stuck in a reboot state)
  - Fix (edit the instance record in the nova database)
    - First, get the nova database password
    - Log in to the nova database and make the change
    - Verify the change took effect
    - Migrate again and verify
  - Error description (an instance in ERROR state also makes evacuation fail)
    - Fix
    - Migrate again and verify
Error description (instance stuck in a reboot state)

Background: one instance's console would not load, so I assumed the guest had hung and rebooted it. It then stayed stuck on the reboot screen, and its host_status could no longer be retrieved. The real cause turned out to be that the compute node hosting the instance had gone down; and because the instance was stuck in a hard-reboot task state, it could not be migrated or evacuated. Evacuating the host therefore failed with the error: "Unable to evacuate host…".
Fix (edit the instance record in the nova database)
First, get the nova database password
On the controller node, run grep mysql /etc/nova/nova.conf. In the output below, Changeme_123 is the password:
[root@controller01 nova]# grep mysql /etc/nova/nova.conf
connection = mysql+pymysql://nova:Changeme_123@controller01/nova_api
#connection=mysql://nova:nova@localhost/nova
# by the server configuration, set this to no value. Example: mysql_sql_mode=
#mysql_sql_mode=TRADITIONAL
connection = mysql+pymysql://nova:Changeme_123@controller01/nova
# by the server configuration, set this to no value. Example: mysql_sql_mode=
#mysql_sql_mode=TRADITIONAL
# by the server configuration, set this to no value. Example: mysql_sql_mode=
#mysql_sql_mode=TRADITIONAL

Log in to the nova database and make the change
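As an aside, the password lookup itself can be scripted; a minimal sketch (the helper name and sample path are mine; the connection-URL format matches the grep output above):

```shell
# Hypothetical helper: pull the DB password out of a nova.conf
# "connection" line of the form mysql+pymysql://user:password@host/db.
nova_db_password() {
    # $1 = path to nova.conf; skip commented-out #connection lines
    grep -m1 '^connection' "$1" | sed -E 's#.*://[^:]+:([^@]+)@.*#\1#'
}

# Demo against a sample config mirroring the output above.
cat > /tmp/nova.conf.sample <<'EOF'
#connection=mysql://nova:nova@localhost/nova
connection = mysql+pymysql://nova:Changeme_123@controller01/nova_api
EOF
nova_db_password /tmp/nova.conf.sample   # prints Changeme_123
```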
Connect with mysql -unova -p; the password is the one found above. Then run these statements:

use nova;   -- switch to the nova database
select * from instances where uuid='15beda0e-8a5a-47a8-976c-98c30f316d3b'\G   -- query by instance uuid (substitute your own)
update instances set task_state=NULL where uuid='15beda0e-8a5a-47a8-976c-98c30f316d3b'\G   -- clear task_state

task_state records what the instance is currently doing (here it was task_state: rebooting_hard); confirm the value in your select output before updating, and use your own uuid in both statements. Once task_state is cleared, the status shown by nova show changes along with it. (You could instead reset status alone with a command on the controller node, but that would not clear task_state.)
[root@controller01 nova]# mysql -unova -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2819225
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use nova;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [nova]> select * from instances where uuid='15beda0e-8a5a-47a8-976c-98c30f316d3b'\G
*************************** 1. row ***************************
              created_at: 2018-01-20 00:02:56
              updated_at: 2023-03-08 16:38:28
              deleted_at: NULL
                      id: 490
             internal_id: NULL
                 user_id: aff9368d69fc4373b55863329da4d320
              project_id: 8efdaf04f9d2442b9671de570dc175eb
               image_ref: 7047ef81-0f8f-47c6-bd92-ac4556c5e600
               kernel_id:
              ramdisk_id:
            launch_index: 0
                key_name: NULL
                key_data: NULL
             power_state: 1
                vm_state: active
               memory_mb: 16384
                   vcpus: 8
                hostname: lm-nf
                    host: computer02
               user_data: NULL
          reservation_id: r-cnxw9fhl
            scheduled_at: NULL
             launched_at: 2018-01-20 00:03:16
           terminated_at: NULL
            display_name: lm_nf
     display_description: lm_nf
       availability_zone: safe_domain
                  locked: 0
                 os_type: NULL
             launched_on: computer02
        instance_type_id: 52
                 vm_mode: NULL
                    uuid: 15beda0e-8a5a-47a8-976c-98c30f316d3b
            architecture: NULL
        root_device_name: /dev/vda
            access_ip_v4: NULL
            access_ip_v6: NULL
            config_drive:
              task_state: rebooting_hard
default_ephemeral_device: NULL
     default_swap_device: NULL
                progress: 0
        auto_disk_config: 1
      shutdown_terminate: 0
       disable_terminate: 0
                 root_gb: 500
            ephemeral_gb: 0
               cell_name: NULL
                    node: computer02
                 deleted: 0
               locked_by: NULL
                 cleaned: 0
      ephemeral_key_uuid: NULL
1 row in set (0.00 sec)

MariaDB [nova]>
MariaDB [nova]> update instances set task_state=NULL where uuid='15beda0e-8a5a-47a8-976c-98c30f316d3b'\G
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [nova]> select * from instances where uuid='15beda0e-8a5a-47a8-976c-98c30f316d3b'\G
*************************** 1. row ***************************
              created_at: 2018-01-20 00:02:56
              updated_at: 2023-03-08 16:38:28
              deleted_at: NULL
                      id: 490
             internal_id: NULL
                 user_id: aff9368d69fc4373b55863329da4d320
              project_id: 8efdaf04f9d2442b9671de570dc175eb
               image_ref: 7047ef81-0f8f-47c6-bd92-ac4556c5e600
               kernel_id:
              ramdisk_id:
            launch_index: 0
                key_name: NULL
                key_data: NULL
             power_state: 1
                vm_state: active
               memory_mb: 16384
                   vcpus: 8
                hostname: lm-nf
                    host: computer02
               user_data: NULL
          reservation_id: r-cnxw9fhl
            scheduled_at: NULL
             launched_at: 2018-01-20 00:03:16
           terminated_at: NULL
            display_name: lm_nf
     display_description: lm_nf
       availability_zone: safe_domain
                  locked: 0
                 os_type: NULL
             launched_on: computer02
        instance_type_id: 52
                 vm_mode: NULL
                    uuid: 15beda0e-8a5a-47a8-976c-98c30f316d3b
            architecture: NULL
        root_device_name: /dev/vda
            access_ip_v4: NULL
            access_ip_v6: NULL
            config_drive:
              task_state: NULL
default_ephemeral_device: NULL
     default_swap_device: NULL
                progress: 0
        auto_disk_config: 1
      shutdown_terminate: 0
       disable_terminate: 0
                 root_gb: 500
            ephemeral_gb: 0
               cell_name: NULL
                    node: computer02
                 deleted: 0
               locked_by: NULL
                 cleaned: 0
      ephemeral_key_uuid: NULL
1 row in set (0.01 sec)

MariaDB [nova]> exit
Bye
[root@controller01 nova]#

Verify the change took effect
On the controller node, run nova show 15beda0e-8a5a-47a8-976c-98c30f316d3b and locate these two lines; if they now read as follows, the instance is back to normal:
| OS-EXT-STS:task_state | NULL |
| status | ACTIVE |
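The check can also be scripted by parsing the saved table output; a hedged sketch (the helper name is mine; the field names are the ones shown in the nova show output):

```shell
# Hypothetical helper: extract one property's value from `nova show`
# table output previously saved to a file.
show_field() {
    # $1 = file with nova show output, $2 = property-name pattern
    awk -F'|' -v p="$2" '$2 ~ p { gsub(/ /, "", $3); print $3 }' "$1"
}

# Demo against a two-row sample mirroring the fields above.
cat > /tmp/show.txt <<'EOF'
| OS-EXT-STS:task_state | NULL   |
| status                | ACTIVE |
EOF
show_field /tmp/show.txt task_state   # -> NULL
show_field /tmp/show.txt status       # -> ACTIVE
```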
[root@controller01 nova]# cd
[root@controller01 ~]# . admin-openrc.sh
[root@controller01 ~]# nova show 15beda0e-8a5a-47a8-976c-98c30f316d3b
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | safe_domain |
| OS-EXT-SRV-ATTR:host | computer02 |
| OS-EXT-SRV-ATTR:hostname | lm-nf |
| OS-EXT-SRV-ATTR:hypervisor_hostname | computer02 |
| OS-EXT-SRV-ATTR:instance_name | instance-000001ea |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-cnxw9fhl |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | NULL |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2018-01-20T00:03:16.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2018-01-20T00:02:56Z |
| description | lm_nf |
| flavor | lm.2 (265a90d9-d59b-4c76-be8b-3f0fb1d94bbf) |
| hostId | 6ff16631629bf4d179444973abac2b73d82cd140dcdb466c207f6d79 |
| host_status | MAINTENANCE |
| id | 15beda0e-8a5a-47a8-976c-98c30f316d3b |
| image | NF_601_9730 (7047ef81-0f8f-47c6-bd92-ac4556c5e600) |
| key_name | - |
| locked | False |
| metadata | {} |
| name | lm_nf |
| os-extended-volumes:volumes_attached | [] |
| out-network network | 4.5.6.65 |
| out-sw-network network | 192.168.252.101 |
| own-network network | 1.2.61.19 |
| own-network-2 network | 1.2.61.135 |
| own-sw-network network | 192.168.244.125 |
| progress | 0 |
| safe-network network | 1.2.61.69 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | 8efdaf04f9d2442b9671de570dc175eb |
| updated | 2023-03-08T16:38:28Z |
| user_id | aff9368d69fc4373b55863329da4d320 |
+--------------------------------------+----------------------------------------------------------+

Migrate again and verify
At this point evacuating the host still failed with "Unable to evacuate host…", but not entirely: two of the four instances had been moved off, and two remained. Checking the hypervisor showed that one of the instances left on the host was in ERROR state, so it was a fair guess that this ERROR instance was blocking the evacuation. The other, healthy instance had not moved either, because the evacuation stalled as soon as it reached the ERROR instance. Handling and verification follow.
[root@controller01 ~]# nova list --all --host computer02
WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option will be removed in novaclient 3.3.0.
+--------------------------------------+-------+----------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| ID                                   | Name  | Tenant ID                        | Status | Task State | Power State | Networks                                                                                                                                                         |
+--------------------------------------+-------+----------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 15beda0e-8a5a-47a8-976c-98c30f316d3b | lm_nf | 8efdaf04f9d2442b9671de570dc175eb | ACTIVE | none       | Running     | out-network=4.5.6.65; safe-network=1.2.61.69; own-sw-network=192.168.244.125; own-network=1.2.61.19; own-network-2=1.2.61.135; out-sw-network=192.168.252.101     |
| c96fa321-3d48-4364-aba7-fbb5856044e5 | modem | 8efdaf04f9d2442b9671de570dc175eb | ERROR  | -          | NOSTATE     | out-network=4.5.6.121; own-network=1.2.61.46                                                                                                                     |
+--------------------------------------+-------+----------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller01 ~]#

Error description (an instance in ERROR state also makes evacuation fail)
The downed host had one instance stuck in ERROR: even after resetting it with nova reset-state c96fa321-3d48-4364-aba7-fbb5856044e5 --active, it flipped back to ERROR on the next migration attempt.
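Finding which instances on a host are in ERROR can be scripted by parsing `nova list` output; a hedged sketch (the helper name is mine; the column layout matches the tables in this post):

```shell
# Hypothetical helper: print the IDs of ERROR instances from
# `nova list` table output saved to a file, so each can be handled
# before retrying the evacuation.
error_ids() {
    # With '|' as separator, $2 is the ID column and $5 is Status.
    awk -F'|' '$5 ~ /ERROR/ { gsub(/ /, "", $2); print $2 }' "$1"
}

# Demo against sample rows mirroring the listing above.
cat > /tmp/list.txt <<'EOF'
| 15beda0e-8a5a-47a8-976c-98c30f316d3b | lm_nf | 8efdaf04f9d2442b9671de570dc175eb | ACTIVE | none | Running | ... |
| c96fa321-3d48-4364-aba7-fbb5856044e5 | modem | 8efdaf04f9d2442b9671de570dc175eb | ERROR  | -    | NOSTATE | ... |
EOF
error_ids /tmp/list.txt   # -> c96fa321-3d48-4364-aba7-fbb5856044e5
```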
[root@controller01 ~]# nova show c96fa321-3d48-4364-aba7-fbb5856044e5
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property                             | Value                                                                                                                                                                             |
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | safe_domain |
| OS-EXT-SRV-ATTR:host | computer02 |
| OS-EXT-SRV-ATTR:hostname | modem |
| OS-EXT-SRV-ATTR:hypervisor_hostname | computer02 |
| OS-EXT-SRV-ATTR:instance_name | instance-0000029c |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-gfwtqg2a |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | error |
| OS-SRV-USG:launched_at | 2019-12-13T19:12:34.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2018-08-23T00:38:50Z |
| description | modem |
| fault                                | {"message": "Virtual Interface creation failed", "code": 500, "details": "  File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 375, in decorated_function    |
|                                      |     return function(self, context, *args, **kwargs)                                                                                                                               |
|                                      |   File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2809, in rebuild_instance                                                                               |
|                                      |     bdms, recreate, on_shared_storage, preserve_ephemeral)                                                                                                                        |
|                                      |   File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2853, in _do_rebuild_instance_with_claim                                                                |
|                                      |     self._do_rebuild_instance(*args, **kwargs)                                                                                                                                    |
|                                      |   File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2969, in _do_rebuild_instance                                                                           |
|                                      |     self._rebuild_default_impl(**kwargs)                                                                                                                                          |
|                                      |   File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2734, in _rebuild_default_impl                                                                          |
|                                      |     block_device_info=new_block_device_info)                                                                                                                                      |
|                                      |   File \"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py\", line 2780, in spawn                                                                                      |
|                                      |     block_device_info=block_device_info)                                                                                                                                          |
|                                      |   File \"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py\", line 4946, in _create_domain_and_network                                                                 |
|                                      |     raise exception.VirtualInterfaceCreateException()                                                                                                                             |
|                                      | ", "created": "2023-03-08T17:49:17Z"}                                                                                                                                             |
| flavor | m1.large (4) |
| hostId | 6ff16631629bf4d179444973abac2b73d82cd140dcdb466c207f6d79 |
| host_status | MAINTENANCE |
| id | c96fa321-3d48-4364-aba7-fbb5856044e5 |
| image | windows2008r2 (4d7d46a1-83f5-4806-958d-b06801f275b4) |
| key_name | Abc12345 |
| locked | False |
| metadata | {} |
| name | modem |
| os-extended-volumes:volumes_attached | [{"id": "58117774-d1a3-40aa-83ab-abeee26c8f4d", "delete_on_termination": false}] |
| out-network network | 4.5.6.121 |
| own-network network | 1.2.61.46 |
| security_groups | default |
| status | ERROR |
| tenant_id | 8efdaf04f9d2442b9671de570dc175eb |
| updated | 2023-03-08T17:49:17Z |
| user_id | 5357091ec61b472bb75668dfe3e2b7e5 |
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller01 ~]# nova reset-state c96fa321-3d48-4364-aba7-fbb5856044e5 --active
Reset state for server c96fa321-3d48-4364-aba7-fbb5856044e5 succeeded; new state is active
[root@controller01 ~]# nova show c96fa321-3d48-4364-aba7-fbb5856044e5
+--------------------------------------+--------------------------------------------------------------------------------------+
| Property                             | Value                                                                                |
+--------------------------------------+--------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | safe_domain |
| OS-EXT-SRV-ATTR:host | computer02 |
| OS-EXT-SRV-ATTR:hostname | modem |
| OS-EXT-SRV-ATTR:hypervisor_hostname | computer02 |
| OS-EXT-SRV-ATTR:instance_name | instance-0000029c |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-gfwtqg2a |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2019-12-13T19:12:34.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2018-08-23T00:38:50Z |
| description | modem |
| flavor | m1.large (4) |
| hostId | 6ff16631629bf4d179444973abac2b73d82cd140dcdb466c207f6d79 |
| host_status | MAINTENANCE |
| id | c96fa321-3d48-4364-aba7-fbb5856044e5 |
| image | windows2008r2 (4d7d46a1-83f5-4806-958d-b06801f275b4) |
| key_name | Abc12345 |
| locked | False |
| metadata | {} |
| name | modem |
| os-extended-volumes:volumes_attached | [{"id": "58117774-d1a3-40aa-83ab-abeee26c8f4d", "delete_on_termination": false}] |
| out-network network | 4.5.6.121 |
| own-network network | 1.2.61.46 |
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | 8efdaf04f9d2442b9671de570dc175eb |
| updated | 2023-03-08T17:52:10Z |
| user_id | 5357091ec61b472bb75668dfe3e2b7e5 |
+--------------------------------------+--------------------------------------------------------------------------------------+
[root@controller01 ~]# nova list --all --host computer02
WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option will be removed in novaclient 3.3.0.
+--------------------------------------+-------+----------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| ID                                   | Name  | Tenant ID                        | Status | Task State | Power State | Networks                                                                                                                                                         |
+--------------------------------------+-------+----------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 15beda0e-8a5a-47a8-976c-98c30f316d3b | lm_nf | 8efdaf04f9d2442b9671de570dc175eb | ACTIVE | none       | Running     | out-network=4.5.6.65; safe-network=1.2.61.69; own-sw-network=192.168.244.125; own-network=1.2.61.19; own-network-2=1.2.61.135; out-sw-network=192.168.252.101     |
| c96fa321-3d48-4364-aba7-fbb5856044e5 | modem | 8efdaf04f9d2442b9671de570dc175eb | ACTIVE | -          | NOSTATE     | out-network=4.5.6.121; own-network=1.2.61.46                                                                                                                     |
+--------------------------------------+-------+----------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller01 ~]#   # after another evacuation attempt from the dashboard, it went back to ERROR:
[root@controller01 ~]# nova list --all --host computer02
WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option will be removed in novaclient 3.3.0.
+--------------------------------------+-------+----------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| ID                                   | Name  | Tenant ID                        | Status | Task State | Power State | Networks                                                                                                                                                         |
+--------------------------------------+-------+----------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 15beda0e-8a5a-47a8-976c-98c30f316d3b | lm_nf | 8efdaf04f9d2442b9671de570dc175eb | ACTIVE | none       | Running     | out-network=4.5.6.65; safe-network=1.2.61.69; own-sw-network=192.168.244.125; own-network=1.2.61.19; own-network-2=1.2.61.135; out-sw-network=192.168.252.101     |
| c96fa321-3d48-4364-aba7-fbb5856044e5 | modem | 8efdaf04f9d2442b9671de570dc175eb | ERROR  | -          | NOSTATE     | out-network=4.5.6.121; own-network=1.2.61.46                                                                                                                     |
+--------------------------------------+-------+----------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller01 ~]#

Because I was evacuating the instances from compute02 to compute04, the logs on compute04 are also worth checking. (Viewing them with less and a ? backward search makes it easy to locate entries.) Run cat /var/log/neutron/linuxbridge-agent.log | grep ERROR; the output below shows the errors from two evacuation attempts. Log entries can generally be filtered on ERROR or INFO.
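The log triage can be condensed into a small filter; a hedged sketch (the helper name is mine; the message format matches the agent log lines below):

```shell
# Hypothetical helper: list the distinct tap ports that failed to be
# removed, one per evacuation-related error in the agent log.
failed_ports() {
    grep 'Error occurred while removing port' "$1" |
        sed -E 's/.*removing port (tap[0-9a-f-]+).*/\1/' | sort -u
}

# Demo against two sample lines mirroring the log below.
cat > /tmp/agent.log <<'EOF'
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent [req-...] Error occurred while removing port tap21d259d8-a9
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent [req-...] Error occurred while removing port tap70fde5c1-24
EOF
failed_ports /tmp/agent.log
```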
[root@computer04 nova]# cat /var/log/neutron/linuxbridge-agent.log | grep ERROR
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent [req-f3416276-e545-4d47-bbce-e36195121ad6 - - - - -] Error occurred while removing port tap21d259d8-a9
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent Traceback (most recent call last):
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 308, in treat_devices_removed
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     cfg.CONF.host)
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 151, in update_device_down
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     agent_id=agent_id, host=host)
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 136, in call
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     return self._original_context.call(ctxt, method, **kwargs)
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     retry=self.retry)
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     timeout=timeout, retry=retry)
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     retry=retry)
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 461, in _send
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     raise result
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent RemoteError: Remote error: MechanismDriverError update_port_postcommit failed.
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 138, in _dispatch_and_reply\n    incoming.message))\n', u'  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', u'  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 127, in _do_dispatch\n    result = func(ctxt, **new_args)\n', u'  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py", line 190, in update_device_down\n    rpc_context, port_id, n_const.PORT_STATUS_DOWN, host))\n', u'  File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 148, in wrapper\n    ectxt.value = e.inner_exc\n', u'  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__\n    self.force_reraise()\n', u'  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise\n    six.reraise(self.type_, self.value, self.tb)\n', u'  File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 138, in wrapper\n    return f(*args, **kwargs)\n', u'  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 1572, in update_port_status\n    self.mechanism_manager.update_port_postcommit(mech_context)\n', u'  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py", line 638, in update_port_postcommit\n    continue_on_failure=True)\n', u'  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py", line 412, in _call_on_drivers\n    method=method_name\n', u'MechanismDriverError: update_port_postcommit failed.\n'].
2023-03-09 01:28:42.656 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent [req-f3416276-e545-4d47-bbce-e36195121ad6 - - - - -] Error occurred while removing port tap70fde5c1-24
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent Traceback (most recent call last):
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 308, in treat_devices_removed
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     cfg.CONF.host)
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 151, in update_device_down
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     agent_id=agent_id, host=host)
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 136, in call
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     return self._original_context.call(ctxt, method, **kwargs)
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     retry=self.retry)
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     timeout=timeout, retry=retry)
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     retry=retry)
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 461, in _send
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent     raise result
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent RemoteError: Remote error: MechanismDriverError update_port_postcommit failed.
2023-03-09 01:28:42.780 78890 ERROR neutron.plugins.ml2.drivers.agent._common_agent [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 138, in _dispatch_and_reply\n    incoming.message))\n', u'  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', u'  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 127, in _do_dispatch\n    result = func(ctxt, **new_args)\n', u'  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py", line 190, in update_device_down\n    rpc_context, port_id, n_const.PORT_STATUS_DOWN, host))\n', u'  File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 148, in wrapper\n    ectxt.value = e.inner_exc\n', u'  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__\n    self.force_reraise()\n', u'  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise\n    six.reraise(self.type_, self.value, self.tb)\n', u'  File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 138, in wrapper\n    return f(*args, **kwargs)\n', u'  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 1572, in update_port_status\n    self.mechanism_manager.update_port_postcommit(mech_context)\n', u'  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py", line 638, in update_port_postcommit\n    continue_on_failure=True)\n', u'  File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py", line 412, in _call_on_drivers\n    method=method_name\n', u'MechanismDriverError: update_port_postcommit failed.\n'].

That pinpoints the problem: the ERROR instance caused both remaining instances to fail to migrate. Resetting its state in the database and via CLI on the hypervisor both still ended with the evacuation erroring out, so the instance itself was most likely corrupted.
Fix
The ERROR instance itself was never repaired: it had been shut down since 2021 and nobody used it, so the instance that kept going to ERROR on migration was simply deleted. (Without dealing with it, the evacuation would keep failing, because it stalls at that instance and everything behind it never gets moved.)
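The decision used here can be sketched as a small dry-run helper. Everything below is illustrative, not the author's script: the function name is mine, and the emitted commands are only printed (drop the echo to act); an evacuation normally also needs a target host and storage flags.

```shell
# Hypothetical helper: given an instance uuid and the status it
# settles into after a reset + retry, print the command(s) one
# would run next. Echoing keeps this a dry run.
handle_instance() {
    uuid="$1"; status="$2"
    if [ "$status" = "ERROR" ]; then
        # Repeatedly flips back to ERROR: reset, then delete (only
        # sensible when the instance is confirmed unused).
        echo "nova reset-state $uuid --active"
        echo "nova delete $uuid"
    else
        # Otherwise just retry the evacuation (target host omitted).
        echo "nova evacuate $uuid"
    fi
}

handle_instance c96fa321-3d48-4364-aba7-fbb5856044e5 ERROR
```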
Migrate again and verify
Evacuating again then succeeded. Check on the hypervisor side that compute02 (the downed host) no longer has any instances:
[root@controller01 ~]# nova list --all --host computer02
WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option will be removed in novaclient 3.3.0.
+----+------+-----------+--------+------------+-------------+----------+
| ID | Name | Tenant ID | Status | Task State | Power State | Networks |
+----+------+-----------+--------+------------+-------------+----------+
+----+------+-----------+--------+------------+-------------+----------+
[root@controller01 ~]#

Then log in to the evacuation target host and confirm the instances exist there. Get each instance's uuid with virsh domuuid, look up its IPs with nova show, and open its console from the dashboard to confirm the guest OS is healthy. If everything checks out, the problem is solved.
[root@computer04 ~]# virsh list --all
 Id    Name                 State
----------------------------------------------------
 27    instance-000004a5    running
 28    instance-000004a2    running
 33    instance-000001ea    running
 -     instance-000004a8    shut off

[root@computer04 ~]# virsh domuuid 27
51a3aa27-30c6-4e91-95f5-f8a59bd25fc6
[root@computer04 ~]#
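The post-evacuation check can also be scripted by parsing saved `virsh list --all` output; a hedged sketch (the helper name is mine; the column layout matches the listing above):

```shell
# Hypothetical helper: print the names of running domains from
# `virsh list --all` output saved to a file, so each can then be fed
# to `virsh domuuid` / `nova show` for verification.
running_domains() {
    # Columns: Id, Name, State; skip header, separator, and shut-off rows.
    awk '$3 == "running" { print $2 }' "$1"
}

# Demo against a sample mirroring the listing above.
cat > /tmp/virsh.txt <<'EOF'
 Id    Name                 State
----------------------------------------------------
 27    instance-000004a5    running
 28    instance-000004a2    running
 33    instance-000001ea    running
 -     instance-000004a8    shut off
EOF
running_domains /tmp/virsh.txt
```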