[KVM] Recovering Paused KVM Guests That Fail to Resume, and Migrating KVM Storage Pools
Background: several VMs on the KVM host were found in the paused state, even though virsh suspend <vm_hostname> had never been run, and virsh resume <vm_hostname> could not bring them back. The cause was that the disk holding these VMs' storage pool was full, so the plan was to migrate the VMs to a storage pool with enough free disk space.
1. Paused guests cannot be resumed
[root@KVMHost-xxx ~]# virsh list --all
 Id   Name          State
------------------------------
 2    vm1-rh7-116   paused
 4    vm-rh8-115    paused
 6    vm2-100-114   paused

[root@KVMHost-xxx ~]# virsh resume vm-rh8-115
Domain 'vm-rh8-115' resumed

[root@KVMHost-xxx ~]# virsh start vm-rh8-115
error: Domain is already active

[root@KVMHost-xxx ~]# virsh list --all
 Id   Name          State
------------------------------
 2    vm1-rh7-116   paused
 4    vm-rh8-115    paused
 6    vm2-100-114   paused

[root@KVMHost-xxx ~]# virsh list
 Id   Name          State
------------------------------
 4    vm-rh8-115    paused
 6    vm2-100-114   paused

[root@KVMHost-xxx ~]# virsh edit vm-rh8-115
error: g_mkstemp_full: failed to create temporary file: No space left on device
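A side note, not from the original session: virsh domstate --reason prints why a guest is paused; when the backing disk is full this typically shows up as an I/O error. A minimal check (the output line is illustrative):

# Hedged check: show the reason a guest is paused
virsh domstate --reason vm-rh8-115
# expected output when the backing disk is full: "paused (I/O error)"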
2. Check disk usage
[root@KVMHost-xxx ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 189G 0 189G 0% /dev
tmpfs 189G 0 189G 0% /dev/shm
tmpfs 189G 67M 188G 1% /run
tmpfs 189G 0 189G 0% /sys/fs/cgroup
/dev/mapper/rhel-root 70G 70G 20K 100% /
/dev/sda1 1014M 273M 742M 27% /boot
/dev/mapper/rhel-home 372G 2.7G 369G 1% /home
/dev/loop0 14G 14G 0 100% /var/www/html/image
tmpfs 38G 0 38G 0% /run/user/0
[root@KVMHost-xxx ~]#
The / filesystem is full, so first clean up some large files under /.
The storage pool for these VMs lives under /opt/<somePath>, which sits on the full / filesystem, so it needs to be migrated to the larger /home filesystem.
3. Create a new storage pool under /home and migrate
Reference steps for creating a directory-type storage pool (a minimal sketch of the resulting pool XML follows these steps):
(1) Define the storage pool (writes the config only)
virsh pool-define-as <new_pool_name> dir - - - - "</pool/path/>"
(2) Build the storage pool (creates the target directory)
virsh pool-build <new_pool_name>   # can be skipped if the directory already exists
(3) Start the storage pool
virsh pool-start <new_pool_name>
(4) Enable autostart (optional)
virsh pool-autostart <new_pool_name>
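A directory pool defined this way is just a small XML document. A minimal sketch of what virsh pool-dumpxml new_pool would contain for the pool created below (auto-generated elements such as uuid and capacity omitted):

<pool type='dir'>
  <name>new_pool</name>
  <target>
    <path>/home/kvm_images</path>
  </target>
</pool>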
[root@KVMHost-xxx iso]# virsh pool-define-as new_pool dir - - - - "/home/kvm_images"
Pool new_pool defined

[root@KVMHost-xxx iso]# virsh pool-start new_pool
error: Failed to start pool new_pool
error: cannot open directory '/home/kvm_images': No such file or directory

[root@KVMHost-xxx iso]# cd /home
[root@KVMHost-xxx home]# mkdir kvm_images
[root@KVMHost-xxx home]# virsh pool-start new_pool
Pool new_pool started

[root@KVMHost-xxx home]# virsh pool-autostart new_pool
Pool new_pool marked as autostarted

[root@KVMHost-xxx home]# virsh pool-list --all
 Name         State    Autostart
----------------------------------
 iso          active   yes
 iso-1        active   yes
 kvm_image    active   yes
 kvm_images   active   no
 new_pool     active   yes

[root@KVMHost-xxx home]# virsh list
 Id   Name          State
-----------------------------
 4    vm-rh8-115    paused
 6    vm2-100-114   paused

[root@KVMHost-xxx home]# virsh destroy vm-rh8-115
Domain 'vm-rh8-115' destroyed

[root@KVMHost-xxx home]# mv /opt/data/kvm_images/vm-115.qcow2 /home/kvm_images/
[root@KVMHost-xxx home]# virsh edit vm-rh8-115
Domain 'vm-rh8-115' XML configuration edited.

[root@KVMHost-xxx home]# virsh start vm-rh8-115
Domain 'vm-rh8-115' started

[root@KVMHost-xxx home]# virsh list
 Id   Name          State
-----------------------------
 6    vm2-100-114   paused
 7    vm-rh8-115    running

[root@KVMHost-xxx home]#
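The virsh edit step above only needs to point the disk's source path at the new location. A minimal sketch of the relevant <disk> element after the move (driver, target device and bus are illustrative; only the <source file> attribute has to be updated):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/home/kvm_images/vm-115.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>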
4. Once everything is migrated, the old storage pool can be deleted if it is no longer needed
Reference steps for deleting a storage pool (a combined sketch follows this list):
# 1. Check the current storage pool status
virsh pool-list --all
# 2. Stop the storage pool (if it is active)
virsh pool-destroy <old_pool_name>
# 3. Undefine the storage pool
virsh pool-undefine <old_pool_name>
# 4. Delete the pool data (optional)
rm -rf </path/to/old_pool_data/>
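A hedged sketch that combines the steps above and refuses to tear the pool down while it still holds volumes (the pool name is a placeholder):

# Sketch only: destroy and undefine an old pool once it no longer contains volumes
OLD_POOL=old_pool
if [ -z "$(virsh vol-list "$OLD_POOL" | sed '1,2d' | awk 'NF')" ]; then
    virsh pool-destroy "$OLD_POOL"
    virsh pool-undefine "$OLD_POOL"
else
    echo "$OLD_POOL still contains volumes; migrate them first" >&2
fi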
[root@KVMHost-xxx data]# virsh pool-list --all
 Name         State    Autostart
----------------------------------
 kvm_image    active   yes
 kvm_images   active   no
 new_pool     active   yes

[root@KVMHost-xxx data]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 189G 0 189G 0% /dev
tmpfs 189G 0 189G 0% /dev/shm
tmpfs 189G 75M 188G 1% /run
tmpfs 189G 0 189G 0% /sys/fs/cgroup
/dev/mapper/rhel-root 70G 33G 38G 47% /
/dev/sda1 1014M 273M 742M 27% /boot
/dev/mapper/rhel-home 372G 58G 315G 16% /home
/dev/loop0 14G 14G 0 100% /var/www/html/image
tmpfs 38G 0 38G 0% /run/user/0
[root@KVMHost-xxx data]# virsh pool-destroy kvm_image
Pool kvm_image destroyed

[root@KVMHost-xxx data]# virsh pool-undefine kvm_image
Pool kvm_image has been undefined

[root@KVMHost-xxx data]# ll
total 0
drwxr-xr-x. 2 root root 71 Apr 14 09:57 iso
drwxr-xr-x. 2 root root  6 Apr 15 10:28 kvm_image
drwxr-xr-x. 2 root root  6 Apr 15 10:32 kvm_images

Confirm there are no files still needed under the old directory before deleting it:
[root@KVMHost-xxx data]# rm -rf kvm_image
[root@KVMHost-xxx data]#
=========================================
Related reference:
Q: Nothing shows up in the new storage pool? Refresh it first.
virsh vol-list <new_pool_name>
[root@KVMHost-xxx qemu]# virsh vol-list new_pool
 Name   Path
--------------

[root@KVMHost-xxx qemu]# virsh pool-refresh new_pool
Pool new_pool refreshed

[root@KVMHost-xxx qemu]# virsh vol-list new_pool
 Name                Path
---------------------------------------------------------
 vm1-116.qcow2       /home/kvm_images/vm1-116.qcow2
 vm-115.qcow2        /home/kvm_images/vm-115.qcow2
 vm2-100-114.qcow2   /home/kvm_images/vm2-100-114.qcow2

[root@KVMHost-xxx qemu]#
Q: How do I find where a KVM guest's qcow2 file lives?
Method 1: virsh edit <vm_name>, e.g. virsh edit vm-rh8-115, and look at the source file element.
Method 2: look in the default location.
By default, the XML configuration of a KVM guest is stored at:
/etc/libvirt/qemu/<vm_name>.xml
Method 3: dump the XML directly with the following command (no need to hunt for the file):
virsh dumpxml <vm_name>
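For example, to pull just the disk path out of the dumped XML (a hedged one-liner; the output line is illustrative and matches the earlier example guest):

virsh dumpxml vm-rh8-115 | grep -i "source file"
#   <source file='/home/kvm_images/vm-115.qcow2'/>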
Q: How do I check which disk files a guest is using?
virsh domblklist <vm_name>
[root@KVMHost-xxx qemu]# virsh list
 Id   Name          State
-----------------------------
 7    vm-rh8-115    running
 8    vm2-100-114   running

[root@KVMHost-xxx qemu]# virsh domblklist vm-rh8-115
 Target   Source
-----------------------------------------
 sda      /home/kvm_images/vm-115.qcow2
 sdb      -

[root@KVMHost-xxx qemu]#
Q: How do I list storage pools?
virsh pool-list --all
[root@KVMHost-xxx home]# virsh pool-list --all
 Name         State    Autostart
----------------------------------
 kvm_image    active   yes
 kvm_images   active   no
 new_pool     active   yes

[root@KVMHost-xxx home]#
Q: How do I check a storage pool's path?
Method 1: virsh pool-edit <pool_name>
Method 2: virsh pool-dumpxml <pool_name> | grep -i "<path>"
Q: How do I see details for all storage pools?
virsh pool-list --all --details
[root@KVMHost-xxx ~]# virsh pool-list --all --details
 Name       State     Autostart   Persistent   Capacity     Allocation   Available
-------------------------------------------------------------------------------------
 new_pool   running   yes         yes          371.44 GiB   57.10 GiB    314.34 GiB

[root@KVMHost-xxx ~]#
Q: How do I check a storage pool's type?
virsh pool-dumpxml <pool_name> | grep "<pool type="
[root@KVMHost-xxx ~]# virsh pool-dumpxml new_pool | grep "<pool type="
<pool type='dir'>
[root@KVMHost-xxx ~]#
Summary of common storage pool types
Type (type) | Description | Typical use
dir | local directory storage | default pool type, holds qcow2 images
iscsi | iSCSI network storage | shared block devices
rbd | Ceph RBD distributed storage | cloud environments or HA clusters
logical | LVM logical volume management | dynamically extend local disks
fs | pre-formatted disk partition or filesystem | mount an existing filesystem directly
netfs | network filesystem such as NFS | shared file storage
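As an example of a non-directory pool, an NFS-backed (netfs) pool can also be created with pool-define-as. A hedged sketch with placeholder host and paths:

# Sketch only: define and start an NFS-backed storage pool
virsh pool-define-as nfs_pool netfs \
    --source-host nfs.example.com \
    --source-path /export/kvm_images \
    --target /var/lib/libvirt/nfs_pool
virsh pool-build nfs_pool        # creates the local mount point
virsh pool-start nfs_pool        # mounts the NFS export
virsh pool-autostart nfs_pool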