Hands-on tutorial: expanding a Baidu Cloud server CDS data disk to 50G
First, unmount the mounted partition:
[root@instance-549sbowh /]# umount /www
umount: /www: target is busy.
(In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))
The command reports that the partition cannot be unmounted.
Method 1: check whether any process is occupying the mount point
[root@instance-549sbowh /]# lsof | grep /www
If any processes are found holding the mount point, kill them with the kill command.
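A minimal sketch of that step (the PID 1234 is only a placeholder; use the PIDs reported in the second column of the lsof output):
lsof | grep /www       # the second column of each matching line is the PID
kill 1234              # terminate the process; add -9 only if it ignores the signal
fuser -km /www         # alternative: kill every process using the mount in one step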
Method 2: edit the /etc/fstab file with vi
[root@instance-549sbowh ~]# vi /etc/fstab
Comment out the /www mount entry, then reboot the server.
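As a sketch, the /www line in /etc/fstab might look like this once commented out (the device path and mount options here are assumptions for illustration):
# /dev/vdb    /www    ext4    defaults    0 0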
Check the disk information; the partition has now been unmounted successfully:
[root@instance-549sbowh ~]# df -lh
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 40G 7.3G 31G 20% /
devtmpfs 980M 0 980M 0% /dev
tmpfs 991M 0 991M 0% /dev/shm
tmpfs 991M 9.1M 982M 1% /run
tmpfs 991M 0 991M 0% /sys/fs/cgroup
tmpfs 199M 0 199M 0% /run/user/0
Use parted -l to inspect the disks:
[root@instance-549sbowh ~]# parted -l
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 42.9GB 42.9GB primary ext4 boot
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 53.7GB 53.7GB ext4
[root@instance-549sbowh ~]# cat /proc/partitions
major minor #blocks name
253 0 41943040 vda
253 1 41941999 vda1
253 16 52428800 vdb
Check the file system (resize2fs expects a clean, recently checked file system when resizing it offline):
[root@instance-549sbowh ~]# e2fsck -f /dev/vdb
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vdb: 31325/327680 files (0.2% non-contiguous), 713819/1310720 blocks
Resize the file system (with no explicit size argument, resize2fs grows it to fill the whole device):
[root@instance-549sbowh ~]# resize2fs /dev/vdb
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/vdb to 13107200 (4k) blocks.
The filesystem on /dev/vdb is now 13107200 blocks long.
Remount the partition:
[root@instance-549sbowh ~]# mount /dev/vdb /www
The data disk has been expanded successfully:
[root@instance-549sbowh ~]# df -lh
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 40G 7.3G 31G 20% /
devtmpfs 980M 0 980M 0% /dev
tmpfs 991M 0 991M 0% /dev/shm
tmpfs 991M 9.1M 982M 1% /run
tmpfs 991M 0 991M 0% /sys/fs/cgroup
tmpfs 199M 0 199M 0% /run/user/0
/dev/vdb 50G 12.6G 35G 6% /www
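Since the /www entry in /etc/fstab was commented out earlier, restore it afterwards so the mount comes back automatically after a reboot; a minimal sketch, again assuming /dev/vdb with default ext4 options:
/dev/vdb    /www    ext4    defaults    0 0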