
mount and umount

Internet · diligentman · 6 days ago · 9 views

A common scenario: a system has several disks, we mainly use one of them, and the others sit completely empty. As the main disk fills up while the spare disks go to waste, we can use the mount command to put the empty disks to work. In my case, I was building a fairly large Docker image; the main disk filled up and the build failed with "no space left on device".
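Before reaching for mount, it helps to confirm which filesystem is actually the one that filled up. A quick sketch (the 80% threshold is arbitrary, adjust to taste):

```shell
# Print mount point and Use% for every filesystem above 80% usage.
# -P forces POSIX output so each entry stays on one line for awk.
df -P | awk 'NR > 1 && int($5) > 80 { print $6, $5 }'
```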

mount

Initial disk usage

df

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/vda1             20511244  16900912   2545380  87% /
tmpfs                  8133952     10244   8123708   1% /dev/shm
/dev/vdb              20183676     79952  19038764   1% /mnt
cgroup                 8133952         0   8133952   0% /sys/fs/cgroup

As you can see, /dev/vdb is only 1% used, so I tried to mount the directory I needed onto it:

mount --bind /home/docker/runtime /mnt

I continued building the image, and it still failed. I also noticed that the Use% of /dev/vda1 and /dev/vdb was now exactly the same, which made me suspect the two were actually the same disk. After re-reading the mount --bind documentation, I realized I had written the command backwards. The correct form is:

mount --bind /tmp/storage/path /origin/storage/path

That is, the directory that will physically hold the data goes first, and the path the application originally uses goes second. Here, the path Docker actually uses at runtime is /home/docker/runtime, so it belongs at the end. The correct command is:

mount --bind /mnt /home/docker/runtime

Running the Docker image build again, the usage of the two filesystems finally diverged.
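A way to remember the argument order, and to verify that a bind mount took effect (paths are the ones from this article; the commands need root):

```shell
# mount --bind SOURCE TARGET
#   SOURCE: where the data will physically live (the roomy disk)
#   TARGET: the path the application keeps using
mount --bind /mnt /home/docker/runtime

# findmnt shows which device actually backs the target path;
# after a correct bind it should report the filesystem behind /mnt.
findmnt --target /home/docker/runtime
```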

 

Going further

The disks here total 40G, and a 50G cloud disk was sitting idle, so I went a step further and used the cloud disk as the backing storage.

Device information

su root
cd /dev
[root@yiyunjie dev]# ls -al
total 4
drwxr-xr-x 18 root root        3460 Nov 25 22:26 .
drwxr-xr-x 29 root root        4096 Nov 19 14:37 ..
crw-rw----  1 root root     10, 235 Nov 16 03:47 autofs
drwxr-xr-x  2 root root         140 Nov 24 20:39 block
drwxr-xr-x  3 root root          60 Nov 16 11:47 bus
drwxr-xr-x  2 root root        2920 Nov 16 03:47 char
crw-------  1 root root      5,   1 Nov 16 03:47 console
lrwxrwxrwx  1 root root          11 Nov 16 11:47 core -> /proc/kcore
drwxr-xr-x  6 root root         140 Nov 16 11:47 cpu
crw-rw----  1 root root     10,  61 Nov 16 03:47 cpu_dma_latency
crw-rw----  1 root root     10,  62 Nov 16 03:47 crash
drwxr-xr-x  6 root root         120 Nov 16 11:47 disk
drwxr-xr-x  2 root root          80 Nov 16 11:47 dri
lrwxrwxrwx  1 root root           3 Nov 16 11:47 fb -> fb0
crw-rw----  1 root video    29,   0 Nov 16 11:47 fb0
lrwxrwxrwx  1 root root          13 Nov 16 11:47 fd -> /proc/self/fd
crw-rw-rw-  1 root root      1,   7 Nov 16 03:47 full
crw-rw-rw-  1 root root     10, 229 Nov 16 11:47 fuse
crw-rw----  1 root root    249,   0 Nov 16 11:47 hidraw0
crw-rw----  1 root root     10, 228 Nov 16 03:47 hpet
drwxr-xr-x  2 root root          40 Nov 16 11:47 hugepages
crw-------  1 root root    229,   0 Nov 16 11:47 hvc0
crw-rw----  1 root root     10, 183 Nov 16 03:47 hwrng
crw-------  1 root root     89,   0 Nov 16 11:47 i2c-0
...
drwxr-xr-x  4 root root         280 Nov 16 03:47 input
crw-rw----  1 root root      1,  11 Nov 16 03:47 kmsg
srw-rw-rw-  1 root root           0 Nov 16 03:47 log
brw-r-----  1 root disk      7,   0 Nov 16 11:47 loop0
...
crw-rw----  1 root lp        6,   0 Nov 16 11:47 lp0
...
lrwxrwxrwx  1 root root          13 Nov 16 11:47 MAKEDEV -> /sbin/MAKEDEV
drwxr-xr-x  2 root root          60 Nov 16 11:47 mapper
crw-rw----  1 root root     10, 227 Nov 16 03:47 mcelog
drwxr-xr-x  2 root root          40 Nov 16 11:47 .mdadm
crw-r-----  1 root kmem      1,   1 Nov 16 03:47 mem
drwxr-xr-x  2 root root          60 Nov 16 11:47 net
crw-rw----  1 root root     10,  60 Nov 16 03:47 network_latency
crw-rw----  1 root root     10,  59 Nov 16 03:47 network_throughput
crw-rw-rw-  1 root root      1,   3 Nov 16 03:47 null
crw-r-----  1 root kmem     10, 144 Nov 16 03:47 nvram
crw-rw----  1 root root      1,  12 Nov 16 03:47 oldmem
crw-r-----  1 root kmem      1,   4 Nov 16 03:47 port
crw-------  1 root root    108,   0 Nov 16 11:47 ppp
crw-rw-rw-  1 root tty       5,   2 Jan 11 21:00 ptmx
drwxr-xr-x  2 root root           0 Nov 16 11:47 pts
crw-rw-rw-  1 root root      1,   8 Nov 16 03:47 random
drwxr-xr-x  2 root root          60 Nov 16 11:47 raw
lrwxrwxrwx  1 root root           4 Nov 16 03:47 root -> vda1
lrwxrwxrwx  1 root root           4 Nov 16 11:47 rtc -> rtc0
crw-rw----  1 root root    253,   0 Nov 16 11:47 rtc0
drwxrwxrwt  2 root root          40 Jan  4 10:18 shm
crw-rw----  1 root root     10, 231 Nov 16 03:47 snapshot
lrwxrwxrwx  1 root root          15 Nov 16 11:47 stderr -> /proc/self/fd/2
lrwxrwxrwx  1 root root          15 Nov 16 11:47 stdin -> /proc/self/fd/0
lrwxrwxrwx  1 root root          15 Nov 16 11:47 stdout -> /proc/self/fd/1
lrwxrwxrwx  1 root root           4 Nov 16 11:47 systty -> tty0
crw-rw-rw-  1 root tty       5,   0 Jan 11 18:06 tty
crw--w----  1 root tty       4,   0 Nov 16 03:47 tty0
...
drwxr-xr-x  6 root root         140 Nov 25 22:26 .udev
crw-rw-rw-  1 root root      1,   9 Nov 16 03:47 urandom
crw-rw----  1 root root    250,   0 Nov 16 03:47 usbmon0
...
crw-rw----  1 vcsa tty       7,   0 Nov 16 03:47 vcs
...
brw-rw----  1 root disk    252,   0 Nov 16 11:47 vda
brw-rw----  1 root disk    252,   1 Nov 16 03:47 vda1
brw-rw----  1 root disk    252,  16 Nov 24 20:38 vdb
brw-rw----  1 root disk    252,  32 Nov 24 20:39 vdc
brw-rw----  1 root disk    252,  34 Nov 24 20:39 vdc2
crw-rw----  1 root root     10,  63 Nov 16 03:47 vga_arbiter
drwxr-xr-x  2 root root          60 Nov 16 03:47 virtio-ports
crw-rw----  1 root root    248,   1 Nov 16 03:47 vport4p1
crw-rw----  1 root root     10, 130 Nov 16 03:47 watchdog
crw-rw----  1 root root    252,   0 Nov 16 03:47 watchdog0
crw-rw-rw-  1 root root      1,   5 Nov 16 03:47 zero

The entries we care about are vda, vda1, vdb, vdc, and vdc2.

From the listing above: vda1 is the partition on vda, the 20G system disk; vdb is the other 20G disk; and vdc2 is the partition on vdc, the 50G cloud disk.
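Rather than inferring the disk layout from ls -al, /proc/partitions (always present on Linux) lists every block device with its size, and lsblk, where installed, shows the same information as a tree. A quick sketch:

```shell
# Every block device and its size in 1 KiB blocks; partitions
# (e.g. vda1, vdc2) appear right below their parent disks.
cat /proc/partitions

# Friendlier tree view, if util-linux's lsblk is available
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT 2>/dev/null || true
```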

Formatting

Note that running mkfs against the whole device /dev/vdc wipes its partition table, including the existing /dev/vdc2; if you wanted to keep the partitions, you would format a partition (e.g. /dev/vdc2) instead. Here the whole disk was free to use, so I formatted the device directly.

mkfs.ext3 /dev/vdc
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Mounting

mount -t ext3 /dev/vdc /mnt

Check disk usage

df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/vda1               22G    18G   2.7G  87% /
tmpfs                  8.4G    11M   8.4G   1% /dev/shm
cgroup                 8.4G      0   8.4G   0% /sys/fs/cgroup
/dev/vdc                52G    55M    50G   1% /mnt

Bind mount

mount --bind /mnt /home/docker/runtime
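Mounts made this way do not survive a reboot. To make both the filesystem mount and the bind mount persistent, entries along these lines can go in /etc/fstab (a sketch; device names and paths are the ones used in this article):

```
/dev/vdc  /mnt                  ext3  defaults  0 0
/mnt      /home/docker/runtime  none  bind      0 0
```

After editing, `mount -a` applies the file without rebooting, which is a convenient way to catch typos before they bite at boot time.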

 

Verification

Running the image build again confirms that the new disk is now the one being used.

df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/vda1               22G    18G   2.7G  87% /
tmpfs                  8.4G    11M   8.4G   1% /dev/shm
cgroup                 8.4G      0   8.4G   0% /sys/fs/cgroup
/dev/vdc                52G   6.5G    43G  14% /mnt

This also shows that building a Docker image takes far more space than the finished image itself: the pulling, unpacking, and build steps along the way consume considerably more than the image's final size.

umount

If you mount something in the wrong place, use the umount command to undo it, for example:

umount /home/docker/tmp

umount /mnt
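If umount reports that the target is busy, some process still has files open under the mount point. A sketch for tracking it down (requires root; fuser comes from the psmisc package, which may not be installed everywhere):

```shell
# Show which processes are using files under the mount point
fuser -vm /mnt        # alternative: lsof +D /mnt

# Last resort: lazy unmount detaches the mount now and cleans
# up once the last user closes its files
umount -l /mnt
```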
Reprinted from 程序员灯塔; original link: https://www.wangt.cc/2021/01/mount%e5%92%8cumount/