Hello folks, long time no post... (Mailman has changed...)
So, big question re ssm and LUKS on LVM - I'm looking to increase /var by 3G and create another LUKS volume for /usr/local of 3G.
I've reduced the size of two file-systems by 3G each: /home and /
And then the LVs on top.
The LV on top of volume /var seems to have been expanded from 3G to 6G, and the LVs on top of /home and / seem to have been reduced by 3G, but the volumes /home and / are still showing a 3G difference between their volume and file-system sizes...
I'm now attempting to increase the size of /var by 3G from 3G to 6G, but no joy - no doubt I've got my logic on the what, where and how mixed up...
The terminal output of my commands to date is below - any help on where to go next would be very much appreciated :)
Thanks Morgan
[root@morgansmachine ~]# ssm list
----------------------------------------------------------------------------
Device     Free      Used       Total      Pool                   Mount point
----------------------------------------------------------------------------
/dev/dm-0  0.00 KB   20.00 GB   20.00 GB   crypt_pool
/dev/dm-1  0.00 KB   16.00 GB   16.00 GB   crypt_pool
/dev/dm-4  0.00 KB   3.00 GB    3.00 GB    crypt_pool
/dev/dm-5  0.00 KB   3.00 GB    3.00 GB    crypt_pool
/dev/dm-6  0.00 KB   195.73 GB  195.73 GB  crypt_pool
/dev/sda                        238.47 GB                         PARTITIONED
/dev/sda1                       260.00 MB                         /boot/efi
/dev/sda2                       500.00 MB                         /boot
/dev/sda3  0.00 KB   237.73 GB  237.73 GB  fedora_morgansmachine
----------------------------------------------------------------------------
-------------------------------------------------------------------
Pool                   Type  Devices  Free     Used       Total
-------------------------------------------------------------------
fedora_morgansmachine  lvm   1        0.00 KB  237.73 GB  237.73 GB
-------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------
Volume                                                 Pool                   Volume size  FS    FS size    Free       Type    Mount point
--------------------------------------------------------------------------------------------------------------------
/dev/fedora_morgansmachine/root                        fedora_morgansmachine  20.00 GB                                linear
/dev/fedora_morgansmachine/swap                        fedora_morgansmachine  16.00 GB                                linear
/dev/fedora_morgansmachine/var                         fedora_morgansmachine  3.00 GB                                 linear
/dev/fedora_morgansmachine/opt                         fedora_morgansmachine  3.00 GB                                 linear
/dev/fedora_morgansmachine/home                        fedora_morgansmachine  195.73 GB                               linear
/dev/mapper/luks-5c87de06-4ea4-4e6d-a3c3-86826791a892  crypt_pool             3.00 GB      ext4  3.00 GB    2.30 GB    crypt   /opt
/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f  crypt_pool             3.00 GB      ext4  3.00 GB    780.92 MB  crypt   /var
/dev/mapper/luks-6e1926c9-3b75-403b-917e-7f92388d71c6  crypt_pool             16.00 GB                                 crypt
/dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316  crypt_pool             195.73 GB    ext4  195.73 GB  21.42 GB   crypt   /home
/dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6  crypt_pool             20.00 GB     ext4  20.00 GB   14.82 GB   crypt   /
/dev/sda1                                                                     260.00 MB    vfat                        part    /boot/efi
/dev/sda2                                                                     500.00 MB    ext4  500.00 MB  354.80 MB  part    /boot
--------------------------------------------------------------------------------------------------------------------
[root@morgansmachine ~]# ssm resize -s-3G /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
[root@morgansmachine ~]# ssm resize -s-3G /dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6
[root@morgansmachine ~]# ssm list
----------------------------------------------------------------------------
Device     Free      Used       Total      Pool                   Mount point
----------------------------------------------------------------------------
/dev/dm-0  0.00 KB   20.00 GB   20.00 GB   crypt_pool
/dev/dm-1  0.00 KB   16.00 GB   16.00 GB   crypt_pool
/dev/dm-4  0.00 KB   3.00 GB    3.00 GB    crypt_pool
/dev/dm-5  0.00 KB   3.00 GB    3.00 GB    crypt_pool
/dev/dm-6  0.00 KB   195.73 GB  195.73 GB  crypt_pool
/dev/sda                        238.47 GB                         PARTITIONED
/dev/sda1                       260.00 MB                         /boot/efi
/dev/sda2                       500.00 MB                         /boot
/dev/sda3  0.00 KB   237.73 GB  237.73 GB  fedora_morgansmachine
----------------------------------------------------------------------------
-------------------------------------------------------------------
Pool                   Type  Devices  Free     Used       Total
-------------------------------------------------------------------
fedora_morgansmachine  lvm   1        0.00 KB  237.73 GB  237.73 GB
-------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------
Volume                                                 Pool                   Volume size  FS    FS size    Free       Type    Mount point
--------------------------------------------------------------------------------------------------------------------
/dev/fedora_morgansmachine/root                        fedora_morgansmachine  20.00 GB                                linear
/dev/fedora_morgansmachine/swap                        fedora_morgansmachine  16.00 GB                                linear
/dev/fedora_morgansmachine/var                         fedora_morgansmachine  3.00 GB                                 linear
/dev/fedora_morgansmachine/opt                         fedora_morgansmachine  3.00 GB                                 linear
/dev/fedora_morgansmachine/home                        fedora_morgansmachine  195.73 GB                               linear
/dev/mapper/luks-5c87de06-4ea4-4e6d-a3c3-86826791a892  crypt_pool             3.00 GB      ext4  3.00 GB    2.30 GB    crypt   /opt
/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f  crypt_pool             3.00 GB      ext4  3.00 GB    780.92 MB  crypt   /var
/dev/mapper/luks-6e1926c9-3b75-403b-917e-7f92388d71c6  crypt_pool             16.00 GB                                 crypt
/dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316  crypt_pool             192.73 GB    ext4  195.73 GB  21.40 GB   crypt   /home
/dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6  crypt_pool             17.00 GB     ext4  20.00 GB   14.70 GB   crypt   /
/dev/sda1                                                                     260.00 MB    vfat                        part    /boot/efi
/dev/sda2                                                                     500.00 MB    ext4  500.00 MB  354.80 MB  part    /boot
--------------------------------------------------------------------------------------------------------------------
[root@morgansmachine ~]# ssm resize -s-3G /dev/fedora_morgansmachine/home
WARNING: Reducing active and open logical volume to 192.73 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce fedora_morgansmachine/home? [y/n]: y
  Size of logical volume fedora_morgansmachine/home changed from 195.73 GiB (50107 extents) to 192.73 GiB (49339 extents).
  Logical volume home successfully resized.
[root@morgansmachine ~]# ssm resize -s-3G /dev/fedora_morgansmachine/root
WARNING: Reducing active and open logical volume to 17.00 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce fedora_morgansmachine/root? [y/n]: y
  Size of logical volume fedora_morgansmachine/root changed from 20.00 GiB (5120 extents) to 17.00 GiB (4352 extents).
  Logical volume root successfully resized.
[root@morgansmachine ~]# ssm list
----------------------------------------------------------------------------
Device     Free      Used       Total      Pool                   Mount point
----------------------------------------------------------------------------
/dev/dm-0  0.00 KB   17.00 GB   17.00 GB   crypt_pool
/dev/dm-1  0.00 KB   16.00 GB   16.00 GB   crypt_pool
/dev/dm-4  0.00 KB   3.00 GB    3.00 GB    crypt_pool
/dev/dm-5  0.00 KB   3.00 GB    3.00 GB    crypt_pool
/dev/dm-6  0.00 KB   192.73 GB  192.73 GB  crypt_pool
/dev/sda                        238.47 GB                         PARTITIONED
/dev/sda1                       260.00 MB                         /boot/efi
/dev/sda2                       500.00 MB                         /boot
/dev/sda3  6.00 GB   231.73 GB  237.73 GB  fedora_morgansmachine
----------------------------------------------------------------------------
-------------------------------------------------------------------
Pool                   Type  Devices  Free     Used       Total
-------------------------------------------------------------------
fedora_morgansmachine  lvm   1        6.00 GB  231.73 GB  237.73 GB
-------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------
Volume                                                 Pool                   Volume size  FS    FS size    Free       Type    Mount point
--------------------------------------------------------------------------------------------------------------------
/dev/mapper/luks-5c87de06-4ea4-4e6d-a3c3-86826791a892  crypt_pool             3.00 GB      ext4  3.00 GB    2.30 GB    crypt   /opt
/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f  crypt_pool             3.00 GB      ext4  3.00 GB    780.92 MB  crypt   /var
/dev/mapper/luks-6e1926c9-3b75-403b-917e-7f92388d71c6  crypt_pool             16.00 GB                                 crypt
/dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316  crypt_pool             192.73 GB    ext4  195.73 GB  21.40 GB   crypt   /home
/dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6  crypt_pool             17.00 GB     ext4  20.00 GB   14.70 GB   crypt   /
/dev/fedora_morgansmachine/root                        fedora_morgansmachine  17.00 GB                                linear
/dev/fedora_morgansmachine/swap                        fedora_morgansmachine  16.00 GB                                linear
/dev/fedora_morgansmachine/var                         fedora_morgansmachine  3.00 GB                                 linear
/dev/fedora_morgansmachine/opt                         fedora_morgansmachine  3.00 GB                                 linear
/dev/fedora_morgansmachine/home                        fedora_morgansmachine  192.73 GB                               linear
/dev/sda1                                                                     260.00 MB    vfat                        part    /boot/efi
/dev/sda2                                                                     500.00 MB    ext4  500.00 MB  354.80 MB  part    /boot
--------------------------------------------------------------------------------------------------------------------
[root@morgansmachine ~]# ssm resize -s+3G /dev/fedora_morgansmachine/var
  Size of logical volume fedora_morgansmachine/var changed from 3.00 GiB (768 extents) to 6.00 GiB (1536 extents).
  Logical volume var successfully resized.
[root@morgansmachine ~]# ssm resize -s+3G /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
SSM Error (2005): There is not enough space in the pool 'none' to grow volume '/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f' to size 6289408.0 KB!
[root@morgansmachine ~]# ssm list
----------------------------------------------------------------------------
Device     Free      Used       Total      Pool                   Mount point
----------------------------------------------------------------------------
/dev/dm-0  0.00 KB   17.00 GB   17.00 GB   crypt_pool
/dev/dm-1  0.00 KB   16.00 GB   16.00 GB   crypt_pool
/dev/dm-4  0.00 KB   6.00 GB    6.00 GB    crypt_pool
/dev/dm-5  0.00 KB   3.00 GB    3.00 GB    crypt_pool
/dev/dm-6  0.00 KB   192.73 GB  192.73 GB  crypt_pool
/dev/sda                        238.47 GB                         PARTITIONED
/dev/sda1                       260.00 MB                         /boot/efi
/dev/sda2                       500.00 MB                         /boot
/dev/sda3  3.00 GB   234.73 GB  237.73 GB  fedora_morgansmachine
----------------------------------------------------------------------------
-------------------------------------------------------------------
Pool                   Type  Devices  Free     Used       Total
-------------------------------------------------------------------
fedora_morgansmachine  lvm   1        3.00 GB  234.73 GB  237.73 GB
-------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------
Volume                                                 Pool                   Volume size  FS    FS size    Free       Type    Mount point
--------------------------------------------------------------------------------------------------------------------
/dev/fedora_morgansmachine/root                        fedora_morgansmachine  17.00 GB                                linear
/dev/fedora_morgansmachine/swap                        fedora_morgansmachine  16.00 GB                                linear
/dev/fedora_morgansmachine/var                         fedora_morgansmachine  6.00 GB                                 linear
/dev/fedora_morgansmachine/opt                         fedora_morgansmachine  3.00 GB                                 linear
/dev/fedora_morgansmachine/home                        fedora_morgansmachine  192.73 GB                               linear
/dev/mapper/luks-5c87de06-4ea4-4e6d-a3c3-86826791a892  crypt_pool             3.00 GB      ext4  3.00 GB    2.30 GB    crypt   /opt
/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f  crypt_pool             3.00 GB      ext4  3.00 GB    780.92 MB  crypt   /var
/dev/mapper/luks-6e1926c9-3b75-403b-917e-7f92388d71c6  crypt_pool             16.00 GB                                 crypt
/dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316  crypt_pool             192.73 GB    ext4  195.73 GB  21.40 GB   crypt   /home
/dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6  crypt_pool             17.00 GB     ext4  20.00 GB   14.70 GB   crypt   /
/dev/sda1                                                                     260.00 MB    vfat                        part    /boot/efi
/dev/sda2                                                                     500.00 MB    ext4  500.00 MB  354.80 MB  part    /boot
--------------------------------------------------------------------------------------------------------------------
[root@morgansmachine ~]# ssm resize /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
Traceback (most recent call last):
  File "/usr/bin/ssm", line 48, in <module>
    sys.exit(main.main())
  File "/usr/lib/python3.5/site-packages/ssmlib/main.py", line 1875, in main
    args.func(args)
  File "/usr/lib/python3.5/site-packages/ssmlib/main.py", line 1060, in resize
    ret = args.volume['fs_info'].resize()
  File "/usr/lib/python3.5/site-packages/ssmlib/main.py", line 177, in resize
    return self._get_fs_func("resize", *args, **kwargs)
  File "/usr/lib/python3.5/site-packages/ssmlib/main.py", line 167, in _get_fs_func
    return func(*args, **kwargs)
  File "/usr/lib/python3.5/site-packages/ssmlib/main.py", line 230, in extN_resize
    new_size < self.data['fs_size']):
TypeError: unorderable types: NoneType() < int()
[root@morgansmachine ~]#
Hmm, after trying to reboot and falling into emergency recovery, this doesn't look good. And, after booting from live media and attempting the following, this looks very bad indeed... I'm not sure system-storage-manager should have allowed this... bugs, bugs, bugs.... And more bugs, serious bugs
[root@localhost ~]# e2fsck -fy /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
e2fsck 1.42.13 (17-May-2015)
The filesystem size (according to the superblock) is 51309056 blocks
The physical size of the device is 50522624 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? yes
[root@localhost ~]# resize2fs -fp /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316 50522624
resize2fs 1.42.13 (17-May-2015)
Resizing the filesystem on /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316 to 50522624 (4k) blocks.
resize2fs: Can't read a block bitmap while trying to resize /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
Please run 'e2fsck -fy /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316' to fix the filesystem after the aborted resize operation.
[root@localhost ~]# e2fsck -fy /dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6
e2fsck 1.42.13 (17-May-2015)
The filesystem size (according to the superblock) is 5242368 blocks
The physical size of the device is 4455936 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? yes
[root@localhost ~]# resize2fs -fp /dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6 4455936
resize2fs 1.42.13 (17-May-2015)
Resizing the filesystem on /dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6 to 4455936 (4k) blocks.
resize2fs: Can't read a block bitmap while trying to resize /dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6
Please run 'e2fsck -fy /dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6' to fix the filesystem after the aborted resize operation.
[root@localhost ~]#
On Sun, Jul 10, 2016 at 4:13 PM, Morgan Read mstuff@read.org.nz wrote:
Hmm, after trying to reboot and falling into emergency recovery, this doesn't look good. And, after booting from live media and attempting the following, this looks very bad indeed... I'm not sure system-storage-manager should have allowed this... bugs, bugs, bugs.... And more bugs, serious bugs
Why did the first resizes result in exactly no messages at all? I can't reproduce that with system-storage-manager-0.4-10.fc24.noarch. It asks about the mounted volume - whether to umount it first.
But it does seem clear in your case the file system was not resized.
I would say this part is improper design and a valid bug to file:
[root@morgansmachine ~]# ssm resize -s-3G /dev/fedora_morgansmachine/home
WARNING: Reducing active and open logical volume to 192.73 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce fedora_morgansmachine/home? [y/n]: y
  Size of logical volume fedora_morgansmachine/home changed from 195.73 GiB (50107 extents) to 192.73 GiB (49339 extents).
  Logical volume home successfully resized.
By definition this is going to destroy data, not merely "may" destroy data. It should have all the available information to know the file system is size X, and that this operation will make the LV size X - 3G, which *will* with 100% certainty obliterate the file system. And then it permits it.
This type of resize operation should fail. It should not be possible to do a resize through ssm (or any GUI resizer) and lose data in this fashion. It should require that you delete the LV in order to destroy it, not destroy it via resizing. Or require that wipefs be used on the LV or LUKS volumes before the resize will work.
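To illustrate, the explicit, deliberately destructive step I have in mind would be something like this (a sketch of a hypothetical workflow only, using one of your LV paths as an example):

---------------
# hypothetical requirement: wipe the fs/LUKS signatures first, so the
# destruction is an explicit act rather than a side effect of a resize
wipefs -a /dev/fedora_morgansmachine/home
---------------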
It needs to be more fail-safe than this. But it did exactly what you asked it to do. And ssm list very clearly showed that it had NOT shrunk your file system volume before you decided to make the LV smaller than the file system.
[root@localhost ~]# e2fsck -fy /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
e2fsck 1.42.13 (17-May-2015)
The filesystem size (according to the superblock) is 51309056 blocks
The physical size of the device is 50522624 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? yes
These file systems are toast. Hopefully you have a backup of /home at least.
On 11/07/16 21:46, Chris Murphy wrote:
On Sun, Jul 10, 2016 at 4:13 PM, Morgan Read mstuff@read.org.nz wrote:
Hmm, after trying to reboot and falling into emergency recovery, this doesn't look good. And, after booting from live media and attempting the following, this looks very bad indeed... I'm not sure system-storage-manager should have allowed this... bugs, bugs, bugs.... And more bugs, serious bugs
Why did the first resizes result in exactly no messages at all? I can't reproduce that with system-storage-manager-0.4-10.fc24.noarch. It asks about the mounted volume - whether to umount it first.
But it does seem clear in your case the file system was not resized.
I would say this part is improper design and a valid bug to file:
[root@morgansmachine ~]# ssm resize -s-3G /dev/fedora_morgansmachine/home
WARNING: Reducing active and open logical volume to 192.73 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce fedora_morgansmachine/home? [y/n]: y
  Size of logical volume fedora_morgansmachine/home changed from 195.73 GiB (50107 extents) to 192.73 GiB (49339 extents).
  Logical volume home successfully resized.
As ext4 doesn't require any defragging, I guessed that it would arrange the data at the head of the file system and this operation would take space from the rear - but, I have to say, I was aware of taking the risk...
By definition this is going to destroy data, not merely "may" destroy data. It should have all the available information to know the file system is size X, and that this operation will make the LV size X - 3G, which *will* with 100% certainty obliterate the file system. And then it permits it.
This type of resize operation should fail. It should not be possible to do a resize through ssm (or any GUI resizer) and lose data in this fashion. It should require that you delete the LV in order to destroy it, not destroy it via resizing. Or require that wipefs be used on the LV or LUKS volumes before the resize will work.
I entirely agree - as in my last email, I took some comfort from the changelog, which indicated that ssm would fail where it could not perform on a particular filesystem.
It needs to be more fail-safe than this. But it did exactly what you asked it to do. And ssm list very clearly showed that it had NOT shrunk your file system volume before you decided to make the LV smaller than the file system.
[root@localhost ~]# e2fsck -fy /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
e2fsck 1.42.13 (17-May-2015)
The filesystem size (according to the superblock) is 51309056 blocks
The physical size of the device is 50522624 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? yes
These file systems are toast. Hopefully you have a backup of /home at least.
Thanks Chris for that encouragement :) Yes, I do have back up of /home.
Re the link in my last email (https://www.linuxquestions.org/questions/linux-hardware-18/size-in-superbloc...) - I'll see if I can mount anything from a live image and see if that helps.
Regards Morgan.
On Sun, Jul 10, 2016 at 12:42 PM, Morgan Read mstuff@read.org.nz wrote:
[root@morgansmachine ~]# ssm resize -s-3G /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
[root@morgansmachine ~]# ssm resize -s-3G /dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6
These two commands should have, at the least, reduced the size of the file system volumes mounted at /home and /, but I have no idea why this was permitted, because online shrink is not supported by ext4.
---------------
# resize2fs /dev/VG/test4 40G
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/VG/test4 is mounted on /mnt/0; on-line resizing required
resize2fs: On-line shrinking not supported

# ssm resize -s-3G /dev/VG/test4
Do you want to unmount "/mnt/0"? [Y|n] n
fsadm: Cannot proceed with mounted filesystem "/mnt/0"
fsadm failed: 1
Filesystem resize failed.
SSM Error (2012): ERROR running command: "lvm lvresize -r -L 49283072.0k /dev/VG/test4"
---------------
If I allow the unmount:
---------------
[root@f24s ~]# ssm resize -s-3G /dev/VG/test4
Do you want to unmount "/mnt/0"? [Y|n] y
fsck from util-linux 2.28
/dev/mapper/VG-test4: 11/3276800 files (0.0% non-contiguous), 251699/13107200 blocks
resize2fs 1.42.13 (17-May-2015)
Resizing the filesystem on /dev/mapper/VG-test4 to 12320768 (4k) blocks.
The filesystem on /dev/mapper/VG-test4 is now 12320768 (4k) blocks long.
  Size of logical volume VG/test4 changed from 50.00 GiB (12800 extents) to 47.00 GiB (12032 extents).
  Logical volume test4 successfully resized.
---------------
So why don't you have any messages about what ssm resize actually did? I can't tell if it did anything at all, which means it probably did not resize the file system. This appears to be true looking at the ssm list results after you did this resize. The file systems are still mounted at /home and /, and they are still the same size as before - no change.
[root@morgansmachine ~]# ssm resize -s-3G /dev/fedora_morgansmachine/home
WARNING: Reducing active and open logical volume to 192.73 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce fedora_morgansmachine/home? [y/n]: y
  Size of logical volume fedora_morgansmachine/home changed from 195.73 GiB (50107 extents) to 192.73 GiB (49339 extents).
  Logical volume home successfully resized.
[root@morgansmachine ~]# ssm resize -s-3G /dev/fedora_morgansmachine/root
WARNING: Reducing active and open logical volume to 17.00 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce fedora_morgansmachine/root? [y/n]: y
  Size of logical volume fedora_morgansmachine/root changed from 20.00 GiB (5120 extents) to 17.00 GiB (4352 extents).
  Logical volume root successfully resized.
Yeah, I think you did indeed just destroy your data on both of these, because the file system was not resized in the first step and then you asked it to change the size of the LV. So those extents revert back to the VG.
Had the file system resize happened correctly, ssm would have resized the LV for you, so you didn't need to do this step anyway.
[root@morgansmachine ~]# ssm resize -s+3G /dev/fedora_morgansmachine/var
  Size of logical volume fedora_morgansmachine/var changed from 3.00 GiB (768 extents) to 6.00 GiB (1536 extents).
  Logical volume var successfully resized.
[root@morgansmachine ~]# ssm resize -s+3G /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
SSM Error (2005): There is not enough space in the pool 'none' to grow volume '/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f' to size 6289408.0 KB!
I think the problem here is now ssm is confused somehow. You should have just done the 2nd command on the file system itself, because ssm will know that it first must increase the size of the LV, and then the size of the LUKS volume, and then the fs. But you only increased the size of the LV, not the LUKS volume, which now has a different size than its underlying LV, so SSM seems to get stuck.
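For reference, the manual bottom-up grow for this stack would go roughly like this (a sketch, untested here, using the var LV and mapper name from your listing):

---------------
# grow the LV first (bottom of the stack)
lvextend -L +3G fedora_morgansmachine/var
# then grow the active dm-crypt mapping to fill the enlarged LV
cryptsetup resize luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
# then grow ext4 to fill the mapping (ext4 can grow online)
resize2fs /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
---------------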
Further, the problem is that by shrinking some LVs, and then growing another, the extents for /home and / are now with some other LV and have probably been stepped on, so /home and / are likely a total loss. It would take some very tedious patience to unwind all of this in the *exact* reverse order in order to get the same extents linearly allocated back to the /home and / file systems.
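If you want to see how scrambled the extent allocation is now, something like this should show which physical extents each LV currently occupies:

---------------
# per-segment physical extent ranges for each LV in the VG
lvs -o lv_name,seg_start_pe,seg_size_pe,seg_pe_ranges fedora_morgansmachine
# the same picture from the PV side
pvdisplay --maps /dev/sda3
---------------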
[root@morgansmachine ~]# ssm resize /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
Traceback (most recent call last):
That's a bug. Anytime there's a crash it's a bug.
On 11/07/16 21:35, Chris Murphy wrote:
On Sun, Jul 10, 2016 at 12:42 PM, Morgan Read mstuff@read.org.nz wrote:
[root@morgansmachine ~]# ssm resize -s-3G /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
[root@morgansmachine ~]# ssm resize -s-3G /dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6
These two commands should have, at the least, reduced the size of the file system volumes mounted at /home and /, but I have no idea why this was permitted, because online shrink is not supported by ext4.
Well, it seems to have reduced the volumes, but not the filesystems:

--------------------------------------------------------------------------------------------------------------------
Volume                                                 Pool        Volume size  FS    FS size    Free      Type   Mount point
--------------------------------------------------------------------------------------------------------------------
...
/dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316  crypt_pool  192.73 GB    ext4  195.73 GB  21.40 GB  crypt  /home
/dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6  crypt_pool  17.00 GB     ext4  20.00 GB   14.70 GB  crypt  /
I've cut and pasted direct from the terminal - no omissions or additions to the series of commands and outputs.
Following the two above commands, ssm lists the two volumes as reduced in size by 3G, but the file system's size (FS size) as remaining the same...
I figured that was strange in itself - but stranger still as I seemed to be able to increase the LV size of [...]/var by 3G, but then trying to increase the underlying volume by the same amount failed politely with:

SSM Error (2005): There is not enough space in the pool 'none' to grow volume '/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f' to size 6289408.0 KB!
And then simply attempting to increase the size of the underlying volume to fill the space caused ssm to fail very rudely and spit the dummy!
# resize2fs /dev/VG/test4 40G
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/VG/test4 is mounted on /mnt/0; on-line resizing required
resize2fs: On-line shrinking not supported

# ssm resize -s-3G /dev/VG/test4
Do you want to unmount "/mnt/0"? [Y|n] n
fsadm: Cannot proceed with mounted filesystem "/mnt/0"
fsadm failed: 1
Filesystem resize failed.
SSM Error (2012): ERROR running command: "lvm lvresize -r -L 49283072.0k /dev/VG/test4"
If I allow the unmount:
[root@f24s ~]# ssm resize -s-3G /dev/VG/test4
Do you want to unmount "/mnt/0"? [Y|n] y
fsck from util-linux 2.28
/dev/mapper/VG-test4: 11/3276800 files (0.0% non-contiguous), 251699/13107200 blocks
resize2fs 1.42.13 (17-May-2015)
Resizing the filesystem on /dev/mapper/VG-test4 to 12320768 (4k) blocks.
The filesystem on /dev/mapper/VG-test4 is now 12320768 (4k) blocks long.
  Size of logical volume VG/test4 changed from 50.00 GiB (12800 extents) to 47.00 GiB (12032 extents).
  Logical volume test4 successfully resized.
So why don't you have any messages about what ssm resize actually did?
Hmm, don't know - I generally understand that no message is a good message: 'all done'?
I can't tell if it did anything at all, which means it probably did not resize the file system. This appears to be true looking at the ssm list results after you did this resize. The file systems are still mounted at /home and /, and they are still the same size as before - no change.
Hmm, yes - but the volume size has changed - re my following email, it seems to be a discrepancy between the superblock and the partition table, which I was trying to correct - is there a way to edit the superblock to conform to the partition table?
This scenario looks similar to what's described here: https://www.linuxquestions.org/questions/linux-hardware-18/size-in-superbloc...
[root@morgansmachine ~]# ssm resize -s-3G /dev/fedora_morgansmachine/home
WARNING: Reducing active and open logical volume to 192.73 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce fedora_morgansmachine/home? [y/n]: y
  Size of logical volume fedora_morgansmachine/home changed from 195.73 GiB (50107 extents) to 192.73 GiB (49339 extents).
  Logical volume home successfully resized.
[root@morgansmachine ~]# ssm resize -s-3G /dev/fedora_morgansmachine/root
WARNING: Reducing active and open logical volume to 17.00 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce fedora_morgansmachine/root? [y/n]: y
  Size of logical volume fedora_morgansmachine/root changed from 20.00 GiB (5120 extents) to 17.00 GiB (4352 extents).
  Logical volume root successfully resized.
Yeah, I think you did indeed just destroy your data on both of these, because the file system was not resized in the first step and then you asked it to change the size of the LV. So those extents revert back to the VG.
Had the file system resize happened correctly, ssm would have resized the LV for you, so you didn't need to do this step anyway.
Docs, docs, docs!
[root@morgansmachine ~]# ssm resize -s+3G /dev/fedora_morgansmachine/var
  Size of logical volume fedora_morgansmachine/var changed from 3.00 GiB (768 extents) to 6.00 GiB (1536 extents).
  Logical volume var successfully resized.
[root@morgansmachine ~]# ssm resize -s+3G /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
SSM Error (2005): There is not enough space in the pool 'none' to grow volume '/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f' to size 6289408.0 KB!
I think the problem here is now ssm is confused somehow. You should have just done the 2nd command on the file system itself, because ssm will know that it first must increase the size of the LV, and then the size of the LUKS volume, and then the fs. But you only increased the size of the LV, not the LUKS volume, which now has a different size than its underlying LV, so SSM seems to get stuck.
Isn't

[root@morgansmachine ~]# ssm resize -s+3G /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f

an attempt to increase the size of the LUKS volume?
Further, the problem is that by shrinking some LVs, and then growing another, the extents for /home and / are now with some other LV and have probably been stepped on, so /home and / are likely a total loss. It would take some very tedious patience to unwind all of this in the *exact* reverse order in order to get the same extents linearly allocated back to the /home and / file systems.
Well, the steps I've followed aren't so complicated they couldn't be retraced...
[root@morgansmachine ~]# ssm resize /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
Traceback (most recent call last):
That's a bug. Anytime there's a crash it's a bug.
In performing this operation I took some comfort using ssm - in the absence of any documentation I could find, other than that this was the tool for the job - from the changelog of Mon Jul 27 2015: 'Error out if file system is not supported (#1196428)'. I figured that if an operation wasn't supported, then ssm would say so...
As to resizing a live system - I figured there was some risk there, but, due to the lack of documentation and the 'error out if filesystem is not supported' entry, figured again that it wouldn't be allowed to complete if it couldn't complete... I was most concerned that the operation wouldn't be supported on a crypt system, as there was documentation from about 3 years back that ssm only supported reading crypt filesystems - never thought ext4 would be the weak point...
Re resizing the LV before the underlying system - I had no idea that ssm would take account of the LV when operating on the underlying system. Again, the documentation seems underwhelming. What I'm trying to do seems to be precisely what ssm was made to simplify and do. But, in any case, doing what needed to be done to the LV before the underlying system seemed the safest option.
The best documentation I could find was:
https://fedoraproject.org/wiki/Features/SystemStorageManager
http://storagemanager.sourceforge.net/
Both of which are at least 3 years old. And:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/htm...
Which is minimal.
Yes, I have filed a bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1354681
Many thanks for your follow-up - I'd be interested in your comments.
Regards Morgan.
On Tue, Jul 12, 2016 at 5:35 AM, Morgan Read mstuff@read.org.nz wrote:
Well, it seems to have reduced the volumes, but not the filesystems:
Volume                                                 Pool        Volume size  FS    FS size    Free      Type   Mount point
...
/dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316  crypt_pool  192.73 GB    ext4  195.73 GB  21.40 GB  crypt  /home
/dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6  crypt_pool  17.00 GB     ext4  20.00 GB   14.70 GB  crypt  /
You're right, this is damn peculiar.
The way I'm reading this, each LV is separately encrypted, is that right? And then it's the dmcrypt/LUKS volume that is formatted ext4?
So:
ext4 | LUKS | LV | VG | PV | Disk
If so, as far as I can tell it's correct to point it at the LUKS volume, the thing that is mounted.
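If you want to double-check the layering, lsblk shows it directly - a quick sketch:

---------------
# tree view: disk -> part -> lvm -> crypt, with sizes and mount points
lsblk -o NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT /dev/sda
---------------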
I've cut and pasted direct from the terminal - no omissions or additions to the series of commands and outputs.
Following the two above commands, ssm lists the two volumes as reduced in size by 3G, but the file system's size (FS size) as remaining the same...
But what I'm seeing is *ONLY* the LUKS volume was reduced. The LV is the same size as before.
ssm is apparently confused about the stack relationships. It's treating the command literally for only the dmcrypt volume, not the file system and not the LV. Off hand I'd say that's a really big bug.
So why don't you have any messages about what ssm resize actually did?
Hmm, don't know - I generally understand that no message is a good message: 'all done'?
Nooo. It *had* to ask and it had to fail if it was an active mounted / or /home. There's no way to unmount it even if you give permission.
I can't tell if it did anything at all, which means it probably did not resize the file system. This appears to be true looking at the ssm list results after you did this resize. The file systems are still mounted at /home and /, and they are still the same size as before - no change.
Hmm, yes - but the volume size has changed - re my following email, it seems to be a discrepancy between the superblock and the partition table, which I was trying to correct - is there a way to edit the superblock to conform to the partition table?
Only for grow. For shrink it's too late, it has no way to access the now missing space at the end of the fs volume, but you can ask on the ext4 list what the chances are of resizing the file system once the partition (LUKS in this case) is already shrunk, out of the usual order.
The LUKS volume shrinking itself doesn't immediately cause a problem; it's the subsequent shrink of the LV, which will return extents used for the file system to the VG. And then after that there was a grow for a different LV, which would have moved those extents from the VG to that LV, and then the fs resize would have stepped on all that data. So the portion removed from /home and / is just obliterated, more than likely.
You'd have to ask on the ext4 list to be sure this is not fixable. But my expectation from reading the resize.c code for ext4 is that it will not resize a file system after the fact; there's required accounting that has to be done, and if it can't be done the operation fails. e2fsck might be able to determine that there was no meaningful data or metadata in the missing portion, and could fix this *IF* the LUKS volume is returned to the original size the FS thinks it's supposed to be on. But I expect e2fsck to fail too, so long as the partition it's on is smaller than the fs says it should be, because it cannot fix the metadata in the missing 3GiB portion and e2fsck can't do a shrink while fixing.
So it's catch 22. It can't be shrunk now because the proper accounting can't be done. It can't be fixed until the partition is resized to match the volume size, and even then the fixing may fail for multiple reasons.
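If you do want to attempt the unwind anyway, the reverse order would go roughly like this (a sketch only - the extent counts are taken from your transcript, there's no guarantee the returned extents still hold your data, and e2fsck may well still refuse):

---------------
# from live media, before unlocking/mounting anything:
# 1. give back the 3G that var grabbed (var's fs never actually grew,
#    so shrinking that LV back to 3G should be safe)
lvreduce -L 3G fedora_morgansmachine/var
# 2. restore home and root to their original extent counts
lvextend -l 50107 fedora_morgansmachine/home
lvextend -l 5120 fedora_morgansmachine/root
# 3. unlock home again (the mapping re-fits the restored LV size on open),
#    then let e2fsck have another try
cryptsetup open /dev/fedora_morgansmachine/home luks-a69b434b-c409-4612-a51e-4bb0162cb316
e2fsck -fy /dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316
---------------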
Docs, docs, docs!
[root@morgansmachine ~]# ssm resize -s+3G /dev/fedora_morgansmachine/var
  Size of logical volume fedora_morgansmachine/var changed from 3.00 GiB (768 extents) to 6.00 GiB (1536 extents).
  Logical volume var successfully resized.
[root@morgansmachine ~]# ssm resize -s+3G /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f
SSM Error (2005): There is not enough space in the pool 'none' to grow volume '/dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f' to size 6289408.0 KB!
I think the problem here is now ssm is confused somehow. You should have just done the 2nd command on the file system itself, because ssm will know that it first must increase the size of the LV, and then the size of the LUKS volume, and then the fs. But you only increased the size of the LV, not the LUKS volume, which now has a different size than its underlying LV, so SSM seems to get stuck.
Isn't

[root@morgansmachine ~]# ssm resize -s+3G /dev/mapper/luks-68a97c0e-00b5-4ab8-8628-2fae2605b35f

an attempt to increase the size of the LUKS volume?
Now I don't know. I'd expect ssm - the whole point of it is that it understands the layering - to know, for a shrink operation, to first resize the file system, then the LUKS volume header, then the LV, in that order. And in the exact reverse order for grow. But it only changed LUKS, apparently.
When I tried it on Fedora 24, without LUKS, it did what I expected. But I'd have to retry with LUKS to see if it gets confused.
As to resizing a live system - I figured there was some risk there, but, due to the lack of documentation and the 'error out if filesystem is not supported' entry, figured again that it wouldn't be allowed to complete if it couldn't complete... I was most concerned that the operation wouldn't be supported on a crypt system, as there was documentation from about 3 years back that ssm only supported reading crypt filesystems - never thought ext4 would be the weak point...
resize2fs was apparently never even asked, otherwise it would have failed. There is user-space code to check for a mount; it will not shrink a mounted file system. It had to fail. ssm must be missing some logic checks to have totally silently reduced only LUKS, which is a rather nonsensical operation. How else do you resize the file system in such a case but to point at the logical block device the file system is on - which is exactly what you did? But it did not attempt an fs resize first.
I think it's a bug.
Re resizing the LV before the underlying system - I had no idea that ssm would take account of the LV when operating on the underlying system. Again, the documentation seems underwhelming. What I'm trying to do seems to be precisely what ssm was made to simplify and do. But, in any case, doing what needed to be done to the LV before the underlying system seemed the safest option.
I don't know what you mean by the last sentence. The LV is the underlying system, the LUKS volume is above that, and the fs is above that. Shrink has to be done top to bottom, which is what it seems you started out doing. But then, before really confirming the fs was resized, you shrank the LV and ignored the warnings - which at that point just seemed like ass-covering warnings rather than 'you are definitely going to lose data now' kind of warnings.
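For next time, the safe top-down shrink for this stack goes roughly like this (a sketch; luks-XXXX stands in for the real mapper name, and the sizes are only an example for /home - done from live media with the volume unmounted and unlocked):

---------------
# 1. shrink the fs first, deliberately below the final target size
e2fsck -f /dev/mapper/luks-XXXX
resize2fs /dev/mapper/luks-XXXX 189G
# 2. now it's safe to shrink the LV to the target
lvreduce -L 192.73G fedora_morgansmachine/home
# 3. reopen the LUKS volume so the mapping re-fits the smaller LV
cryptsetup close luks-XXXX
cryptsetup open /dev/fedora_morgansmachine/home luks-XXXX
# 4. grow the fs back up to exactly fill the mapping
resize2fs /dev/mapper/luks-XXXX
---------------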
On 12/07/16 16:24, Chris Murphy wrote:
On Tue, Jul 12, 2016 at 5:35 AM, Morgan Read mstuff@read.org.nz wrote:
Well, it seems to have reduced the volumes, but not the filesystems:
Volume                                                 Pool        Volume size  FS    FS size    Free      Type   Mount point
...
/dev/mapper/luks-a69b434b-c409-4612-a51e-4bb0162cb316  crypt_pool  192.73 GB    ext4  195.73 GB  21.40 GB  crypt  /home
/dev/mapper/luks-d313ea5e-fe14-4967-b11c-ae0e03c348b6  crypt_pool  17.00 GB     ext4  20.00 GB   14.70 GB  crypt  /
You're right, this is damn peculiar.
The way I'm reading this, each LV is separately encrypted, is that right? And then it's the dmcrypt/LUKS volume that is formatted ext4?
So:
ext4 | LUKS | LV | VG | PV | Disk
If so, as far as I can tell it's correct to point it at the LUKS volume, the thing that is mounted.
Yes, that's it - I prefer to look up at the underside of the disk though :')
Historically, I've always had / /home and /usr/local on separate partitions, then LVM came along, then LUKS came along - then I started to find I wanted to preserve /var and /opt during upgrades. Perhaps, if I didn't suffer from historical inertia then I would have encrypted the whole disk putting LUKS on top of the LVM (or, below from the way you're looking at disks - anyway, so individual partitions weren't encrypted but the whole disk, or should that be PV or LV...). I've never been able to see where the balance of benefits lies between LVM/LUKS and LUKS/LVM.
I've cut and pasted direct from the terminal - no omissions or additions to the series of commands and outputs.
Following the two above commands, ssm lists the two volumes as reduced in size by 3G, but the file system's size (FS size) as remaining the same...
But what I'm seeing is *ONLY* the LUKS volume was reduced. The LV is the same size as before.
ssm is apparently confused about the stack relationships. It's treating the command literally for only the dmcrypt volume, not the file system and not the LV. Off hand I'd say that's a really big bug.
Hmm, yes. The developer seems to have acknowledged that on the bug report.
...
So it's catch 22. It can't be shrunk now because the proper accounting can't be done. It can't be fixed until the partition is resized to match the volume size, and even then the fixing may fail for multiple reasons.
Bugger! (Thank god I wasn't so cavalier as to do this without a backup first!)
Docs, docs, docs!
...
I don't know what you mean by the last sentence. The LV is the underlying system, the LUKS volume is above that, and the fs is above that. Shrink has to be done top to bottom, which is what it seems you
I'm a Kiwi - perhaps I see things the other way up? :)
Many thanks for your help getting to the bottom of this. Regards Morgan.
On Tue, Jul 12, 2016 at 5:35 AM, Morgan Read mstuff@read.org.nz wrote:
Yes, I have filed a bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1354681
I've reproduced the problem with ssm resize of a LUKS volume: it only resizes the LUKS volume, not the underlying LV, and not the fs on the LUKS volume. So it breaks the filesystem. It even does a resize shrink for XFS, which is not supported.
I've updated the bug with my findings.