On Tue, 2015-01-20 at 10:46 -0600, David Lehman wrote:
On 01/09/2015 09:45 AM, Vratislav Podzimek wrote:
If a big partition is requested on a disk of basically the same size (e.g. 1.8 GiB of swap on a 2 GiB DASD with BTRFS autopart), we need to make one of the implicit partitions smaller (they default to 500 MiB) so that everything fits on the disk. We choose the last of the biggest ones because the biggest ones have the best chance to "survive" shrinking, and the last of them won't collide with /boot being on the same disk (there is one implicit partition per disk, so the first one will likely end up on the same disk as /boot when doing autopart).
Signed-off-by: Vratislav Podzimek <vpodzime@redhat.com>
 blivet/partitioning.py | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/blivet/partitioning.py b/blivet/partitioning.py
index 9f6eea8..ff744cf 100644
--- a/blivet/partitioning.py
+++ b/blivet/partitioning.py
@@ -130,6 +130,7 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
     :returns: None
     :rtype: None
     """
     # basis for requests with requiredSpace is the sum of the sizes of the
     # two largest free regions
     all_free = (Size(reg.getLength(unit="B")) for reg in getFreeRegions(disks))
@@ -238,10 +239,16 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
         if storage.autoPartType in (AUTOPART_TYPE_LVM, AUTOPART_TYPE_LVM_THINP,
                                     AUTOPART_TYPE_BTRFS):
-            # doing LVM/BTRFS -- make sure the newly created partition fits in some
-            # free space together with one of the implicitly requested partitions
-            smallest_implicit = sorted(implicit_devices, key=lambda d: d.size)[0]
-            if (request.size + smallest_implicit.size) > all_free[0]:
+            # doing LVM/BTRFS -- make sure the newly created partition fits in
+            # some free space together with one of the implicitly requested
+            # partitions (the last of the biggest ones because the biggest ones
+            # have the biggest chance to "survive" shrinking and last of them
+            # won't collide with /boot being on the same disk)
+            # HINT: sorted().reverse() is different from sorted(reverse=True)
+            sorted_implicits = sorted(implicit_devices, key=lambda d: d.size)
+            sorted_implicits.reverse()
+            biggest_last_implicit = sorted_implicits[0]
+            if (request.size + biggest_last_implicit.size) > all_free[0]:

This all seems a bit fragile. I think we'd have been better off giving the member devices minimal base sizes once we identify this situation, to use the existing logic instead of adding all this.
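(An aside for anyone skimming the HINT above -- a tiny standalone demo, using
toy namedtuples rather than blivet's request objects, of why the two spellings
differ: Python's sort is stable, so equal-sized requests keep their original
order, and reversing afterwards puts the *last* of the biggest ones first.)

    from collections import namedtuple

    Request = namedtuple("Request", ["name", "size"])

    implicit_devices = [
        Request("pv-on-disk1", 500),  # likely shares a disk with /boot
        Request("pv-on-disk2", 500),  # same (biggest) size, later in the list
        Request("pv-on-disk3", 300),
    ]

    sorted_implicits = sorted(implicit_devices, key=lambda d: d.size)
    sorted_implicits.reverse()
    print(sorted_implicits[0].name)
    # -> "pv-on-disk2", the last of the biggest ones

    print(sorted(implicit_devices, key=lambda d: d.size, reverse=True)[0].name)
    # -> "pv-on-disk1", the first of the biggest ones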
This is all an attempt to have the existing logic applied (the 500 MiB base size, growing, etc.), but I'm not against making the implicit requests' sizes much smaller to fix the reported issue.
                # not enough space to allocate the smallest implicit partition
                # and the request, make the implicit partition smaller with
                # fixed size in order to make space for the request
@@ -253,10 +260,10 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
                 all_free.sort(reverse=True)
                 if new_size > Size(0):
-                    smallest_implicit.size = new_size
+                    biggest_last_implicit.size = new_size / 2

I know this isn't what you're changing now, but this should really be checking for more than > 0, especially if you're about to divide it in half. Shouldn't you be checking something like
new_size > biggest_last_implicit.format.minSize * 2
instead?
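(A minimal sketch of that guard, with plain integers standing in for blivet
Size objects and format_min_size standing in for .format.minSize -- assumed
names, just to illustrate the check:)

    def shrink_implicit(current_size, format_min_size, new_size):
        # only halve the request when the result would still satisfy the
        # format's minimum size; otherwise leave the request untouched
        if new_size > format_min_size * 2:
            return new_size // 2
        return current_size

    print(shrink_implicit(500, 100, 300))  # 300 > 200 -> shrink to 150
    print(shrink_implicit(500, 200, 300))  # 300 <= 400 -> keep 500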
This is actually a leftover from debugging, when I was testing whether making the requested size even that small would change anything. But generally you're right, it should check the format's minSize.
I'm sending a new version of the patch that makes the implicit requests smaller instead of doing this magic. Hopefully it won't make the growPartitions() call take too much time.
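(Roughly, the idea is the sketch below -- toy request objects, and the
req_base_size/req_grow attribute names are my shorthand for the request API
here, not a quote of the new patch:)

    class FakeRequest(object):
        def __init__(self, name):
            self.name = name
            self.req_base_size = 500  # MiB; the old, too-big default
            self.req_grow = False

    implicit_devices = [FakeRequest("pv1"), FakeRequest("pv2")]

    for device in implicit_devices:
        device.req_base_size = 1  # MiB; a minimal base size...
        device.req_grow = True    # ...so growPartitions() divides free space

    print([(d.name, d.req_base_size, d.req_grow) for d in implicit_devices])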