If there is a big partition requested to be allocated on a disk with basically the same size (e.g. 1.8 GiB swap on a 2 GiB DASD with BTRFS autopart) we need to make one of the implicit partitions smaller (they default to 500 MiB) to fit into the disk. We choose the last of the biggest ones because the biggest ones have the biggest chance to "survive" shrinking and last of them won't collide with /boot being on the same disk (there's one implicit partition per disk so the first one will likely end up on the same disk as /boot when doing autopart).
Signed-off-by: Vratislav Podzimek <vpodzime@redhat.com>
---
 blivet/partitioning.py | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/blivet/partitioning.py b/blivet/partitioning.py
index 9f6eea8..ff744cf 100644
--- a/blivet/partitioning.py
+++ b/blivet/partitioning.py
@@ -130,6 +130,7 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
     :returns: None
     :rtype: None
     """
+
     # basis for requests with requiredSpace is the sum of the sizes of the
     # two largest free regions
     all_free = (Size(reg.getLength(unit="B")) for reg in getFreeRegions(disks))
@@ -238,10 +239,16 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
         if storage.autoPartType in (AUTOPART_TYPE_LVM, AUTOPART_TYPE_LVM_THINP,
                                     AUTOPART_TYPE_BTRFS):
-            # doing LVM/BTRFS -- make sure the newly created partition fits in some
-            # free space together with one of the implicitly requested partitions
-            smallest_implicit = sorted(implicit_devices, key=lambda d: d.size)[0]
-            if (request.size + smallest_implicit.size) > all_free[0]:
+            # doing LVM/BTRFS -- make sure the newly created partition fits in
+            # some free space together with one of the implicitly requested
+            # partitions (the last of the biggest ones because the biggest ones
+            # have the biggest chance to "survive" shrinking and last of them
+            # won't collide with /boot being on the same disk)
+            # HINT: sorted().reverse() is different from sorted(reverse=True)
+            sorted_implicits = sorted(implicit_devices, key=lambda d: d.size)
+            sorted_implicits.reverse()
+            biggest_last_implicit = sorted_implicits[0]
+            if (request.size + biggest_last_implicit.size) > all_free[0]:
                 # not enough space to allocate the smallest implicit partition
                 # and the request, make the implicit partition smaller with
                 # fixed size in order to make space for the request
@@ -253,10 +260,10 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
                 all_free.sort(reverse=True)

                 if new_size > Size(0):
-                    smallest_implicit.size = new_size
+                    biggest_last_implicit.size = new_size / 2
                 else:
-                    implicit_devices.remove(smallest_implicit)
-                    storage.destroyDevice(smallest_implicit)
+                    implicit_devices.remove(biggest_last_implicit)
+                    storage.destroyDevice(biggest_last_implicit)
return implicit_devices
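The HINT comment in the patch hinges on Python's sort stability: `sorted(..., reverse=True)` keeps tied elements in their original order, while sorting ascending and then calling `.reverse()` flips the order of ties. A standalone sketch of why the patch picks the *last* of the biggest implicit partitions — the device names here are hypothetical stand-ins, not blivet objects:

```python
from collections import namedtuple

# Hypothetical stand-ins for blivet partition requests; only .size matters here.
Dev = namedtuple("Dev", ["name", "size"])

devices = [Dev("disk1-part", 500), Dev("disk2-part", 500), Dev("disk3-part", 300)]

# sorted(reverse=True) is stable, so among equal sizes the FIRST of the
# biggest devices lands at index 0 ...
first_biggest = sorted(devices, key=lambda d: d.size, reverse=True)[0]

# ... while sorting ascending and reversing puts the LAST of the biggest
# devices at index 0, which is what the patch wants: the first implicit
# partition likely shares a disk with /boot.
asc = sorted(devices, key=lambda d: d.size)
asc.reverse()
last_biggest = asc[0]

print(first_biggest.name)  # disk1-part
print(last_biggest.name)   # disk2-part
```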
On 01/09/2015 09:45 AM, Vratislav Podzimek wrote:
If there is a big partition requested to be allocated on a disk with basically the same size (e.g. 1.8 GiB swap on a 2 GiB DASD with BTRFS autopart) we need to make one of the implicit partitions smaller (they default to 500 MiB) to fit into the disk. We choose the last of the biggest ones because the biggest ones have the biggest chance to "survive" shrinking and last of them won't collide with /boot being on the same disk (there's one implicit partition per disk so the first one will likely end up on the same disk as /boot when doing autopart).
Signed-off-by: Vratislav Podzimek vpodzime@redhat.com
 blivet/partitioning.py | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/blivet/partitioning.py b/blivet/partitioning.py
index 9f6eea8..ff744cf 100644
--- a/blivet/partitioning.py
+++ b/blivet/partitioning.py
@@ -130,6 +130,7 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
     :returns: None
     :rtype: None
     """
+
     # basis for requests with requiredSpace is the sum of the sizes of the
     # two largest free regions
     all_free = (Size(reg.getLength(unit="B")) for reg in getFreeRegions(disks))
@@ -238,10 +239,16 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
         if storage.autoPartType in (AUTOPART_TYPE_LVM, AUTOPART_TYPE_LVM_THINP,
                                     AUTOPART_TYPE_BTRFS):
-            # doing LVM/BTRFS -- make sure the newly created partition fits in some
-            # free space together with one of the implicitly requested partitions
-            smallest_implicit = sorted(implicit_devices, key=lambda d: d.size)[0]
-            if (request.size + smallest_implicit.size) > all_free[0]:
+            # doing LVM/BTRFS -- make sure the newly created partition fits in
+            # some free space together with one of the implicitly requested
+            # partitions (the last of the biggest ones because the biggest ones
+            # have the biggest chance to "survive" shrinking and last of them
+            # won't collide with /boot being on the same disk)
+            # HINT: sorted().reverse() is different from sorted(reverse=True)
+            sorted_implicits = sorted(implicit_devices, key=lambda d: d.size)
+            sorted_implicits.reverse()
+            biggest_last_implicit = sorted_implicits[0]
+            if (request.size + biggest_last_implicit.size) > all_free[0]:
This all seems a bit fragile. I think we'd have been better off giving the member devices minimal base sizes once we identify this situation to use existing logic instead of adding all this.
                 # not enough space to allocate the smallest implicit partition
                 # and the request, make the implicit partition smaller with
                 # fixed size in order to make space for the request
@@ -253,10 +260,10 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
                 all_free.sort(reverse=True)
if new_size > Size(0):
-                    smallest_implicit.size = new_size
+                    biggest_last_implicit.size = new_size / 2
I know this isn't what you're changing now, but this should really be checking for more than > 0, especially if you're about to divide it in half. Shouldn't you be checking something like
new_size > biggest_last_implicit.format.minSize * 2
instead?
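The suggested guard can be sketched in isolation. The classes below are minimal hypothetical stand-ins for the blivet device and format objects — only `.size`, `.format.minSize`, the halving from the patch, and the threshold proposed above are modeled:

```python
# Minimal stand-ins for the blivet objects; only the attributes used by the
# check are modeled here (sizes as plain integers for simplicity).
class Fmt:
    def __init__(self, min_size):
        self.minSize = min_size

class Device:
    def __init__(self, size, min_size):
        self.size = size
        self.format = Fmt(min_size)

def shrink_or_drop(device, new_size):
    """Shrink the implicit device to half of new_size only if the result
    still satisfies the format's minimum size; otherwise signal that the
    caller should remove and destroy the device instead."""
    if new_size > device.format.minSize * 2:
        device.size = new_size / 2
        return True   # device kept, shrunk
    return False      # too small to survive halving

dev = Device(size=500, min_size=16)
print(shrink_or_drop(dev, 100), dev.size)         # True 50.0
print(shrink_or_drop(Device(500, 16), 20))        # False
```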
else:
-                    implicit_devices.remove(smallest_implicit)
-                    storage.destroyDevice(smallest_implicit)
+                    implicit_devices.remove(biggest_last_implicit)
+                    storage.destroyDevice(biggest_last_implicit)

 return implicit_devices
On Tue, 2015-01-20 at 10:46 -0600, David Lehman wrote:
On 01/09/2015 09:45 AM, Vratislav Podzimek wrote:
If there is a big partition requested to be allocated on a disk with basically the same size (e.g. 1.8 GiB swap on a 2 GiB DASD with BTRFS autopart) we need to make one of the implicit partitions smaller (they default to 500 MiB) to fit into the disk. We choose the last of the biggest ones because the biggest ones have the biggest chance to "survive" shrinking and last of them won't collide with /boot being on the same disk (there's one implicit partition per disk so the first one will likely end up on the same disk as /boot when doing autopart).
Signed-off-by: Vratislav Podzimek vpodzime@redhat.com
 blivet/partitioning.py | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/blivet/partitioning.py b/blivet/partitioning.py
index 9f6eea8..ff744cf 100644
--- a/blivet/partitioning.py
+++ b/blivet/partitioning.py
@@ -130,6 +130,7 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
     :returns: None
     :rtype: None
     """
+
     # basis for requests with requiredSpace is the sum of the sizes of the
     # two largest free regions
     all_free = (Size(reg.getLength(unit="B")) for reg in getFreeRegions(disks))
@@ -238,10 +239,16 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
         if storage.autoPartType in (AUTOPART_TYPE_LVM, AUTOPART_TYPE_LVM_THINP,
                                     AUTOPART_TYPE_BTRFS):
-            # doing LVM/BTRFS -- make sure the newly created partition fits in some
-            # free space together with one of the implicitly requested partitions
-            smallest_implicit = sorted(implicit_devices, key=lambda d: d.size)[0]
-            if (request.size + smallest_implicit.size) > all_free[0]:
+            # doing LVM/BTRFS -- make sure the newly created partition fits in
+            # some free space together with one of the implicitly requested
+            # partitions (the last of the biggest ones because the biggest ones
+            # have the biggest chance to "survive" shrinking and last of them
+            # won't collide with /boot being on the same disk)
+            # HINT: sorted().reverse() is different from sorted(reverse=True)
+            sorted_implicits = sorted(implicit_devices, key=lambda d: d.size)
+            sorted_implicits.reverse()
+            biggest_last_implicit = sorted_implicits[0]
+            if (request.size + biggest_last_implicit.size) > all_free[0]:

This all seems a bit fragile. I think we'd have been better off giving the member devices minimal base sizes once we identify this situation to use existing logic instead of adding all this.
This all is an attempt to have the existing logic applied (the 500 MiB base size, growing, etc.), but I'm not against making the implicit requests' sizes much smaller to fix the reported issue.
                 # not enough space to allocate the smallest implicit partition
                 # and the request, make the implicit partition smaller with
                 # fixed size in order to make space for the request
@@ -253,10 +260,10 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
                 all_free.sort(reverse=True)
if new_size > Size(0):
-                    smallest_implicit.size = new_size
+                    biggest_last_implicit.size = new_size / 2

I know this isn't what you're changing now, but this should really be checking for more than > 0, especially if you're about to divide it in half. Shouldn't you be checking something like
new_size > biggest_last_implicit.format.minSize * 2
instead?
This is actually a leftover from debugging when I tried whether making the size requested even that small would change anything. But generally you're right, it should check the format's minSize.
I'm sending a new version of the patch that makes the implicit requests smaller instead of doing this magic. Hopefully it won't make the growPartitions() call take too much time.
In case of BTRFS/LVM autopartitioning we schedule one partition on every disk used for the installation to be part of the new BTRFS/LVM setup. But in case some other partition needs to be allocated (e.g. swap in case of BTRFS) we need to fit this partition to some disk together with one of the implicitly scheduled ones. In case we find out the implicitly scheduled partitions are too big to make this possible we need to make them smaller so that autopartitioning works.
Resolves: rhbz#1171116
Signed-off-by: Vratislav Podzimek <vpodzime@redhat.com>
---
 blivet/devices.py      |  7 ++++++-
 blivet/partitioning.py | 20 +++++--------------
 2 files changed, 11 insertions(+), 16 deletions(-)
diff --git a/blivet/devices.py b/blivet/devices.py
index f67c5b5..576fe07 100644
--- a/blivet/devices.py
+++ b/blivet/devices.py
@@ -53,6 +53,11 @@ from .i18n import P_
 import logging
 log = logging.getLogger("blivet")

+DEFAULT_PART_SIZE = Size("500MiB")
+
+# in case the default partition size doesn't fit
+FALLBACK_DEFAULT_PART_SIZE = Size("10MiB")
+
 def get_device_majors():
     majors = {}
     for line in open("/proc/devices").readlines():
@@ -1368,7 +1373,7 @@ class PartitionDevice(StorageDevice):
     """
     _type = "partition"
     _resizable = True
-    defaultSize = Size("500MiB")
+    defaultSize = DEFAULT_PART_SIZE

     def __init__(self, name, fmt=None, size=None,
                  grow=False, maxsize=None, start=None, end=None,
diff --git a/blivet/partitioning.py b/blivet/partitioning.py
index 24cb982..f909de2 100644
--- a/blivet/partitioning.py
+++ b/blivet/partitioning.py
@@ -28,7 +28,7 @@ from pykickstart.constants import AUTOPART_TYPE_BTRFS, AUTOPART_TYPE_LVM, AUTOPA
 from .errors import DeviceError, NoDisksError, NotEnoughFreeSpaceError, PartitioningError, SanityError, SanityWarning
 from .flags import flags
-from .devices import PartitionDevice, LUKSDevice, devicePathToName
+from .devices import PartitionDevice, LUKSDevice, devicePathToName, FALLBACK_DEFAULT_PART_SIZE
 from .formats import getFormat
 from .devicelibs.lvm import get_pool_padding
 from .size import Size
@@ -243,20 +243,10 @@ def _schedulePartitions(storage, disks, implicit_devices, min_luks_entropy=0):
             smallest_implicit = sorted(implicit_devices, key=lambda d: d.size)[0]
             if (request.size + smallest_implicit.size) > all_free[0]:
                 # not enough space to allocate the smallest implicit partition
-                # and the request, make the implicit partition smaller with
-                # fixed size in order to make space for the request
-                new_size = all_free[0] - request.size
-
-                # subtract the size from the biggest free region and reorder the
-                # list
-                all_free[0] -= request.size
-                all_free.sort(reverse=True)
-
-                if new_size > Size(0):
-                    smallest_implicit.size = new_size
-                else:
-                    implicit_devices.remove(smallest_implicit)
-                    storage.destroyDevice(smallest_implicit)
+                # and the request, make the implicit partitions smaller in
+                # attempt to make space for the request
+                for implicit_req in implicit_devices:
+                    implicit_req.size = FALLBACK_DEFAULT_PART_SIZE
return implicit_devices
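In isolation, the new behavior of this hunk amounts to the following sketch. Sizes are plain MiB integers and the function name is a stand-in; the real code mutates blivet request objects and relies on growPartitions() to grow them back afterwards:

```python
# Matches the constant added in the patch (the thread later bumps it to
# 256 MiB for BTRFS); plain integer MiB instead of blivet's Size.
FALLBACK_DEFAULT_PART_SIZE = 10

def shrink_implicits(implicit_sizes, request_size, biggest_free):
    """If the request plus the smallest implicit partition does not fit
    into the biggest free region, clamp every implicit request down to the
    fallback size so the explicit request can be allocated; growing back
    is left to the normal grow logic."""
    if request_size + min(implicit_sizes) > biggest_free:
        return [FALLBACK_DEFAULT_PART_SIZE] * len(implicit_sizes)
    return implicit_sizes

# 1.8 GiB swap on a ~2 GiB DASD with two 500 MiB implicit requests:
print(shrink_implicits([500, 500], 1843, 2048))  # [10, 10]
# Plenty of room -> requests untouched:
print(shrink_implicits([500, 500], 100, 2048))   # [500, 500]
```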
On Wed, Jan 21, 2015 at 10:06:34AM +0100, Vratislav Podzimek wrote:
+                # and the request, make the implicit partitions smaller in
+                # attempt to make space for the request
+                for implicit_req in implicit_devices:
+                    implicit_req.size = FALLBACK_DEFAULT_PART_SIZE
What happens if this new size is still not small enough? Does that need to be dealt with here or is it caught someplace else?
On Wed, 2015-01-21 at 10:06 +0100, Vratislav Podzimek wrote:
In case of BTRFS/LVM autopartitioning we schedule one partition on every disk used for the installation to be part of the new BTRFS/LVM setup. But in case some other partition needs to be allocated (e.g. swap in case of BTRFS) we need to fit this partition to some disk together with one of the implicitly scheduled ones. In case we find out the implicitly scheduled partitions are too big to make this possible we need to make them smaller so that autopartitioning works.
Resolves: rhbz#1171116 Signed-off-by: Vratislav Podzimek vpodzime@redhat.com
 blivet/devices.py      |  7 ++++++-
 blivet/partitioning.py | 20 +++++--------------
 2 files changed, 11 insertions(+), 16 deletions(-)
diff --git a/blivet/devices.py b/blivet/devices.py
index f67c5b5..576fe07 100644
--- a/blivet/devices.py
+++ b/blivet/devices.py
@@ -53,6 +53,11 @@ from .i18n import P_
 import logging
 log = logging.getLogger("blivet")
+DEFAULT_PART_SIZE = Size("500MiB")
+# in case the default partition size doesn't fit
+FALLBACK_DEFAULT_PART_SIZE = Size("10MiB")
dlehman has pointed out that this is too small for BTRFS which CURRENTLY requires 16 MiB as the minimum member size and 256 MiB as the minimum volume size. So this should be 256 MiB which still fixes the issue reported in the bug report (tested). A proper general solution to this issue would be more complicated and much more invasive.
On 01/26/2015 03:48 AM, Vratislav Podzimek wrote:
On Wed, 2015-01-21 at 10:06 +0100, Vratislav Podzimek wrote:
In case of BTRFS/LVM autopartitioning we schedule one partition on every disk used for the installation to be part of the new BTRFS/LVM setup. But in case some other partition needs to be allocated (e.g. swap in case of BTRFS) we need to fit this partition to some disk together with one of the implicitly scheduled ones. In case we find out the implicitly scheduled partitions are too big to make this possible we need to make them smaller so that autopartitioning works.
Resolves: rhbz#1171116 Signed-off-by: Vratislav Podzimek vpodzime@redhat.com
 blivet/devices.py      |  7 ++++++-
 blivet/partitioning.py | 20 +++++--------------
 2 files changed, 11 insertions(+), 16 deletions(-)
diff --git a/blivet/devices.py b/blivet/devices.py
index f67c5b5..576fe07 100644
--- a/blivet/devices.py
+++ b/blivet/devices.py
@@ -53,6 +53,11 @@ from .i18n import P_
 import logging
 log = logging.getLogger("blivet")
+DEFAULT_PART_SIZE = Size("500MiB")
+# in case the default partition size doesn't fit
+FALLBACK_DEFAULT_PART_SIZE = Size("10MiB")
dlehman has pointed out that this is too small for BTRFS which CURRENTLY requires 16 MiB as the minimum member size and 256 MiB as the minimum volume size. So this should be 256 MiB which still fixes the issue reported in the bug report (tested). A proper general solution to this issue would be more complicated and much more invasive.
Looks okay to me with 256 MiB as fallback. ACK.
David
anaconda-patches@lists.fedorahosted.org