The first commit just removes some old, naïve comments from the code. The second and third commits make our calculations for LVM on RAID much more precise and realistic: in a VG where some LV was requested to grow to its maximum size, the leftover free space drops from ~150 MiB to ~4-8 MiB (1 or 2 extents). More importantly, the calculations now reflect the real calculations that happen in the "storage land below us" and they make sense. :)
We still have the "emergency brake" that makes an LV smaller if it's about to be created in a VG that doesn't have enough space for it, so this should be safe. Plus I'm only proposing this for *master* right now, and I'll make anaconda's kickstart tests stricter so that they reveal any potential issues this could cause, as well as any future changes that would break this.
From: Vratislav Podzimek vpodzime@redhat.com
It's not going to be a working/usable reality any time soon, so stop pretending otherwise as that's just confusing.
---
 blivet/devices/lvm.py   | 3 ---
 blivet/formats/lvmpv.py | 2 --
 2 files changed, 5 deletions(-)
diff --git a/blivet/devices/lvm.py b/blivet/devices/lvm.py
index 04ac753..8047aec 100644
--- a/blivet/devices/lvm.py
+++ b/blivet/devices/lvm.py
@@ -326,9 +326,6 @@ def _removeParent(self, member):
     # We can't rely on lvm to tell us about our size, free space, &c
     # since we could have modifications queued, unless the VG and all of
     # its PVs already exist.
-    #
-    # -- liblvm may contain support for in-memory devices
-
     @property
     def isModified(self):
         """ Return True if the VG has changes queued that LVM is unaware of. """
diff --git a/blivet/formats/lvmpv.py b/blivet/formats/lvmpv.py
index c19221c..ee3e29e 100644
--- a/blivet/formats/lvmpv.py
+++ b/blivet/formats/lvmpv.py
@@ -78,8 +78,6 @@ def __init__(self, **kwargs):
         DeviceFormat.__init__(self, **kwargs)
         self.vgName = kwargs.get("vgName")
         self.vgUuid = kwargs.get("vgUuid")
-        # liblvm may be able to tell us this at some point, even
-        # for not-yet-created devices
         self.peStart = kwargs.get("peStart", lvm.LVM_PE_START)
         self.dataAlignment = kwargs.get("dataAlignment", Size(0))
From: Vratislav Podzimek vpodzime@redhat.com
libblockdev inherited from blivet a very precise algorithm, taken from mdadm's sources, to calculate the MD RAID superblock (meta data) size. However, in mdadm, this algorithm is applied to the size of a single member device, not to the size of the whole RAID array. That makes sense, because the superblock area is on every member and contains meta data about that particular member. Thus we need to pass the size of the smallest member device to the algorithm calculating the superblock size, not the total size of the whole RAID.
---
 blivet/devicelibs/raid.py | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/blivet/devicelibs/raid.py b/blivet/devicelibs/raid.py
index 0f0f1b4..17ea0e8 100644
--- a/blivet/devicelibs/raid.py
+++ b/blivet/devicelibs/raid.py
@@ -281,8 +281,7 @@ def get_size(self, member_sizes, num_members=None, chunk_size=None, superblock_s
             raise RaidError("superblock_size_func value of None is not acceptable")
 
         min_size = min(member_sizes)
-        total_space = self.get_net_array_size(num_members, min_size)
-        superblock_size = superblock_size_func(total_space)
+        superblock_size = superblock_size_func(min_size)
         min_data_size = self._trim(min_size - superblock_size, chunk_size)
         return self.get_net_array_size(num_members, min_data_size)
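For illustration, here is a minimal, self-contained sketch (not blivet's actual code) of why the superblock size has to be computed from a single member's size. fake_superblock_size() and raid5_net_size() are made-up stand-ins for the mdadm-derived calculation and for get_net_array_size() on a 3-member RAID5; the only property that matters is that the reserved area grows with the size it is given, so feeding it the whole array's size overestimates the per-member superblock.

# Illustrative sketch only -- not blivet code; the helpers and sizes below
# are assumptions made for the example.

MiB = 1024 ** 2
GiB = 1024 ** 3

def fake_superblock_size(size):
    # assumption: reserve roughly 0.1 % of the given size, capped at 128 MiB;
    # the real mdadm-derived function differs, but it also grows with size
    return min(size // 1024, 128 * MiB)

def raid5_net_size(num_members, per_member_size):
    # RAID5 keeps one member's worth of parity
    return (num_members - 1) * per_member_size

member_sizes = [1 * GiB] * 3
min_size = min(member_sizes)

# patched behaviour: the superblock is sized from a single member
good = raid5_net_size(3, min_size - fake_superblock_size(min_size))

# old behaviour: the superblock was sized from the whole array's net size,
# so every member "lost" an area sized for the full array
total = raid5_net_size(3, min_size)
bad = raid5_net_size(3, min_size - fake_superblock_size(total))

print((good - bad) // MiB)  # prints 2 -- the old code under-reported 2 MiB here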
From: Vratislav Podzimek vpodzime@redhat.com
As explained in the inline comment, LVM aligns data and meta data according to the underlying block device. In case of an MD RAID device, where typically 512 KiB chunks are used, this may result in twice as much space allocated for meta data as with a plain partition. Since the default size is 1 MiB, we can safely just reserve double that size when an MD RAID device is used as a PV, because in the end we can be at most 1 MiB, and thus one extent, off, which is okay.
---
 blivet/devices/lvm.py | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/blivet/devices/lvm.py b/blivet/devices/lvm.py
index 8047aec..3396a8a 100644
--- a/blivet/devices/lvm.py
+++ b/blivet/devices/lvm.py
@@ -354,8 +354,25 @@ def size(self):
         """ The size of this VG """
         # TODO: just ask lvm if isModified returns False
 
-        # sum up the sizes of the PVs and align to pesize
-        return sum((max(Size(0), self.align(pv.size - pv.format.peStart)) for pv in self.pvs), Size(0))
+        # sum up the sizes of the PVs, subtract the unusable (meta data) space
+        # and align to pesize
+        # NOTE: we either specify data alignment in a PV or the default is used
+        #       which is both handled by pv.format.peStart, but LVM takes into
+        #       account also the underlying block device which means that e.g.
+        #       for an MD RAID device, it tries to align everything also to chunk
+        #       size and alignment offset of such device which may result in up
+        #       to a twice as big non-data area
+        # TODO: move this to either LVMPhysicalVolume's peStart property once
+        #       formats know about their devices or to a new LVMPhysicalVolumeDevice
+        #       class once it exists
+        avail = Size(0)
+        for pv in self.pvs:
+            if isinstance(pv, MDRaidArrayDevice):
+                avail += self.align(pv.size - 2 * pv.format.peStart)
+            else:
+                avail += self.align(pv.size - pv.format.peStart)
+
+        return avail
 
     @property
     def extents(self):
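To make the arithmetic concrete, here is a rough sketch (again not the real blivet classes; pe_start, the extent size and the PV sizes are assumptions) of the VG size calculation above, with the doubled reservation for a PV backed by an MD RAID array:

# Illustrative sketch only -- constants and helpers below are assumptions.

MiB = 1024 ** 2

PE_START = 1 * MiB       # LVM's default offset of the first physical extent
EXTENT_SIZE = 4 * MiB    # LVM's default extent size

def align_down(size, unit=EXTENT_SIZE):
    return (size // unit) * unit

def vg_size(pvs):
    # pvs: list of (size_in_bytes, is_md_raid) pairs
    avail = 0
    for size, is_md_raid in pvs:
        # on an MD RAID PV, LVM may also align the data area to the array's
        # chunk size / alignment offset, so reserve up to twice pe_start
        reserved = 2 * PE_START if is_md_raid else PE_START
        avail += align_down(size - reserved)
    return avail

plain = vg_size([(10 * 1024 * MiB, False)])  # PV on a plain partition
on_md = vg_size([(10 * 1024 * MiB, True)])   # PV on an MD RAID array
# after aligning down to the extent size, the doubled reservation costs at
# most one extent per PV -- here both come out to 10236 MiB
print(plain // MiB, on_md // MiB)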
I believe this is the last missing piece needed to call issue #31 resolved.
Added label: ACK.
Closed.