The problem I have been able to reliably reproduce is logical volumes that are still active on the system when the storage code goes to commit changes to disk.

Added a status() property to LVMLogicalVolumeDevice that calls lvm.lvs() and reads the lv_attr field, looking for an 'a', which indicates the logical volume is active. This patch causes the rest of the LVM device status code to fall into place, and the devices are torn down/deactivated correctly.
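For reference, the lvs output being parsed looks roughly like this (the command is the one the patch below builds; the VG/LV names, sizes, and attr strings are only examples, not output from the affected machine):

  # lvs --noheadings --nosuffix --units m -o lv_name,lv_uuid,lv_size,lv_attr VolGroup00
    LogVol00  <uuid>  19456.00  -wi-ao
    LogVol01  <uuid>   2048.00  -wi---

The 'a' in the lv_attr state field is what marks the first LV as active; the second one is inactive.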
Include the lv_attr field in the hash table returned by lvm.lvs(). It is needed to tell whether the logical volume is active or not.
---
 storage/devicelibs/lvm.py |    7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/storage/devicelibs/lvm.py b/storage/devicelibs/lvm.py
index 2196170..e92350b 100644
--- a/storage/devicelibs/lvm.py
+++ b/storage/devicelibs/lvm.py
@@ -319,7 +319,7 @@ def vginfo(vg_name):
 def lvs(vg_name):
     args = ["lvs", "--noheadings", "--nosuffix"] + \
            ["--units", "m"] + \
-           ["-o", "lv_name,lv_uuid,lv_size"] + \
+           ["-o", "lv_name,lv_uuid,lv_size,lv_attr"] + \
            config_args + \
            [vg_name]
 
@@ -332,9 +332,10 @@ def lvs(vg_name):
         line = line.strip()
         if not line:
             continue
-        (name, uuid, size) = line.split()
+        (name, uuid, size, attr) = line.split()
         lvs[name] = {"size": size,
-                     "uuid": uuid}
+                     "uuid": uuid,
+                     "attr": attr}
 
     if not lvs:
         raise LVMError(_("lvs failed for %s" % vg_name))
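To illustrate what callers get back after this change, a rough sketch (the VG/LV names and values are made up; assumes the usual devicelibs import):

  from storage.devicelibs import lvm

  info = lvm.lvs("VolGroup00")   # hypothetical VG name
  # info might now look something like:
  #   {"LogVol00": {"size": "19456.00", "uuid": "...", "attr": "-wi-ao"},
  #    "LogVol01": {"size": "2048.00",  "uuid": "...", "attr": "-wi---"}}
  active = [name for (name, lv) in info.items() if lv["attr"].find('a') != -1]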
This should fix a majority, if not all, of the 'cannot commit to disk after 5 attempts' errors. I was hitting this today, and the cause I found was active logical volumes still around when the storage code wanted to commit changes to the disk. When a logical volume is active, libparted gets EBUSY when it tries to commit changes to the disk and tell the kernel to reread the partition table. If an LV is active, we need to deactivate it and its volume group before we start committing changes to disk.

Adding a status() property to LVMLogicalVolumeDevice that looks at the lv_attr field for an 'a' fixes the problem for me on F-11. All of the code to tear down LVM devices is there; we just weren't checking the LV status correctly.
---
 storage/devices.py |   16 ++++++++++++++++
 1 files changed, 16 insertions(+), 0 deletions(-)
diff --git a/storage/devices.py b/storage/devices.py
index 40501d7..817d019 100644
--- a/storage/devices.py
+++ b/storage/devices.py
@@ -2090,6 +2090,22 @@ class LVMLogicalVolumeDevice(DMDevice):
         """ Test if vg exits and if it has all pvs. """
         return self.vg.complete
 
+    @property
+    def status(self):
+        """ True if the LV is active, False otherwise. """
+        try:
+            lvstatus = lvm.lvs(self.vg.name)
+        except lvm.LVMError:
+            return False
+
+        try:
+            if lvstatus[self._name]['attr'].find('a') == -1:
+                return False
+            else:
+                return True
+        except KeyError:
+            return False
+
     def setup(self, intf=None):
         """ Open, or set up, a device. """
         log_method_call(self, self.name, status=self.status)
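Not part of the patch, just to spell out the path this enables: a minimal sketch, assuming the lvdeactivate()/vgdeactivate() helpers in storage/devicelibs/lvm.py and the attribute names lv.lvname and vg.lvs (both assumed here purely for illustration, not quoted from devices.py):

  # Sketch: once status reports correctly, an active LV can be deactivated
  # (and its VG, once no sibling LV is active) before libparted commits
  # changes to disk, which is what avoids the EBUSY failures.
  if lv.status:
      lvm.lvdeactivate(lv.vg.name, lv.lvname)    # lvname: assumed attribute name
  if not any(sib.status for sib in lv.vg.lvs):   # vg.lvs: assumed list of the VG's LVs
      lvm.vgdeactivate(lv.vg.name)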
Hi,
Good catch! The patches also look good.
Regards,
Hans
On 07/08/2009 06:31 AM, David Cantrell wrote:
> The problem I have been able to reliably reproduce is logical volumes that are still active on the system when the storage code goes to commit changes to disk.
>
> Added a status() property to LVMLogicalVolumeDevice that calls lvm.lvs() and reads the lv_attr field, looking for an 'a', which indicates the logical volume is active. This patch causes the rest of the LVM device status code to fall into place, and the devices are torn down/deactivated correctly.