Dear All,
I am in desperate need of LVM data rescue for my server. I have a VG called vg_hosting consisting of 4 PVs, each on a separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1). A single LV, lv_home, was created to use all the space of the 4 PVs.
Right now, the third hard drive is damaged, and therefore the third PV (/dev/sdc1) cannot be accessed anymore. I would like to recover whatever is left on the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
I have tried with the following:
1. Removing the broken PV:
# vgreduce --force vg_hosting /dev/sdc1
  Physical volume "/dev/sdc1" still in use
# pvmove /dev/sdc1
  No extents available for allocation
2. Replacing the broken PV:
I was able to create a new PV and restore the VG Config/meta data:
# pvcreate --restorefile ... --uuid ... /dev/sdc1
# vgcfgrestore --file ... vg_hosting
However, vgchange would give this error:
# vgchange -a y
  device-mapper: resume ioctl on failed: Invalid argument
  Unable to resume vg_hosting-lv_home (253:4)
  0 logical volume(s) in volume group "vg_hosting" now active
Could someone help me, please? I'm in dire need of help to save the data, or at least some of it if possible.
Regards, Khem
On Fri, Feb 27, 2015 at 5:35 PM, Khemara Lyn lin.kh@wicam.com.kh wrote:
Dear All,
I am in desperate need of LVM data rescue for my server. I have a VG called vg_hosting consisting of 4 PVs, each on a separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1). A single LV, lv_home, was created to use all the space of the 4 PVs.
Mirrored? Or linear (the default)? If this is linear, then it's like one big hard drive. The single-drive equivalent of losing 1 of 4 drives in a linear layout would be ablating 1/4 of the surface of the drive, making it neither readable nor writable. Because critical filesystem metadata is distributed across the whole volume, the filesystem is almost certainly irreparably damaged.[1]
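If the metadata on the surviving PVs is still readable, something like this should show whether the LV is linear, striped, or mirrored, and which devices it sits on (segtype and devices are standard lvs report fields, not anything specific to your setup):

# lvs -a -o +segtype,devices vg_hosting

A segtype of linear or striped means no redundancy; mirror or raid1 would change the picture.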
Right now, the third hard drive is damaged, and therefore the third PV (/dev/sdc1) cannot be accessed anymore.
Damaged how? Is it dead?
What file system?
I would like to recover whatever is left on the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
I have tried with the following:
- Removing the broken PV:
# vgreduce --force vg_hosting /dev/sdc1
  Physical volume "/dev/sdc1" still in use
# pvmove /dev/sdc1
  No extents available for allocation
- Replacing the broken PV:
I was able to create a new PV and restore the VG Config/meta data:
# pvcreate --restorefile ... --uuid ... /dev/sdc1
# vgcfgrestore --file ... vg_hosting
However, vgchange would give this error:
# vgchange -a y
  device-mapper: resume ioctl on failed: Invalid argument
  Unable to resume vg_hosting-lv_home (253:4)
  0 logical volume(s) in volume group "vg_hosting" now active
Could someone help me, please?
# vgcfgrestore -tv vg_hosting
If this produces some viable sign of success then do it again without the -t. If you get scary messages, I advise not proceeding. If the command without -t succeeds then try this:
# lvs -a -o +devices
In any case, there's a huge hole where both filesystem metadata and file data were located, so I'd be shocked (like, really shocked) if either ext4 or XFS will mount, even read-only. So I expect this is going to be a scraping operation with testdisk or debugfs.
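If it does come to scraping, one cautious approach is to work from an image rather than the damaged LV itself (the paths below are only examples, and this assumes the LV can be activated at all and that you have space for the image):

# ddrescue -d /dev/vg_hosting/lv_home /backup/lv_home.img /backup/lv_home.map
# debugfs -c /backup/lv_home.img
# photorec /backup/lv_home.img

debugfs -c opens an ext filesystem in catastrophic mode for browsing and rdump; photorec (part of the testdisk package) carves raw files regardless of filesystem state.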
[1] Btrfs can survive this to some degree because by default the filesystem metadata is dup on a single drive (except SSD) and raid1 on multiple devices. So while you lose the files (data) on the missing drive, the fs itself is intact and will even mount normally.
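For comparison, the profiles in use on a mounted Btrfs filesystem can be checked with the first command below, and a multi-device Btrfs with one member missing can usually still be mounted read-only in degraded mode; the mount point and device are just examples:

# btrfs filesystem df /mnt
# mount -o ro,degraded /dev/sdb1 /mnt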
This inquiry was cross-posted here: http://lists.centos.org/pipermail/centos/2015-February/150351.html, which is contrary to the guidelines against cross-posting: http://fedoraproject.org/wiki/Mailing_list_guidelines#Do_not_Cross_Post
I suggest this thread be closed.
Chris Murphy
On 02/27/2015 06:21 PM, Chris Murphy wrote:
This inquiry was cross-posted here: http://lists.centos.org/pipermail/centos/2015-February/150351.html, which is contrary to the guidelines against cross-posting: http://fedoraproject.org/wiki/Mailing_list_guidelines#Do_not_Cross_Post
I suggest this thread be closed.
I'd like to see it stay open. The OP was not trolling and is obviously in desperate need of a solution. I'm sure every one of us has lost critical data at one time or another.
LVM is still considered arcane by many and any shared wisdom regarding OP's situation would accrue greatly to our cumulative knowledge base.
I can attest that I once lost a volume that was so important to me that I got sick to my stomach. I am loath to wish that upon anyone.
my 2p, Mike Wright
On Fri, Feb 27, 2015 at 10:07 PM, Mike Wright nobody@nospam.hostisimo.com wrote:
I'd like to see it stay open. The OP was not trolling and is obviously in desperate need of a solution. I'm sure every one of us has lost critical data at one time or another.
I suggested closing this thread mainly because there were already a bunch of replies on the CentOS list, and it seemed the best place to carry on the discussion rather than fragmenting it.
I went ahead and tried to replicate the conditions (in an idealized way) in a VM with XFS, ext4, and Btrfs, and simulated the failure.
tl;dr: Btrfs fared the best by far, permitting all data on the surviving drives to be copied off; the ext4 fs imploded at the first ls command and nothing could be copied, so it's a scrape operation; and XFS allowed ~1/7 of the data to be copied, even though 3/4 of the drives were working.
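For anyone who wants to repeat the experiment, the setup can be approximated with loop devices instead of real disks. The names, sizes, and loop numbers below are illustrative only, and the Btrfs comparison would put mkfs.btrfs directly on the four loop devices rather than on an LV:

# truncate -s 1G disk0.img disk1.img disk2.img disk3.img
# for f in disk*.img; do losetup -f --show $f; done
# pvcreate /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
# vgcreate vg_test /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
# lvcreate -l 100%FREE -n lv_test vg_test
# mkfs.xfs /dev/vg_test/lv_test
# mount /dev/vg_test/lv_test /mnt && cp -a /some/test/data /mnt && umount /mnt
# vgchange -a n vg_test
# losetup -d /dev/loop2

Detaching the third loop device leaves vg_test with a missing PV, so the recovery steps can be tried against it without risking real data.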
The main solution, I think, in the LVM dead PV case is this command:
# vgchange -a y --activationmode partial
This makes the LV active with the PV missing. The less change in a case like this, the better, in order to avoid user-induced data loss (which is really, really common), so I don't recommend removing or replacing the dead PV. If the LV type were mirror (legacy) or raid1, it would be a different story altogether.
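A possible follow-on once partial activation works, just to get the data off with as little writing as possible (the mount point and backup path are examples; norecovery/noload skip log or journal replay, which may matter with a chunk of the volume missing):

# vgchange -a y --activationmode partial vg_hosting
# mount -o ro,norecovery /dev/vg_hosting/lv_home /mnt/rescue
# cp -a /mnt/rescue/. /path/to/backup/

That norecovery option is for XFS; the ext4 equivalent is -o ro,noload.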
Details are in the CentOS thread I previously included the URL for.
LVM is still considered arcane by many and any shared wisdom regarding OP's situation would accrue greatly to our cumulative knowledge base.
Quite.
I can attest that I once lost a volume that was so important to me that I got sick to my stomach. I am loath to wish that upon anyone.
Backups!
On Fri, 2015-02-27 at 21:07 -0800, Mike Wright wrote:
I'd like to see it stay open. The OP was not trolling and is obviously in desperate need of a solution. I'm sure every one of us has lost critical data at one time or another.
The strictures against cross-posting are there for a reason. Anyone who isn't subscribed to *all* the lists involved is going to see only a partial view of any subsequent thread, reply only to those lists they are on, and cause the whole thing to degenerate into multiple subsets of people not listening to each other.
poc