LVM problem
Gijs
info at bsnw.nl
Sat Sep 17 13:29:36 UTC 2011
Hello List,
I had some trouble with a RAID 5 setup, but I managed to get the array
activated again. However, when I then tried to activate the LVM volumes
on the RAID 5 array, LVM had trouble activating one of them. When I type
"lvchange -a y /dev/raid-5/data", it returns the following errors:
device-mapper: resume ioctl failed: Invalid argument
Unable to resume raid--5-data (253:2)
Checking dmesg, it says this:
device-mapper: table: 253:2: md127 too small for target:
start=5897914368, len=1908400128, dev_size=7806312448
By my calculation, the device is exactly 1 MiB smaller than LVM expects.
Why and how, I have no idea. The other logical volumes on the raid-5
physical volume activated with no problem at all; it's just this one.
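The 1 MiB figure can be checked directly from the dmesg line above; this is just a recomputation of the numbers already quoted (all sizes are in 512-byte sectors):

```shell
# Sector counts copied from the dmesg message: the dm target wants to
# map start+len sectors, but the underlying md device is smaller.
start=5897914368
len=1908400128
dev_size=7806312448     # current size of /dev/md127 in 512-byte sectors

needed=$((start + len))
shortfall=$((needed - dev_size))
echo "target needs : $needed sectors"
echo "device has   : $dev_size sectors"
echo "shortfall    : $shortfall sectors ($((shortfall / 2048)) MiB)"
# shortfall: 2048 sectors (1 MiB)
```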
My current setup consists of five 1 TB disks with a software RAID 5
array across matching partitions. On top of that sits LVM with three
logical volumes.
Some other info:
[root@localhost ~]# fdisk /dev/sda -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63     1943864      971901   fd  Linux raid autodetect
/dev/sda2         1943865  1953525167   975790651+  fd  Linux raid autodetect
(all disks have the exact same partitioning)
[root@localhost ~]# pvdisplay /dev/md127
--- Physical volume ---
PV Name /dev/md127
VG Name raid-5
PV Size 3.64 TiB / not usable 0
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 952919
Free PE 0
Allocated PE 952919
PV UUID ZmJtA4-cZBL-kuXT-53Ie-7o1C-7oro-uw5GB6
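As a sanity check on the pvdisplay figures: Total PE times the 4 MiB PE size, plus the 1 MiB data-area offset recorded as pe_start in the metadata backup further down, reproduces the dev_size LVM has on record. Notably, the size dmesg reports for the shrunken md device (7806312448 sectors) equals the extent area alone, i.e. the array appears to have lost exactly the 1 MiB that pe_start reserves:

```shell
pe_size=8192        # PE Size: 4 MiB = 8192 512-byte sectors
pe_count=952919     # Total PE from pvdisplay
pe_start=2048       # data-area offset from the metadata backup

extent_area=$((pe_count * pe_size))
pv_size=$((pe_start + extent_area))
echo "extent area : $extent_area sectors"   # matches dmesg dev_size=7806312448
echo "PV size     : $pv_size sectors"       # matches backup dev_size=7806314496
```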
[root@localhost ~]# lvdisplay /dev/raid-5
/dev/mapper/raid--5-data: open failed: No such file or directory
--- Logical volume ---
LV Name /dev/raid-5/data
VG Name raid-5
LV UUID vCg6p6-UGWG-zWqp-qLj3-V8nF-YkgQ-iMwglM
LV Write Access read/write
LV Status NOT available
LV Size 3.61 TiB
Current LE 947647
Segments 2
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Name /dev/raid-5/swap
VG Name raid-5
LV UUID uOqpQL-TJCA-TG33-3x9a-N1t2-kOF2-Ixsua7
LV Write Access read/write
LV Status available
# open 2
LV Size 1.00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:3
--- Logical volume ---
LV Name /dev/raid-5/root
VG Name raid-5
LV UUID WEP2KA-q1bm-o5VM-anlR-reuO-mvDA-E0Z1KC
LV Write Access read/write
LV Status available
# open 0
LV Size 19.59 GiB
Current LE 5016
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:4
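The extent counts in the lvdisplay output above add up exactly to the PV's Total PE, which is consistent with "Allocatable yes (but full)" — every extent on the PV is allocated, so there is no slack to absorb a smaller device:

```shell
data_le=947647   # Current LE of /dev/raid-5/data
swap_le=256      # Current LE of /dev/raid-5/swap
root_le=5016     # Current LE of /dev/raid-5/root

echo "allocated extents: $((data_le + swap_le + root_le))"   # Total PE is 952919
```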
[root@localhost ~]# cat /etc/lvm/backup/raid-5
# Generated by LVM2 version 2.02.84(2) (2011-02-09): Sat Sep 17 13:20:02 2011
contents = "Text Format Volume Group"
version = 1
description = "Created *after* executing 'vgchange -a y --sysinit'"
creation_host = "localhost.localdomain" # Linux localhost.localdomain 2.6.38.6-26.rc1.fc15.i686 #1 SMP Mon May 9 20:43:14 UTC 2011 i686
creation_time = 1316280002 # Sat Sep 17 13:20:02 2011
raid-5 {
	id = "A5i9Wi-1FKN-M3bf-yY7e-kd6b-WbeY-CkL4d5"
	seqno = 14
	status = ["RESIZEABLE", "READ", "WRITE"]
	flags = []
	extent_size = 8192 # 4 Megabytes
	max_lv = 256
	max_pv = 256
	metadata_copies = 0

	physical_volumes {

		pv0 {
			id = "ZmJtA4-cZBL-kuXT-53Ie-7o1C-7oro-uw5GB6"
			device = "/dev/md127" # Hint only
			status = ["ALLOCATABLE"]
			flags = []
			dev_size = 7806314496 # 3.6351 Terabytes
			pe_start = 2048
			pe_count = 952919 # 3.6351 Terabytes
		}
	}

	logical_volumes {

		data {
			id = "vCg6p6-UGWG-zWqp-qLj3-V8nF-YkgQ-iMwglM"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			segment_count = 2

			segment1 {
				start_extent = 0
				extent_count = 714688 # 2.72632 Terabytes
				type = "striped"
				stripe_count = 1 # linear
				stripes = [
					"pv0", 0
				]
			}
			segment2 {
				start_extent = 714688
				extent_count = 232959 # 909.996 Gigabytes
				type = "striped"
				stripe_count = 1 # linear
				stripes = [
					"pv0", 719960
				]
			}
		}

		swap {
			id = "uOqpQL-TJCA-TG33-3x9a-N1t2-kOF2-Ixsua7"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 256 # 1024 Megabytes
				type = "striped"
				stripe_count = 1 # linear
				stripes = [
					"pv0", 714688
				]
			}
		}

		root {
			id = "WEP2KA-q1bm-o5VM-anlR-reuO-mvDA-E0Z1KC"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 5016 # 19.5938 Gigabytes
				type = "striped"
				stripe_count = 1 # linear
				stripes = [
					"pv0", 714944
				]
			}
		}
	}
}
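Working backwards from this backup, the failing dm table line can be pinned down: the start and len in the dmesg message correspond exactly to segment2 of the data LV (again, just a recomputation from pe_start and extent_size in the file above). Since that segment ends at extent 719960 + 232959 = 952919, the very last PE of the PV, it is the one piece that no longer fits on the slightly smaller device, while swap, root, and data's segment1 all sit below the new end and activate fine:

```shell
extent_size=8192    # sectors per 4 MiB extent
pe_start=2048       # sectors reserved ahead of the first extent

seg2_pe=719960      # segment2 physical start extent ("pv0", 719960)
seg2_len=232959     # segment2 extent_count

echo "segment2 start: $((pe_start + seg2_pe * extent_size)) sectors"  # dmesg start=5897914368
echo "segment2 len  : $((seg2_len * extent_size)) sectors"            # dmesg len=1908400128
```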
So my question is: is it possible to tell LVM to run a check on the
volume and correct the size mismatch? Or is there some other way to tell
LVM to shrink the volume to the correct size, preferably without
touching any data on the volume itself, since I don't want to risk
losing data? Or maybe increase the size of the volume so that LVM can
actually activate the data volume?
Kind regards,
Gijs