Expert Help needed (raid1+lvm) Did I destroy 2TB of backups?

Laurentiu Coica laurentiu.coica at gmail.com
Fri Nov 12 21:20:36 UTC 2010


Hi Dean,
I just ran a quick test for you in a virtual machine.

First, I built a similar environment with LVM and RAID1 over two
disks, with VG vg_medulla_bkup and LV lv_bkup.
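
For reference, this is roughly how I put the test environment together (a minimal sketch; the sizes and device names are just the ones from my VM, adjust them for a real system):

# build the RAID1 array over the two partitions
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# LVM on top of the md device: PV -> VG -> LV
pvcreate /dev/md1
vgcreate vg_medulla_bkup /dev/md1
lvcreate -L 1G -n lv_bkup vg_medulla_bkup

# filesystem and mount point
mkfs.ext4 /dev/vg_medulla_bkup/lv_bkup
mkdir -p /mnt/deanm
mount /dev/vg_medulla_bkup/lv_bkup /mnt/deanm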

[root at rhel6 ~]# mdadm --assemble /dev/md1 /dev/sdb1 /dev/sdc1
mdadm: /dev/md1 has been started with 2 drives.

[root at rhel6 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb1[0] sdc1[1]
      2095415 blocks super 1.2 [2/2] [UU]

unused devices: <none>

[root at rhel6 ~]# pvscan
  PV /dev/sda2   VG vg00              lvm2 [7.90 GiB / 1.04 GiB free]
  PV /dev/md1    VG vg_medulla_bkup   lvm2 [2.00 GiB / 1020.00 MiB free]
  Total: 2 [9.89 GiB] / in use: 2 [9.89 GiB] / in no VG: 0 [0   ]

[root at rhel6 ~]# vgchange -a y vg_medulla_bkup
  1 logical volume(s) in volume group "vg_medulla_bkup" now active

[root at rhel6 ~]# mount /dev/vg_medulla_bkup/lv_bkup /mnt/deanm/

[root at rhel6 ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg00-root
                       6047492   1162084   4578208  21% /
tmpfs                   510236         0    510236   0% /dev/shm
/dev/sda1                99150     25380     68650  27% /boot
/dev/mapper/vg_medulla_bkup-lv_bkup
                       1032088     34088    945572   4% /mnt/deanm

Then I removed disks /dev/sdb and /dev/sdc from the running virtual machine.
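
(I did the removal by hot-unplugging the two virtual disks in the hypervisor, so there is no command for it in the transcript. On a system where you cannot pull the disks, something like the SCSI sysfs delete interface should give a similar sudden-disappearance effect; this is only a suggestion, I did not run it here:)

# ask the kernel to drop the disks, as if they were unplugged
echo 1 > /sys/block/sdb/device/delete
echo 1 > /sys/block/sdc/device/delete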

[root at rhel6 ~]# pvscan
  /dev/md1: read failed after 0 of 1024 at 0: Input/output error
  /dev/md1: read failed after 0 of 1024 at 2145583104: Input/output error
  /dev/md1: read failed after 0 of 1024 at 2145693696: Input/output error
  /dev/md1: read failed after 0 of 1024 at 4096: Input/output error
  /dev/md1: read failed after 0 of 2048 at 0: Input/output error
  /dev/vg_medulla_bkup/lv_bkup: read failed after 0 of 4096 at 1073676288: Input/output error
  /dev/vg_medulla_bkup/lv_bkup: read failed after 0 of 4096 at 1073733632: Input/output error
  /dev/vg_medulla_bkup/lv_bkup: read failed after 0 of 4096 at 0: Input/output error
  /dev/vg_medulla_bkup/lv_bkup: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdb1: read failed after 0 of 1024 at 2146697216: Input/output error
  /dev/sdb1: read failed after 0 of 1024 at 2146754560: Input/output error
  /dev/sdb1: read failed after 0 of 1024 at 0: Input/output error
  /dev/sdb1: read failed after 0 of 1024 at 4096: Input/output error
  /dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
  /dev/sdc1: read failed after 0 of 1024 at 2146697216: Input/output error
  /dev/sdc1: read failed after 0 of 1024 at 2146754560: Input/output error
  /dev/sdc1: read failed after 0 of 1024 at 0: Input/output error
  /dev/sdc1: read failed after 0 of 1024 at 4096: Input/output error
  /dev/sdc1: read failed after 0 of 2048 at 0: Input/output error
  PV /dev/sda2   VG vg00   lvm2 [7.90 GiB / 1.04 GiB free]
  Total: 1 [7.90 GiB] / in use: 1 [7.90 GiB] / in no VG: 0 [0   ]

[root at rhel6 ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg00-root
                       6047492   1162104   4578188  21% /
tmpfs                   510236         0    510236   0% /dev/shm
/dev/sda1                99150     25380     68650  27% /boot
/dev/mapper/vg_medulla_bkup-lv_bkup
                       1032088     34088    945572   4% /mnt/deanm

[root at rhel6 ~]# ls -la /mnt/deanm/
ls: reading directory /mnt/deanm/: Input/output error
total 0

----------

In the next step, I added disks /dev/sdb and /dev/sdc back to this virtual machine.

----------
[root at rhel6 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaa95dac1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         261     2096451   fd  Linux raid autodetect

[root at rhel6 ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xad4c606c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         261     2096451   fd  Linux raid autodetect

[root at rhel6 ~]# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sat Nov 13 00:42:57 2010
     Raid Level : raid1
     Array Size : 2095415 (2046.65 MiB 2145.70 MB)
  Used Dev Size : 2095415 (2046.65 MiB 2145.70 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Nov 13 01:08:50 2010
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1

       0       8       17        -      faulty spare   /dev/sdb1

[root at rhel6 ~]# mdadm  /dev/md1 -r /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md1

[root at rhel6 ~]# mdadm  /dev/md1 -a /dev/sdb1
mdadm: re-added /dev/sdb1
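
In my test I simply hot-removed and re-added the member. On your real array, before re-adding anything, I would first look at the md superblock on each partition and compare the Events / Update Time fields, to be sure which copy is the freshest (the device names here are from my test, adjust them to yours):

# inspect the md superblock on each RAID member
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdc1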

[root at rhel6 ~]# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sat Nov 13 00:42:57 2010
     Raid Level : raid1
     Array Size : 2095415 (2046.65 MiB 2145.70 MB)
  Used Dev Size : 2095415 (2046.65 MiB 2145.70 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Nov 13 01:10:54 2010
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 36% complete

           Name : rhel6.local:1  (local to host rhel6.local)
           UUID : 0794f631:cdffe9e8:a25618e5:f98452f8
         Events : 101

    Number   Major   Minor   RaidDevice State
       0       8       17        0      spare rebuilding   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

[root at rhel6 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb1[0] sdc1[1]
      2095415 blocks super 1.2 [2/1] [_U]
      [===========>.........]  recovery = 57.2% (1200000/2095415) finish=0.0min speed=200000K/sec

unused devices: <none>
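
If you want to follow the rebuild until it reaches [UU], something like this works:

# refresh the rebuild status every few seconds
watch -n 5 cat /proc/mdstat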

Check the output of the pvscan command:

[root at rhel6 ~]# pvscan
  PV /dev/sda2   VG vg00              lvm2 [7.90 GiB / 1.04 GiB free]
  PV /dev/md1    VG vg_medulla_bkup   lvm2 [2.00 GiB / 1020.00 MiB free]
  Total: 2 [9.89 GiB] / in use: 2 [9.89 GiB] / in no VG: 0 [0   ]

Reactivate the VG:

[root at rhel6 ~]# vgchange -a y vg_medulla_bkup
  1 logical volume(s) in volume group "vg_medulla_bkup" now active


[root at rhel6 ~]# mount /dev/vg_medulla_bkup/lv_bkup /mnt/deanm/

[root at rhel6 ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg00-root
                       6047492   1162148   4578144  21% /
tmpfs                   510236         0    510236   0% /dev/shm
/dev/sda1                99150     25380     68650  27% /boot
/dev/mapper/vg_medulla_bkup-lv_bkup
                       1032088     34088    945572   4% /mnt/deanm

[root at rhel6 ~]# ls -la /mnt/deanm/
total 24
drwxr-xr-x. 3 root root  4096 Nov 13 00:44 .
drwxr-xr-x. 3 root root  4096 Nov 13 00:44 ..
drwx------. 2 root root 16384 Nov 13 00:44 lost+found
[root at rhel6 ~]#
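
So after re-adding the disks and letting the resync run, the array, the VG and the filesystem all came back intact. For your real 2 TB backup volume the same sequence should apply; I would just mount it read-only the first time, to check the backups without writing anything (device and mount point names below are from my test, adjust them to your system):

mdadm --assemble /dev/md1 /dev/sdb1 /dev/sdc1
vgchange -a y vg_medulla_bkup
# read-only mount, just to verify the data is there
mount -o ro /dev/vg_medulla_bkup/lv_bkup /mnt/deanm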


Regards,

-- 
Laurentiu Coica

