mdadm: How to access Windows RAIDs?

Joshua C. joshuacov at gmail.com
Sun Dec 2 12:15:42 UTC 2012


I have two 1 TB disks that have been running in RAID1 on the Intel
P67 board's built-in controller. The array was created in the BIOS
(UEFI) and is actually fake (software) RAID. Windows sees only its
own (NTFS) RAID and everything is fine there. Under Linux I set the
partition type to 0xfd (Linux RAID autodetect) and mdadm can
successfully see and access the Linux RAID partition.
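
As far as I understand it, the 0xfd autodetect type only applies to
native MD superblocks; Intel's IMSM metadata is assembled from
userspace instead. A rough sketch of the manual assembly, assuming
/dev/sdb and /dev/sdc are the member disks as on my box (take it as
a sketch, not a verified recipe):

=============================
# assemble the IMSM container from the raw member disks
mdadm --assemble /dev/md/imsm0 --metadata=imsm /dev/sdb /dev/sdc
# start the RAID volume(s) inside the container
mdadm --incremental /dev/md/imsm0
# or just let mdadm find everything on its own
mdadm --assemble --scan
=============================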

However, mdadm cannot access the Windows RAID, and each partition
shows up three(!) times in KDE's Dolphin. So every time I
(unintentionally) click on a raw disk partition instead of the mdadm
partition, I trigger a check of the RAID, which takes about three
hours.
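
A udev rule along the following lines is what I would try to hide the
raw member devices from udisks (which Dolphin uses to list devices).
The file name is arbitrary and sdb/sdc are the member disks on my
box, so take this as an untested sketch rather than a known fix:

=============================
# /etc/udev/rules.d/99-hide-raid-members.rules
# Hide the raw IMSM member disks and their partitions from udisks2 so
# that Dolphin only offers the md partitions for mounting.
ACTION=="add|change", KERNEL=="sdb*|sdc*", ENV{UDISKS_IGNORE}="1"
# udisks1 uses a different property for the same purpose:
ACTION=="add|change", KERNEL=="sdb*|sdc*", ENV{UDISKS_PRESENTATION_HIDE}="1"
=============================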

How can I make mdadm access the Windows RAID and show ONLY the RAID
partitions in Dolphin?

When I omit "rd.dm=0" and "rd.md=0" from the kernel command line, I
see only the RAID partitions in Dolphin, but for some reason I cannot
access them (so this is not a solution). Here is what mdadm says
about my disks (with kernel 3.6.8.fc17.x86_64):


=============================
[root@localhost]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdb[1] sdc[0]
      976759808 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive sdb[1](S) sdc[0](S)
      5288 blocks super external:imsm

unused devices: <none>
=============================
[root@localhost]# mdadm --examine /dev/md126
/dev/md126:
   MBR Magic : aa55
Partition[0] :   1743810560 sectors at         2048 (type fd)
Partition[1] :    209711104 sectors at   1743812608 (type 07)
=============================
[root@localhost]# mdadm --examine /dev/md127
/dev/md127:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : 88e429ad
         Family : 1e27d1ba
     Generation : 0001acff
     Attributes : All supported
           UUID : 516f8fc8:bd5e663f:c254ea22:d30bb530
       Checksum : 2438ebd3 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : WD-xxxxxxxxxxxx
          State : active
             Id : 00000002
    Usable Size : 1953519880 (931.51 GiB 1000.20 GB)

[Volume0]:
           UUID : fa85ec01:25e9c177:da74f1e6:03284c35
     RAID Level : 1
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 0
     Array Size : 1953519616 (931.51 GiB 1000.20 GB)
   Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 7630936
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk01 Serial : WD-xxxxxxxxxxxx
          State : active
             Id : 00000003
    Usable Size : 1953519880 (931.51 GiB 1000.20 GB)
=============================
[root@localhost]# cat /proc/partitions
major minor  #blocks  name

   7        0         16 loop0
   7        1       2796 loop1
   7        2     900352 loop2
   7        3   10485760 loop3
   7        4     524288 loop4
   8        0   62522712 sda
   8        1   18350080 sda1
   8        2   44171264 sda2
   8       16  976762584 sdb
   8       17  871905280 sdb1
   8       18  104855552 sdb2
   8       32  976762584 sdc
   8       33  871905280 sdc1
   8       34  104855552 sdc2
  11        0    1048575 sr0
   8       48    3903488 sdd
   8       49    3903456 sdd1
 253        0   10485760 dm-0
 253        1   10485760 dm-1
   9      126  976759808 md126
 259        0  871905280 md126p1
 259        1  104853504 md126p2
=============================
[root@localhost]# mdadm --examine /dev/md126p2
/dev/md126p2:
   MBR Magic : aa55
Partition[0] :   1917848077 sectors at      6579571 (type 70)
Partition[1] :   1818575915 sectors at   1953251627 (type 43)
Partition[2] :           10 sectors at    225735265 (type 72)
Partition[3] :        51890 sectors at   2642411520 (type 00)
=============================
[root@localhost]# dmesg | grep md
[    5.966318] md: bind<sdc>
[    6.168403] md: bind<sdb>
[    6.171261] md: bind<sdc>
[    6.171463] md: bind<sdb>
[    6.213554] md: raid1 personality registered for level 1
[    6.215861] md/raid1:md126: active with 2 out of 2 mirrors
[    6.216006] md126: detected capacity change from 0 to 1000202043392
[    6.216775]  md126: p1 p2
[    6.216955] md126: p2 size 209711104 extends beyond EOD, truncated
[    6.226735] md: md126 switched to read-write mode.
[    6.436537] md: export_rdev(sdc)
[    6.436733] md: export_rdev(sdb)
[  227.889293] EXT4-fs (md126p1): warning: maximal mount count reached, running e2fsck is recommended
[  227.929448] EXT4-fs (md126p1): mounted filesystem with ordered data mode. Opts: (null)
[  227.929454] SELinux: initialized (dev md126p1, type ext4), uses xattr
=============================
[root@localhost]# mdadm --detail --scan
ARRAY /dev/md/imsm0 metadata=imsm UUID=516f8fc8:bd5e663f:c254ea22:d30bb530
ARRAY /dev/md/Volume0_0 container=/dev/md/imsm0 member=0 UUID=fa85ec01:25e9c177:da74f1e6:03284c35
=============================
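
What I plan to try next (untested, assuming the standard Fedora
locations): persist that scan output so the arrays are assembled the
same way at every boot, and rebuild the initramfs so dracut picks up
the configuration.

=============================
# append the detected arrays to the mdadm configuration
mdadm --detail --scan >> /etc/mdadm.conf
# rebuild the initramfs for the running kernel
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)
=============================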


--joshua

