<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Hi all, I'm a little confused by something I saw while checking our
servers today. cat /proc/mdstat shows that one device, md0, is
inactive, and I'm not sure why. I did a bit more digging and testing
with smartctl, and it says that /dev/sdg (part of md0) is failing,
estimated to fail within 24 hours. But if I run df -h, md0 doesn't
even show up at all. I was talking this over with a friend and we
disagreed: based on what smartctl says, I believe the drive is failing
but hasn't actually failed yet, while he doesn't think the drive is
the problem at all. Do you have any thoughts on this? And why would
md0 suddenly go inactive while still showing 2 working devices
(sdg, sdh)?<br>
<br>
<b>(/proc/mdstat)</b><br>
[root@csdatastandby3 bin]# cat /proc/mdstat <br>
Personalities : [raid1] [raid10] <br>
md125 : active raid10 sdf1[5] sdc1[2] sde1[4] sda1[0] sdb1[1]
sdd1[3]<br>
11720655360 blocks super 1.2 512K chunks 2 near-copies [6/6]
[UUUUUU]<br>
<br>
md126 : active raid1 sdg[1] sdh[0]<br>
463992832 blocks super external:/md0/0 [2/2] [UU]<br>
<br>
md0 : inactive sdh[1](S) sdg[0](S)<br>
6306 blocks super external:imsm<br>
<br>
unused devices: <none><br>
[root@csdatastandby3 bin]#<br>
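In the meantime I hacked together a tiny check to flag inactive arrays so we'd at least get alerted if another one goes; sharing it in case it's useful context (the here-doc is just a trimmed copy of my mdstat output so the snippet runs on its own; on the box it would read /proc/mdstat directly):

```shell
# quick check: print the name of any md array that mdstat reports
# as inactive. Feeding a saved copy of my output here so this is
# self-contained; normally the awk would read /proc/mdstat.
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1] [raid10]
md126 : active raid1 sdg[1] sdh[0]
md0 : inactive sdh[1](S) sdg[0](S)
EOF
awk '/^md/ && / inactive / {print $1}' /tmp/mdstat.sample
```

That prints "md0" for the output above, which matches what I'm seeing.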
<br>
<br>
<b>(smartctl -H /dev/sdg)</b><br>
[root@csdatastandby3 bin]# smartctl -H /dev/sdg <br>
smartctl 5.43 2012-06-30 r3573
[x86_64-linux-2.6.32-431.17.1.el6.x86_64] (local build)<br>
Copyright (C) 2002-12 by Bruce Allen,
<a class="moz-txt-link-freetext" href="http://smartmontools.sourceforge.net">http://smartmontools.sourceforge.net</a><br>
<br>
=== START OF READ SMART DATA SECTION ===<br>
SMART overall-health self-assessment test result: FAILED!<br>
Drive failure expected in less than 24 hours. SAVE ALL DATA.<br>
Failed Attributes:<br>
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE
UPDATED WHEN_FAILED RAW_VALUE<br>
5 Reallocated_Sector_Ct 0x0033 002 002 036 Pre-fail
Always FAILING_NOW 32288<br>
<br>
[root@csdatastandby3 bin]# <br>
<br>
<b>(df -h)</b><br>
[root@csdatastandby3 bin]# df -h<br>
Filesystem Size Used Avail Use% Mounted on<br>
/dev/md126p4 404G 4.3G 379G 2% /<br>
tmpfs 16G 172K 16G 1% /dev/shm<br>
/dev/md126p2 936M 74M 815M 9% /boot<br>
/dev/md126p1 350M 272K 350M 1% /boot/efi<br>
/dev/md125 11T 4.2T 6.1T 41% /data<br>
[root@csdatastandby3 bin]# <br>
<br>
<br>
<b>(mdadm -D /dev/md0)</b><br>
[root@csdatastandby3 bin]# mdadm -D /dev/md0<br>
/dev/md0:<br>
Version : imsm<br>
Raid Level : container<br>
Total Devices : 2<br>
<br>
Working Devices : 2<br>
<br>
<br>
UUID : 32c1fbb7:4479296b:53c02d9b:666a08f6<br>
Member Arrays : /dev/md/Volume0<br>
<br>
Number Major Minor RaidDevice<br>
<br>
0 8 96 - /dev/sdg<br>
1 8 112 - /dev/sdh<br>
[root@csdatastandby3 bin]# <br>
<br>
<br>
thanks<br>
<br>
<br>
-dustink<br>
</body>
</html>