On Sat, Aug 01, 2009 at 20:20:58 -0400,
Tony Nelson <tonynelson(a)georgeanelson.com> wrote:
OK, so you want to rewrite those sectors, but don't know where they
are. `fsck` is unlikely to help: most sectors hold file data or are
free rather than filesystem metadata. (As your sectors are known bad,
they probably are not free.)
There are various rescue tools. You could use `dd` on every file
(from `find`, perhaps), and note where it complains. Some files might
be fixable, others might be recoverable or replaceable, and some will
just be damaged. You could use `badblocks` on the disk, and just nuke
the offending sectors, accepting the damage to unknown files.
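The `dd`-over-every-file approach can be sketched roughly as below. This demo
runs against a throwaway directory it creates itself; on the real disk you would
point `scan_root` at the affected mount point (e.g. `/mnt`) instead, and run as
a user that can read everything:

```shell
# Sketch: read every file end-to-end and record those dd cannot read.
# scan_root is a stand-in here; set it to the real mount point.
scan_root=$(mktemp -d)
printf 'hello' > "$scan_root/ok.txt"   # a readable file for the demo

bad_list=$(mktemp)
find "$scan_root" -type f -print0 |
while IFS= read -r -d '' f; do
    # dd exits nonzero on a read error (an unreadable sector in the file)
    dd if="$f" of=/dev/null bs=64k 2>/dev/null \
        || printf '%s\n' "$f" >> "$bad_list"
done
wc -l < "$bad_list"    # number of unreadable files found
```

Files listed in `$bad_list` are the ones sitting on bad sectors; everything
else can be ruled out.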
If you look at the output from the self-tests, they usually say which
sector caused the test to fail. Unfortunately you'll end up losing
whole filesystem blocks instead of single sectors. If you are using
RAID 1, you can copy the bad blocks back from the mirror. Otherwise
you can just write zeros or something over them. This will trigger
the pending reallocation.
There are some tools for some file systems for trying to figure out which
files (if any) were affected.
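Turning the LBA from the self-test log into a `dd` command is just arithmetic.
A minimal sketch, assuming 512-byte sectors (check yours with `smartctl -i`)
and a hypothetical LBA; the actual overwrite is left commented out because it
destroys the sector's contents:

```shell
lba=1234567        # hypothetical failing LBA from the self-test log
sector_size=512    # assumption; some drives use 4096-byte sectors
offset=$((lba * sector_size))
echo "byte offset of bad sector: $offset"

# Overwrite just that one sector (DANGEROUS -- destroys its data and
# triggers the pending reallocation). /dev/sdX is a placeholder:
#   dd if=/dev/zero of=/dev/sdX bs=$sector_size count=1 seek=$lba
```

The badblockhowto linked below walks through the same calculation, including
how to map the offset back to a filesystem block and then to a file.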
You might want to look at:
http://smartmontools.sourceforge.net/badblockhowto.html
I see that Auto Offline Data Collection is enabled. Usually the
"every-four-hour" scan will recover sectors as they go bad, so it is
a bad sign that some have accumulated while it was enabled.
Only if they get a good read. If the drive can't read a sector, it
won't reallocate it. This lets you decide when to give up on the data.