Recovering/Restoring Boot Partition

Don Levey fedora-list at the-leveys.us
Mon Feb 24 15:05:48 UTC 2014


On 2/22/2014 21:37, Chris Murphy wrote:
> 
> On Feb 22, 2014, at 12:46 PM, Don Levey <fedora-list at the-leveys.us>
> wrote:
> 
>> On 2/21/2014 11:38, Chris Murphy wrote:
...
>> 
>>> What do you get for:
>>> 
>>> mdadm -E /dev/sda
>>> mdadm -E /dev/sdc
>>> 
>>> 
>> (this time, with commands, because I can learn. :-) )
>> 
>> [root@localhost don]# mdadm -E /dev/sda
>> /dev/sda: MBR Magic : aa55
>> Partition[0] :    102760449 sectors at           63 (type 07)
>> Partition[1] :       409600 sectors at    102760512 (type 83)
>> Partition[2] :    209326268 sectors at    103170112 (type 8e)
>> [root@localhost don]# mdadm -E /dev/sdc
>> /dev/sdc: MBR Magic : aa55
>> Partition[0] :    102760449 sectors at           63 (type 07)
>> Partition[1] :       409600 sectors at    102760512 (type 83)
>> Partition[2] :    209326268 sectors at    103170112 (type 8e)
>> 
>> So yes, they seem to be reporting the same thing.
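For what it's worth, the sector counts in that output can be sanity-checked with a little shell arithmetic (assuming the usual 512-byte sectors); both disks report the same three partitions:

```shell
# Sector counts copied from the mdadm -E output above; 512-byte sectors assumed.
sectors_to_mib() { echo $(( $1 * 512 / 1048576 )); }
sectors_to_mib 102760449   # Partition[0], type 07 (NTFS)  -> 50176 MiB, ~49 GiB
sectors_to_mib 409600      # Partition[1], type 83 (/boot) -> 200 MiB
sectors_to_mib 209326268   # Partition[2], type 8e (LVM)   -> 102210 MiB, ~100 GiB
```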
> 
> It's not finding the IMSM metadata if this is firmware raid. In the
> BIOS setup, you should find some setting related to this. Typically
> the option setting is either raid, ide, or ahci. Maybe it was set to
> raid when you initially did the install and then got changed to ide?
> I wouldn't change it from what it is now, but it's worth checking the
> setting to see if you recall it being changed recently.
> 
I will check when I can reboot next - hopefully tonight.

> Also show the result from each:
> 
> cat /proc/mdstat
> pvck /dev/sda3
> pvck /dev/sdc3
> 
[root@localhost don]# cat /proc/mdstat
Personalities :
unused devices: <none>


[root@localhost don]# pvck /dev/sda3
  WARNING: Failed to connect to lvmetad: No such file or directory.
Falling back to internal scanning.
  Found label on /dev/sda3, sector 1, type=LVM2 001
  Found text metadata area: offset=4096, size=192512


[root@localhost don]# pvck /dev/sdc3
  WARNING: Failed to connect to lvmetad: No such file or directory.
Falling back to internal scanning.
  Found label on /dev/sdc3, sector 1, type=LVM2 001
  Found text metadata area: offset=4096, size=192512

> What I'm betting on at this point (guessing), is that the raid array
> is not active, and therefore two identical UUID physical volumes are
> present rather than appearing as one, and grub is becoming confused.
> 
On a separate machine, running software RAID1, I see active raid
devices.  That I am not seeing them here would support your suggestion
that the raid array isn't active, no?
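A quick way to see whether LVM really is seeing two copies of the same physical volume would be to compare PV UUIDs. The sample data below is invented for illustration (the UUID is a placeholder); on the affected machine the first commented command would be run against the real devices:

```shell
# On the affected machine: pvs --noheadings -o pv_name,pv_uuid
# Invented sample output standing in for the split-mirror case, where
# both raid members show up as separate PVs carrying the same UUID.
sample='/dev/sda3 fakeUU-0001
/dev/sdc3 fakeUU-0001'
printf '%s\n' "$sample" |
  awk '{ seen[$2]++ } END { for (u in seen) if (seen[u] > 1) print "duplicate PV UUID:", u }'
```

With the sample above, the pipeline flags the shared UUID, which is exactly the "two identical UUID physical volumes" situation Chris describes.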

> And the other problem is that there simply isn't enough space to
> embed core.img in the MBR gap as usual, because the support modules
> for RAID and LVM make it too big. There is a way to use block lists,
> but before doing that the whole raid thing needs to be figured out.
> 
Interesting.  This makes me wonder why it all worked before.
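The size of that MBR gap is easy to check from the partition table quoted earlier: the first partition starts at sector 63, so only sectors 1 through 62 are available in front of it. A rough sketch:

```shell
# Partition[0] starts at sector 63 (from the mdadm -E output above),
# so the embedding gap is sectors 1..62, at 512 bytes each.
first_part_start=63
gap_bytes=$(( (first_part_start - 1) * 512 ))
echo "$gap_bytes"   # 31744 bytes, ~31 KiB
```

A plain core.img fits in ~31 KiB, but once the mdraid and LVM modules are pulled in it generally doesn't; newer installers start the first partition at sector 2048 precisely so GRUB gets roughly 1 MiB to embed into.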

This was a machine I inherited from a friend, with the raid devices
already there, and running Windows.  When I installed on it, from what I
recall, it presented me with a single volume, and never behaved as if
there were two devices.

> I haven't done a lot of testing in this area with actual hardware but
> some people say using software raid between Windows and Linux is
> flaky. I think it ought to work or it's a bug. But the thing to
> realize is that you have a Windows driver doing software raid when
> Windows is running; and then you have a linux driver doing software
> raid when linux is running. So it's absolutely got to work correctly
> or you've totally defeated the purpose of even having raid1, and
> instead you're better off with just a regular consistent backup, like
> once an hour or something.
> 
> 
I can appreciate that.  If this were a mission-critical machine I might
be able to justify setting up a regular hourly backup.  At this point,
though, that seems like overkill.  If this hadn't been working before,
I would probably just accept that it's not going to work and move on.
Thank you again,
 -Don

