FC5 test3 -- dmraid broken?

Dax Kelson dax at gurulabs.com
Wed Feb 22 05:55:45 UTC 2006


On Tue, 2006-02-21 at 10:36 -0500, Peter Jones wrote:
> On Mon, 2006-02-20 at 22:31 -0700, Dax Kelson wrote:
> 
> > mkdmnod
> > mkblkdevs
> > rmparts sda
> > rmparts sdb
> > dm create nvidia_hcddcidd 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
> > dm partadd nvidia_hcddcidd
> > echo Scanning logical volumes
> > lvm vgscan --ignorelockingfailure
> > echo Activating logical volumes
> > lvm vgchange -ay --ignorelockingfailure  VolGroup00
> > resume /dev/VolGroup00/LogVol01
> 
> Ok, so this does all look fine -- can you add some sleeps in here and
> see if you can copy down exactly what these output, and see which one
> actually fails?
> 
> If that fails we can build you an initrd by hand that has tools in it...
> -- 
>   Peter
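For anyone following along, that "dm create" line is a standard
device-mapper mirror table. Broken out for readability (in the init
script it is all one line; the annotations are mine):

    dm create nvidia_hcddcidd   # new dm device /dev/mapper/nvidia_hcddcidd
        0 586114702             # map starts at sector 0, 586114702 sectors long
        mirror                  # device-mapper "mirror" target
        core 2 64 nosync        # in-memory (core) log, 2 args: region size 64, nosync
        2 8:16 0 8:0 0          # two legs: sdb (8:16) and sda (8:0), both at offset 0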

I added echo statements such as "about to dm create", plus a "sleep 5"
after each of those commands.
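So the top of that section now looks roughly like this (the echo and
sleep lines are my additions; the commands themselves are unchanged):

    echo about to mkdmnod
    mkdmnod
    sleep 5
    echo about to mkblkdevs
    mkblkdevs
    sleep 5
    echo about to dm create
    dm create nvidia_hcddcidd 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
    sleep 5

and so on through "dm partadd" and the lvm commands.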

There is zero output from mkdmnod on down until "lvm vgscan" runs.

It produces this output:

device-mapper: 4.5.0-ioctl (2005-10-04) initialised: dm-devel at redhat.com
  Reading all physical volumes. This may take a while...
  No volume groups found
  Unable to find volume group "VolGroup00"
...

When I boot into the rescue environment, the dm raid set is brought up
and LVM is activated automatically and correctly.
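If I understand correctly, rescue mode is doing roughly the equivalent
of this by hand with the full dmraid and lvm binaries:

    dmraid -ay                      # activate all detected raid sets
    lvm vgscan                      # scan the activated set for volume groups
    lvm vgchange -ay VolGroup00     # bring up the logical volumes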

Incidentally, in the rescue environment I chrooted into my root
filesystem, brought up my network interface (/etc/init.d/network
start), and ran "yum -y update".
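That is, roughly this (assuming the usual /mnt/sysimage mount point
that rescue mode uses):

    chroot /mnt/sysimage            # rescue mode mounts the installed system here
    /etc/init.d/network start      # bring up networking
    yum -y update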

There were about 40 packages downloaded, but every rpm install attempt
puked out with errors from the preinstall scripts. Unsurprisingly,
running rpm -Uvh /path/to/yum/cache/kernel*rpm resulted in the same
error. :(

Dax Kelson



