On Sun, Nov 14, 2010 at 01:14:18AM +0100, Lennart Poettering wrote:
> LVM actually slows down boot considerably. Not primarily because its
> code is slow, but because it isn't written the way things are expected
> to work these days. LVM assembly at boot is expected to run at a time
> when all disks have been found and identified by the kernel. However,
> the idea that such a time exists is out of date on modern systems.
> There is simply no point in time at which all disks have been
> enumerated, because they can always come and go, and on many buses
> (for example USB) you never know whether you have enumerated all
> devices, because the bus doesn't support such a notion. The right way
> to implement logic like this is to wait exactly until all the disks
> actually *needed* have shown up, and to assemble LVM at that point.
> Currently, however, to make LVM work we try to wait until
> *everything* thinkable is enumerated, not only the disks that are
> actually needed. The fact that on many buses this point in time
> doesn't really exist is ignored, and awful hacks such as "modprobe
> scsi_wait_scan" are used to work around this out-of-date design on
> the other buses. To get a fast system, however, you should minimize
> the time you waste and continue with the next step of booting the
> moment you have collected all the devices you need for assembly.

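For what it's worth, the "wait only for the devices you actually need" idea can be sketched like this (a toy polling sketch in Python; a real implementation would react to udev events instead of polling, and the function name here is made up):

```python
import os
import time

def wait_for_devices(paths, timeout=30.0, poll_interval=0.1):
    """Block until every device node in `paths` exists, or until
    `timeout` seconds have elapsed.  Returns True if all needed
    devices showed up, False on timeout."""
    deadline = time.monotonic() + timeout
    pending = set(paths)
    while True:
        # Drop every path that has appeared since the last check.
        pending = {p for p in pending if not os.path.exists(p)}
        if not pending:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll_interval)
```

The point is that boot proceeds as soon as this returns True for the PVs the root VG needs, instead of waiting for every bus to (never quite) finish enumerating.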
Thanks for explaining what the "assembly" issue is all about.
I'd really like to hear from an LVM expert or two about this, because
I can't believe that it's impossible to make this work better for the
common single-disk-is-boot-disk single-PV case. The LVM metadata
(which I've written code to read and decode in the past) contains the
information needed.
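To illustrate what I mean about the metadata: an LVM2 PV announces itself with a "LABELONE" label header placed at the start of one of the first four 512-byte sectors of the device, so deciding "is this disk one of my PVs?" needs only a tiny read, no bus-wide settling. A minimal sketch (helper name is mine, and this only finds the label, it doesn't decode the full metadata area):

```python
SECTOR_SIZE = 512
LABEL_MAGIC = b"LABELONE"   # LVM2 label header signature
LABEL_SCAN_SECTORS = 4      # label sits in one of the first 4 sectors

def find_pv_label(first_sectors: bytes):
    """Return the byte offset of the LVM2 label header within the
    first few sectors of a block device, or None if no label is
    present (i.e. the device is not an LVM2 PV)."""
    for sector in range(LABEL_SCAN_SECTORS):
        off = sector * SECTOR_SIZE
        if first_sectors[off:off + len(LABEL_MAGIC)] == LABEL_MAGIC:
            return off
    return None
```

Run that against the first 2 KiB of each disk as it appears, and you know immediately whether it is relevant to assembling the boot VG.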
Rich.
--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into Xen guests.
http://et.redhat.com/~rjones/virt-p2v