install to disk for another system

idwsh6b02 at sneakemail.com
Wed Oct 30 23:33:46 UTC 2013


Reindl Harald wrote at 22:08 +0100 on Oct 30, 2013:
 >
 > Am 30.10.2013 22:00, schrieb idwsh6b02 at sneakemail.com:
 > > I'm looking for strategies that support installing fedora to a
 > > removable disk that will then be removed to run in another
 > > system.  Ideally this will be automated (as opposed to clicking
 > > on 'Next' buttons like the standard anaconda install from CD).
 > >
 > > Basically I'm picturing something like:
 > >
 > >  - set up partitions / filesystems based on some config file
 > >  - install packages from local repository
 > >  - post-package configuration (e.g., configure etc files)
 > >  - remove disk, insert another, repeat
 > >
 > > This sounds very kickstart-ish, but kickstart is usually used to
 > > install onto storage on the actual target system.  I want to
 > > install to a disk in a separate machine, then move the disk to
 > > the target system.
 > >
 > > This might even be an installation for a target system that is
 > > not the same architecture as the installer system.
 > >
 > > Fire away.  What solutions are out there?
 >
 > normally in such cases you install a template setup on one machine,
 > followed by imaging the complete disk (from a live CD or stick)
 > and installing the image on the spare disk
 >
 > man dd
 >
 > make sure you have a *full initrd* and if you avoid specific
 > graphics drivers the installation will boot on most hardware
 >
 > [root at localhost:~]$ cat /etc/dracut.conf.d/91-host-only.conf
 > hostonly="no"
 > ________________________
 >
 > in case of RAID10 systems I even put two of the 4 disks into
 > the new machine and do a RAID rebuild on both of them
 >
 > if the system does not boot up you chose the wrong ones,
 > but since it will not boot at all there is no harm - try
 > different disks to move
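For readers following along, the template-then-clone workflow described above can be sketched in shell. The real clone runs from a live CD/stick against raw devices (/dev/sda and /dev/sdb below are placeholders); the runnable part substitutes image files so it works unprivileged:

```shell
# On real hardware, after forcing a generic initramfs on the template
# system (hostonly="no" in /etc/dracut.conf.d/, then `dracut --force`),
# the clone step from a live environment is simply:
#   dd if=/dev/sda of=/dev/sdb bs=4M conv=fsync status=progress
#
# Safe stand-in using image files instead of real disks:
truncate -s 64M template.img                 # the "gold" template disk
dd if=/dev/urandom of=template.img bs=1M count=1 conv=notrunc status=none
dd if=template.img of=clone.img bs=4M conv=fsync status=none   # the clone
cmp -s template.img clone.img && echo "clone is byte-identical"
```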

Thanks for the reply.  And thanks for the perfect segue into a
discussion of _why_ I asked the question in the first place.

My current solution is based on dd (i.e., disk cloning) -
unfortunately.  This was actually why I started pondering better
ways.

I would like to get away from having a "gold" copy of an image.
For one thing, it doesn't lend itself very well to version control.

Let's say I create a system that uses a certain set of (picking some
arbitrary number) 1000 packages along with some modifications to
config files that may vary per installation.  Then 2 months later, I
want to update 15 of those packages to make a "version 2" of the
product, and still support fielded systems that were built with the
original set of packages ("version 1").

Checking in the entire dd image of the system to a version
control repository is not very effective, especially when
you have to start dealing with 10s and 100s of revisions.
Now multiply that by the number of supported architectures.

I'd much rather just mark the changed elements in a manifest
than regenerate a complete new image each time.
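A version-controlled manifest along the lines I'm imagining might look like a minimal kickstart file, one per product release (the package names and %post tweak are purely illustrative):

```
# product-v2.ks - one manifest per product release, kept in version control
clearpart --all --initlabel
autopart

%packages
@core
httpd
%end

%post
# per-installation config tweaks go here (illustrative)
echo "product=2.0" > /etc/product-release
%end
```

Diffing the v1 manifest against v2 then shows exactly which packages changed, instead of comparing opaque dd images.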


It also doesn't lend itself very well to accommodating varying
storage sizes.  For instance, I might want to build a system that gets
installed onto a 3 TB drive or a 32 GB SD card or (as you hinted) even
a set of disks in a raid setup.

It used to be, also, that using dd ran into trouble when disk
geometries of the destination drive varied slightly over time.  This
is less of an issue now with LBA, but it still becomes an issue when
your target device can vary widely in size from one install to the
next.


Image cloning also leads to reproducibility issues: one tends to
apply updates to successive generations of a clone (original + changes
=> clone + changes => another clone + more changes, etc.), which gets
polluted over time and makes for poor version control (e.g., what
changes were made between dd image 23 and dd image 57?).  I'd rather
install from the basic set of component elements (packages that may
even get built from source each time - which also accommodates
differing target architectures).


Finally, producing the original master image may be hard to do.  Take
for instance a target of a Raspberry Pi.  Installing the first image
may be hard because of the extended build times required.  It'd be
much nicer to cross build on a fast machine and produce an image you
can just plop onto an SD card for some target with a small/slow
processor.

In this case, installing to a file-backed loopback mount and then
dd'ing the file to the final storage would be okay, but then we get
back to the same problem - how does one install the packages to the
loopback mount rather than to the running system?
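The loopback approach can be sketched with yum's (or the newer dnf's) --installroot option, which installs packages into an alternate root. Paths, the release version, and the package set below are illustrative; every step after creating the sparse file needs root, so those are shown as comments:

```shell
# Create a sparse backing file sized for the eventual device
# (unprivileged; it occupies almost no real space until written):
truncate -s 1G fedora.img

# Root-only steps, shown as comments so the sketch runs anywhere:
#   mkfs.ext4 -F fedora.img
#   mkdir -p /mnt/target
#   mount -o loop fedora.img /mnt/target
#   yum --installroot=/mnt/target --releasever=19 install -y @core kernel
#   umount /mnt/target
# Finally, dd fedora.img onto the SD card or disk for the target system.
```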


Is users@ perhaps the wrong list for discussing installing fedora
packages this way?  If so, what would be a more fruitful resource?



