I don't have a full answer, but I can tell you why df is being so carelessly optimistic.

The build process creates an ext4 image of a certain size (defined in the kickstart file), and then installs the packages into it. This image gets compressed into a squashfs image.

When the ext4 filesystem gets mounted, it still has the size that was defined in the kickstart, and it is mounted read-write because of the overlay. But what df reports is the 'free' space inside the ext4 image, i.e. the difference between the filesystem size and the installed contents; it doesn't take the overlay size into account at all. So if you have an 8GB filesystem and install 2GB of stuff into it, df will report 6GB free regardless of how big your overlay actually is. You can keep writing until either the filesystem is full or the overlay is full; if it's the latter, the device suddenly goes read-only and your system will almost certainly crash horribly.
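
If you want to know how much of the overlay has actually been consumed, df is the wrong tool; the live root is a device-mapper snapshot, so dmsetup can tell you. A rough sketch, assuming the snapshot device is named live-rw (the usual name on Fedora live images; check /dev/mapper if yours differs):

    # Show snapshot (overlay) usage directly from device-mapper.
    dmsetup status live-rw
    # The "snapshot" line reports used/total COW sectors, e.g. "124416/1048576".
    # When the used count reaches the total, the overlay is exhausted and the
    # device goes read-only / starts erroring out.
    # The exact output format varies a bit between kernel versions.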

James

On Mon, 2011-10-03 at 17:33 +0100, Phil Meyer wrote:
On 10/03/2011 09:59 AM, Phil Meyer wrote:
> The livecd images currently appear to have 4GB filesystems, which would
> correspond to single layer DVDs. This is ok.
>

Some follow up thoughts.

We have built other platforms on NFS roots with RHEL6, using 
/etc/sysconfig/readonly-root and /etc/statetab.
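
For anyone who hasn't used that mechanism: readonly-root marks the root 
filesystem read-only, and statetab lists the paths that should remain 
writable and persistent. A minimal sketch with illustrative values (the 
variable names are from RHEL 6; the label and paths below are made up):

    # /etc/sysconfig/readonly-root (relevant settings only)
    READONLY=yes
    STATE_LABEL=stateless-state   # label of the filesystem holding persistent state

    # /etc/statetab -- each listed path is bind-mounted from that state
    # filesystem so it stays writable on the read-only root
    /var/lib/dhclient
    /etc/ssh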

Understanding that the livecd overlay is based upon a RAMDISK, I wonder 
if the same mechanism could be used as a solution for what I want to do.

Here is the goal:

Appliance computing, such that a generic livecd image modifies itself at 
boot based upon class- and/or system-specific specifications.

We have completed some rather complex examples and the results are very 
desirable.  The only drawback is the inflexibility of the overlay size.
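
For reference, the overlay size is fixed when the image is created and 
cannot be grown afterwards, which is exactly the inflexibility above. A 
hypothetical example with livecd-iso-to-disk (the size and target device 
are made up, and a PXE/RAM-overlay setup like ours may not use this path 
at all):

    # Carve out a 2 GB persistent overlay at creation time; there is no
    # supported way to resize it later.
    livecd-iso-to-disk --overlay-size-mb 2048 Fedora-Live.iso /dev/sdb1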

Permanent storage is always desirable for data collection, but many 
virtual machines and hardware-based systems in use today are single-task 
and require little or no permanent storage.  We are targeting these 
types of servers for our appliance model.

Some examples are:

Virtual machine servers.
Load balancers.
Database compute nodes (external storage, of course).

and the list goes on.

The main benefit is greatly reduced systems administration effort.

My old work system can roll a new version of the generic livecd we are 
using in 6.5 minutes and automatically copy it to the PXE servers 
without impacting any production servers.  As new servers come online or 
are rebooted, they pick up the new version.

For example:

We have completed a test of our virtual hosting platform using a livecd 
image for the servers.  To upgrade the OS of the servers, the following 
procedure is performed:

1. Rebuild the livecd image and copy it to the PXE servers (about 8 
minutes total).

2. Migrate away from host one.

3. Reboot host one.

4. Migrate away from host two (many of those guests will get migrated 
back to host one).

5. Reboot host two.

6. Migrate away from host three.

7. Reboot host three.

etc.

We have automated the migration process (a rough sketch follows below), 
so the total time is about 15 minutes per server, including boot time, 
and no customer outage occurs.
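
The automation itself is nothing exotic. A rough sketch of the drain 
step, assuming a libvirt/KVM stack (the hypervisor isn't stated above, 
and the host name is made up):

    # Live-migrate every running guest off this host before rebooting it.
    TARGET=host1.example.com
    for dom in $(virsh list --name); do
        virsh migrate --live "$dom" "qemu+ssh://$TARGET/system"
    done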

This is a very simple and straightforward procedure, with very little 
risk, that 'any monkey' can perform.  The definition of best practice, 
in my book.

Ideas and suggestions are most welcome.
--
livecd mailing list
livecd@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/livecd