On Wed, 5 Mar 2014 10:16:21 -0500
Don Zickus <dzickus(a)redhat.com> wrote:
> Also, I just arbitrarily threw out 100MB; if we should start higher,
> say 150MB, then it doesn't matter to me. :-)
This entire disk size optimization seems kind of weird to me.
I just booted an f20 official cloud image in our openstack cloud. I used
the m1.tiny flavor (smallest size, no persistent storage):
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        20G  1.1G   19G   6% /
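(For anyone wanting to reproduce that number, it's just df against the
root filesystem on the running instance; a minimal sketch, assuming a
mounted root fs:)

```shell
# Show human-readable usage for the root filesystem,
# same as the output quoted above.
df -h /
```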
Is a few tens of MBs really worth making our kernel a bunch more
complex? Is disk space the right thing to be trying to optimize?
Perhaps I am missing it, but are there cases where the current cloud
image is too large? What are they?
kevin