On Thu, Mar 6, 2014 at 9:57 AM, Don Zickus <dzickus(a)redhat.com> wrote:
> On Thu, Mar 06, 2014 at 08:16:00AM +0900, Sandro "red" wrote:
> > That's the point, we want a reasonably small package while still
> > providing the required functionality. Not sure how providing a fixed
> > size number is helping in this. But most of all, I didn't throw in a
> > number because I have no idea what is reasonably possible. I really
> > only just said "every MB counts" because the question came up before
> > (in Josh's old thread) and I hoped I could stop this discussion from
> > happening again before we have any numbers for this.
> Ever work in the embedded space? Every MB counts there too. :-) This was
> solved by creating budgets for size and memory requirements. This helped
> control bloat, which is going to be your biggest problem with cloud.
> What concerns me is that you don't know what size your cloud deployment is
> but expect everyone to just chop chop chop. How do we know if the kernel
> is already at the right size?
> There is a huge difference between re-architecting the kernel packaging
> to save 1 or 2 MB (off the current ~143 MB size) vs. re-architecting to save
> 50 MB. The former is really a wasted exercise in the bigger picture,
> whereas the latter (if proven needed) accomplishes something.
> But again it comes down to understanding your environment. Understanding
> your environment revolves around control. I get the impression you are
> not sure what size your environment should be.
> So I was proposing the kernel stay put or maybe create _one_ extras
> package that gets installed in addition to the bzImage. But from the
Right. When I said I had kernel-core and kernel-drivers, I wasn't
being theoretical. I already did the work in the spec file to split
it into kernel-core and kernel-drivers. The kernel package becomes a
metapackage that requires the other two, so that existing installs and
anaconda don't have to change (assuming I did things correctly).
Cloud can just specify kernel-core in the kickstart or whatever.
> sound of it, the chopping is really going to get you savings of
> ~30MB or so.
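In spec terms, the metapackage arrangement described above might look roughly like the sketch below. The summaries are guesses and the real kernel.spec is far more involved, so treat this only as an illustration of the shape of the split:

```spec
# Rough sketch only -- the actual Fedora kernel.spec is much more involved.
# The bare "kernel" package owns no files; it just pulls in both halves,
# so anything that already requires "kernel" keeps working unchanged.
Requires: kernel-core = %{version}-%{release}
Requires: kernel-drivers = %{version}-%{release}

%package core
Summary: Minimal kernel: bzImage plus the modules needed to boot common setups

%package drivers
Summary: Additional kernel modules (wireless, bluetooth, infiniband, etc.)
Requires: kernel-core = %{version}-%{release}
```

A cloud kickstart would then list kernel-core in %packages instead of kernel.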
I spent some time yesterday hacking around on an existing VM and just
removing stuff from /lib/modules/ for an installed kernel. I was able
to get it down from 123MB to 58MB by axing entire subsystems that
clearly didn't apply and running depmod on the results to make sure
there weren't missing dependencies. Some stuff had to be added back
(want virtio_scsi? you need target and libfc), but a lot could be
removed. That brings the total to about 81MB for vmlinuz, initramfs,
and /lib/modules for that particular kernel.
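For illustration, the kind of experiment described above can be sketched like this. It stages a throwaway fake module tree so it never touches a live /lib/modules; the directory names and file sizes here are made up, not the real Fedora module layout:

```shell
#!/bin/sh
# Sketch of the pruning experiment on a staged (fake) module tree.
# Subsystem names and sizes are illustrative only.
set -e
staging=$(mktemp -d)
mkdir -p "$staging/kernel/drivers/bluetooth" \
         "$staging/kernel/drivers/infiniband" \
         "$staging/kernel/fs/ext4"

# stand-ins for real .ko files
dd if=/dev/zero of="$staging/kernel/drivers/bluetooth/btusb.ko" bs=1024 count=64 2>/dev/null
dd if=/dev/zero of="$staging/kernel/drivers/infiniband/ib_core.ko" bs=1024 count=128 2>/dev/null
dd if=/dev/zero of="$staging/kernel/fs/ext4/ext4.ko" bs=1024 count=512 2>/dev/null

before=$(du -sk "$staging" | cut -f1)

# axe whole subsystems that clearly don't apply to a VM guest
rm -rf "$staging/kernel/drivers/bluetooth" "$staging/kernel/drivers/infiniband"

after=$(du -sk "$staging" | cut -f1)
echo "module tree: ${before}K before, ${after}K after pruning"

# On a real tree you would then rebuild the dependency maps and check
# that the modules you still need resolve (this is where virtio_scsi
# pulls target and libfc back in), e.g.:
#   depmod -a <kernel-version>
#   modprobe --show-depends virtio_scsi
rm -rf "$staging"
```

The depmod step is the important safety net: it regenerates modules.dep, so anything whose dependencies were axed shows up immediately.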
In those 81MB, I still had all of the main GPU drivers, all of the
intel and a few other ethernet drivers, ext4, xfs, btrfs, nfs, the
vast majority of the networking modules (so iptables and netfilter),
scsi, acpi, block, char, etc. The major things missing were bluetooth
and wireless, infiniband, some firewire stuff. Basically it resulted
in a system that boots perfectly fine in a VM for a variety of
different use cases.
I think that's a reasonable start, and it's a significant reduction.
Beyond that, we get into much smaller savings and having to move
stuff around at a finer level. For the curious, I uploaded the module
Again, this was just hacking around on an installed system. Still
work to do at a packaging level, but this is as good as anything to