Modular Kernel Packaging for Cloud

Don Zickus dzickus at redhat.com
Thu Mar 6 15:04:56 UTC 2014


On Thu, Mar 06, 2014 at 08:38:44AM +0900, Sandro "red" Mathys wrote:
> On Thu, Mar 6, 2014 at 1:45 AM, Kevin Fenzi <kevin at scrye.com> wrote:
> > On Wed, 05 Mar 2014 17:37:42 +0100
> > Reindl Harald <h.reindl at thelounge.net> wrote:
> >> in general you need to multiply the wasted space for each instance
> 
> Exactly, you usually have hundreds or even thousands of instances
> running. Sure, "every MB counts" isn't to be taken literally here;
> maybe I should rather have said "every 10 MB counts".
> 
> > At least for my uses, the amount of non persistent disk space isn't a
> > big deal. If I need disk space, I would attach a persistent volume...
> 
> Figure you get your additional persistent volumes for free somehow, so
> all those Amazon AWS, HP Cloud, Rackspace, etc. users envy you. And
> those admins that need to buy physical disks to put into their private
> clouds, too.
> 
> Also, more data equals more network traffic and more time - both
> things that matter in terms of costs, at least in public clouds.

Sure, but what if the trade-off in size comes with a cost in speed?  Is
cloud ok with the kernel taking twice as long to boot?  Or maybe running
slower?  Or maybe crashing more often (because we removed safety checks)?

I mean if Josh wanted to he could make everything modular and have a
really small kernel footprint (like 40MB or so) running in 50MB of memory
(I have done this with kdump).  But it costs you speed in loading modules
(as opposed to having them built into the kernel).  You may also lose
other optional optimizations that help speed things up.
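As a side note, one quick way to see how modular a given kernel build already is would be to count the built-in (=y) versus modular (=m) options in its config file. A rough sketch; the helper name is made up for illustration, and config paths like /boot/config-$(uname -r) are typical Fedora locations, not something from this thread:

```shell
# Count built-in (=y) vs modular (=m) options in a kernel config file.
# Sketch only; real configs also contain comments and "is not set" lines,
# which the anchored patterns below deliberately skip.
count_kernel_features() {
    local config="$1"
    local builtin modular
    builtin=$(grep -c '=y$' "$config")
    modular=$(grep -c '=m$' "$config")
    echo "built-in=$builtin modular=$modular"
}

# Example (path is distro-dependent):
#   count_kernel_features /boot/config-$(uname -r)
# The on-disk cost of the modular side shows up under /lib/modules:
#   du -sh /lib/modules/$(uname -r)
```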

Other SIGs may not like it, but again it depends on how you frame your
environment.  Maybe cloud really needs its own kernel.  We don't know.

What is cloud willing to sacrifice to obtain smaller size?

Cheers,
Don
