Introducing wicked

Dan Williams dcbw at redhat.com
Thu Dec 2 16:31:51 UTC 2010


On Fri, 2010-11-26 at 10:24 +0100, Olaf Kirch wrote:
> On Thursday 25 November 2010 21:29:30 Richard W.M. Jones wrote:
> > On Thu, Nov 25, 2010 at 05:24:37PM +0100, Olaf Kirch wrote:
> > > You may ask, don't we have enough of those already? Don't we have
> > > NetworkManager, connman, netcf, and a few more?
> > 
> > Indeed ...  You don't explain how it's better than netcf.
> 
> That's because I'm not a huge fan of introducing my code by dissing other
> people's projects :-)
> 
> Okay, so here's where I see the significant differences
> 
> netcf, from what I have seen so far, converts between sysconfig files
> and XML using a combination of augeas and XSLT. To bring up and shut
> down interfaces, it continues to rely on ifup/ifdown scripts. Is that
> an accurate description?
> 
> This has a number of problems, I believe
> 
> 
> 1. ifcfg files are dead
> 
> First and foremost, ifup/ifdown and the shell scripts that do the dirty
> work in the background do not scale very well, and I believe as an
> architecture they've reached their best-before date. It gets increasingly
> hard to represent complex topologies in sysconfig files. Setups like bridges
> on top of bonding devices on top of VLANs tend to result in rather messy
> ifcfg files to begin with - and I would guess that was one of the main 
> motivations for creating netcf, and representing network configuration using
> a hierarchical, extensible syntax such as XML.
> 
> I think we need to take this a step further, though. Converting the structured
> configuration data (be it XML or something else, I couldn't care less) to
> ifcfg files and then invoking the same old messy ifup is not going to work. It's
> a convenience to those people writing configuration front-ends, but it doesn't
> do anything to make things work better.
> 
> Thus, I believe we need to phase out the ifcfg format in the long run, and carry
> the structured configuration data all the way to the actual system calls that
> set up the interface. And that is what wicked does. At the system end of
> things, wicked doesn't call ip or route or ethtool or vconfig; for all its
> configuration requirements it talks to the kernel directly.
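>
> To make that concrete, here is a rough sketch - not wicked's actual code,
> just an illustration of the idea - of what "talking to the kernel directly"
> means for something as trivial as bringing a link up: an ioctl on a socket
> instead of forking /sbin/ip.
>
>     /* Illustrative only: set IFF_UP on an interface via ioctl,
>      * the way a native configuration daemon would, instead of
>      * spawning "ip link set $dev up". */
>     #include <stdio.h>
>     #include <string.h>
>     #include <unistd.h>
>     #include <sys/ioctl.h>
>     #include <sys/socket.h>
>     #include <net/if.h>
>
>     static int link_up(const char *ifname)
>     {
>             struct ifreq ifr;
>             int fd = socket(AF_INET, SOCK_DGRAM, 0);
>
>             if (fd < 0)
>                     return -1;
>             memset(&ifr, 0, sizeof(ifr));
>             strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
>             if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0) {    /* read current flags */
>                     close(fd);
>                     return -1;
>             }
>             ifr.ifr_flags |= IFF_UP;
>             if (ioctl(fd, SIOCSIFFLAGS, &ifr) < 0) {    /* write them back */
>                     close(fd);
>                     return -1;
>             }
>             close(fd);
>             return 0;
>     }
>
>     int main(int argc, char **argv)
>     {
>             if (argc != 2) {
>                     fprintf(stderr, "usage: %s <ifname>\n", argv[0]);
>                     return 1;
>             }
>             return link_up(argv[1]) == 0 ? 0 : 1;
>     }
>
> Real code obviously needs rtnetlink for addresses, routes, VLANs and so on,
> but the point stands: no shell, no external binaries.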
> 
> The only concession wicked makes to the "old school" ifcfg files is that
> it still supports them as one choice of where configuration data can be kept.
> That's because we have system management tools that mess with these files
> today, and you probably can't rewrite these all at once. But wicked doesn't
> *require* ifcfg files. You can easily configure it to store all network
> configuration in XML files (or any other structured format), and it will
> just continue to work.
> 
> (BTW if you look into the source code, you'll find a file named netcf.c
> which tries to provide a shim layer that translates netcf API calls to
> wicked's REST API - more of a case study than a complete shim though :-)
> 
> 
> 2. Why a daemon, not a library
> 
> The first reason to do this is pretty simple: hotplugging. Traditional ifup
> doesn't handle this at all; you need to rely on separate services like
> ifplugd or NetworkManager.
> 
> But it actually goes beyond that. You may want to do more things
> in response to network events than just hotplug an interface. It can be
> something as simple as starting a DHCPv6 client when we see the Managed or
> Other flag in a router advertisement.
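>
> Just to illustrate what that looks like at the lowest level (again a sketch,
> not the actual wicked code): the core of such a daemon is little more than an
> rtnetlink socket subscribed to link events - something a pile of ifup
> scripts simply has no place to put.
>
>     /* Illustrative only: watch kernel link events (hotplug, carrier
>      * changes) on an rtnetlink socket and print them. A real daemon
>      * would dispatch policy here - bring up dependent VLANs, start
>      * dhcp clients, and so on. */
>     #include <stdio.h>
>     #include <string.h>
>     #include <unistd.h>
>     #include <sys/socket.h>
>     #include <linux/netlink.h>
>     #include <linux/rtnetlink.h>
>     #include <net/if.h>
>
>     int main(void)
>     {
>             struct sockaddr_nl snl;
>             char buf[8192];
>             int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
>
>             if (fd < 0)
>                     return 1;
>             memset(&snl, 0, sizeof(snl));
>             snl.nl_family = AF_NETLINK;
>             snl.nl_groups = RTMGRP_LINK;        /* link add/remove/change */
>             if (bind(fd, (struct sockaddr *) &snl, sizeof(snl)) < 0)
>                     return 1;
>
>             for (;;) {
>                     int len = recv(fd, buf, sizeof(buf), 0);
>                     struct nlmsghdr *nh;
>
>                     if (len <= 0)
>                             break;
>                     for (nh = (struct nlmsghdr *) buf; NLMSG_OK(nh, len);
>                          nh = NLMSG_NEXT(nh, len)) {
>                             struct ifinfomsg *ifi;
>                             char name[IF_NAMESIZE] = "?";
>
>                             if (nh->nlmsg_type != RTM_NEWLINK &&
>                                 nh->nlmsg_type != RTM_DELLINK)
>                                     continue;
>                             ifi = NLMSG_DATA(nh);
>                             if_indextoname(ifi->ifi_index, name);
>                             printf("%s %s flags 0x%x\n",
>                                    nh->nlmsg_type == RTM_NEWLINK ?
>                                    "NEWLINK" : "DELLINK",
>                                    name, ifi->ifi_flags);
>                     }
>             }
>             close(fd);
>             return 0;
>     }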
> 
> Or consider the usual fun to be had when you need to juggle the information
> obtained from DHCP running on several interfaces. You get DNS information
> from two DHCP servers and maybe a PPP uplink - does one of them take
> precedence? Or should you merge them? What do you restore when one interface
> goes down? How does that relate to information obtained from iBFT? Try to
> handle that in shell code, and you'll need a lot of aspirin. (To be honest
> here: wicked cannot do all of that yet, but it is getting close).
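>
> Just to make the precedence question concrete, here is a toy sketch of one
> possible policy - the origins and priorities below are entirely made up:
> every source of resolver information carries a priority, and the highest
> one wins when resolv.conf gets regenerated. Whether that is the right
> policy is exactly the kind of decision that belongs in one place rather
> than in a dozen hook scripts.
>
>     /* Toy example of a precedence policy for resolver information
>      * collected from several sources. Hypothetical names and numbers. */
>     #include <stdio.h>
>
>     struct resolver_info {
>             const char *origin;        /* e.g. "ibft", "eth0/dhcp", "ppp0" */
>             int priority;              /* higher wins */
>             const char *servers[3];    /* unused slots stay NULL */
>     };
>
>     int main(void)
>     {
>             struct resolver_info sources[] = {
>                     { "ibft",       100, { "10.0.0.1" } },
>                     { "eth0/dhcp",   50, { "192.168.1.1", "192.168.1.2" } },
>                     { "ppp0/ipcp",   10, { "10.8.0.1" } },
>             };
>             unsigned int i, n = sizeof(sources) / sizeof(sources[0]);
>             const struct resolver_info *best = &sources[0];
>
>             for (i = 1; i < n; i++)
>                     if (sources[i].priority > best->priority)
>                             best = &sources[i];
>
>             printf("# resolv.conf generated from %s\n", best->origin);
>             for (i = 0; i < 3 && best->servers[i]; i++)
>                     printf("nameserver %s\n", best->servers[i]);
>             return 0;
>     }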
> 
> So, what I wanted to create with wicked was a network configuration facility
> that makes it possible to integrate different aspects more closely than is
> possible when things live in lots of different applications that were never
> designed to interact very closely.

Sounds a lot like NetworkManager :)

> 
> 
> 3. Why not NetworkManager?
> 
> On the other hand, there's NetworkManager (and I'm getting to this point
> because Pete Zaitcev brought this up). Right now, NetworkManager doesn't
> handle bridges, bonds, infiniband, token ring - that's why I say it's a bit
> desktopish: the server environment simply has never been the focus of
> NetworkManager's development. Also, it links against a somewhat longish list
> of fairly heavy-weight libraries (including nspr), and requires dbus - all of
> which make it pretty much impossible to use in an initrd or any environment
> where space is at a premium.

It hasn't been a specific focus, but it's always been a goal.  We've
done a lot of work on server-oriented stuff in the past, like support
for various s390 configurations, better interaction with virtualization
tools, and command-line interface improvements.  The reason some features
like bridging aren't there yet is simply lack of time and effort to
actually implement them.  Bridging is at the top of the
feature-request list; any help with any of the "serverish" features would
be greatly appreciated.

BTW, you can link NM against gnutls if you like, since the crypto
backend is selectable between NSS and gnutls.  That's required for
various crypto operations on certificates & keys that need to be done
when using 802.1x-enabled networks like WPA Enterprise.

Dan


