F21 Self Contained Change: Remote Journal Logging

Simo Sorce simo at redhat.com
Tue Apr 15 19:30:57 UTC 2014


On Tue, 2014-04-15 at 20:28 +0200, Zbigniew Jędrzejewski-Szmek wrote:
> On Tue, Apr 15, 2014 at 11:00:45AM -0400, Simo Sorce wrote:
> > On Mon, 2014-04-14 at 15:07 +0200, Jaroslav Reznik wrote:
> > > = Proposed Self Contained Change: Remote Journal Logging = 
> > 
> > > The communication between the two daemons is done over standard HTTPS, 
> > > following rather simple rules, so it is possible to create alternate 
> > > implementations without much work. For example, curl can be easily used to 
> > > upload journal entries from a text file containing entries in the export 
> > > format. Basically, the data are sent in an HTTP POST to /upload with Content-
> > > Type: application/vnd.fdo.journal. When doing "live" forwarding, the size of 
> > > the transfer cannot be known in advance, so Transfer-Encoding: chunked is 
> > > used. All communication is encrypted, and the identity of both sides is 
> > > verified by checking for appropriate signatures on the certificates.
> 
> > What are the pros of using HTTP if all you are doing is POSTs to a
> > hardcoded URL?
> Using HTTP makes it possible to use e.g. curl to upload some logs
> from the commandline. It should also be fairly easy for people to write
> e.g. Python code to upload logs. I also expect people to want to send
> JSON-formatted logs at some point, and the HTTP headers make things
> fairly extensible. I think that using standard HTTP is easier than
> designing a bidirectional protocol.

I understand why HTTP looks convenient for some things, but really,
aside from development, would you ever expect someone to log to your
interface except from a journald client?
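
(For concreteness, a minimal sketch of the upload described above, using
only the Python standard library; the host name is made up, and 19532 is
the default port systemd-journal-remote listens on:)

    import http.client

    # One entry in the journal export format: FIELD=value lines,
    # with an empty line terminating the entry.
    ENTRY = (b"MESSAGE=hello from a hand-rolled client\n"
             b"SYSLOG_IDENTIFIER=demo\n"
             b"\n")

    conn = http.client.HTTPSConnection("logs.example.com", 19532)
    conn.request("POST", "/upload", body=ENTRY,
                 headers={"Content-Type": "application/vnd.fdo.journal"})
    print(conn.getresponse().status)

As written this leans entirely on the default certificate validation,
which is exactly the part I am questioning below.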

> > HTTP seems like a bad idea in terms of security: certificates are
> > notoriously very hard to manage, even with the help of things like
> > certmonger, and hard to properly validate in most libraries today.
> > 
> > Let alone dealing with setting up a CA just for enabling remote logging
> > (or otherwise painfully exchanging fingerprints and white-listing
> > certificates for each client-server pair).
> >
> > And please do not tell me this is deferred to the admin to figure out,
> > because then it would mean this feature cannot seriously be used in
> > normal setups.
> I think you exaggerate a bit.

I think you haven't tried for real :-)

>  Managing certificates is annoying, true,
> but there's lots of advice on the web, howtos, and various helper
> software.

There is a lot of "wrong" advice; mostly you find people telling other
people to disable verification and to use self-signed certificates.

>  I'd imagine that in a setup with a few servers one would create
> the certificates on the receiver machine, copy&pasting some instructions
> from the Fedora docs, and then scp them to the other hosts.

What I am asking is:
How do you validate certificates on the client?
Are you going to use white lists?
Are you going to depend on a white-listed CA?
Are you going to create an extension to be put in the certificates to
restrict their use to logging? Or are you going to allow the use of any
certificate as long as the CN matches the machine name?
Are you going to have white lists on the server?
Are you going to require client certificate authentication, or will you
allow any anonymous client to flood your server with garbage?
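
(To make the "white list" question concrete: a rough sketch, not anything
from the Change proposal, of pinning a receiver certificate by fingerprint
with Python's ssl module; the host name and fingerprint are made up:)

    import hashlib
    import socket
    import ssl

    # Hypothetical white list: SHA-256 fingerprints of receiver
    # certificates we are willing to talk to.
    PINNED = {"logs.example.com": "made-up-sha256-fingerprint"}

    def connect_pinned(host, port=19532):
        # Skip CA validation; the raw certificate is compared against
        # the white list instead.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der = sock.getpeercert(binary_form=True)
        if hashlib.sha256(der).hexdigest() != PINNED.get(host):
            sock.close()
            raise ssl.SSLError("server certificate not in white list")
        return sock

Every one of the questions above still has to be answered around a snippet
like this; none of them go away just because the transport is HTTPS.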

How do you generate certificates?
NSS comes with certutil, but it is not very flexible; I am not sure
what GnuTLS comes with, but that library is not something I would
like to depend on in the first place.
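
(For scale, even the "simple" self-signed path that all the wrong advice
points to takes something like this with the Python cryptography package;
the name is made up, and this is precisely the kind of certificate that
cannot be meaningfully validated later:)

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                         "logs.example.com")])
    now = datetime.datetime.utcnow()
    cert = (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)   # self-signed
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365))
            .sign(key, hashes.SHA256()))
    open("receiver.pem", "wb").write(
        cert.public_bytes(serialization.Encoding.PEM))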

Also, how are you going to harmonize client versus server management? The
two stacks (NSS and GnuTLS) are quite different, and having to learn not
one but two stacks seems quite steep.

> > Is there any reason why a better custom protocol that can be secured
> > using things like SASL or GSSAPI is not used?
> It doesn't really fit well with the overall approach of using HTTP. But
> I'm open to suggestions on how to do this better. The Change page is
> obnoxiously detailed so that I can get feedback :)

If you used something like protobufs or D-Bus for the message
formatting, and a plain socket for the transport where you can choose
whether you want TLS or GSSAPI easily (as you are not tied down by the
inability of HTTP to use anything but TLS), then you would have a much
more flexible system. You also wouldn't be tied to using two different
crypto stacks in the same project, which is really a shame: you have to
duplicate everything, that will be prone to errors or incompatibilities,
and you'll have to settle for the lowest common denominator if there
are feature mismatches.

You could also eventually tunnel over HTTPS; after all, it is just
buffers going back and forth, but you wouldn't be tied to it.
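
(A rough sketch of what I mean, with made-up framing and host name; the
point is that the security layer is chosen at connection time instead of
being baked into the protocol:)

    import socket
    import ssl
    import struct

    def recv_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-record")
            buf += chunk
        return buf

    def send_record(sock, payload):
        # 4-byte big-endian length prefix, then the serialized record
        # (protobuf, export format, whatever the formatting layer emits).
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_record(sock):
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)

    # The same framing runs over a plain socket, a TLS socket, or a
    # GSSAPI-wrapped connection; nothing above this line changes.
    raw = socket.create_connection(("logs.example.com", 19532))
    tls = ssl.create_default_context().wrap_socket(
        raw, server_hostname="logs.example.com")
    send_record(tls, b"MESSAGE=hello\n")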

HTH,
Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


