F21 Self Contained Change: Remote Journal Logging

Martin Langhoff martin.langhoff at gmail.com
Wed Apr 16 16:50:53 UTC 2014

On Mon, Apr 14, 2014 at 9:07 AM, Jaroslav Reznik <jreznik at redhat.com> wrote:
> The communication between the two daemons is done over standard HTTPS,

Interesting. One quirk of current syslog-style remote logging over UDP
is that it is fairly tolerant of data loss.

With quite a bit of experience in the field... I have to say that this
is both a bug and a feature.

The bug is obvious, so let me explain the "feature" side:

 - if the remote server is unreachable or unresponsive, clients
continue running without any adverse effects (other than loss of
logging data)

 - if the network link carrying the logging traffic is overwhelmed by
that traffic, client nodes continue running without any adverse
effects (other than...) -- at least as far as the logging machinery is
concerned; other traffic on the saturated link may cause other
software to misbehave

I hear you holler "OMG, you have to build full redundancy into your
logging backend"; and... I have not seen a single operation where the
logging backend was fully redundant.

And in fact it may be too much to ask -- in most setups log entries
are not _that_ precious. I know I can reconfigure a syslog server and
restart it without the 1K VMs that talk to it glitching. A recent
"loop" in our syslog configuration was a relatively minor problem
because it just dropped the traffic it couldn't handle for a brief
time while we fixed things.

This is the reality of system configs I know. Fully redundant,
"perfect" log servers are very hard to run, might not be worth it, and
anyway such a change won't happen overnight.

To avoid gridlocking operations, IMO this logging will need a local
queue (for later retry), with a "drop anything older than X" escape
valve.
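For illustration, here is a minimal sketch of what such a client-side
queue might look like. All names and the API are hypothetical -- this
is not how systemd-journal-upload works, just the retry-plus-age-cap
idea in a few lines:

```python
import time
from collections import deque

class RetryLogQueue:
    """Local queue for log entries that could not be delivered.

    Undelivered entries are retried on each flush; anything older
    than max_age seconds is dropped (the "escape valve"), so the
    queue cannot grow without bound while the server is down.
    Hypothetical sketch -- names are illustrative only.
    """

    def __init__(self, send, max_age=300.0, clock=time.monotonic):
        self._send = send        # callable(entry) -> True on success
        self._max_age = max_age
        self._clock = clock
        self._queue = deque()    # (timestamp, entry) pairs, oldest first

    def log(self, entry):
        self._queue.append((self._clock(), entry))
        self.flush()

    def flush(self):
        now = self._clock()
        # Escape valve: drop anything older than max_age.
        while self._queue and now - self._queue[0][0] > self._max_age:
            self._queue.popleft()
        # Retry delivery oldest-first; stop at the first failure
        # so ordering is preserved and we back off naturally.
        while self._queue:
            _, entry = self._queue[0]
            if not self._send(entry):
                break
            self._queue.popleft()
```

The point is that a send failure never blocks the caller -- the entry
just sits in the bounded local queue until the next flush, and stale
entries are sacrificed rather than wedging the client.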


 martin.langhoff at gmail.com
 -  ask interesting questions
 - don't get distracted with shiny stuff  - working code first
 ~ http://docs.moodle.org/en/User:Martin_Langhoff
