F21 Self Contained Change: Remote Journal Logging

Alexander Bokovoy abokovoy at redhat.com
Mon Apr 14 14:19:17 UTC 2014

On Mon, 14 Apr 2014, Jaroslav Reznik wrote:
>= Proposed Self Contained Change: Remote Journal Logging =
>Change owner(s): Zbigniew Jędrzejewski-Szmek <zbyszek at in.waw.pl>
>Systemd journal can be configured to forward events to a remote server.
>Entries are forwarded including full metadata, and are stored in normal
>journal files, identically to locally generated logs.
>== Detailed Description ==
>Systemd's journal currently provides a replacement for most functionality
>offered by traditional syslog daemons,
>with two notable exceptions: arbitrary filtering of messages and forwarding of
>messages over the network. This Change targets the latter.
>The high-level goal is to have a mechanism where journal logging can be
>configured to keep a copy of logs on a remote server, without requiring any
>maintenance, fairly efficiently, and in a secure way.
>Two new daemons are added as part of the systemd package:
>* on the receiver side systemd-journal-remote accepts messages in the Journal
>Export Format [1]. The export format is a simple serialization of journal
>entries, supporting both text and binary fields. This means that the messages
>are transferred intact, apart from the "cursors", which specify the location
>in the journal file. Received entries are stored in local journal files
>underneath /var/log/journal. Those files are subject to normal journald rules
>for rotation, and the older ones will be removed as necessary to stay within
>disk usage limits. Once entries have been written to the journal file, they
>can be read using journalctl and the journal APIs, and are available to all
>clients, e.g. Gnome Logs [2].
>* on the sender side systemd-journal-upload is a journal client, which exports
>all available journal messages and uploads them over the network. The (local)
>cursor of the last message successfully forwarded is stored on disk, so when
>systemd-journal-upload is restarted (possibly after a reboot of the machine),
>it will send all recent messages found in the journal and then new ones as
>they arrive.
>The communication between the two daemons is done over standard HTTPS,
>following rather simple rules, so it is possible to create alternate
>implementations without much work. For example, curl can be easily used to
>upload journal entries from a text file containing entries in the export
>format. Basically, the data are sent in an HTTP POST to /upload with Content-
>Type: application/vnd.fdo.journal. When doing "live" forwarding, the size of
>the transfer cannot be known in advance, so Transfer-Encoding: chunked is
>used. All communication is encrypted, and the identity of both sides is
>verified by checking for appropriate signatures on the certificates.
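To make the curl usage mentioned above concrete — the server name, certificate paths, and entries file below are placeholders, not part of the proposal; 19532 is, as far as I know, the default systemd-journal-remote port — a one-shot upload might look like:

```shell
# Hypothetical one-shot upload of pre-recorded export-format entries.
# Host, certificate paths, and entries.txt are assumptions for illustration.
curl --cacert /etc/pki/journal/ca.pem \
     --cert   /etc/pki/journal/client.pem \
     --key    /etc/pki/journal/client.key \
     -H 'Content-Type: application/vnd.fdo.journal' \
     --data-binary @entries.txt \
     https://logsrv.example.com:19532/upload
```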
How are certificates managed for the sender and receiver parts?
Who generates them? Do you require explicit placement of the
certificates before enabling the service?

/ Alexander Bokovoy
