Minor bug report - epylock missing, cron fails
by Jon Peck
Hi,
Thanks for all the recent development, epylog's a great resource!
I think there's a minor oversight in the installer or an OS-specific
assumption. Full disclosure: I had been using epylog 1.0.3 on Ubuntu 10.04
LTS, did a purge remove, and installed 1.0.7 from source. That said, I
don't think that was the cause of the issue.
xxx@yyy:~/tmp/epylog-1.0.7$ sudo /usr/sbin/epylog --cron
Traceback (most recent call last):
  File "/usr/sbin/epylog", line 300, in <module>
    main(sys.argv)
  File "/usr/sbin/epylog", line 257, in main
    epylock()
  File "/usr/sbin/epylog", line 144, in epylock
    if not msg.strerror == "File exists": raise msg
OSError: [Errno 2] No such file or directory: '/usr/var/run/epylog.pid'
xxx@yyy:~/tmp/epylog-1.0.7$ ls -la /usr/var
total 12
drwxr-xr-x 3 root root 4096 2011-09-06 13:25 .
drwxr-xr-x 13 root root 4096 2011-09-06 13:25 ..
drwxr-xr-x 3 root root 4096 2012-02-20 20:20 lib
That explains why there's no file.
xxx@yyy:~/tmp/epylog-1.0.7$ sudo mkdir -p /usr/var/run
Then it executed normally.
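For what it's worth, a more defensive epylock() could create the pidfile's parent directory before locking, which would sidestep the missing /usr/var/run entirely. A minimal sketch of that idea (the function body below is my suggestion, not epylog's actual code):

```python
import errno
import os

def epylock(pidfile):
    """Create a lock/pid file, making its parent directory first if needed.

    Hypothetical fix sketch -- not epylog's actual epylock().
    """
    piddir = os.path.dirname(pidfile)
    if not os.path.isdir(piddir):
        os.makedirs(piddir)  # avoids ENOENT when e.g. /usr/var/run is absent
    try:
        # O_EXCL makes creation atomic: a second cron instance gets EEXIST
        fd = os.open(pidfile, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
    except OSError as e:
        if e.errno == errno.EEXIST:
            raise SystemExit('another epylog instance appears to be running')
        raise
    os.write(fd, ('%d\n' % os.getpid()).encode())
    os.close(fd)
```

Alternatively, the installer could just create the run directory at install time.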
Best regards,
Jon Peck
[epylog] Created tag v1.0.7
by Konstantin Ryabitsev
The signed tag 'v1.0.7' was created.
Tagger: Konstantin Ryabitsev <mricon(a)kernel.org>
Date: Fri Feb 10 22:22:35 2012 -0500
Really tag for 1.0.7
Changes since the last tag 'v1.0.6':
Konstantin Ryabitsev (2):
Do not return an error on 0-length log files.
Quickfix for 1.0.7 (I hate autoconf)
[epylog] Deleted tag v1.0.7
by Konstantin Ryabitsev
The signed tag 'v1.0.7' was deleted. It previously pointed to:
e2f8780... Do not return an error on 0-length log files.
[epylog/stable-1.0.x] Quickfix for 1.0.7 (I hate autoconf)
by Konstantin Ryabitsev
commit bf6188bbb334522a91f0626f6c95c9a854977269
Author: Konstantin Ryabitsev <mricon(a)kernel.org>
Date: Fri Feb 10 22:21:38 2012 -0500
Quickfix for 1.0.7 (I hate autoconf)
configure.in | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
---
diff --git a/configure.in b/configure.in
index b003f4e..fce006f 100644
--- a/configure.in
+++ b/configure.in
@@ -1,6 +1,6 @@
dnl Process this file with autoconf to produce a configure script
-AC_INIT(epylog.spec)
+AC_INIT(epylog.in)
PACKAGE_TARNAME=epylog
PACKAGE_VERSION=1.0.7
[epylog] Created tag v1.0.7
by Konstantin Ryabitsev
The signed tag 'v1.0.7' was created.
Tagger: Konstantin Ryabitsev <mricon(a)kernel.org>
Date: Fri Feb 10 22:11:28 2012 -0500
Tag for Epylog-1.0.7
Changes since the last tag 'v1.0.6':
Konstantin Ryabitsev (1):
Do not return an error on 0-length log files.
[epylog/stable-1.0.x] Do not return an error on 0-length log files.
by Konstantin Ryabitsev
commit e2f8780f9b26316a67e14cdafe2ba06815cb0dbf
Author: Konstantin Ryabitsev <mricon(a)kernel.org>
Date: Fri Feb 10 22:07:29 2012 -0500
Do not return an error on 0-length log files.
ChangeLog | 10 ++-
configure.in | 2 +-
epylog.spec | 199 -------------------------------------------------
py/epylog/__init__.py | 11 +++-
py/epylog/log.py | 32 +++++---
py/epylog/module.py | 7 ++
6 files changed, 45 insertions(+), 216 deletions(-)
---
diff --git a/ChangeLog b/ChangeLog
index a855c09..aca45a9 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,7 @@
+Epylog-1.0.7
+ * Re-apply a fix to not return an error with a 0-length log file.
+ * Remove unmaintained spec file.
+
Epylog-1.0.6
* Back out the unfinished work to support timestamped rotation
(will be implemented fully in 1.1). Fixes the cron-run problem
@@ -30,14 +34,14 @@ Epylog-1.0.1
not work.
* Automatically detect python version during .spec building.
* Cleaned up trojans.list so it's less ugly
-
+
Epylog-1.0
* Out with 1.0 already!
* Handle ::ffff: fake ipv6 addresses for hostname resolution
* Do not depend on elinks in RPM.
* Default setting is to send html-only (so we don't depend on lynx)
* Packets module can now sort by port, system, and source.
-
+
Epylog-0.9.7
* Accepted Makefile patches from Will Newton
* Accepted patches for missing logs from Will Newton (#135)
@@ -57,7 +61,7 @@ Epylog-0.9.5
* Fix for bug #57
* Fix for bug #53
* Cron mode of operation added -- checks for a lockfile (bug #79)
-
+
Epylog-0.9.4
* Fix for bug #38 (incorrect offsets were causing backtrace)
* Normalized logger calls (bug #9)
diff --git a/configure.in b/configure.in
index 50c5d4b..b003f4e 100644
--- a/configure.in
+++ b/configure.in
@@ -2,7 +2,7 @@ dnl Process this file with autoconf to produce a configure script
AC_INIT(epylog.spec)
PACKAGE_TARNAME=epylog
-PACKAGE_VERSION=1.0.6
+PACKAGE_VERSION=1.0.7
dnl Package information.
PACKAGE=$PACKAGE_TARNAME
diff --git a/py/epylog/__init__.py b/py/epylog/__init__.py
index a8cc68c..ab762de 100644
--- a/py/epylog/__init__.py
+++ b/py/epylog/__init__.py
@@ -47,7 +47,7 @@ from report import Report
from module import Module
from log import LogTracker
-VERSION = 'Epylog-1.0.6'
+VERSION = 'Epylog-1.0.7'
CHUNK_SIZE = 8192
GREP_LINES = 10000
QUEUE_LIMIT = 500
@@ -125,6 +125,15 @@ class NoSuchLogError(exceptions.Exception):
logger.put(5, '!NoSuchLogError: %s' % message)
self.args = message
+class EmptyLogError(exceptions.Exception):
+ """
+ This exception is raised when Epylog finds an empty logfile.
+ """
+ def __init__(self, message, logger):
+ exceptions.Exception.__init__(self)
+ logger.put(5, '!EmptyLogError: %s' % message)
+ self.args = message
+
class GenericError(exceptions.Exception):
"""
This exception is raised for all other Epylog conditions.
diff --git a/py/epylog/log.py b/py/epylog/log.py
index 30b2e39..f0750a8 100644
--- a/py/epylog/log.py
+++ b/py/epylog/log.py
@@ -436,10 +436,19 @@ class Log:
self.entry = entry
filename = self._get_filename()
logger.puthang(3, 'Initializing the logfile "%s"' % filename)
- logfile = LogFile(filename, tmpprefix, monthmap, logger)
+ self.loglist = []
+ self.cur_rot_ix = 0
+ try:
+ logfile = LogFile(filename, tmpprefix, monthmap, logger)
+ logger.put(3, 'Appending logfile to the loglist')
+ self.loglist.append(logfile)
+ except epylog.EmptyLogError:
+ logger.endhang(3)
+ logger.puthang(3, '%s is empty, using the previous rotated log'
+ % filename)
+ self._init_next_rotfile()
+ logfile = self.loglist[0]
logger.endhang(3)
- logger.put(3, 'Appending logfile to the loglist')
- self.loglist = [logfile]
self.orange = OffsetRange(0, 0, 0, logfile.end_offset, logger)
logger.endhang(3)
self.lp = None
@@ -805,24 +814,24 @@ class Log:
"""
logger = self.logger
logger.put(5, '>Log._init_next_rotfile')
- ix = len(self.loglist)
- rotname = self._get_rotname_by_ix(ix)
+ self.cur_rot_ix += 1
+ rotname = self._get_rotname_by_ix(self.cur_rot_ix)
try:
logger.put(3, 'Initializing log for rotated file "%s"' % rotname)
rotlog = LogFile(rotname, self.tmpprefix, self.monthmap, logger)
+ self.loglist.append(rotlog)
except epylog.AccessError:
msg = 'No further rotated files for entry "%s"' % self.entry
raise epylog.NoSuchLogError(msg, logger)
- self.loglist.append(rotlog)
+ except epylog.EmptyLogError:
+ msg = 'Found an empty rotated log, ignoring it.'
+ rotlog = self._init_next_rotfile()
logger.put(5, '<Log._init_next_rotfile')
return rotlog
def _get_rotname_by_ix(self, ix):
"""
- The good thing about rotated files is that they are exactly at the same
- position in the log list, as the identifier appended to them by
- logrotate. E.g. messages.1 will be at position 1, messages.2 at
- position 2, and just messages at position 0.
+ Figure out the rotated file name by index passed.
"""
logger = self.logger
logger.put(5, '>Log._get_rotname_by_ix')
@@ -1300,8 +1309,7 @@ class LogFile:
logger.put(3, 'Making it 0')
stamp = 0
else:
- logger.put(5, 'Nothing in the range')
- stamp = 0
+ raise epylog.EmptyLogError('%s is empty' % self.filename, logger)
logger.put(5, '<LogFile._get_stamp')
return stamp
diff --git a/py/epylog/module.py b/py/epylog/module.py
index 7eaded2..600c5a1 100644
--- a/py/epylog/module.py
+++ b/py/epylog/module.py
@@ -123,6 +123,13 @@ class Module:
#
logger.put(0, 'Could not init logfile for entry "%s"' % entry)
continue
+ except epylog.NoSuchLogError:
+ ##
+ # Looks like all logfiles for this log entry are empty.
+ # Ignore this log entry.
+ logger.put(1, ('No logs found for %s, or they are all empty, '+
+ 'ignoring.') % entry)
+ continue
logger.put(5, 'Appending the log object to self.logs[]')
self.logs.append(log)
if len(self.logs) == 0:
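The pattern this commit introduces -- raise a dedicated exception when a logfile is empty and let the caller fall through to the next rotated log -- can be sketched in isolation (class and function names below are simplified illustrations, not epylog's actual API):

```python
import os

class EmptyLogError(Exception):
    """Raised when a logfile exists but holds no data."""

def open_log(path):
    # An existing-but-empty file is a distinct condition from a missing one
    if os.path.getsize(path) == 0:
        raise EmptyLogError('%s is empty' % path)
    return open(path)

def first_usable_log(paths):
    """Walk logs in rotation order, skipping empty ones.

    Illustrative analogue of what Log falling back to rotated files does.
    """
    for path in paths:
        try:
            return open_log(path)
        except EmptyLogError:
            continue
    raise LookupError('no logs found, or they are all empty')
```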
[epylog] Big changes for 1.1.
by Konstantin Ryabitsev
commit 5a4dfaf114bab217c1196da530c1b2f3566bfe00
Author: Konstantin Ryabitsev <mricon(a)kernel.org>
Date: Thu Feb 9 10:31:24 2012 -0500
Big changes for 1.1.
- Ripped out autoconf
- Dropped any support for external modules
- Removed perl bits
- Switched to using python's own logging module
- Define logs in the logsources.conf file
- Switch notices from xml to yaml (not working at all yet)
- Move stuff around a bit for simplicity's sake
TODO:
- Modify modules to work with the new logging changes
- Fix notices module to work with yaml and the new format
- Support for pre-matching based on tag
- Fix support for storing and retrieving offsets
(running epylog from cron is broken for now, only with --last)
- Whatever else requires fixing
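One of the changes listed above is the switch to Python's standard logging module; the patch maps the old 0-5 debug scale onto logging's numeric levels via `50 - 10 * level`, clamped to the DEBUG..CRITICAL range. A standalone sketch of that mapping (the function name is mine, not epylog's):

```python
import logging

def debuglvl_to_logging(o_debug, quiet=False):
    """Map epylog's 0-5 debug scale to logging levels (0 -> CRITICAL, 5 -> DEBUG)."""
    if quiet:  # --quiet is identical to -d 0
        return logging.CRITICAL
    lvl = 50 - 10 * int(o_debug)
    # Clamp: values past 5 stay at DEBUG, negative input stays at CRITICAL
    return max(logging.DEBUG, min(logging.CRITICAL, lvl))
```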
ChangeLog | 11 +-
Makefile.in | 92 --
compiledir.in | 12 -
configure.in | 159 ---
cron/Makefile.in | 66 -
cron/epylog.cron | 2 +
cron/epylog.cron.in | 6 -
epylog.in => epylog.py | 267 +++--
epylog.spec | 199 ---
{py/epylog => epylog}/__init__.py | 690 +++++-----
{py/epylog => epylog}/helpers.py | 109 +-
epylog/log.py | 1393 +++++++++++++++++++
epylog/module.py | 296 +++++
{py/epylog => epylog}/mytempfile.py | 0
epylog/publishers.py | 657 +++++++++
epylog/report.py | 232 ++++
etc/Makefile.in | 81 --
etc/{epylog.conf.in => epylog.conf} | 43 +-
etc/logsources.conf | 19 +
etc/modules.d/Makefile.in | 67 -
...ommon_unparsed.conf.in => common_unparsed.conf} | 9 +-
etc/modules.d/{logins.conf.in => logins.conf} | 18 +-
etc/modules.d/{mail.conf.in => mail.conf} | 9 +-
etc/modules.d/{notices.conf.in => notices.conf} | 13 +-
etc/modules.d/ntp.conf | 7 +
etc/modules.d/ntp.conf.in | 8 -
etc/modules.d/{packets.conf.in => packets.conf} | 11 +-
etc/modules.d/{rsyncd.conf.in => rsyncd.conf} | 7 +-
etc/modules.d/selinux.conf | 10 +
etc/modules.d/selinux.conf.in | 11 -
etc/modules.d/smart.conf | 7 +
etc/modules.d/smart.conf.in | 8 -
etc/modules.d/{spamd.conf.in => spamd.conf} | 7 +-
etc/modules.d/sudo.conf | 9 +
etc/modules.d/sudo.conf.in | 11 -
etc/modules.d/weeder.conf | 10 +
etc/modules.d/weeder.conf.in | 30 -
etc/modules.d/yum.conf | 9 +
etc/modules.d/yum.conf.in | 11 -
etc/notice_dist.xml | 105 --
etc/notice_dist.yaml | 85 ++
etc/notice_local.xml | 19 -
etc/notice_local.yaml | 18 +
etc/report_template.html | 14 +-
etc/trojans.list | 410 ------
install-sh | 251 ----
man/Makefile.in | 59 -
mkinstalldirs | 40 -
modules/Makefile.in | 62 -
modules/sudo_mod.py | 51 +-
modules/weeder_mod.py | 85 +-
modules/yum_mod.py | 53 +-
perl/Makefile.in | 56 -
perl/epylog.pm | 428 ------
py/Makefile.in | 58 -
py/epylog/log.py | 1401 --------------------
py/epylog/module.py | 403 ------
py/epylog/publishers.py | 608 ---------
py/epylog/report.py | 355 -----
59 files changed, 3505 insertions(+), 5662 deletions(-)
---
diff --git a/ChangeLog b/ChangeLog
index 9e0b3c7..9ffeec8 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,12 @@
+Epylog-1.1
+ * Rip out autoconf
+ * Rework to use python's internal logging module
+ * Use .format() wherever possible
+ * Define log sources in logsources.conf
+ * Use python's template language for most things
+
+-------------------------------------------------------------------------------
+
Epylog-1.0.4
* Be more lenient about syslog format (FC7 changes) (ticket #4)
* Add a "save_rawlogs" option to file publisher, and don't save them
@@ -28,7 +37,7 @@ Epylog-1.0
* Do not depend on elinks in RPM.
* Default setting is to send html-only (so we don't depend on lynx)
* Packets module can now sort by port, system, and source.
-
+
Epylog-0.9.7
* Accepted Makefile patches from Will Newton
* Accepted patches for missing logs from Will Newton (#135)
diff --git a/cron/epylog.cron b/cron/epylog.cron
new file mode 100644
index 0000000..9555571
--- /dev/null
+++ b/cron/epylog.cron
@@ -0,0 +1,2 @@
+#!/bin/sh
+/usr/sbin/epylog --cron
diff --git a/epylog.in b/epylog.py
similarity index 55%
rename from epylog.in
rename to epylog.py
index 68abbb5..40ceedf 100644
--- a/epylog.in
+++ b/epylog.py
@@ -1,6 +1,7 @@
-#!%%PYTHON_BIN%%
+#!/usr/bin/python -tt
##
-# Copyright (C) 2003 by Duke University
+# Copyright (C) 2003-2005 by Duke University
+# Copyright (C) 2005-2012 by Konstantin Ryabitsev and contributors
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
@@ -17,23 +18,19 @@
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.
#
-# $Id$
-#
-# @Author Konstantin Ryabitsev <icon(a)linux.duke.edu>
-# @version $Date$
-#
import os
import sys
-import getopt
import time
-import libxml2
-sys.path.insert(0, '%%PY_MODULE_DIR%%')
-from epylog import *
+from optparse import OptionParser
+
+from epylog import Epylog, VERSION, ConsoleUi, ConfigError, ModuleError
-DEFAULT_EPYLOG_CONFIG = '%%pkgconfdir%%/epylog.conf'
-EPYLOG_PIDFILE = '%%localstatedir%%/run/epylog.pid'
+import logging
+
+DEFAULT_EPYLOG_CONFIG = '/etc/epylog/epylog.conf'
+EPYLOG_PIDFILE = '/var/run/epylog.pid'
def unxmlify_offsets(ofile, logger):
"""
@@ -173,133 +170,191 @@ def parselast(last):
"""
Make sense of the --last value
"""
- msg = "Unknown setting for --last: %s. See --help" % last
- if last == 'hour': last = '1h'
- elif last == 'day': last = '1d'
- elif last == 'week': last = '1w'
- elif last == 'month': last = '1m'
- cat = last[-1:].lower()
- try: num = int(last[:-1])
- except: sys.exit(msg)
- if cat == 'h': mult = 1
- elif cat == 'd': mult = 24
- elif cat == 'w': mult = 24*7
- elif cat == 'm': mult = 24*30
- else: sys.exit(msg)
- now = int(time.time())
- then = now - (num * mult * 60 * 60)
- return then
+ msg = "Unknown setting for --last: {}. Seek --help.".format(last)
-def usage():
- print """
- Usage: epylog [--quiet] [--store-offsets] [--last] [-c] [-d]
+ wordmap = {
+ 'hour' : '1h',
+ 'day' : '1d',
+ 'week' : '1w',
+ 'month' : '1m'
+ }
- -c config-file
- read a custom config file instead of /etc/epylog/epylog.conf
+ # seconds
+ multmap = {
+ 'h' : 3600,
+ 'd' : 86400,
+ 'w' : 604800,
+ 'm' : 2592000
+ }
- -d debug-level
- a number from 0 to 5. 0 means only critical output, while 5
- means lots and lots of debugging info.
+ if last in wordmap.keys():
+ last = wordmap[last]
- --store-offsets
- this will store an offset.xml file in /var/lib/epylog. This
- is useful when running epylog from cron, since then it relies
- on actual offsets as opposed to timestamps, which do not have to
- be accurate.
+ cat = last[-1:].lower()
- --quiet
- completely identical to -d 0
+ if cat not in multmap.keys():
+ sys.exit(msg)
- --cron
- Equivalent of --quiet --store-offsets, plus it will create a
- lock file that will not allow more than one cron instance of
- epylog to run.
+ try:
+ num = int(last[:-1])
+ except ValueError:
+ sys.exit(msg)
- --last [hour|day|week|month|Nh|Nd|Nw|Nm]
- will analyze strings from the past [time period] specified.
+ now = int(time.time())
+ then = now - (num * multmap[cat])
- If no command-line options are provided, then the logs will be
- processed in their entirety (WARNING: this can mean a LOT of logs).
- A useful way to init a system would be to run:
- epylog --last [hour|day] --store-offsets
+ return then
- Example:
- epylog --last day
+def legacyput(level, message):
+ """
+ Legacy wrapper for the old-style logger.
"""
- sys.exit(1)
+ logger = logging.getLogger('epylog')
+
+ message = 'LEGACY PUT: ' + message
+
+ if level < 3:
+ logger.warning(message)
+ return
+
+ if level == 3:
+ logger.info(message)
+ return
+
+ logger.debug(message)
+ return
+
+def legacyendhang(level, message='done'):
+ return
def main(args):
- debuglvl = 1
- o_stor = 0
- o_stamp = 0
- o_cron = 0
- config_file = DEFAULT_EPYLOG_CONFIG
- cmdargs = args[1:]
- try:
- gopts, cmds = getopt.getopt(cmdargs, 'd:c:h',
- ['quiet', 'store-offsets', 'last=',
- 'help', 'cron'])
- for o,a in gopts:
- if o == '-d': debuglvl = int(a)
- elif o == '--quiet': debuglvl = 0
- elif o == '--store-offsets': o_stor = 1
- elif o == '--cron': o_cron = 1
- elif o == '--last': o_stamp = parselast(a)
- elif o == '-c': config_file = a
- elif o == '-h' or o == '--help': usage()
- except getopt.error, e:
- print 'Error: %s' % e
- usage()
- if o_cron:
- ##
- # Cron mode. Try to lock, and set --quiet and --store-offsets
- #
+
+ usage = ''' Usage: %prog [options]
+ Analyze logs and produce a log report.'''
+
+ parser = OptionParser(usage=usage, version=VERSION)
+
+ parser.add_option('--debuglevel', '-d', dest='o_debug',
+ default=1,
+ help=('a number from 0 to 5. 0 means only critical output, while '
+ '5 means lots and lots of debugging info.'))
+ parser.add_option('--quiet', '-q', dest='o_quiet', action='store_true',
+ default=False,
+ help='Completely identical to -d 0')
+ parser.add_option('--store-offsets', '-s', dest='o_stor',
+ action='store_true', default=False,
+ help=('After processing the logs, record the position of the '
+ 'latest location of each logfile. This is useful when running '
+ 'epylog from cron, since then it relies on the latest-processed '
+ 'position within the logfile and not on timestamps, which do '
+ 'not have to be accurate.'))
+ parser.add_option('--cron', '-n', dest='o_cron', action='store_true',
+ default=False,
+ help=('Equivalent of --quiet --store-offsets, plus it will create a '
+ 'lock file that will not allow more than one cron instance of '
+ 'epylog to run.'))
+ parser.add_option('--last', '-l', dest='o_last',
+ default=None,
+ help=('Will analyze entries from the past '
+ '[hour|day|week|month|Nh|Nd|Nw|Nm] specified. A useful '
+ 'way to init epylog is to run:\n'
+ 'epylog --last day --store-offsets'))
+ parser.add_option('--config-file', '-c', dest='o_config',
+ default=DEFAULT_EPYLOG_CONFIG,
+ help=('read a custom config file instead of %default'))
+
+ (opts, args) = parser.parse_args()
+
+ if args:
+ parser.error('Epylog takes no arguments.')
+
+ if opts.o_quiet:
+ debuglvl = logging.CRITICAL
+ else:
+ debuglvl = 50 - (10*int(opts.o_debug))
+ if debuglvl < 10:
+ debuglvl = 10
+ elif debuglvl > 50:
+ debuglvl = 50
+
+ if opts.o_cron:
epylock()
- o_stor = 1
- debuglvl = 0
- logger = Logger(debuglvl)
- logger.puthang(1, 'Initializing epylog')
+
+ o_stor = True
+ debuglvl = logging.CRITICAL
+ opts.o_quiet = True
+
+ logger = logging.getLogger('epylog')
+ logger.setLevel(debuglvl)
+
+ ch = logging.StreamHandler()
+ ch.setLevel(debuglvl)
+ formatter = logging.Formatter("[%(levelname)s:%(module)s:%(funcName)s:"
+ "%(lineno)s] %(message)s")
+ ch.setFormatter(formatter)
+ logger.addHandler(ch)
+
+ o_stor = opts.o_stor
+ o_cron = opts.o_cron
+ config_file = opts.o_config
+
+ #logger.put = legacyput
+ #logger.puthang = legacyput
+ #logger.endhang = legacyendhang
+
+ logger.debug('Config file set to {}'.format(config_file))
+
+ if opts.o_last is not None:
+ o_stamp = parselast(opts.o_last)
+
+ ui = ConsoleUi(opts.o_quiet)
+ ui.puthang('Initializing {}'.format(VERSION))
+
try:
- epylog = Epylog(config_file, logger)
+ epylog = Epylog(config_file, ui)
except (ConfigError, ModuleError), e:
- logger.put(0, "Error returned: %s" % e)
+ logger.critical("Error returned: {}".format(e))
sys.exit(1)
- logger.endhang(1, 'done')
- if o_stamp == 0:
+
+ ui.endhang('done')
+
+ if not opts.o_last:
+ # TODO: Totally broken for now
logger.puthang(1, 'Restoring log offsets')
restore_offsets(epylog)
logger.endhang(1, 'done')
else:
- logger.puthang(1, 'Setting the offsets by timestamp')
+ ui.puthang('Setting the offsets by timestamp')
epylog.logtracker.set_range_by_timestamps(o_stamp, int(time.time()))
- logger.endhang(1)
- logger.put(1, 'Invoking the module execution routines:')
+ ui.endhang('done')
+
+ ui.put('Invoking the module execution routines:')
epylog.process_modules()
- logger.put(1, 'Finished processing modules')
- logger.puthang(1, 'Making the report')
+ ui.put('Finished processing modules')
+
+ ui.puthang('Making the report')
useful = epylog.make_report()
- logger.endhang(1, 'done')
+ ui.endhang('done')
+
if useful:
- logger.puthang(1, 'Publishing the report')
+ ui.puthang('Publishing the report')
epylog.publish_report()
- logger.endhang(1, 'done')
+ ui.endhang('done')
+
if o_stor:
logger.puthang(1, 'Storing the offsets')
store_offsets(epylog)
logger.endhang(1, 'done')
else:
- logger.put(1, 'Report is empty. Exiting.')
+ ui.put('Report is empty. Exiting.')
- logger.puthang(1, 'Cleaning up')
+ ui.puthang('Cleaning up')
epylog.cleanup()
- logger.endhang(1, 'done')
- if o_cron: epyunlock()
+ ui.endhang('done')
+ if o_cron:
+ epyunlock()
if __name__ == '__main__':
main(sys.argv)
-##
-# local variables:
-# mode: python
-# end:
diff --git a/py/epylog/__init__.py b/epylog/__init__.py
similarity index 50%
rename from py/epylog/__init__.py
rename to epylog/__init__.py
index ea1827a..be284a7 100644
--- a/py/epylog/__init__.py
+++ b/epylog/__init__.py
@@ -3,6 +3,7 @@ This module contains the main classes and methods for Epylog.
"""
##
# Copyright (C) 2003 by Duke University
+# Copyright (C) 2005-2012 by Konstantin Ryabitsev and contributors
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
@@ -19,10 +20,7 @@ This module contains the main classes and methods for Epylog.
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.
#
-# $Id$
-#
-# @Author Konstantin Ryabitsev <icon(a)linux.duke.edu>
-# @version $Date$
+# @Author Konstantin Ryabitsev <icon(a)mricon.com>
#
import ConfigParser
@@ -36,73 +34,69 @@ import pwd
import socket
import sys
-if 'mkdtemp' not in dir(tempfile):
- ##
- # Must be python < 2.3
- #
- del tempfile
- import mytempfile as tempfile
+from string import Template
from report import Report
from module import Module
-from log import LogTracker
+from log import LogTracker
+
+import logging
-VERSION = 'Epylog-1.0.3'
-CHUNK_SIZE = 8192
-GREP_LINES = 10000
+VERSION = 'Epylog-1.0.3'
+CHUNK_SIZE = 8192
+GREP_LINES = 10000
QUEUE_LIMIT = 500
-LOG_SPLIT_RE = re.compile(r'(.{15,15})\s+(\S+)\s+(.*)$')
-SYSLOG_NG_STRIP = re.compile(r'.*[@/]')
+
+# TODO: support other timestamp formats
+LOG_SPLIT_RE = re.compile(r'(.{15,15})\s+(\S+)\s+(.*)$')
+SYSLOG_NG_STRIP = re.compile(r'.*[@/]')
MESSAGE_REPEATED_RE = re.compile(r'last message repeated (\S+) times')
+logger = logging.getLogger('epylog')
+
class FormatError(exceptions.Exception):
"""
This exception is raised when there are problems with the syslog
line processed.
"""
- def __init__(self, message, logger):
+ def __init__(self, message):
exceptions.Exception.__init__(self)
- logger.put(5, '!FormatError: %s' % message)
- self.args = message
+ logger.debug('!FormatError: {}'.format(message))
class ConfigError(exceptions.Exception):
"""
This exception is raised when there are misconfiguration problems.
"""
- def __init__(self, message, logger):
- exceptions.Exception.__init__(self)
- logger.put(5, '!ConfigError: %s' % message)
- self.args = message
+ def __init__(self, message):
+ exceptions.Exception.__init__(self, message)
+ logger.debug('!ConfigError: {}'.format(message))
class AccessError(exceptions.Exception):
"""
This exception is raised when there are errors accessing certain
components of Epylog, log files, or temporary writing spaces.
"""
- def __init__(self, message, logger):
- exceptions.Exception.__init__(self)
- logger.put(5, '!AccessError: %s' % message)
- self.args = message
+ def __init__(self, message):
+ exceptions.Exception.__init__(self, message)
+ logger.debug('!AccessError: {}'.format(message))
class OutOfRangeError(exceptions.Exception):
"""
This happens when Epylog tries to access a line in a logfile that is
outside the specified range.
"""
- def __init__(self, message, logger):
- exceptions.Exception.__init__(self)
- logger.put(5, '!OutOfRangeError: %s' % message)
- self.args = message
+ def __init__(self, message):
+ exceptions.Exception.__init__(self, message)
+ logger.debug('!OutOfRangeError: {}'.format(message))
class ModuleError(exceptions.Exception):
"""
This exception is raised when an Epylog module crashes or otherwise
creates a problem.
"""
- def __init__(self, message, logger):
- exceptions.Exception.__init__(self)
- logger.put(5, '!ModuleError: %s' % message)
- self.args = message
+ def __init__(self, message):
+ exceptions.Exception.__init__(self, message)
+ logger.debug('!ModuleError: {}'.format(message))
class SysCallError(exceptions.Exception):
"""
@@ -110,272 +104,252 @@ class SysCallError(exceptions.Exception):
successful. Most notable ones are grep (only used with external modules)
and lynx/links/w3m.
"""
- def __init__(self, message, logger):
- exceptions.Exception.__init__(self)
- logger.put(5, '!SysCallError: %s' % message)
- self.args = message
+ def __init__(self, message):
+ exceptions.Exception.__init__(self, message)
+ logger.debug('!SysCallError: {}'.format(message))
class NoSuchLogError(exceptions.Exception):
"""
This exception is raised when Epylog tries to access or initialize a
logfile that does not exist.
"""
- def __init__(self, message, logger):
- exceptions.Exception.__init__(self)
- logger.put(5, '!NoSuchLogError: %s' % message)
- self.args = message
+ def __init__(self, message):
+ exceptions.Exception.__init__(self, message)
+ logger.debug('!NoSuchLogError: {}'.format(message))
class GenericError(exceptions.Exception):
"""
This exception is raised for all other Epylog conditions.
"""
- def __init__(self, message, logger):
- exceptions.Exception.__init__(self)
- logger.put(5, '!GenericError: %s' % message)
- self.args = message
+ def __init__(self, message):
+ exceptions.Exception.__init__(self, message)
+ logger.debug('!GenericError: {}'.format(message))
class Epylog:
"""
This is the core class of Epylog. A UI would usually communicate
with it an it only.
"""
- def __init__(self, cfgfile, logger):
+ def __init__(self, cfgfile, ui):
"""
- UIs may override the included logger, which would be useful for
+ UIs may override the included ui, which would be useful for
things like a possible GTK interface, a web interface, etc.
"""
- self.logger = logger
- logger.put(5, '>Epylog.__init__')
+ self.ui = ui
config = ConfigParser.ConfigParser()
- logger.puthang(3, 'Reading the config file "%s"' % cfgfile)
- try: config.read(cfgfile)
+ logger.info('Reading the config file "{}"'.format(cfgfile))
+
+ try:
+ config.read(cfgfile)
except:
- msg = 'Could not read/parse config file "%s"' % cfgfile
- raise ConfigError(msg, logger)
- logger.endhang(3)
- ##
- # Read in the main configuration
- #
- logger.puthang(3, "Reading in main entries")
+ msg = 'Could not read/parse config file "{}"'.format(cfgfile)
+ raise ConfigError(msg)
+
+ logger.debug('Reading in main entries')
+
try:
self.cfgdir = config.get('main', 'cfgdir')
self.vardir = config.get('main', 'vardir')
except:
- msg = 'Could not parse the main config file "%s"' % cfgfile
- raise ConfigError(msg, logger)
- logger.put(5, 'cfgdir=%s' % self.cfgdir)
- logger.put(5, 'vardir=%s' % self.vardir)
- logger.endhang(3)
-
- logger.put(3, 'Checking if we can write to vardir')
+ msg = 'Could not parse the main config file "{}"'.format(cfgfile)
+ raise ConfigError(msg)
+
+ # use this to perform template operations
+ config.paths = {
+ 'cfgdir': self.cfgdir,
+ 'vardir': self.vardir
+ }
+
+ moduledir = config.get('main', 'moduledir')
+
+ self.moduledir = Template(moduledir).safe_substitute(config.paths)
+ config.paths['moduledir'] = self.moduledir
+
+ logger.debug(('cfgdir={self.cfgdir}, vardir={self.vardir}, '
+ 'moduledir={self.moduledir}').format(self=self))
+
if not os.access(self.vardir, os.W_OK):
- msg = 'Write access required for vardir "%s"' % self.vardir
- raise ConfigError(msg, logger)
+ msg = 'Write access required for vardir "{}"'.format(self.vardir)
+ raise ConfigError(msg)
##
# Set up a safe temp dir
#
- logger.put(3, 'Setting up a temporary directory')
+ logger.info('Setting up a temporary directory')
try:
tmpdir = config.get('main', 'tmpdir')
+ tmpdir = Template(tmpdir).safe_substitute(config.paths)
+ logger.debug('tmpdir={}'.format(tmpdir))
tempfile.tempdir = tmpdir
- except: pass
- logger.put(3, 'Creating a safe temporary directory')
- try: tmpprefix = tempfile.mkdtemp('EPYLOG')
+ except:
+ # Will use OS-default temp dir
+ pass
+
+ logger.info('Creating a safe temporary directory')
+ try:
+ tmpprefix = tempfile.mkdtemp('.EPYLOG')
except:
- msg = 'Could not create a safe temp directory in "%s"' % tmpprefix
- raise ConfigError(msg, logger)
- self.tmpprefix = tmpprefix
+ msg = 'Could not create a temp directory in "{}"'.format(tmpprefix)
+ raise ConfigError(msg)
+
+ self.tmpprefix = tmpprefix
tempfile.tempdir = tmpprefix
- logger.put(3, 'Temporary directory created in "%s"' % tmpprefix)
- logger.put(3, 'Sticking tmpprefix into config to pass to other objs')
- config.tmpprefix = self.tmpprefix
+
+ logger.info('Temporary directory created in "{}"'.format(tmpprefix))
+ config.paths['tmpprefix'] = tmpprefix
+ config.tmpprefix = tmpprefix
+
##
# Create a file for unparsed strings.
#
- self.unparsed = tempfile.mktemp('UNPARSED')
- logger.put(3, 'Unparsed strings will go into %s' % self.unparsed)
+ self.unparsed = tempfile.mktemp('.UNPARSED')
+ logger.info('Unparsed strings will go into {}'.format(self.unparsed))
+
##
# Get multimatch pref
#
- try: self.multimatch = config.getboolean('main', 'multimatch')
- except: self.multimatch = 0
- logger.put(5, 'multimatch=%d' % self.multimatch)
+ try:
+ self.multimatch = config.getboolean('main', 'multimatch')
+ except:
+ self.multimatch = False
+
+ logger.debug('multimatch={}'.format(self.multimatch))
+
##
# Get threading pref
#
try:
threads = config.getint('main', 'threads')
if threads < 2:
- logger.put(0, 'Threads set to less than 2, fixing')
+ logger.error('Threads set to less than 2, fixing')
threads = 2
self.threads = threads
except:
self.threads = 50
- logger.put(5, 'threads=%d' % self.threads)
+
+ logger.debug('threads={}'.format(self.threads))
+
##
# Initialize the Report object
#
- logger.puthang(3, 'Initializing the Report')
- self.report = Report(config, logger)
- logger.endhang(3)
+ logger.info('Initializing the Report')
+ self.report = Report(config, ui)
##
# Initialize the LogTracker object
#
- logger.puthang(3, 'Initializing the log tracker object')
- logtracker = LogTracker(config, logger)
+ logger.info('Initializing the log tracker object')
+ logtracker = LogTracker(config, ui)
+
self.logtracker = logtracker
- logger.endhang(3)
- ##
- # Process module configurations
- #
+
+ logger.info('Processing module configurations')
+
self.modules = []
- priorities = []
+ priorities = []
+
modcfgdir = os.path.join(self.cfgdir, 'modules.d')
- logger.put(3, 'Checking if module config dir "%s" exists' % modcfgdir)
+ logger.debug('modcfgdir={}'.format(modcfgdir))
+
if not os.path.isdir(modcfgdir):
- msg = 'Module configuration directory "%s" not found' % modcfgdir
- raise ConfigError(msg, logger)
- logger.put(3, 'Looking for module configs in %s' % modcfgdir)
+ msg = 'modules.d not found in "{}"'.format(modcfgdir)
+ raise ConfigError(msg)
+
for file in os.listdir(modcfgdir):
cfgfile = os.path.join(modcfgdir, file)
- if os.path.isfile(cfgfile):
- logger.put(3, 'Found file: %s' % cfgfile)
- if not re.compile('\.conf$').search(cfgfile, 1):
- logger.put(3, 'Not a module config file, skipping.')
- continue
- logger.puthang(3, 'Calling the Module init routines')
- try:
- module = Module(cfgfile, logtracker, tmpprefix, logger)
- except (ConfigError, ModuleError), e:
- msg = 'Module Error: %s' % e
- logger.put(0, msg)
- continue
- logger.endhang(3)
- if module.enabled:
- logger.put(3, 'Module "%s" is enabled' % module.name)
- module.sanity_check()
- self.modules.append(module)
- priorities.append(module.priority)
- else:
- logger.put(3, 'Module "%s" is not enabled, ignoring'
- % module.name)
+
+ if not os.path.isfile(cfgfile) or cfgfile[-5:] != '.conf':
+ logger.debug('Not a config file: {}'.format(cfgfile))
+ continue
+
+ logger.info('Found module config file: {}'.format(cfgfile))
+
+ logger.info('Calling the Module init routines')
+ try:
+ module = Module(cfgfile, logtracker, config, ui)
+ except (ConfigError, ModuleError), e:
+ logger.error('Module error: {}'.format(e))
+ continue
+
+ if module.enabled:
+ logger.info('Module "{}" is enabled'.format(module.name))
+ self.modules.append(module)
+ priorities.append(module.priority)
else:
- logger.put(3, '%s is not a regular file, ignoring' % cfgfile)
- logger.put(3, 'Total of %d modules initialized' % len(self.modules))
+ logger.info('Module "{}" is not enabled, ignoring'.format(
+ module.name))
+
+ logger.info('{} modules initialized'.format(len(self.modules)))
+
if len(self.modules) == 0:
- raise ConfigError('No modules are enabled. Exiting.', logger)
+ raise ConfigError('No modules are enabled. Exiting.')
+
##
# Sort modules by priority
#
- logger.put(3, 'sorting modules by priority')
priorities.sort()
for module in self.modules:
- logger.put(3, 'analyzing module: %s' % module.name)
+ logger.debug('analyzing module priority: {}'.format(module.name))
for i in range(0, len(priorities)):
- try:
- logger.put(5, 'module.priority=%d, priorities[i]=%d'
- % (module.priority, priorities[i]))
- except:
- logger.put(5, 'priorities[i] is module: %s'
- % priorities[i].name)
if module.priority == priorities[i]:
priorities[i] = module
- logger.put(5, 'priorities[i] is now: %s' % module.name)
break
+
self.modules = priorities
- self.imodules = []
- self.emodules = []
- for module in self.modules:
- logger.put(5, 'module: %s, priority: %d'
- % (module.name, module.priority))
- if module.is_internal(): self.imodules.append(module)
- else: self.emodules.append(module)
- logger.put(5, '<Epylog.__init__')
def process_modules(self):
"""
Invoke the modules to process the logfile entries.
"""
- logger = self.logger
- logger.put(5, '>Epylog.process_modules')
- logger.put(3, 'Finding internal modules')
- if len(self.imodules):
+ if len(self.modules):
self._process_internal_modules()
- if len(self.emodules):
- logger.puthang(3, 'Processing external modules')
- for module in self.emodules:
- logger.puthang(1, 'Processing module "%s"' % module.name)
- try:
- module.invoke_external_module(self.cfgdir)
- except ModuleError, e:
- ##
- # Module execution error!
- # Do not die, but provide a visible warning.
- #
- logger.put(0, str(e))
- logger.endhang(1, 'done')
- logger.endhang(3)
- logger.put(5, '<Epylog.process_modules')
def make_report(self):
"""
Create the report based on the result of the Epylog run.
"""
- logger = self.logger
- logger.put(5, '>Epylog.make_report')
for module in self.modules:
- logger.put(3, 'Analyzing reports from module "%s"' % module.name)
- logger.put(5, 'logerport=%s' % module.logreport)
- logger.put(5, 'logfilter=%s' % module.logfilter)
+ logger.info('Analyzing reports from "{}"'.format(module.name))
+ logger.debug('logreport={}'.format(module.logreport))
+ logger.debug('logfilter={}'.format(module.logfilter))
+
if module.logreport is None and module.logfilter is None:
- logger.put(3, 'No output from module "%s"' % module.name)
- logger.put(3, 'Skipping module "%s"' % module.name)
+ logger.info('No output from "{}"'.format(module.name))
+ logger.info('Skipping module "{}"'.format(module.name))
continue
- logger.put(3, 'Preparing a report for module "%s"' % module.name)
+
+ logger.info('Preparing a report for "{}"'.format(module.name))
module_report = module.get_html_report()
+
if module_report is not None:
self.report.append_module_report(module.name, module_report)
- if self.emodules:
- ##
- # We only need filtered strings if we have external modules
- #
- fsfh = module.get_filtered_strings_fh()
- self.report.append_filtered_strings(module.name, fsfh)
- fsfh.close()
+
self.report.set_stamps(self.logtracker.get_stamps())
- logger.put(5, '<Epylog.make_report')
return self.report.is_report_useful()
def publish_report(self):
"""
Publish the report.
"""
- logger = self.logger
- logger.put(5, '>Epylog.publish_report')
- logger.put(3, 'Dumping all log strings into a temp file')
+ logger.info('Dumping all log strings into a temp file')
+
tempfile.tempdir = self.tmpprefix
- rawfh = open(tempfile.mktemp('RAW'), 'w+')
- logger.put(3, 'RAW strings file created in "%s"' % rawfh.name)
+ rawfh = open(tempfile.mktemp('.RAW'), 'w+')
+
+ logger.info('RAW strings file created in "{}"'.format(rawfh.name))
+
self.logtracker.dump_all_strings(rawfh)
- if not self.emodules:
- ##
- # All modules were internal, meaning we have all unparsed
- # strings in the self.unparsed file.
- #
- unparsed = self._get_unparsed()
- else: unparsed = None
+
+ unparsed = self._get_unparsed()
+
self.report.publish(rawfh, unparsed)
- logger.put(5, '<Epylog.publish_report')
def cleanup(self):
"""
Clean up after ourselves.
"""
- logger = self.logger
- logger.put(3, 'Cleanup routine called')
- logger.put(3, 'Removing the temp dir "%s"' % self.tmpprefix)
+ logger.info('Cleanup routine called')
+ logger.info('Removing the temp dir "{}"'.format(self.tmpprefix))
shutil.rmtree(self.tmpprefix)
def _get_unparsed(self):
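The hunk below reworks how each line is routed to module handlers via `module.message_match()`. The underlying dispatch pattern, a dict mapping compiled regexes to handler functions, can be sketched standalone (Python 3 syntax; names here are illustrative, not epylog's API):

```python
import re

def make_dispatcher(regex_map):
    """Return a function that routes a message to the first handler
    whose regex matches, in the spirit of message_match().
    Standalone sketch, not epylog's actual code."""
    def dispatch(message):
        for regex, handler in regex_map.items():
            mo = regex.search(message)
            if mo:
                # Hand the match object to the handler, as the
                # modules hand linemaps to theirs.
                return handler(mo)
        return None
    return dispatch

dispatch = make_dispatcher({
    re.compile(r'session opened for user (\S+)'):
        lambda mo: ('open', mo.group(1)),
    re.compile(r'session closed'):
        lambda mo: ('close', None),
})
```

With `multimatch` off, epylog likewise stops at the first matching module; a multimatch variant would collect results from every matching regex instead of returning on the first hit.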
@@ -391,133 +365,148 @@ class Epylog:
"""
Invoke and process internal (python) modules.
"""
- logger = self.logger
- logger.put(5, '>Epylog._process_internal_modules')
- logger.puthang(1, 'Processing internal modules')
- logger.put(3, 'Collecting logfiles used by internal modules')
upfh = open(self.unparsed, 'w')
- logger.put(3, 'Opened unparsed strings file in "%s"' % self.unparsed)
+ logger.info('Opened unparsed strings file in {}'.format(self.unparsed))
+
logmap = {}
- for module in self.imodules:
+ for module in self.modules:
for log in module.logs:
- try: logmap[log.entry].append(module)
- except KeyError: logmap[log.entry] = [module]
- logger.put(5, 'logmap follows')
- logger.put(5, logmap)
- pq = ProcessingQueue(QUEUE_LIMIT, logger)
- logger.put(3, 'Starting the processing threads')
+ try:
+ logmap[log.stubname].append(module)
+ except KeyError:
+ logmap[log.stubname] = [module]
+
+ logger.debug('logmap={}'.format(logmap))
+
+ # Vive le quebec libre!
+ pq = ProcessingQueue(QUEUE_LIMIT)
+
+ logger.debug('Starting the processing threads')
threads = []
+
try:
- while 1:
- t = ConsumerThread(pq, logger)
+ while True:
+ t = ConsumerThread(pq)
t.start()
threads.append(t)
- if len(threads) > self.threads: break
- for entry in logmap.keys():
- log = self.logtracker.getlog(entry)
- if log.is_range_empty(): continue
+ if len(threads) > self.threads:
+ break
+
+ for stubname in logmap.keys():
+ log = self.logtracker.getlog(stubname)
+
+ if log.is_range_empty():
+ continue
+
matched = 0
- lines = 0
- while 1:
- logger.put(3, 'Getting next line from "%s"' % entry)
+ lines = 0
+
+ while True:
+ logger.info('Getting next line from {}'.format(stubname))
+
try:
linemap = log.nextline()
except FormatError, e:
- logger.put(5, 'Writing the line to unparsed')
+ logger.debug('Writing the line to unparsed')
upfh.write(str(e))
continue
- except OutOfRangeError: break
+ except OutOfRangeError:
+ break
+
lines += 1
- logger.put(5, 'We have the following:')
- logger.put(5, 'line=%s' % linemap['line'])
- logger.put(5, 'stamp=%d' % linemap['stamp'])
- logger.put(5, 'system=%s' % linemap['system'])
- logger.put(5, 'message=%s' % linemap['message'])
- logger.put(5, 'multiplier=%d' % linemap['multiplier'])
+
+ logger.debug('We have the following:')
+ logger.debug('line={}'.format(linemap['line']))
+ logger.debug('stamp={}'.format(linemap['stamp']))
+ logger.debug('system={}'.format(linemap['system']))
+ logger.debug('message={}'.format(linemap['message']))
+ logger.debug('multiplier={}'.format(linemap['multiplier']))
+
match = 0
- for module in logmap[entry]:
- logger.put(5, 'Matching module "%s"' % module.name)
+
+ for module in logmap[stubname]:
+ logger.debug('Matching module: {}'.format(module.name))
+
message = linemap['message']
- handler, regex = module.message_match(message)
+ (handler, regex) = module.message_match(message)
+
linemap['regex'] = regex
+
if handler is not None:
match = 1
pq.put_linemap(linemap, handler, module)
if not self.multimatch:
- logger.put(5, 'multimatch is not set')
- logger.put(5, 'Not matching other modules')
break
+
matched += match
+
if not match:
- logger.put(5, 'Writing the line to unparsed')
+ logger.debug('Writing the line to unparsed')
upfh.write(linemap['line'])
- bartitle = log.entry
- message = '%d of %d lines parsed' % (matched, lines)
- logger.endbar(1, bartitle, message)
+
+ bartitle = log.loglist[0].filename
+ message = '{} of {} lines parsed'.format(matched, lines)
+ self.ui.endbar(bartitle, message)
+
finally:
- logger.put(3, 'Notifying the threads that they may die now')
+ logger.info('Notifying the threads that they may die now')
pq.tell_threads_to_quit(threads)
bartitle = 'Waiting for threads to finish'
bartotal = len(threads)
bardone = 1
for t in threads:
- logger.progressbar(1, bartitle, bardone, bartotal)
+ self.ui.progressbar(bartitle, bardone, bartotal)
t.join()
bardone += 1
- logger.endbar(1, bartitle, 'all threads done')
+ self.ui.endbar(bartitle, 'all threads done')
upfh.close()
- logger.puthang(1, 'Finished all matching, now finalizing')
- for module in self.imodules:
- logger.puthang(1, 'Finalizing "%s"' % module.name)
+
+ self.ui.puthang('Finished all matching, now finalizing')
+ for module in self.modules:
+ self.ui.puthang('Finalizing "{}"'.format(module.name))
try:
rs = pq.get_resultset(module)
try:
module.finalize_processing(rs)
except Exception, e:
- msg = ('Module %s crashed in finalize stage: %s' %
- (module.name, e))
- logger.put(0, msg)
+ msg = ('Module {} crashed in finalize stage: {}'.format(
+ module.name, e))
+ logger.error(msg)
module.no_report()
except KeyError:
module.no_report()
- logger.endhang(1)
- logger.endhang(1)
- logger.endhang(1)
- logger.put(5, '<Epylog._process_internal_modules')
+ self.ui.endhang()
+ self.ui.endhang()
class ProcessingQueue:
"""
This is a standard cookie-cutter helper class for using threads in a
Python application.
"""
- def __init__(self, limit, logger):
- self.logger = logger
- logger.put(5, '>ProcessingQueue.__init__')
- logger.put(3, 'Initializing ProcessingQueue')
+ def __init__(self, limit):
+ logger.info('Initializing ProcessingQueue')
+
self.mon = threading.RLock()
- self.iw = threading.Condition(self.mon)
- self.ow = threading.Condition(self.mon)
- self.lineq = []
+ self.iw = threading.Condition(self.mon)
+ self.ow = threading.Condition(self.mon)
+
+ self.lineq = []
self.resultsets = {}
- self.limit = limit
- self.working = 1
- logger.put(5, '<ProcessingQueue.__init__')
+ self.limit = limit
+ self.working = True
def put_linemap(self, linemap, handler, module):
"""
Accepts a linemap and stores it to be picked up by a thread.
"""
self.mon.acquire()
- logger = self.logger
- logger.put(5, '>ProcessingQueue.put_linemap')
while len(self.lineq) >= self.limit:
- logger.put(5, 'Line queue is full, waiting...')
+ logger.debug('Line queue is full, waiting...')
self.ow.wait()
- self.lineq.append([linemap, handler, module])
- logger.put(3, 'Added a new line in lineq')
- logger.put(5, 'items in lineq: %d' % len(self.lineq))
+ self.lineq.append((linemap, handler, module))
+ logger.debug('Added a new line in lineq')
+ logger.debug('items in lineq: {}'.format(len(self.lineq)))
self.iw.notify()
- logger.put(5, '<ProcessingQueue.put_linemap')
self.mon.release()
def get_linemap(self):
@@ -526,18 +515,18 @@ class ProcessingQueue:
and processes it.
"""
self.mon.acquire()
- logger = self.logger
- logger.put(5, '>ProcessingQueue.get_linemap')
while not self.lineq and self.working:
- logger.put(5, 'Line queue is empty, waiting...')
+ logger.debug('Line queue is empty, waiting...')
self.iw.wait()
if self.working:
item = self.lineq.pop(0)
- logger.put(3, 'Got new linemap for the thread.')
- logger.put(5, 'items in lineq: %d' % len(self.lineq))
+ logger.debug('Got new linemap for the thread.')
+ logger.debug('items in lineq: {}'.format(len(self.lineq)))
self.ow.notify()
- else: item = None
- logger.put(5, '<ProcessingQueue.get_linemap')
+
+ else:
+ item = None
+
self.mon.release()
return item
@@ -547,18 +536,18 @@ class ProcessingQueue:
result and places it here.
"""
self.mon.acquire()
- logger = self.logger
- logger.put(5, '>ProcessingQueue.put_result')
+
if result is not None:
- try: self.resultsets[module].add_result(result)
+ try:
+ self.resultsets[module].add_result(result)
except KeyError:
self.resultsets[module] = Result()
self.resultsets[module].add_result(result)
module.put_filtered(line)
- logger.put(3, 'Added result from module "%s"' % module.name)
+ logger.debug('Added result from module {}'.format(module.name))
else:
- logger.put(3, '"%s" returned result None. Skipping.' % module.name)
- logger.put(5, '<ProcessingQueue.put_result')
+ logger.debug('{} returned None. Skipping.'.format(module.name))
+
self.mon.release()
def get_resultset(self, module):
@@ -566,9 +555,7 @@ class ProcessingQueue:
When all threads are done, the resultset is returned to anyone
interested.
"""
- self.logger.put(5, '>ProcessingQueue.get_resultset')
rs = self.resultsets[module]
- self.logger.put(5, '<ProcessingQueue.get_resultset')
return rs
def tell_threads_to_quit(self, threads):
@@ -576,18 +563,18 @@ class ProcessingQueue:
Tell all threads that they should exit as soon as possible.
"""
self.mon.acquire()
- logger = self.logger
- logger.put(5, '>ProcessingQueue.tell_threads_to_quit')
- logger.put(1, 'Telling all threads to quit')
- logger.put(5, 'Waiting till queue is empty')
+ logger.info('Telling all threads to quit')
+
while self.lineq:
- logger.put(5, 'items in lineq: %d' % len(self.lineq))
+ logger.debug('items in lineq: {}'.format(len(self.lineq)))
self.ow.wait()
- self.logger.put(5, 'working=0')
- self.working = 0
- logger.put(3, 'Sending %d semaphore notifications' % len(threads))
- for t in threads: self.iw.notify()
- logger.put(5, '<ProcessingQueue.tell_threads_to_quit')
+
+ self.working = False
+
+ logger.info('Sending {} semaphore notifications'.format(len(threads)))
+ for t in threads:
+ self.iw.notify()
+
self.mon.release()
class ConsumerThread(threading.Thread):
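The ProcessingQueue changes above follow the classic bounded-buffer pattern: one lock shared by two condition variables (`iw` for consumers, `ow` for producers), with `working` as the shutdown flag. A minimal standalone sketch of the same pattern (Python 3 syntax; class and method names are illustrative, not epylog's API):

```python
import threading

class BoundedQueue:
    """Bounded producer/consumer queue: two condition variables
    sharing one lock, as in ProcessingQueue. Standalone sketch,
    not epylog's actual code."""

    def __init__(self, limit):
        self.mon = threading.RLock()
        self.not_empty = threading.Condition(self.mon)  # consumers wait here
        self.not_full = threading.Condition(self.mon)   # producers wait here
        self.items = []
        self.limit = limit
        self.working = True

    def put(self, item):
        with self.mon:
            while len(self.items) >= self.limit:
                self.not_full.wait()
            self.items.append(item)
            self.not_empty.notify()

    def get(self):
        with self.mon:
            while not self.items and self.working:
                self.not_empty.wait()
            if not self.working and not self.items:
                return None  # shutdown signal for the consumer thread
            item = self.items.pop(0)
            self.not_full.notify()
            return item

    def shutdown(self, nthreads):
        with self.mon:
            self.working = False
            # Wake every blocked consumer so it can observe the flag
            for _ in range(nthreads):
                self.not_empty.notify()
```

Using two conditions over the same lock means a `put` wakes only consumers and a `get` wakes only producers, avoiding spurious wakeups of the wrong side. (One deliberate difference from the original: this sketch drains remaining items before returning `None` on shutdown.)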
@@ -595,30 +582,30 @@ class ConsumerThread(threading.Thread):
This class extends Thread, and is used to thread up the internal
module invocation.
"""
- def __init__(self, queue, logger):
+ def __init__(self, queue):
threading.Thread.__init__(self)
- logger.put(5, '>ConsumerThread.__init__')
- self.logger = logger
self.queue = queue
- logger.put(5, '<ConsumerThread.__init__')
def run(self):
- logger = self.logger
- logger.put(5, '>ConsumerThread.run')
while self.queue.working:
- logger.put(3, '%s: getting a new linemap' % self.getName())
+ logger.debug('{}: getting a new linemap'.format(self.getName()))
item = self.queue.get_linemap()
+
if item is not None:
linemap, handler, module = item
- logger.put(3, '%s: calling the handler' % self.getName())
+ logger.debug('{}: calling the handler'.format(self.getName()))
+
try:
result = handler(linemap)
if result is not None:
line = linemap['line']
- logger.put(5, '%s: returning result' % self.getName())
+ logger.debug('{}: returning result'.format(
+ self.getName()))
self.queue.put_result(line, result, module)
else:
- logger.put(5, '%s: Result is None.' % self.getName())
+ logger.debug('{}: Result is None.'.format(
+ self.getName()))
+
except Exception, e:
erep = 'Handler crash. Dump follows:\n'
erep += ' Thread : %s\n' % self.getName()
@@ -627,11 +614,11 @@ class ConsumerThread(threading.Thread):
erep += ' Error : %s\n' % e
erep += ' Line : %s\n' % linemap['line'].strip()
erep += 'End Dump'
- logger.put(0, erep)
+ logger.error(erep)
else:
- logger.put(5, '%s: Item is none.' % self.getName())
- logger.put(3, '%s: I am now dying' % self.getName())
- logger.put(5, '<ConsumerThread.run')
+ logger.debug('{}: Item is none.'.format(self.getName()))
+
+ logger.debug('{}: I am now dying'.format(self.getName()))
class Result(dict):
"""
@@ -793,88 +780,103 @@ class InternalModule:
return (ksize, 'KB')
return (size, 'Bytes')
-class Logger:
+class ConsoleUi:
"""
- A default command-line logger class. Other GUIs should use their own,
+ A default command-line UI class. Other GUIs should use their own,
but fully implement the API.
"""
- indent = ' '
+ indent = ' '
hangmsg = []
- hanging = 0
+ hanging = False
+ quiet = False
- def __init__(self, loglevel):
- self.loglevel = loglevel
+ def __init__(self, quiet=False):
+ self.quiet = quiet
def is_quiet(self):
"""Check if we should be quiet"""
- if self.loglevel == 0:
- return 1
- else:
- return 0
+ return self.quiet
- def debuglevel(self):
- """Return the current debug level"""
- return str(self.loglevel)
+ def put(self, message):
+ """Output a message"""
+ if self.quiet:
+ return
- def put(self, level, message):
- """Log a message, but only if debug levels are lesser or match"""
- if (level <= self.loglevel):
- if self.hanging:
- self.hanging = 0
- print '%s%s' % (self._getindent(), message)
+ if self.hanging:
+ self.hanging = False
+ print '{}{}'.format(self._getindent(), message)
- def puthang(self, level, message):
+ def puthang(self, message):
"""
- This indents the output, create an easier-to-read debug data.
+ This indents the output, creating an easier-to-read flow.
"""
- if (level <= self.loglevel):
- print '%sInvoking: "%s"...' % (self._getindent(), message)
- self.hanging = 1
- self.hangmsg.append(message)
+ if self.quiet:
+ return
+
+ print '{}Invoking: "{}"...'.format(self._getindent(), message)
+ self.hanging = True
+ self.hangmsg.append(message)
- def endhang(self, level, message='done'):
+ def endhang(self, message='done'):
"""Must be called after puthang has been put in effect"""
- if (level <= self.loglevel):
- hangmsg = self.hangmsg.pop()
- if self.hanging:
- self.hanging = 0
- print '%s%s...%s' % (self._getindent(), hangmsg, message)
- else:
- print '%s(Hanging from "%s")....%s' % (self._getindent(),
+ if self.quiet:
+ return
+
+ hangmsg = self.hangmsg.pop()
+ if self.hanging:
+ self.hanging = False
+ print '{}{}...{}'.format(self._getindent(), hangmsg, message)
+ else:
+ print '{}(Hanging from "{}")....{}'.format(self._getindent(),
hangmsg, message)
- def progressbar(self, level, title, done, total):
+ def progressbar(self, title, done, total):
"""
A simple command-line progress bar.
"""
- if level != self.loglevel: return
+ if self.quiet:
+ return
+
##
# Do some nifty calculations to present the bar
#
- if len(title) > 40: title = title[:40]
+ if len(title) > 40:
+ title = title[:40]
+
barwidth = 60 - len(title) - 2 - len(self._getindent())
- barmask = "[%-" + str(barwidth) + "s]"
- if total != 0: bardown = int(barwidth*(float(done)/float(total)))
- else: bardown = 0
+ barmask = "[%-" + str(barwidth) + "s]"
+
+ if total != 0:
+ bardown = int(barwidth*(float(done)/float(total)))
+ else:
+ bardown = 0
+
+ # XXX: Format
bar = barmask % ("=" * bardown)
- sys.stdout.write("\r%s%s: %s\r" % (self._getindent(), title, bar))
+ sys.stdout.write("\r{}{}: {}\r".format(self._getindent(), title, bar))
- def endbar(self, level, title, message):
+ def endbar(self, title, message):
"""
After the progress bar is no longer useful, let's replace it with
something useful.
"""
- if level != self.loglevel: return
+ if self.quiet:
+ return
+
if not message:
print
return
##
# Do some nifty calculations to present the bar
#
- if len(title) > 40: title = title[:40]
+ if len(title) > 40:
+ title = title[:40]
+
barwidth = 60 - len(title) - len(self._getindent()) - 2
- message = '[%s]' % message.center(barwidth)
- sys.stdout.write("\r%s%s: %s\n" % (self._getindent(), title, message))
+ message = '[%s]' % message.center(barwidth)
+
+ sys.stdout.write("\r{}{}: {}\n".format(self._getindent(), title,
+ message))
def _getindent(self):
"""
diff --git a/py/epylog/helpers.py b/epylog/helpers.py
similarity index 65%
rename from py/epylog/helpers.py
rename to epylog/helpers.py
index 32550e6..f279e1a 100644
--- a/py/epylog/helpers.py
+++ b/epylog/helpers.py
@@ -4,7 +4,8 @@ It provides several useful methods for running the modules standalone
without having to invoke them as part of Epylog.
"""
##
-# Copyright (C) 2003 by Duke University
+# Copyright (C) 2003-2005 by Duke University
+# Copyright (C) 2005-2012 by Konstantin Ryabitsev and contributors
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
@@ -21,17 +22,27 @@ without having to invoke them as part of Epylog.
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.
#
-# $Id$
-#
-# @Author Konstantin Ryabitsev <icon(a)linux.duke.edu>
-# @version $Date$
+# @Author Konstantin Ryabitsev <icon(a)mricon.com>
#
import sys
-sys.path.insert(0, './py/')
import epylog
import getopt
+import logging
+
+logger = logging.getLogger('epylog')
+logger.setLevel(logging.DEBUG)
+
+ch = logging.StreamHandler()
+ch.setLevel(logging.DEBUG)
+
+formatter = logging.Formatter("[%(levelname)s:%(module)s:%(funcName)s:"
+ "%(lineno)s] %(message)s")
+ch.setFormatter(formatter)
+logger.addHandler(ch)
+
+monthmap = epylog.log.mkmonthmap()
class ModuleTest:
"""
@@ -40,7 +51,6 @@ class ModuleTest:
effective.
"""
def __init__(self, epyclass, args):
- logger = epylog.Logger(5)
cmdargs = args[1:]
if not cmdargs: self._usage(args[0])
infile = None
@@ -63,65 +73,81 @@ class ModuleTest:
opts[key] = value
except getopt.error, e: self._usage(args[0])
if opts:
- logger.put(5, 'Additional opts follow')
- logger.put(5, opts)
- logger.put(5, 'Instantiating the module')
- epymod = epyclass(opts, logger)
- if input is None: self._usage(args[0])
- logger.put(5, 'Trying to open file %s for reading' % infile)
- try: infh = open(infile)
+ logger.debug('Additional opts follow')
+ logger.debug(opts)
+ logger.debug('Instantiating the module')
+ epymod = epyclass(opts)
+
+ if infile is None:
+ self._usage(args[0])
+
+ logger.debug('Trying to open file {} for reading'.format(infile))
+
+ try:
+ infh = open(infile)
except Exception, e:
- msg = "ERROR trying to open file %s: %s" % (infile, e)
+ msg = "ERROR trying to open file {}: {}".format(infile, e)
self._die(msg)
+
if filtfile is not None:
- logger.put(5, 'Trying to open %s for writing' % filtfile)
- try: filtfh = open(filtfile, 'w')
+ logger.debug('Trying to open {} for writing'.format(filtfile))
+ try:
+ filtfh = open(filtfile, 'w')
except Exception, e:
- msg = "ERROR trying to open file %s: %s" % (filtfile, e)
+ msg = "ERROR trying to open file {}: {}".format(filtfile, e)
self._die(msg)
- monthmap = epylog.log.mkmonthmap()
+
rs = epylog.Result()
- while 1:
+
+ while True:
line = infh.readline()
- if not line: break
- line = line.strip()
- linemap = self._mk_linemap(line, monthmap)
- msg = linemap['message']
+
+ if not line:
+ break
+
+ line = line.strip()
+ linemap = self._mk_linemap(line)
+ msg = linemap['message']
+
for regex in epymod.regex_map.keys():
if regex.search(msg):
handler = epymod.regex_map[regex]
linemap['regex'] = regex
- logger.put(5, '%s -> %s' % (handler.__name__, msg))
+ logger.debug('{} -> {}'.format(handler.__name__, msg))
result = handler(linemap)
+
if result is not None:
rs.add_result(result)
if filtfile is not None:
- filtfh.write('%s\n' % line)
+ filtfh.write('{}\n'.format(line))
break
infh.close()
- if filtfile is not None: filtfh.close()
+
+ if filtfile is not None:
+ filtfh.close()
+
if not rs.is_empty():
- logger.put(5, 'Finalizing')
+ logger.debug('Finalizing')
report = epymod.finalize(rs)
+
if repfile is not None:
- logger.put(5, 'Trying to write report to %s' % repfile)
+ logger.debug('Trying to write report to {}'.format(repfile))
repfh = open(repfile, 'w')
repfh.write(report)
repfh.close()
- logger.put(5, 'Report written to %s' % repfile)
+ logger.debug('Report written to {}'.format(repfile))
else:
- logger.put(5, 'Report follows:')
- print report
+ logger.debug('----Report follows----')
+ logger.debug(report)
else:
- logger.put(5, 'No results for this run')
- logger.put(5, 'Done')
+ logger.debug('No results for this run')
- def _mk_linemap(self, line, monthmap):
+ def _mk_linemap(self, line):
"""
Create a linemap out of a line entry.
"""
try:
- stamp, sys, msg = epylog.log.get_stamp_sys_msg(line, monthmap)
+ (stamp, sys, msg) = epylog.log.get_stamp_sys_msg(line)
except ValueError:
# If we got an empty log line just go on parsing with nothing
# instead of dying
@@ -133,8 +159,9 @@ class ModuleTest:
'multiplier': 1}
return linemap
else:
- msg = 'Invalid syslog line: %s' % line
+ msg = 'Invalid syslog line: {}'.format(line)
self._die(msg)
+
linemap = {'line': line,
'stamp': stamp,
'system': sys,
@@ -146,19 +173,19 @@ class ModuleTest:
"""
Hot Grits Death!
"""
- print 'FATAL ERROR: %s' % message
+ logger.critical('FATAL ERROR: {}'.format(message))
sys.exit(1)
def _usage(self, name):
- print '''Usage:
- %s -i testcase [-r report] [-f filter] [-o EXTRAOPTS]
+ print('''Usage:
+ {} -i testcase [-r report] [-f filter] [-o EXTRAOPTS]
If -r is omitted, the report is printed to stdout
If -f is omitted, filtered lines are not shown
EXTRAOPTS:
Extra options should be submitted in this matter:
-o "option=value; option2=value; option3=value"
- ''' % name
+ '''.format(name))
sys.exit(1)
if __name__ == '__main__':
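The new epylog/log.py below centralizes the month-name-to-year map that helpers.py now takes from `epylog.log.mkmonthmap()`. The trick it implements is that syslog dates carry no year, so months up to `pad` ahead of the current one are assumed to be this year and the rest last year. A standalone sketch of that idea (Python 3 syntax; the function name is illustrative, not epylog's):

```python
import time

def mk_month_to_year(now=None, pad=2):
    """Map abbreviated month names ('Jan'..'Dec') to a year.

    Months up to `pad` ahead of the current month count as this
    year; the remaining months count as last year. Same idea as
    mkmonthmap(), but a standalone illustration, not epylog's
    actual code.
    """
    if now is None:
        now = time.localtime()
    # Locale-dependent month abbreviations, 'Jan' first
    names = [time.strftime('%b', (2000, i, 1, 0, 0, 0, 0, 1, -1))
             for i in range(1, 13)]
    year, month = now[0], now[1]
    mapping = {}
    for m in range(month + pad - 12, month + pad):
        # m is a 0-based month index; negative values wrap into
        # last year via floor division
        mapping[names[m % 12]] = year + (m // 12)
    return mapping
```

With the default pad of 2, a log line stamped "Dec" seen in June is dated to last December rather than six months into the future.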
diff --git a/epylog/log.py b/epylog/log.py
new file mode 100644
index 0000000..f278991
--- /dev/null
+++ b/epylog/log.py
@@ -0,0 +1,1393 @@
+"""
+This module operates on logfiles: looking up strings, handling repeated
+data and rotated logs, figuring out dates, etc.
+"""
+##
+# Copyright (C) 2003-2005 by Duke University
+# Copyright (C) 2005-2012 by Konstantin Ryabitsev and contributors
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
+# 02111-1307, USA.
+#
+# @Author Konstantin Ryabitsev <icon(a)mricon.com>
+#
+
+import epylog
+import os
+import re
+import string
+import time
+import tempfile
+import ConfigParser
+import glob
+
+from string import Template
+
+import logging
+
+logger = logging.getLogger('epylog')
+
+def mkmonthmap():
+ """
+ The problem with syslog is that it does not log the year when the
+ event has taken place. This makes certain things difficult, including
+looking up entries based on a timestamp. This function creates a mapping
+from month names to a year. The pad sets how many months ahead of the
+current one should be considered in this year, and how many in the last year.
+ This function was largely contributed by Michael Stenner.
+ """
+ pad = 2
+ months = []
+
+ for i in range(0, 12):
+ months.append(time.strftime("%b", (1, i+1, 1, 1,
+ 1, 1, 1, 1, 1)))
+ basetime = time.localtime(time.time())
+ now_year = basetime[0]
+ now_month = basetime[1]
+ pad_month = now_month + pad
+ monthmap = {}
+
+ for m in range(pad_month - 12, pad_month):
+ monthname = months[m % 12]
+ year = now_year + (m / 12)
+
+ monthmap[monthname] = year
+
+ return monthmap
+
+monthmap = mkmonthmap()
+
+def mkstamp_from_syslog_datestr(datestr):
+ """
+ Takes a syslog date string and makes a timestamp out of it.
+ """
+ try:
+ (m, d, t) = datestr.split()[:3]
+ y = str(monthmap[m])
+
+ datestr = string.join([y, m, d, t], ' ')
+ tuptime = time.strptime(datestr, '%Y %b %d %H:%M:%S')
+
+ ##
+ # Python 2.2.2 (at least) breaks with DST.
+ # Work around.
+ #
+ localtime = time.localtime(time.mktime(tuptime))
+ ltime = list(tuptime)
+ ltime[8] = localtime[8]
+ tuptime = tuple(ltime)
+ timestamp = int(time.mktime(tuptime))
+
+ except:
+ # No idea, we give up
+ timestamp = -1
+
+ return timestamp
+
+def get_stamp_sys_msg(line):
+ """
+ This function takes a syslog line and returns the timestamp of the event,
+the system where it occurred, and the message.
+ """
+ mo = epylog.LOG_SPLIT_RE.match(line)
+ if not mo:
+ raise ValueError('Unknown line format: {}'.format(line))
+
+ (time, sys, msg) = mo.groups()
+
+ stamp = mkstamp_from_syslog_datestr(time)
+ sys = re.sub(epylog.SYSLOG_NG_STRIP, '', sys)
+
+ return stamp, sys, msg
+
+class LogTracker:
+ """
+ This is a helper class to track the logfiles as requested by the modules,
+ so no logs are opened more often than needed. It also does tracking of
+ rotating logfiles, opening and initializing them as necessary.
+ """
+ def __init__(self, config, ui):
+ """
+ Initializer code. Passing in config so we can use the variables
+ set by the admin.
+ """
+ self.ui = ui
+
+ sourcescfg = os.path.join(config.paths['cfgdir'], 'logsources.conf')
+
+ if not os.access(sourcescfg, os.R_OK):
+ msg = 'Log definition file "{}" not found'.format(sourcescfg)
+ raise epylog.ConfigError(msg)
+
+ self.logs = {}
+
+ logger.info('Reading log sources config from "{}"'.format(sourcescfg))
+
+ logcfg = ConfigParser.ConfigParser()
+ logcfg.read(sourcescfg)
+
+ for stubname in logcfg.sections():
+ logger.info('Found log definition: {}'.format(stubname))
+ source = logcfg.get(stubname, 'source')
+ rotated = logcfg.get(stubname, 'rotated')
+ # TODO: tsformat
+
+ log = Log(config, stubname, source, rotated, ui)
+
+ self.logs[stubname] = log
+
+ def getlog(self, stubname):
+ """
+ Return a log object based on the stub name provided by the module
+ config file.
+ """
+ if stubname in self.logs.keys():
+ logger.debug('Returning {}'.format(stubname))
+ return self.logs[stubname]
+
+ msg = 'No logs found matching {}'.format(stubname)
+ raise epylog.NoSuchLogError(msg)
+
+ def get_offset_map(self):
+ """
+ Offset map is the virtual boundary that defines which log entries
+ out of the entire scope of a logfile/logfiles we are interested in.
+ It can span multiple rotated logfiles.
+ """
+ # TODO: BROKEN
+ logger = self.logger
+ logger.put(5, '>LogTracker.get_offset_map')
+ omap = []
+ for log in self.logs:
+ entry = log.entry
+ inode = log.getinode()
+ if log.orange.endix != 0:
+ offset = 0
+ else:
+ offset = log.orange.end_offset
+ omap.append([entry, inode, offset])
+ logger.put(5, 'omap follows')
+ logger.put(5, omap)
+ logger.put(5, '<LogTracker.get_offset_map')
+ return omap
+
+ def dump_all_strings(self, fh):
+ """
+ Dumps all strings in the internal omap into a specified fh.
+ """
+ dumped = 0
+ for log in self.logs.values():
+ logger.info('Dumping strings for log {}'.format(log.stubname))
+ dumped = dumped + log.dump_strings(fh)
+ logger.info('Total of {} bytes dumped into {}'.format(dumped, fh.name))
+ return dumped
+
+ def get_stamps(self):
+ """
+ Returns a tuple with the earliest and the latest time stamp
+ from all the logs.
+ """
+ start_stamps = []
+ end_stamps = []
+
+ for log in self.logs.values():
+ if log.is_range_empty():
+ logger.info('The range for this log is empty')
+ continue
+
+ (start_stamp, end_stamp) = log.get_stamps()
+
+ if start_stamp != 0:
+ start_stamps.append(start_stamp)
+ if end_stamp != 0:
+ end_stamps.append(end_stamp)
+
+ if len(start_stamps):
+ start_stamps.sort()
+ start_stamp = start_stamps.pop(0)
+ else:
+ start_stamp = 0
+
+ if len(end_stamps):
+ end_stamps.sort()
+ end_stamp = end_stamps.pop(-1)
+ else:
+ end_stamp = 0
+
+ logger.debug('start_stamp={}'.format(start_stamp))
+ logger.debug('end_stamp={}'.format(end_stamp))
+
+ return (start_stamp, end_stamp)
+
+ def set_range_by_timestamps(self, start_stamp, end_stamp):
+ """
+ Sets offsets in the omap based on the timestamps passed in as
+ arguments.
+ """
+ for stubname in self.logs.keys():
+ logger.debug('Setting ranges for "{}"'.format(stubname))
+ log = self.logs[stubname]
+
+ try:
+ log.set_range_by_timestamps(start_stamp, end_stamp)
+ except epylog.OutOfRangeError:
+ msg = 'Timestamps not found for log "{}"'.format(stubname)
+ logger.error(msg)
+
+
+class OffsetRange:
+ """
+    This is a helper class that handles offset ranges. Since there can be
+    more than one logfile in a chain of rotated logs, specifying offsets
+    can be tricky. Effectively, it keeps four coordinates internally: the
+    start index into the list of log objects, an offset within that log
+    object, then the end index and the end offset.
+ """
+ def __init__(self, startix, start_offset, endix, end_offset):
+ self.startix = startix
+ self.endix = endix
+ self.start_offset = start_offset
+ self.end_offset = end_offset
+
+ self.total_size = 0
+
+ logger.debug('startix={}'.format(self.startix))
+ logger.debug('start_offset={}'.format(self.start_offset))
+ logger.debug('endix={}'.format(self.endix))
+ logger.debug('end_offset={}'.format(self.end_offset))
+
+ def setstart(self, ix, offset, loglist):
+ """
+ Set the start offset -- takes two coordinates
+ """
+ self.startix = ix
+ self.start_offset = offset
+
+ logger.debug('new startix={}'.format(self.startix))
+ logger.debug('new start_offset={}'.format(self.start_offset))
+
+ self._recalc_total_size(loglist)
+
+ def setend(self, ix, offset, loglist):
+ """
+ Set the end offset -- takes two coordinates.
+ """
+ self.endix = ix
+ self.end_offset = offset
+
+ logger.debug('new endix={}'.format(self.endix))
+ logger.debug('new end_offset={}'.format(self.end_offset))
+
+ self._recalc_total_size(loglist)
+
+ def start_is_end(self):
+ """
+ Check whether the coordinates for start and end are the same
+ and return true if so, otherwise return false.
+ """
+ empty = False
+ if self.startix == self.endix:
+ if self.start_offset == self.end_offset:
+ empty = True
+ logger.debug('This range points to same location')
+
+ return empty
+
+ def done_size(self, curix, offset, loglist):
+ """
+ This is a helper function to help calculate the percentage of
+ offset that has been processed already.
+ """
+ if curix == self.startix:
+ done = offset - self.start_offset
+ else:
+ done = 0
+ for ix in range(self.startix, curix, -1):
+ if ix == self.startix:
+ done = loglist[ix].end_offset - self.start_offset
+                else:
+                    done += loglist[ix].end_offset
+ done += offset
+ return done
+
+ def is_inside(self, ix, offset):
+ """
+ Check if a proposed index is inside this offset.
+ """
+ if ix > self.startix:
+ logger.debug('ix > self.startix')
+ return False
+
+ if ix < self.endix:
+            logger.debug('ix < self.endix')
+ return False
+
+ if ix == self.startix and offset < self.start_offset:
+ logger.debug('ix = self.startix and offset < self.start_offset')
+ return False
+
+ if ix == self.endix and offset > self.end_offset:
+ logger.debug('ix = self.endix and offset > self.end_offset')
+ return False
+
+ logger.debug('ix={}, offset={} is inside'.format(ix, offset))
+ return True
+
+ def _recalc_total_size(self, loglist):
+ """
+ If the offsets change, recalculate total size of the range.
+ Useful for figuring out how much is left to do and how much is
+ done already.
+ """
+ logger.debug('startix={}, endix={}'.format(self.startix, self.endix))
+
+ total = 0
+ for ix in range(self.startix, self.endix - 1, -1):
+ if ix == self.startix:
+ total = loglist[ix].end_offset - self.start_offset
+ elif ix == self.endix:
+ total += self.end_offset
+ else:
+ total += loglist[ix].end_offset
+
+ logger.debug('total={}'.format(total))
+
+ self.total_size = total
+
+class LinePointer:
+ """
+ LinePointer is a two-dimensional coordinate that contains the index
+ and an offset of a certain line. This is like a half of an offset range.
+ """
+ def __init__(self, ix, offset):
+ self.ix = ix
+ self.offset = offset
+
+ logger.debug('ix={}'.format(self.ix))
+ logger.debug('offset={}'.format(self.offset))
+
+ def pos(self, ix, offset):
+ self.ix = ix
+ self.offset = offset
+
+ logger.debug('ix={}'.format(self.ix))
+ logger.debug('offset={}'.format(self.offset))
+
+class Log:
+ """
+ This class is the collection of LogFile objects all belonging to the same
+    entry. It handles things like reading from files, looking up lines, etc.
+ """
+ def __init__(self, config, stubname, source, rotated, ui):
+ self.tmpprefix = config.tmpprefix
+ self.stubname = stubname
+ self.ui = ui
+
+ # initialize the source and rotated files
+ logger.info('Initializing main log file {}'.format(source))
+
+ logfile = LogFile(source, config)
+ self.loglist = [logfile]
+
+ # We'll use the start_stamp in each rotfile to figure out
+ # the order in which they should be listed
+ timestamps = []
+ stampmap = {}
+
+ for rotfile in glob.glob(rotated):
+ logger.info('Initializing rotated logfile {}'.format(rotfile))
+ logfile = LogFile(rotfile, config)
+
+ timestamps.append(logfile.start_stamp)
+ stampmap[logfile.start_stamp] = logfile
+
+ timestamps.sort(reverse=True)
+ for timestamp in timestamps:
+ logfile = stampmap[timestamp]
+ logger.debug('Adding to loglist: {}'.format(logfile.filename))
+ self.loglist.append(logfile)
+
+        # the range starts at the oldest rotated file and ends at the
+        # end of the current (unrotated) log
+        startix = len(self.loglist) - 1
+
+        self.orange = OffsetRange(startix, 0, 0, self.loglist[0].end_offset)
+
+ # used to track linepointer
+ self.lp = None
+
+ def set_range_param(self, ix, offset, whence=False):
+ """
+ Sets an offset parameter. If whence is False, then the start offset
+ is assumed. If it's True, then the end offset is assumed.
+ """
+ logger.debug('ix={}'.format(ix))
+ logger.debug('offset={}'.format(offset))
+ logger.debug('whence={}'.format(whence))
+
+ logger.info('Checking if the offset makes sense')
+
+        if self.loglist[ix].end_offset < offset:
+            msg = 'Offset {} is past the end of {}: {}! Correcting.'.format(
+                offset, self.loglist[ix].filename, self.loglist[ix].end_offset)
+            logger.error(msg)
+            offset = self.loglist[ix].end_offset
+
+        if whence:
+            self.orange.setend(ix, offset, self.loglist)
+        else:
+            self.orange.setstart(ix, offset, self.loglist)
+
+ def get_logfile_by_start_stamp(self, start_stamp):
+ ix = 0
+
+ # this loop will either return or exit via a NoSuchLogError
+ while ix < len(self.loglist):
+ logfile = self.loglist[ix]
+
+ logger.debug('Looking at: {}'.format(logfile.filename))
+ if logfile.start_stamp == start_stamp:
+ logger.debug('Found the match at ix={}'.format(ix))
+ return (logfile, ix)
+
+ ix += 1
+
+ raise epylog.NoSuchLogError('No matching entries')
+
+ def nextline(self):
+ """
+ Fetch the next line in the log, based on the internally stored line
+ pointer. If the line pointer is not set, then the start offset is
+ used.
+ """
+ if self.lp is None:
+ ix = self.orange.startix
+ offset = self.orange.start_offset
+
+ logger.debug('init linepointer with ix={}, offset={}'.format(
+ ix, offset))
+ self.lp = LinePointer(ix, offset)
+
+ ix = self.lp.ix
+ offset = self.lp.offset
+
+ logger.debug('Checking if we are past the orange end')
+
+ if not self.orange.is_inside(ix, offset):
+ msg = 'Moved past the end of the range'
+ raise epylog.OutOfRangeError(msg)
+
+ log = self.loglist[ix]
+ (line, offset) = log.get_line_at_offset(offset)
+
+ done = self.orange.done_size(ix, offset, self.loglist)
+ total = self.orange.total_size
+ title = log.filename
+
+ self.ui.progressbar(title, done, total)
+
+ if offset >= log.end_offset:
+ logger.debug('End of log "{}" reached'.format(log.filename))
+
+ ix -= 1
+ offset = 0
+
+ self.lp.pos(ix, offset)
+
+ try:
+ (stamp, system, message) = get_stamp_sys_msg(line)
+ multiplier = 1
+ mo = epylog.MESSAGE_REPEATED_RE.search(message)
+
+ if mo:
+ try:
+ message = self._lookup_repeated(system)
+ multiplier = int(mo.group(1))
+ except epylog.FormatError:
+ pass
+ except epylog.GenericError:
+ pass
+
+ log.repeated_cache[system] = message
+ linemap = {'line': line,
+ 'stamp': stamp,
+ 'system': system,
+ 'message': message,
+ 'multiplier': multiplier}
+
+ except ValueError:
+ logger.error('Invalid syslog format string in {}: {}'.format(
+ log.filename, line))
+ # Pass it on
+ raise epylog.FormatError(line, logger)
+
+ return linemap
+
+ def _lookup_repeated(self, system):
+ """
+ A helper method to resolve the pesky 'last message repeated' lines.
+ It takes a system name and tries to figure out the original line.
+ """
+ log = self.loglist[self.lp.ix]
+
+ try:
+ message = log.repeated_cache[system]
+ logger.debug('Found in repeated_cache by system')
+ return message
+ except KeyError:
+ pass
+
+        host_re = re.compile('.{{15,15}} .*[@/]*{}'.format(system))
+ offset = self.lp.offset
+
+ logger.debug('Looking in {} for the previous report from {}'.format(
+ log.filename, system))
+ offset_orig = offset
+
+ line = None
+
+ while True:
+ try:
+ (cline, offset) = log.find_previous_entry_by_re(offset, host_re)
+            except (IOError, epylog.OutOfRangeError):
+ break
+
+ if epylog.MESSAGE_REPEATED_RE.search(cline):
+ try:
+ rep_offset = log.repeated_cache[offset]
+ logger.debug('Found in cached values')
+ (line, offset) = log.get_line_at_offset(rep_offset)
+ logger.debug('line={}'.format(line))
+ log.repeated_cache[offset_orig] = rep_offset
+ break
+
+ except KeyError:
+ logger.debug('Not in cached values')
+ pass
+
+ else:
+ logger.debug('Found by backstepping')
+ line = cline
+ logger.debug('line={}'.format(line))
+ log.repeated_cache[offset_orig] = offset
+ break
+
+ if not line:
+ msg = 'Could not find the original message'
+ raise epylog.GenericError(msg)
+
+ try:
+ (stamp, system, message) = get_stamp_sys_msg(line)
+        except epylog.FormatError:
+            logger.error('Invalid syslog format string in {}: {}'.format(
+                log.filename, line))
+            # without a parseable line there is no message to return
+            raise
+
+        return message
+
+ def dump_strings(self, fh):
+ """
+ Dump all strings in the offset into the specified fh.
+ """
+ logger.info('Dumping strings for log {}'.format(self.stubname))
+ ologs = self._get_orange_logs()
+ if len(ologs) == 1:
+ # All strings in the same file. Easy.
+ starto = self.orange.start_offset
+ endo = self.orange.end_offset
+
+ log = ologs[0]
+ log.set_offset_range(starto, endo)
+
+ buflen = log.dump_strings(fh)
+
+ logger.info('{} bytes dumped from {} into {}'.format(
+ buflen, log.filename, fh.name))
+ else:
+ # Strings are in different rotfiles. Hard.
+ buflen = 0
+
+ flog = ologs.pop(0)
+ elog = ologs.pop(-1)
+
+ logger.debug('Processing the earliest logfile')
+
+ starto = self.orange.start_offset
+ endo = flog.end_offset
+
+ flog.set_offset_range(starto, endo)
+
+ buflen = buflen + flog.dump_strings(fh)
+
+ if len(ologs):
+ logger.info('There are logfiles between the first and last')
+ for mlog in ologs:
+ mlog.set_offset_range(0, mlog.end_offset)
+ buflen = buflen + mlog.dump_strings(fh)
+
+ logger.info('Processing the latest logfile')
+
+ starto = 0
+ endo = self.orange.end_offset
+
+ elog.set_offset_range(starto, endo)
+ buflen = buflen + elog.dump_strings(fh)
+
+ logger.info('{} bytes dumped from multiple files into {}'.format(
+ buflen, fh.name))
+
+ return buflen
+
+ def get_stamps(self):
+ """
+ Get the stamps in the offset. Start stamp and end stamp are returned.
+ """
+ ##
+ # Returns a list with the earliest and the latest stamp in the
+ # current log range.
+ #
+ logs = self._get_orange_logs()
+ flog = logs.pop(0)
+
+ flog.range_start = self.orange.start_offset
+ (start_stamp, end_stamp) = flog.get_range_stamps()
+
+ if len(logs):
+ elog = logs.pop(-1)
+ elog.range_end = self.orange.end_offset
+ (junk, end_stamp) = elog.get_range_stamps()
+
+ logger.debug('start_stamp={}'.format(start_stamp))
+ logger.debug('end_stamp={}'.format(end_stamp))
+
+ return (start_stamp, end_stamp)
+
+ def set_range_by_timestamps(self, start_stamp, end_stamp):
+ """
+ Set the range by timestamps provided.
+ """
+ if start_stamp > end_stamp:
+ msg = 'Start stamp must be before end stamp'
+ raise epylog.OutOfRangeError(msg)
+
+ logger.debug('looking for start_stamp={}'.format(start_stamp))
+ logger.debug('looking for end_stamp={}'.format(end_stamp))
+
+        start_ix = end_ix = 0
+        start_offset = None
+        end_offset = None
+
+        for ix in range(len(self.loglist)):
+ logger.debug('ix={}'.format(ix))
+
+ curlog = self.loglist[ix]
+ logger.info('Analyzing log file "{}"'.format(curlog.filename))
+
+ try:
+ pos_start = curlog.stamp_in_log(start_stamp)
+ pos_end = curlog.stamp_in_log(end_stamp)
+ except epylog.OutOfRangeError:
+ logger.info('No useful entries in this log, ignoring')
+ continue
+
+ if pos_start == 0:
+ # start stamp is in current log
+ logger.debug('start_stamp is in "{}"'.format(curlog.filename))
+
+ start_ix = ix
+ start_offset = curlog.find_offset_by_timestamp(start_stamp)
+
+ elif pos_start > 0:
+ # Past this log. This means that we have missed the start
+ # of this stamp. Set by the end_offset of the current log.
+ logger.debug('start_stamp is past {}'.format(curlog.filename))
+ logger.debug('setting to end_offset of this log')
+
+ start_ix = ix
+ start_offset = curlog.end_offset
+
+ if pos_end == 0:
+ # end stamp is in current log
+ logger.debug('end_stamp is in {}'.format(curlog.filename))
+
+ end_ix = ix
+ end_offset = curlog.find_offset_by_timestamp(end_stamp)
+
+ elif pos_end > 0 and end_offset is None:
+ # Means that end of the search is past the end of the last
+ # log.
+ logger.debug('end_stamp is past the most current entry')
+ logger.debug('setting to end_offset of this ix')
+
+ end_ix = ix
+ end_offset = curlog.end_offset
+
+ if start_offset is not None and end_offset is not None:
+ logger.debug('Found both the start and the end')
+ break
+
+ if start_offset is None:
+ if end_offset is not None:
+ logger.debug('setting start_offset to 0, last ix')
+ start_offset = 0
+ start_ix = len(self.loglist) - 1
+ else:
+ msg = 'Range not found when searching for timestamps'
+ raise epylog.OutOfRangeError(msg)
+
+ logger.debug('start_ix={}'.format(start_ix))
+ logger.debug('start_offset={}'.format(start_offset))
+ logger.debug('end_ix={}'.format(end_ix))
+ logger.debug('end_offset={}'.format(end_offset))
+
+ self.orange.setstart(start_ix, start_offset, self.loglist)
+ self.orange.setend(end_ix, end_offset, self.loglist)
+
+ def is_range_empty(self):
+ """
+ Check if the range is empty and return an appropriate true or false.
+ """
+        if self.orange.start_is_end():
+            logger.debug('Yes, range is empty')
+            return True
+
+        startlog = self.loglist[self.orange.startix]
+        if (startlog.end_offset == self.orange.start_offset and
+                self.orange.endix == self.orange.startix - 1 and
+                self.orange.end_offset == 0):
+ # This means that start is at the end of the last rotlog
+ # and end is at the start of next rotlog, meaning that the
+ # range is really empty.
+ logger.debug('Yes, range is empty')
+ return True
+
+ return False
+
+ def _get_orange_logs(self):
+ """
+ Get the logs in the offset range. Returns a list.
+ """
+ ologs = []
+ for ix in range(self.orange.startix, self.orange.endix - 1, -1):
+ logger.debug('appending {}'.format(self.loglist[ix].filename))
+ ologs.append(self.loglist[ix])
+
+ return ologs
+
+class LogFile:
+ """
+ This class handles the log files themselves -- things like opening,
+ rewinding, reading, etc.
+ """
+ def __init__(self, filename, config):
+ self.tmpprefix = config.tmpprefix
+ self.filename = filename
+
+ # Use this indicator to minimize seeks to the line start
+ self.at_line_start = True
+
+ ##
+ # start_stamp: the timestamp at the start of the log
+ # end_stamp: the timestamp at the end of the log
+ # end_offset: this is where the end of the log is
+ #
+ self.start_stamp = None
+ self.end_stamp = None
+ self.end_offset = None
+
+ ##
+ # range_start: the start offset of the range
+ # range_end: the end offset of the range
+ #
+ self.range_start = 0
+ self.range_end = None
+
+ ##
+ # repeated_cache: map of offsets to repeated lines for
+ # unwrapping those pesky "last message repeated"
+ # entries
+ # also a map of last lines for systems.
+ #
+ self.repeated_cache = {}
+
+ logger.info('Running sanity checks on the logfile')
+ self._accesscheck()
+ logger.info('Initializing the file')
+ self._initfile()
+
+ def _initfile(self):
+ """
+        Initialize the logfile. This usually consists of opening it,
+ figuring out if it's gzipped or not, and recording where the log
+ ends. That is important, as logs are usually being appended during
+ epylog runs.
+ """
+ if self.filename[-3:] == '.gz':
+ import gzip
+
+ logger.info('Using GzipFile to open {}'.format(self.filename))
+
+ tempfile.tmpdir = self.tmpprefix
+
+            ungzfile = tempfile.mktemp('.UNGZ')
+            ungzfh = open(ungzfile, 'w+')
+
+ logger.debug('ungzfile={}'.format(ungzfile))
+
+ try:
+ gzfh = gzip.open(self.filename)
+            except (IOError, OSError):
+ msg = 'Could not open "{}" with gzip.'.format(self.filename)
+ raise epylog.ConfigError(msg)
+
+ logger.info('Putting the contents of the gzlog into ungzlog')
+
+ while True:
+ chunk = gzfh.read(epylog.CHUNK_SIZE)
+
+ if chunk:
+ ungzfh.write(chunk)
+ logger.debug('Read {} bytes from gzfh'.format(len(chunk)))
+ else:
+ logger.debug('Reached EOF')
+ break
+
+ gzfh.close()
+ self.fh = ungzfh
+
+ else:
+ logger.info('Does not end in .gz, assuming plain text')
+ self.fh = open(self.filename)
+
+ logger.info('Finding the start_stamp')
+ self.fh.seek(0)
+ self.at_line_start = True
+ self.start_stamp = self._get_stamp()
+ logger.debug('start_stamp={}'.format(self.start_stamp))
+
+ logger.info('Finding the end offset')
+ self.fh.seek(0, 2)
+ self.at_line_start = False
+ self._set_at_line_start()
+
+ self.end_offset = self.fh.tell()
+ self.range_end = self.fh.tell()
+
+ logger.info('Finding the end_stamp')
+ self.end_stamp = self._get_stamp()
+ logger.debug('end_stamp={}'.format(self.end_stamp))
+
+ def set_offset_range(self, start, end):
+ """
+ A two-dimensional coordinate is accepted that points to which
+ entries we are interested in.
+ """
+ logger.debug('start={}'.format(start))
+ logger.debug('end={}'.format(end))
+
+ if start < 0:
+ msg = 'Start of range cannot be less than zero'
+ raise epylog.OutOfRangeError(msg)
+
+ if end > self.end_offset:
+ msg = 'End of range {} is past the end of log'.format(end)
+ raise epylog.OutOfRangeError(msg)
+
+ if start > end:
+ msg = 'Start of range cannot be greater than end'
+ raise epylog.OutOfRangeError(msg)
+
+ self.fh.seek(start)
+ self.at_line_start = False
+
+ self._set_at_line_start()
+ self.range_start = self.fh.tell()
+
+ self.fh.seek(end)
+ self.at_line_start = False
+
+ self._set_at_line_start()
+ self.range_end = self.fh.tell()
+
+ logger.debug('range_start={}'.format(self.range_start))
+ logger.debug('range_end={}'.format(self.range_end))
+
+ def stamp_in_log(self, searchstamp):
+ """
+ Check if the timestamp specified is inside this logfile.
+ Return values:
+ -1 = before this log
+ 0 = in this log
+ 1 = after this log
+ """
+ logger.debug('searchstamp={}'.format(searchstamp))
+ logger.debug('start_stamp={}'.format(self.start_stamp))
+ logger.debug('end_stamp={}'.format(self.end_stamp))
+
+ if self.start_stamp == 0 or self.end_stamp == 0:
+ msg = 'No stampable entries in this log'
+ raise epylog.OutOfRangeError(msg)
+
+ if searchstamp > self.end_stamp:
+ logger.debug('past the end of this log')
+ return 1
+
+ if searchstamp < self.start_stamp:
+ logger.debug('before the start of this log')
+ return -1
+
+ logger.debug('IN this log')
+ return 0
+
+
+ def find_offset_by_timestamp(self, searchstamp):
+ """
+ Find an offset by timestamp specified.
+ """
+ if self.start_stamp == 0 or self.end_stamp == 0:
+ logger.debug('Does not seem like anything useful is in this file')
+ raise epylog.OutOfRangeError('Nothing useful in this log')
+
+ if self.stamp_in_log(searchstamp) != 0:
+ msg = 'This stamp does not appear to be in this log'
+ raise epylog.OutOfRangeError(msg)
+
+ self._crude_locate(searchstamp)
+ self._fine_locate(searchstamp)
+
+ offset = self.fh.tell()
+ logger.debug('offset={}'.format(offset))
+
+ return offset
+
+ def dump_strings(self, fh):
+ """
+ Dump all strings from this logfile into a provided fh. Only the
+ offset entries are used.
+ """
+ if self.range_end is None:
+ msg = 'No range defined for logfile {}'.format(self.filename)
+ raise epylog.OutOfRangeError(msg, logger)
+
+ chunklen = self.range_end - self.range_start
+
+ logger.debug('range_start={}'.format(self.range_start))
+ logger.debug('range_end={}'.format(self.range_end))
+ logger.debug('chunklen={}'.format(chunklen))
+
+ self.fh.seek(self.range_start)
+ self.at_line_start = False
+
+ if chunklen > 0:
+            iternum = int(chunklen / epylog.CHUNK_SIZE)
+            lastchunk = chunklen % epylog.CHUNK_SIZE
+
+ logger.debug('iternum={}'.format(iternum))
+ logger.debug('lastchunk={}'.format(lastchunk))
+
+ if iternum > 0:
+ for i in range(iternum):
+ chunk = self.fh.read(epylog.CHUNK_SIZE)
+ self.at_line_start = False
+
+ fh.write(chunk)
+ logger.debug('wrote {} bytes from {} to {}'.format(
+ len(chunk), self.filename, fh.name))
+ if lastchunk > 0:
+ chunk = self.fh.read(lastchunk)
+ self.at_line_start = False
+
+ fh.write(chunk)
+ logger.debug('wrote {} bytes from {} to {}'.format(
+ len(chunk), self.filename, fh.name))
+ return chunklen
+
+ def get_range_stamps(self):
+ """
+ Get the timestamps at the beginning of the range offset, and at the
+ end.
+ """
+ logger.debug('range_start={}'.format(self.range_start))
+ self.fh.seek(self.range_start)
+ self.at_line_start = False
+
+ start_stamp = self._get_stamp()
+
+ self.fh.seek(self.range_end)
+ self.at_line_start = False
+
+ end_stamp = self._get_stamp()
+
+ logger.debug('start_stamp={}'.format(start_stamp))
+ logger.debug('end_stamp={}'.format(end_stamp))
+
+ return (start_stamp, end_stamp)
+
+ def get_line_at_offset(self, offset):
+ """
+ Get and return the line at a specified offset.
+ """
+ self.fh.seek(offset)
+ self.at_line_start = False
+
+ line = self.fh.readline()
+ offset = self.fh.tell()
+
+ return (line, offset)
+
+ def find_previous_entry_by_re(self, offset, regex, limit=1000):
+ """
+ Back up one line at a time and try to locate the one that
+ matches the provided regex.
+ """
+ self.fh.seek(offset)
+ self.at_line_start = False
+
+ count = 0
+
+ while True:
+ line = self._lineback()
+ if regex.search(line):
+ logger.debug('Found line: {}'.format(line))
+ break
+
+ count += 1
+
+ logger.debug('No match, going back more (count={})'.format(count))
+
+ if count > limit:
+ logger.debug('Reached backstepping limit')
+ msg = 'Out of sane range looking for line'
+ raise epylog.OutOfRangeError(msg)
+
+ return (line, self.fh.tell())
+
+ def _crude_locate(self, stamp):
+ """
+ This is a binary search that would look for a line matching the
+ provided timestamp.
+ """
+ logger.info('Looking for {} in {}'.format(stamp, self.filename))
+
+ increment = int(self.end_offset/2)
+ relative = increment
+
+ logger.debug('rewinding the logfile')
+ self.fh.seek(0)
+ self.at_line_start = True
+
+ logger.debug('initial increment={}'.format(increment))
+ logger.debug('initial relative={}'.format(relative))
+
+ ostamp = None
+
+ while True:
+ old_ostamp = ostamp
+
+ self._rel_position(relative)
+
+ ostamp = self._get_stamp()
+ if ostamp == 0:
+ logger.debug('Bogus timestamp! Breaking.')
+ break
+
+ logger.debug('ostamp={}'.format(ostamp))
+
+ if old_ostamp == ostamp:
+ logger.debug('ostamp and old_ostamp the same. Breaking')
+ break
+
+ increment = int(increment/2)
+ logger.debug('increment={}'.format(increment))
+
+ if ostamp < stamp:
+ logger.debug('<<<<<<<')
+ relative = increment
+ logger.debug('Jumping forward by {}'.format(relative))
+
+ elif ostamp > stamp:
+ logger.debug('>>>>>>>')
+ relative = -increment
+ logger.debug('Jumping backward by {}'.format(relative))
+
+ elif ostamp == stamp:
+ logger.debug('=======')
+ break
+
+ logger.debug('Crude search done at offset {}'.format(self.fh.tell()))
+
+ def _fine_locate(self, stamp):
+ """
+ This search algorithm will locate the best match line by line.
+ It's best used after _crude_locate, otherwise it'll take forever.
+ """
+ lineloc = 0
+ oldlineloc = 0
+
+ before_stamp = None
+ after_stamp = None
+ current_stamp = None
+
+ while True:
+ try:
+ if lineloc > 0:
+ logger.debug('Going forward one line')
+
+ before_stamp = current_stamp
+ current_stamp = after_stamp
+ after_stamp = None
+
+ self._lineover()
+
+ elif lineloc < 0:
+ logger.debug('Going back one line')
+
+                    after_stamp = current_stamp
+                    current_stamp = before_stamp
+                    before_stamp = None
+
+ self._lineback()
+
+ offset = self.fh.tell()
+
+ if offset >= self.end_offset:
+ # We have reached the end of the initialized log.
+ # There are possibly entries past this point, but
+ # we can't trust them, as they are appended after the
+ # init and can screw us up.
+ logger.debug('End of initialized log reached, breaking')
+ self.fh.seek(self.end_offset)
+ self.at_line_start = False
+
+ break
+
+ if current_stamp is None:
+ current_stamp = self._get_stamp()
+ self.fh.seek(offset)
+ self.at_line_start = False
+
+ if before_stamp is None:
+ self._lineback()
+ before_stamp = self._get_stamp()
+ self.fh.seek(offset)
+ self.at_line_start = False
+
+ if after_stamp is None:
+ self._lineover()
+ after_stamp = self._get_stamp()
+ self.fh.seek(offset)
+ self.at_line_start = False
+
+ except IOError:
+ logger.debug('Either end or start of file reached, breaking')
+ break
+
+ logger.debug('before_stamp={}'.format(before_stamp))
+ logger.debug('current_stamp={}'.format(current_stamp))
+ logger.debug('after_stamp={}'.format(after_stamp))
+ logger.debug('searching for {}'.format(stamp))
+
+ if before_stamp == 0 or current_stamp == 0 or after_stamp == 0:
+ logger.debug('Bogus stamps found. Breaking.')
+ break
+
+ oldlineloc = lineloc
+
+ if before_stamp >= stamp:
+ logger.debug('>>>>>')
+ lineloc = -1
+
+ elif before_stamp < stamp and after_stamp <= stamp:
+ logger.debug('<<<<<')
+ lineloc = 1
+
+ elif current_stamp < stamp and after_stamp >= stamp:
+ logger.debug('<<<<<')
+ lineloc = 1
+
+ elif before_stamp < stamp and current_stamp >= stamp:
+ logger.debug('=====')
+ break
+
+ if oldlineloc == -lineloc:
+ # fine_locate cannot reverse direction.
+ # If it does, that means that entries are not in order,
+ # which may happen quite frequently on poorly ntpd'd
+ # machines. Get out and hope this is good enough.
+ logger.warning('Entries in {} going back in time!'.format(
+ self.filename))
+ break
+
+ logger.debug('fine locate done at offset {}'.format(self.fh.tell()))
+
+ def _lineover(self):
+ """
+ Go forward one line and return it.
+ """
+ offset = self.fh.tell()
+ entry = self.fh.readline()
+
+ self.at_line_start = True
+
+ if self.fh.tell() == offset:
+ logger.debug('End of file reached!')
+ raise IOError
+
+ logger.debug('New offset at {}'.format(self.fh.tell()))
+
+ return entry
+
+ def _lineback(self):
+ """
+ Go back one line and return it.
+ """
+ self._set_at_line_start()
+
+ if self.fh.tell() <= 1:
+ logger.debug('Start of file reached')
+ raise IOError
+
+ entry = self._rel_position(-2)
+ logger.debug('New offset at {}'.format(self.fh.tell()))
+
+ return entry
+
+ def _get_stamp(self):
+ """
+ Get the timestamp at current offset.
+ """
+ self._set_at_line_start()
+
+ offset = self.fh.tell()
+ curline = self.fh.readline()
+
+ # rewind back where we were
+ self.fh.seek(offset)
+
+ if len(curline):
+ try:
+ stamp = self._mkstamp_from_syslog_datestr(curline)
+ except epylog.FormatError:
+ stamp = 0
+
+ else:
+ logger.debug('Nothing in the range')
+ stamp = 0
+
+ return stamp
+
+ def _rel_position(self, relative):
+ """
+ Position the offset within a file based on the "relative" variable
+ passed in as argument. It can be positive or negative. Then it will
+ position itself at the start of the line.
+ """
+ offset = self.fh.tell()
+ new_offset = offset + relative
+
+ logger.debug('offset={}'.format(offset))
+ logger.debug('relative={}'.format(relative))
+ logger.debug('new_offset={}'.format(new_offset))
+
+ if new_offset < 0:
+ logger.debug('new_offset less than 0. Setting to 0')
+ new_offset = 0
+
+ self.fh.seek(new_offset)
+ self.at_line_start = False
+
+ entry = self._set_at_line_start()
+
+ logger.debug('after _set_at_line_start: {}'.format(self.fh.tell()))
+
+ return entry
+
+ def _mkstamp_from_syslog_datestr(self, datestr):
+ """
+ Make a timestamp out of the syslog-format date entry.
+ """
+ logger.debug('datestr={}'.format(datestr))
+ timestamp = mkstamp_from_syslog_datestr(datestr)
+
+ if timestamp == -1:
+ msg = 'Odd date format in entry: {}'.format(datestr)
+ raise epylog.FormatError(msg)
+
+ logger.debug('timestamp={}'.format(timestamp))
+
+ return timestamp
+
+ def _accesscheck(self):
+ """
+ Quick sanity checks on the logfile.
+ """
+ logfile = self.filename
+
+ if not os.access(logfile, os.F_OK):
+ msg = 'Log file "{}" does not exist'.format(logfile)
+ raise epylog.AccessError(msg)
+
+ if not os.access(logfile, os.R_OK):
+ msg = 'Logfile "{}" is not readable'.format(logfile)
+ raise epylog.AccessError(msg)
+
+ def _set_at_line_start(self):
+ """
+ Position ourselves at the beginning of the line.
+ """
+ if self.at_line_start:
+ logger.debug('self.at_line_start says already there')
+ return ''
+
+ orig_offset = self.fh.tell()
+
+        if orig_offset == 0:
+            logger.debug('Already at file start')
+            self.at_line_start = True
+            return ''
+
+ logger.debug('starting the backstepping loop')
+
+ entry = ''
+ while True:
+ curchar = self.fh.read(1)
+ if curchar == '\n':
+                logger.debug('Found newline at offset {}'.format(self.fh.tell()))
+ break
+
+ entry = curchar + entry
+ offset = self.fh.tell() - 1
+ self.fh.seek(offset)
+
+ if offset == 0:
+ logger.debug('Beginning of file reached!')
+ break
+
+ offset = offset - 1
+ self.fh.seek(offset)
+
+ logger.debug('Exited the backstepping loop')
+
+ now_offset = self.fh.tell()
+ rewound = orig_offset - now_offset
+
+ logger.debug('Line start found at offset "{}"'.format(now_offset))
+ logger.debug('rewound by {} characters'.format(rewound))
+
+ self.at_line_start = True
+
+ return entry
+
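The `_crude_locate`/`_fine_locate` pair above is a two-phase search: a binary search over byte offsets narrows down the region, then a line-by-line pass finds the exact boundary. A minimal standalone sketch of the same idea over an in-memory list of timestamps (`locate_stamp` is a hypothetical helper, not part of epylog):

```python
import bisect

def locate_stamp(stamps, target):
    """Return the index of the first entry whose stamp is >= target.

    stamps must be sorted ascending -- the same assumption _fine_locate
    makes about syslog entries being in time order.
    """
    # "Crude" phase: a binary search, analogous to halving the byte offset.
    ix = bisect.bisect_left(stamps, target)
    # In the file-based version a separate "fine" phase is needed because
    # a byte offset rarely lands exactly on a line boundary; here
    # bisect_left already returns the first qualifying index.
    return ix

print(locate_stamp([100, 200, 200, 300, 400], 250))  # 3
```

The file-based variant additionally has to cope with out-of-order stamps (hence the direction-reversal guard and warning in `_fine_locate`); the in-memory sketch simply assumes sorted input.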
diff --git a/epylog/module.py b/epylog/module.py
new file mode 100644
index 0000000..c3e1a15
--- /dev/null
+++ b/epylog/module.py
@@ -0,0 +1,296 @@
+"""
+This module handles the... er... modules for epylog, both internal and
+external.
+"""
+##
+# Copyright (C) 2003-2005 by Duke University
+# Copyright (C) 2005-2012 by Konstantin Ryabitsev and contributors
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
+# 02111-1307, USA.
+#
+# @Author Konstantin Ryabitsev <icon(a)mricon.com>
+#
+
+import ConfigParser
+import epylog
+import os
+import tempfile
+import string
+import re
+
+from string import Template
+
+from ihooks import BasicModuleLoader
+_loader = BasicModuleLoader()
+
+import logging
+
+logger = logging.getLogger('epylog')
+
+class Module:
+ """epylog Module class"""
+
+ def __init__(self, cfgfile, logtracker, config, ui):
+ self.ui = ui
+
+ self.tmpprefix = config.tmpprefix
+ self.paths = config.paths
+
+ logger.info('Initializing module for cfgfile {}'.format(cfgfile))
+
+ modcfg = ConfigParser.ConfigParser()
+ modcfg.read(cfgfile)
+
+        try:
+            self.name = modcfg.get('module', 'desc')
+        except ConfigParser.Error:
+            self.name = 'Unnamed Module'
+        try:
+            self.enabled = modcfg.getboolean('module', 'enabled')
+        except (ConfigParser.Error, ValueError):
+            self.enabled = False
+
+ if not self.enabled:
+ logger.info('Module "{}" is not enabled.'.format(self.name))
+ return
+
+ try:
+ executable = modcfg.get('module', 'exec')
+ self.executable = Template(executable).safe_substitute(self.paths)
+        except ConfigParser.Error:
+ msg = 'Did not find executable name in "{}"'.format(cfgfile)
+ raise epylog.ConfigError(msg)
+
+ try:
+ self.priority = modcfg.getint('module', 'priority')
+ except:
+ self.priority = 10
+
+ try:
+ logentries = modcfg.get('module', 'files')
+ except:
+ msg = 'Log sources not found in "{}"'.format(cfgfile)
+ raise epylog.ConfigError(msg)
+
+ self.extraopts = {}
+
+ if modcfg.has_section('conf'):
+ logger.info('Found extra options')
+
+ for option in modcfg.options('conf'):
+ value = modcfg.get('conf', option)
+ value = Template(value).safe_substitute(self.paths)
+ self.extraopts[option] = value
+
+ logger.debug('extra opt {}={}'.format(option, value))
+
+ modname = os.path.basename(self.executable)
+
+ tempfile.tempdir = self.tmpprefix
+
+ self.logdump = tempfile.mktemp('%s.DUMP' % modname)
+ self.logreport = tempfile.mktemp('%s.REPORT' % modname)
+ self.logfilter = tempfile.mktemp('%s.FILTER' % modname)
+
+ logger.debug('name={}'.format(self.name))
+ logger.debug('executable={}'.format(self.executable))
+ logger.debug('enabled={}'.format(self.enabled))
+ logger.debug('priority={}'.format(self.priority))
+ logger.debug('logentries={}'.format(logentries))
+ logger.debug('logdump={}'.format(self.logdump))
+ logger.debug('logreport={}'.format(self.logreport))
+ logger.debug('logfilter={}'.format(self.logfilter))
+
+ self._init_module()
+
+ stubnames = logentries.split(',')
+ self.logs = []
+
+ for stubname in stubnames:
+ stubname = stubname.strip()
+
+ if stubname[0] != '$':
+ msg = 'Log definitions must start with "$"'
+ raise epylog.ConfigError(msg)
+
+ stubname = stubname[1:]
+
+ logger.debug('stubname={}'.format(stubname))
+
+ if stubname == 'ALL':
+ logger.info('Getting ALL logs')
+ self.logs = logtracker.logs.values()
+
+ else:
+ logger.info('Getting a log matching "{}"'.format(stubname))
+
+ try:
+ log = logtracker.getlog(stubname)
+ except epylog.NoSuchLogError:
+ # Do not die, but disable this module and complain loudly
+ logger.error('Could not find log matching "{}"'.format(
+ stubname))
+ self.enabled = False
+ continue
+
+ self.logs.append(log)
+
+ if len(self.logs) == 0:
+ self.enabled = False
+ logger.warning('Module "{}" disabled'.format(self.name))
+ return
+
+ def _init_module(self):
+ """
+ Initializes an internal module by importing it and running
+ the __init__.
+ """
+ dirname = os.path.dirname(self.executable)
+ modname = os.path.basename(self.executable)
+ modname = modname[:-3]
+
+ logger.info('Importing module "{}"'.format(modname))
+
+ stuff = _loader.find_module_in_dir(modname, dirname)
+
+ if stuff:
+ try:
+ module = _loader.load_module(modname, stuff)
+ except Exception, e:
+ msg = 'Failure trying to import module "{}" ({}): {}'.format(
+ self.name, self.executable, e)
+ raise epylog.ModuleError(msg)
+
+ else:
+ msg = 'Could not find module "{}" in dir "{}"'.format(
+ modname, dirname)
+ raise epylog.ModuleError(msg)
+
+ try:
+ modclass = getattr(module, modname)
+ self.epymod = modclass(self.extraopts)
+ except AttributeError:
+ msg = 'Could not instantiate class "{}" in module "{}"'.format(
+ modname, self.executable)
+ raise epylog.ModuleError(msg)
+
+ logger.info('Opening "{}" for writing'.format(self.logfilter))
+ self.filtfh = open(self.logfilter, 'w+')
+
+ def message_match(self, message):
+ """
+ Used by internal modules to match the message of a syslog entry
+ against the list of regexes in the .regex_map.
+ """
+ handler = None
+ match_regex = None
+
+ for regex in self.epymod.regex_map.keys():
+ if regex.search(message):
+ logger.debug('match: {}'.format(message))
+ logger.debug('matching module: {}'.format(self.name))
+
+ match_regex = regex
+ handler = self.epymod.regex_map[regex]
+ break
+
+ return (handler, match_regex)
+
+ def put_filtered(self, line):
+ """
+ Puts a filtered line into the file with all filtered lines.
+ """
+ self.filtfh.write(line)
+ logger.debug('Wrote "{}" into filtfh'.format(line))
+
+ def no_report(self):
+ """
+ Cleanup routine in case there is no report for this module.
+ """
+ self.logreport = None
+ self.logfilter = None
+ self.close_filtered()
+
+ def close_filtered(self):
+ """
+ Closes the file with filtered messages.
+ """
+ self.filtfh.close()
+
+ def finalize_processing(self, rs):
+ """
+ Called at the end of all processing to generate the report,
+ return it, and delete the imported internal module.
+ """
+ logger.info('Finalizing for module "{}"'.format(self.name))
+
+ if self.filtfh.tell():
+ if not rs.is_empty():
+ report = self.epymod.finalize(rs)
+ if report:
+ logger.debug('----Report begins----')
+ logger.debug(report)
+ logger.debug('----Report ends-----')
+
+ repfh = open(self.logreport, 'w')
+ repfh.write(report)
+ repfh.close()
+ else:
+ self.logreport = None
+ self.logfilter = None
+ else:
+ logger.info('No filtered strings for this module')
+ self.logreport = None
+ self.logfilter = None
+
+ self.close_filtered()
+
+ logger.info('Done with this module, deleting')
+ del self.epymod
+
+ def get_html_report(self):
+ """
+        Get the report from a module, converting it to HTML first if it
+        is not HTML already.
+ """
+ if self.logreport is None:
+ logger.info('No report from this module')
+ return None
+
+ if not os.access(self.logreport, os.R_OK):
+ msg = 'Log report from module "{}" is missing'.format(self.name)
+ raise epylog.ModuleError(msg)
+
+ logger.info('Reading the report from file "{}"'.format(self.logreport))
+
+ fh = open(self.logreport)
+ report = fh.read()
+ fh.close()
+
+ return report
+
+ def _make_into_html(self, report):
+ """
+        Utility function that turns plaintext into HTML by wrapping it
+        in "<pre></pre>" and escaping the HTML special characters.
+ """
+        report = report.replace('&', '&amp;')
+        report = report.replace('<', '&lt;')
+        report = report.replace('>', '&gt;')
+
+ report = '<pre>\n{}\n</pre>'.format(report)
+
+ return report
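The `message_match()` method above dispatches on a module-provided `regex_map`, a dict of compiled regexes mapped to handler callables; the first regex that matches wins. A minimal sketch of that contract (the regexes and handlers below are hypothetical, not actual epylog module code):

```python
import re

# Hypothetical internal module: epylog modules expose a regex_map
# dict of compiled-regex -> handler, searched by Module.message_match().
regex_map = {
    re.compile(r'session opened for user (\S+)'): lambda m: ('login', m.group(1)),
    re.compile(r'session closed for user (\S+)'): lambda m: ('logout', m.group(1)),
}

def message_match(message):
    # Mirrors Module.message_match: return the handler and the regex
    # of the first pattern that matches, or Nones if nothing matched.
    for regex, handler in regex_map.items():
        match = regex.search(message)
        if match:
            return handler, regex, match
    return None, None, None

handler, regex, match = message_match('sshd[42]: session opened for user alice')
print(handler(match))  # ('login', 'alice')
```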
diff --git a/py/epylog/mytempfile.py b/epylog/mytempfile.py
similarity index 100%
rename from py/epylog/mytempfile.py
rename to epylog/mytempfile.py
diff --git a/epylog/publishers.py b/epylog/publishers.py
new file mode 100644
index 0000000..c619fcd
--- /dev/null
+++ b/epylog/publishers.py
@@ -0,0 +1,657 @@
+"""
+This module is used to publish the report into a set of predefined
+publisher classes. You can write your own, as long as they contain the
+__init__ and publish methods.
+"""
+##
+# Copyright (C) 2003-2005 by Duke University
+# Copyright (C) 2005-2012 by Konstantin Ryabitsev and contributors
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
+# 02111-1307, USA.
+#
+# @Author Konstantin Ryabitsev <icon@mricon.com>
+#
+
+import epylog
+import os
+import re
+import socket
+import time
+import shutil
+import gzip
+import tempfile
+
+import logging
+
+from string import Template
+
+logger = logging.getLogger('epylog')
+
+def make_html_page(template, starttime, endtime, title, module_reports,
+ unparsed):
+ """
+    Make an HTML page out of a set of parameters, which include
+    module reports. Used by most, if not all, publishers.
+ """
+ logger.info('Making a standard report page')
+
+ valumap = {
+ 'starttime' : starttime,
+ 'endtime' : endtime,
+ 'title' : title,
+ 'hostname' : socket.gethostname()
+ }
+
+ logger.info('Concatenating the module reports together')
+
+ allrep = ''
+ for modrep in module_reports:
+ logger.info('Processing report for "{}"'.format(modrep.name))
+ allrep = '{}\n<h2>{}</h2>\n{}'.format(allrep, modrep.name,
+ modrep.htmlreport)
+ if allrep == '':
+ allrep = 'No module reports'
+
+ valumap['module_reports'] = allrep
+
+ if unparsed is not None:
+        unparsed = unparsed.replace('&', '&amp;')
+        unparsed = unparsed.replace('<', '&lt;')
+        unparsed = unparsed.replace('>', '&gt;')
+
+ unparsed = '<pre>\n{}</pre>'.format(unparsed)
+
+ else:
+ unparsed = 'No unparsed strings'
+
+ valumap['unparsed_strings'] = unparsed
+ valumap['version'] = epylog.VERSION
+
+ endpage = Template(template).safe_substitute(valumap)
+
+ logger.debug('----htmlreport starts----')
+ logger.debug(endpage)
+ logger.debug('----htmlreport ends-----')
+
+ return endpage
+
+def do_chunked_gzip(infh, outfh, filename, ui):
+ """
+ A memory-friendly way of compressing the data.
+ """
+ gzfh = gzip.GzipFile('rawlogs', fileobj=outfh)
+
+ bartotal = infh.tell()
+ bardone = 0
+ bartitle = 'Gzipping ' + filename
+
+ infh.seek(0)
+
+ while True:
+ chunk = infh.read(epylog.CHUNK_SIZE)
+ if not chunk:
+ break
+
+ gzfh.write(chunk)
+ bardone += len(chunk)
+ ui.progressbar(bartitle, bardone, bartotal)
+ logger.debug('Wrote {} bytes'.format(len(chunk)))
+
+ gzfh.close()
+ ui.endbar(bartitle, 'gzipped down to {} bytes'.format(outfh.tell()))
+
+def mail_smtp(smtpserv, fromaddr, toaddr, msg):
+ """
+ Send mail using smtp.
+ """
+ import smtplib
+
+ logger.info('Mailing via SMTP server {}'.format(smtpserv))
+
+ server = smtplib.SMTP(smtpserv)
+ server.sendmail(fromaddr, toaddr, msg)
+ server.quit()
+
+def mail_sendmail(sendmail, msg):
+ """
+ Send mail using sendmail.
+ """
+ logger.info('Mailing the message via sendmail')
+
+ p = os.popen(sendmail, 'w')
+ p.write(msg)
+ p.close()
+
+class MailPublisher:
+ """
+ This publisher sends the results of an epylog run as an email message.
+ """
+
+ name = 'Mail Publisher'
+
+ def __init__(self, sec, config, ui):
+ self.ui = ui
+ self.tmpprefix = config.tmpprefix
+ self.section = sec
+
+ try:
+ mailto = config.get(self.section, 'mailto')
+ addrs = mailto.split(',')
+ self.mailto = []
+
+ for addr in addrs:
+ addr = addr.strip()
+ logger.debug('adding mailto=' + addr)
+ self.mailto.append(addr)
+ except:
+ self.mailto = ['root']
+
+ try:
+ mailfmt = config.get(self.section, 'format')
+ except:
+ mailfmt = 'both'
+
+ if mailfmt not in ('html', 'plain', 'both'):
+            msg = ('Format for Mail Publisher must be either "html", "plain",'
+                   ' or "both". Format "{}" is unknown').format(mailfmt)
+ raise epylog.ConfigError(msg)
+
+ self.mailfmt = mailfmt
+
+ if mailfmt != 'html':
+ try:
+ lynx = config.get(self.section, 'lynx')
+ except:
+ lynx = '/usr/bin/lynx'
+ if not os.access(lynx, os.X_OK):
+ msg = 'Could not find "{}"'.format(lynx)
+ raise epylog.ConfigError(msg)
+
+ self.lynx = lynx
+ logger.info('Usable "lynx" found in "{}"'.format(self.lynx))
+
+ try:
+ include_rawlogs = config.getboolean(self.section, 'include_rawlogs')
+ except:
+ include_rawlogs = False
+
+ if include_rawlogs:
+ try:
+ rawlogs = int(config.get(self.section, 'rawlogs_limit'))
+ except:
+ rawlogs = 200
+ self.rawlogs = rawlogs * 1024
+
+ else:
+ self.rawlogs = 0
+
+ try:
+ self.smtpserv = config.get(self.section, 'smtpserv')
+ except:
+ self.smtpserv = 'localhost'
+
+ logger.debug('mailfmt={}'.format(self.mailfmt))
+ logger.debug('rawlogs={}'.format(self.rawlogs))
+ logger.debug('smtpserv={}'.format(self.smtpserv))
+
+ try:
+ self.gpg_encrypt = config.getboolean(self.section, 'gpg_encrypt')
+
+ try:
+ self.gpg_keyringdir = config.get(self.section, 'gpg_keyringdir')
+ except:
+ self.gpg_keyringdir = None
+
+ try:
+ gpg_recipients = config.get(self.section, 'gpg_recipients')
+ keyids = gpg_recipients.split(',')
+ self.gpg_recipients = []
+ for keyid in keyids:
+ keyid = keyid.strip()
+ logger.debug('adding gpg_recipient=' + keyid)
+ self.gpg_recipients.append(keyid)
+ except:
+ # Will use all recipients found in the keyring
+ self.gpg_recipients = None
+
+ try:
+ gpg_signers = config.get(self.section, 'gpg_signers')
+ keyids = gpg_signers.split(',')
+ self.gpg_signers = []
+ for keyid in keyids:
+ keyid = keyid.strip()
+ logger.debug('adding gpg_signer=' + keyid)
+ self.gpg_signers.append(keyid)
+ except:
+ self.gpg_signers = None
+
+ except:
+ self.gpg_encrypt = False
+
+ logger.debug('gpg_encrypt={}'.format(self.gpg_encrypt))
+
+
+ def publish(self, template, starttime, endtime, title, module_reports,
+ unparsed_strings, rawfh):
+ logger.info('Creating a standard html page report')
+
+ html_report = make_html_page(template, starttime, endtime, title,
+ module_reports, unparsed_strings)
+
+ self.htmlrep = html_report
+
+ self.plainrep = None
+
+ if self.mailfmt != 'html':
+ logger.info('Creating a plaintext format of the report')
+
+ tempfile.tempdir = self.tmpprefix
+ htmlfile = tempfile.mktemp('.html')
+
+ tfh = open(htmlfile, 'w')
+ tfh.write(html_report)
+ tfh.close()
+
+ logger.info('HTML report is in "{}"'.format(htmlfile))
+
+ plainfile = tempfile.mktemp('PLAIN')
+
+ logger.info('PLAIN report will go into "{}"'.format(plainfile))
+
+ logger.info('Making a syscall to "{}"'.format(self.lynx))
+
+ exitcode = os.system('{} -dump {} > {} 2>/dev/null'.format(
+ self.lynx, htmlfile, plainfile))
+
+ if exitcode or not os.access(plainfile, os.R_OK):
+ msg = 'Error making a call to "{}"'.format(self.lynx)
+ raise epylog.SysCallError(msg)
+
+ logger.info('Reading in the plain version')
+
+ tfh = open(plainfile)
+ self.plainrep = tfh.read()
+ tfh.close()
+
+ logger.debug('----plainrep follows----')
+ logger.debug(self.plainrep)
+ logger.debug('----plainrep ends----')
+
+ if self.rawlogs:
+ # GzipFile doesn't work with StringIO. :/ Bleh.
+ tempfile.tempdir = self.tmpprefix
+ outfh = open(tempfile.mktemp('GZIP'), 'w+')
+ do_chunked_gzip(rawfh, outfh, 'rawlogs', self.ui)
+ size = outfh.tell()
+
+ if size > self.rawlogs:
+ logger.warning('{} is over the defined max of "{}"'.format(
+ size, self.rawlogs))
+ logger.warning('Not attaching the raw logs')
+ self.rawlogs = 0
+ else:
+ logger.debug('Reading in the gzipped logs')
+ outfh.seek(0)
+ self.gzlogs = outfh.read()
+
+ outfh.close()
+
+ logger.info('Creating an email message')
+
+ try:
+ from email.mime.base import MIMEBase
+ from email.mime.text import MIMEText
+ from email.mime.multipart import MIMEMultipart
+ except ImportError:
+ from email.MIMEBase import MIMEBase
+ from email.MIMEText import MIMEText
+ from email.MIMEMultipart import MIMEMultipart
+
+ logger.debug('Creating a main header')
+
+ root_part = MIMEMultipart('mixed')
+ root_part.preamble = 'This is a multi-part message in MIME format.'
+
+        text_part = None
+        if self.plainrep is not None:
+            logger.debug('Creating the text/plain part')
+            text_part = MIMEText(self.plainrep, 'plain', 'utf-8')
+        logger.debug('Creating the text/html part')
+        html_part = MIMEText(self.htmlrep, 'html', 'utf-8')
+
+ if self.rawlogs > 0:
+ logger.debug('Creating the application/x-gzip part')
+ attach_part = MIMEBase('application', 'x-gzip')
+ attach_part.set_payload(self.gzlogs)
+
+ from email.encoders import encode_base64
+
+ logger.debug('Encoding the gzipped raw logs with base64')
+ encode_base64(attach_part)
+ attach_part.add_header('Content-Disposition', 'attachment',
+ filename='raw.log.gz')
+
+ if self.mailfmt == 'both':
+ # create another multipart for text+html
+ alt_part = MIMEMultipart('alternative')
+ alt_part.attach(text_part)
+ alt_part.attach(html_part)
+ root_part.attach(alt_part)
+ elif self.mailfmt == 'html':
+ root_part.attach(html_part)
+ elif self.mailfmt == 'plain':
+ root_part.attach(text_part)
+
+ if self.rawlogs > 0:
+ root_part.attach(attach_part)
+
+ if self.gpg_encrypt:
+ logger.info('Encrypting the message')
+
+ from StringIO import StringIO
+ try:
+ import gpgme
+
+ if self.gpg_keyringdir and os.path.exists(self.gpg_keyringdir):
+ logger.debug('Setting keyring dir to {}'.format(
+ self.gpg_keyringdir))
+ os.environ['GNUPGHOME'] = self.gpg_keyringdir
+
+ msg = root_part.as_string()
+ logger.debug('----Cleartext follows----')
+ logger.debug(msg)
+ logger.debug('----Cleartext ends----')
+
+ cleartext = StringIO(msg)
+ ciphertext = StringIO()
+
+ ctx = gpgme.Context()
+
+ ctx.armor = True
+
+ recipients = []
+ signers = []
+
+ logger.debug('gpg_recipients={}'.format(self.gpg_recipients))
+ logger.debug('gpg_signers={}'.format(self.gpg_signers))
+
+ if self.gpg_recipients is not None:
+ for recipient in self.gpg_recipients:
+ logger.debug('Looking for an encryption key for {}'.format(
+ recipient))
+ recipients.append(ctx.get_key(recipient))
+ else:
+ for key in ctx.keylist():
+ for subkey in key.subkeys:
+ if subkey.can_encrypt:
+ logger.debug('Found can_encrypt key={}'.format(
+ subkey.keyid))
+ recipients.append(key)
+ break
+
+ if self.gpg_signers is not None:
+ for signer in self.gpg_signers:
+ logger.debug('Looking for a signing key for {}'.format(
+ signer))
+ signers.append(ctx.get_key(signer))
+
+ if len(signers) > 0:
+ logger.info('Encrypting and signing the report')
+ ctx.signers = signers
+ ctx.encrypt_sign(recipients, gpgme.ENCRYPT_ALWAYS_TRUST,
+ cleartext, ciphertext)
+
+ else:
+ logger.info('Encrypting the report')
+ ctx.encrypt(recipients, gpgme.ENCRYPT_ALWAYS_TRUST,
+ cleartext, ciphertext)
+
+ logger.debug('Creating the MIME envelope for PGP')
+
+ gpg_envelope_part = MIMEMultipart('encrypted')
+ gpg_envelope_part.set_param('protocol',
+ 'application/pgp-encrypted', header='Content-Type')
+ gpg_envelope_part.preamble = ('This is an OpenPGP/MIME '
+ 'encrypted message (RFC 2440 and 3156)')
+
+ gpg_mime_version_part = MIMEBase('application', 'pgp-encrypted')
+ gpg_mime_version_part.add_header('Content-Disposition',
+ 'PGP/MIME version identification')
+ gpg_mime_version_part.set_payload('Version: 1')
+
+ gpg_payload_part = MIMEBase('application', 'octet-stream',
+ name='encrypted.asc')
+ gpg_payload_part.add_header('Content-Disposition',
+ 'OpenPGP encrypted message')
+ gpg_payload_part.add_header('Content-Disposition', 'inline',
+ filename='encrypted.asc')
+ gpg_payload_part.set_payload(ciphertext.getvalue())
+
+ gpg_envelope_part.attach(gpg_mime_version_part)
+ gpg_envelope_part.attach(gpg_payload_part)
+
+ # envelope becomes the new root part
+ root_part = gpg_envelope_part
+
+
+ except ImportError:
+ logger.error('Need crypto libraries for gpg_encrypt.')
+ logger.error('Install pygpgme for GPG encryption support.')
+ logger.error('Not mailing the report out of caution.')
+ return
+
+
+ root_part['Subject'] = title
+ root_part['To'] = ', '.join(self.mailto)
+ root_part['X-Mailer'] = epylog.VERSION
+
+ logger.debug('Creating the message as string')
+ msg = root_part.as_string()
+
+ logger.debug('----Message follows----')
+ logger.debug(msg)
+ logger.debug('----Message ends----')
+
+ logger.info('Figuring out if we are using sendmail or smtplib')
+
+ if re.compile('^/').search(self.smtpserv):
+ mail_sendmail(self.smtpserv, msg)
+ else:
+            fromaddr = 'root@{}'.format(socket.gethostname())
+ mail_smtp(self.smtpserv, fromaddr, self.mailto, msg)
+
+ self.ui.put('Mailed the report to: {}'.format(','.join(self.mailto)))
+
+
+class FilePublisher:
+ """
+ FilePublisher publishes the results of an Epylog run into a set of files
+ and directories on the hard drive.
+ """
+ name = 'File Publisher'
+ def __init__(self, sec, config, ui):
+ self.ui = ui
+ self.tmpprefix = config.tmpprefix
+
+ msg = 'Required attribute "{}" not found'
+ try:
+ expire = int(config.get(sec, 'expire_in'))
+ except:
+ raise epylog.ConfigError(msg.format('expire_in'))
+
+ try:
+ dirmask = config.get(sec, 'dirmask')
+ except:
+ raise epylog.ConfigError(msg.format('dirmask'))
+
+ try:
+ filemask = config.get(sec, 'filemask')
+ except:
+ raise epylog.ConfigError(msg.format('filemask'))
+
+ maskmsg = 'Invalid mask for {}: {}'
+ try:
+ self.dirname = time.strftime(dirmask, time.localtime())
+ except:
+ raise epylog.ConfigError(maskmsg.format('dirmask', dirmask))
+
+ try:
+ path = config.get(sec, 'path')
+ path = Template(path).safe_substitute(config.paths)
+ except:
+ raise epylog.ConfigError(msg.format('path'))
+
+ try:
+ self.filename = time.strftime(filemask, time.localtime())
+ except:
+            raise epylog.ConfigError(maskmsg.format('filemask', filemask))
+
+ self._prune_old(path, dirmask, expire)
+
+ self.path = os.path.join(path, self.dirname)
+
+ try:
+ self.save_rawlogs = config.getboolean(sec, 'save_rawlogs')
+ except:
+ self.save_rawlogs = 0
+
+ if self.save_rawlogs:
+ logger.info('Will save raw logs in the reports directory')
+
+ self.notify = []
+
+ try:
+ notify = config.get(sec, 'notify')
+ for addy in notify.split(','):
+ addy = addy.strip()
+ logger.info('Will notify: {}'.format(addy))
+ self.notify.append(addy)
+
+ except:
+ pass
+
+ try:
+ self.smtpserv = config.get(sec, 'smtpserv')
+ except:
+ self.smtpserv = '/usr/sbin/sendmail -t'
+
+ if self.notify:
+ try:
+ self.pubroot = config.get(sec, 'pubroot')
+ logger.debug('pubroot={}'.format(self.pubroot))
+ except:
+ msg = 'File publisher requires a pubroot when notify is set'
+ raise epylog.ConfigError(msg)
+
+ logger.debug('path={}'.format(self.path))
+ logger.debug('filename={}'.format(self.filename))
+
+ def _prune_old(self, path, dirmask, expire):
+ """
+ Removes the directories that are older than a certain date.
+ """
+ logger.info('Pruning directories older than {} days'.format(expire))
+
+ expire_limit = int(time.time()) - (86400 * expire)
+
+ logger.debug('expire_limit={}'.format(expire_limit))
+
+ if not os.path.isdir(path):
+ logger.info('Dir {} not found -- skipping pruning'.format(path))
+ return
+
+ for entry in os.listdir(path):
+ logger.debug('Found: {}'.format(entry))
+ if os.path.isdir(os.path.join(path, entry)):
+ try:
+ stamp = time.mktime(time.strptime(entry, dirmask))
+ except ValueError, e:
+ logger.info('Dir {} did not match dirmask {}: {}'.format(
+ entry, dirmask, e))
+ logger.info('Skipping {}'.format(entry))
+ continue
+
+ if stamp < expire_limit:
+ shutil.rmtree(os.path.join(path, entry))
+ self.ui.put('File Publisher: Pruned old dir: {}'.format(
+ entry))
+ else:
+ logger.info('{} is still active'.format(entry))
+ else:
+ logger.info('{} is not a directory. Skipping.'.format(entry))
+
+ logger.info('Finished with pruning')
+
+ def publish(self, template, starttime, endtime, title, module_reports,
+ unparsed_strings, rawfh):
+ logger.info('Checking and creating the report directories')
+
+ if not os.path.isdir(self.path):
+ try:
+ os.makedirs(self.path)
+ except OSError, e:
+ logger.error('Error creating directory "{}": {}'.format(
+ self.path, e))
+ logger.error('File publisher exiting.')
+ return
+
+ logger.info('Creating a standard html page report')
+ html_report = make_html_page(template, starttime, endtime, title,
+ module_reports, unparsed_strings)
+
+ filename = '{}.html'.format(self.filename)
+ repfile = os.path.join(self.path, filename)
+
+ logger.info('Dumping the report into {}'.format(repfile))
+
+ fh = open(repfile, 'w')
+ fh.write(html_report)
+ fh.close()
+
+ self.ui.put('Report saved in: {}'.format(self.path))
+
+ if self.notify:
+ logger.info('Creating an email message')
+ publoc = '{}/{}/{}'.format(self.pubroot, self.dirname, filename)
+
+ from email.mime.text import MIMEText
+ eml = MIMEText('New Epylog report is available at:\r\n{}'.format(
+ publoc))
+
+ eml['Subject'] = '{} (report notification)'.format(title)
+ eml['To'] = ', '.join(self.notify)
+ eml['X-Mailer'] = epylog.VERSION
+
+ msg = eml.as_string()
+
+ logger.info('Figuring out if we are using sendmail or smtplib')
+ if self.smtpserv[0] == '/':
+ mail_sendmail(self.smtpserv, msg)
+ else:
+ fromaddr = 'root@{}'.format(socket.gethostname())
+ mail_smtp(self.smtpserv, fromaddr, self.notify, msg)
+
+ self.ui.put('Notification mailed to: {}'.format(
+ ','.join(self.notify)))
+
+ if self.save_rawlogs:
+ logfilen = '{}.log'.format(self.filename)
+ logfile = os.path.join(self.path, '{}.gz'.format(logfilen))
+
+ logger.info('Gzipping logs and writing them to {}'.format(logfilen))
+ outfh = open(logfile, 'w+')
+            do_chunked_gzip(rawfh, outfh, logfilen, self.ui)
+ outfh.close()
+ self.ui.put('Gzipped logs saved in: {}'.format(self.path))
+
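The `do_chunked_gzip()` helper in this file streams the raw logs through `GzipFile` in fixed-size chunks so a large log never has to fit in memory at once. A self-contained sketch of the same technique (using a stand-in `CHUNK_SIZE` and in-memory buffers instead of epylog's temp files and progress bar):

```python
import gzip
import io

CHUNK_SIZE = 1024  # stand-in for epylog.CHUNK_SIZE

def chunked_gzip(infh, outfh):
    # Same idea as do_chunked_gzip() above: read fixed-size chunks and
    # feed them to GzipFile rather than slurping the whole file.
    gzfh = gzip.GzipFile('rawlogs', mode='wb', fileobj=outfh)
    infh.seek(0)
    while True:
        chunk = infh.read(CHUNK_SIZE)
        if not chunk:
            break
        gzfh.write(chunk)
    gzfh.close()

raw = io.BytesIO(b'Jan  1 00:00:00 host sshd[1]: test line\n' * 500)
gz = io.BytesIO()
chunked_gzip(raw, gz)

# Round-trip check: decompressing yields the original bytes.
gz.seek(0)
assert gzip.GzipFile(fileobj=gz).read() == raw.getvalue()
```

Note that `mode='wb'` must be passed explicitly here because `io.BytesIO` has no `mode` attribute for `GzipFile` to inherit; epylog's real file handles carry one.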
diff --git a/epylog/report.py b/epylog/report.py
new file mode 100644
index 0000000..a2676fc
--- /dev/null
+++ b/epylog/report.py
@@ -0,0 +1,232 @@
+"""
+This module handles generating the reports.
+"""
+##
+# Copyright (C) 2003-2005 by Duke University
+# Copyright (C) 2005-2012 by Konstantin Ryabitsev and contributors
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
+# 02111-1307, USA.
+#
+# @Author Konstantin Ryabitsev <icon@mricon.com>
+#
+
+import epylog
+import os
+import re
+import time
+import tempfile
+import socket
+
+from string import Template
+
+from publishers import *
+
+import logging
+
+logger = logging.getLogger('epylog')
+
+class ModuleReport:
+ """
+ A small helper class to hold Module HTML reports.
+ """
+ def __init__(self, name, htmlreport):
+ self.name = name
+ self.htmlreport = htmlreport
+
+class Report:
+ """
+ This helper class holds the contents of a report before it is
+ published using publisher classes.
+ """
+ def __init__(self, config, ui):
+ self.ui = ui
+        logger.info('Starting Report object initialization')
+
+ ##
+ # publishers: a tuple of publisher objects
+ # filt_fh: where the filtered strings from modules will go
+ # useful: tells epylog if the report is of any use or not.
+ # module_reports: module reports will be put here eventually
+ #
+ self.publishers = []
+ self.filt_fh = None
+ self.useful = False
+ self.module_reports = []
+
+ self.tmpprefix = config.tmpprefix
+ self.runtime = time.localtime()
+
+ sec = 'report'
+
+ try:
+ title = config.get(sec, 'title')
+ except:
+ title = '$hostname system events: $localtime'
+
+ try:
+ template = config.get(sec, 'template').strip()
+ except:
+ template = '$cfgdir/report_template.html'
+
+ self.template = Template(template).safe_substitute(config.paths)
+
+ try:
+ self.unparsed = config.getboolean(sec, 'include_unparsed')
+ except:
+ self.unparsed = True
+
+ try:
+ publishers = config.get(sec, 'publishers')
+ except:
+ msg = 'No publishers defined in "{}"'.format(sec)
+ raise epylog.ConfigError(msg)
+
+ titlevars = {
+ 'hostname' : socket.gethostname(),
+ 'localtime' : time.strftime('%c', self.runtime)
+ }
+
+ logger.debug('Before title={}'.format(title))
+ self.title = Template(title).safe_substitute(titlevars)
+ logger.debug('After title={}'.format(self.title))
+
+ logger.debug('template={}'.format(self.template))
+ logger.debug('unparsed={}'.format(self.unparsed))
+
+ if self.unparsed:
+            tempfile.tempdir = self.tmpprefix
+ filen = tempfile.mktemp('.FILT')
+ self.filt_fh = open(filen, 'w+')
+ logger.info('Filtered strings file created in {}'.format(filen))
+
+ logger.debug('publishers={}'.format(publishers))
+ logger.info('Initializing publishers')
+
+ for sec in publishers.split(','):
+ sec = sec.strip()
+
+ if sec not in config.sections():
+ message = 'Publisher section "{}" not found'.format(sec)
+ raise epylog.ConfigError(message)
+
+ try:
+ method = config.get(sec, 'method')
+ except:
+ msg = 'Publishing method not defined in "{}"'.format(sec)
+ raise epylog.ConfigError(msg)
+
+ logger.debug('method={}'.format(method))
+
+ if method == 'file':
+ publisher = FilePublisher(sec, config, ui)
+ elif method == 'mail':
+ publisher = MailPublisher(sec, config, ui)
+ else:
+ msg = 'Publishing method "{}" not supported'.format(method)
+                raise epylog.ConfigError(msg)
+
+ self.publishers.append(publisher)
+
+ def append_module_report(self, module_name, module_report):
+ """
+ Appends a module report.
+ """
+ if len(module_report) > 0:
+ modrep = ModuleReport(module_name, module_report)
+ logger.info('Appending report for "{}"'.format(module_name))
+ logger.debug('----report follows----')
+ logger.debug(module_report)
+ logger.debug('----report ends----')
+ self.module_reports.append(modrep)
+ self.useful = True
+ else:
+ logger.info('Module report is empty, ignoring')
+
+ def append_filtered_strings(self, module_name, fsfh):
+ """
+ Adds filtered strings to the report.
+ """
+ if self.filt_fh is None:
+ logger.info('No open filt_fh, ignoring')
+ return
+
+ fsfh.seek(0, 2)
+
+ if fsfh.tell() != 0:
+ logger.info('Appending filtered strings from module "{}"'.format(
+ module_name))
+ logger.debug('Doing chunked read from {} to {}'.format(
+ fsfh.name, self.filt_fh.name))
+ fsfh.seek(0)
+
+ while True:
+ chunk = fsfh.read(epylog.CHUNK_SIZE)
+ if len(chunk):
+ self.filt_fh.write(chunk)
+ logger.debug('wrote {} bytes'.format(len(chunk)))
+ else:
+ logger.debug('EOF reached')
+ break
+ self.useful = True
+
+ else:
+ logger.info('Filtered Strings are empty, ignoring')
+
+ def set_stamps(self, stamps):
+ """
+ Set the timestamps of the report -- the starting and the ending one.
+ """
+ (self.start_stamp, self.end_stamp) = stamps
+ logger.debug('start_stamp={}'.format(self.start_stamp))
+ logger.debug('end_stamp={}'.format(self.end_stamp))
+
+ def publish(self, rawfh, unparsed):
+ """
+ Publishes the report using all enabled publishers.
+ """
+ if self.filt_fh is not None:
+ if unparsed is None:
+ unparsed = self.mk_unparsed_from_raw(rawfh)
+ else:
+ unparsed = ''
+
+ logger.info('Reading in the template file "{}"'.format(self.template))
+
+ fh = open(self.template)
+ template = fh.read()
+ fh.close()
+
+ starttime = time.strftime('%c', time.localtime(self.start_stamp))
+ endtime = time.strftime('%c', time.localtime(self.end_stamp))
+
+ for publisher in self.publishers:
+ logger.info('Invoking publisher "{}"'.format(publisher.name))
+
+ publisher.publish(template,
+ starttime,
+ endtime,
+ self.title,
+ self.module_reports,
+ unparsed,
+ rawfh)
+
+
+ def is_report_useful(self):
+ """
+ Returns False if the report is not useful (no new strings in logs).
+ """
+ return self.useful
+
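`Report.__init__` above expands the `[report]` title and paths with `string.Template.safe_substitute()`, which is why the config files in this patch can use `$hostname`, `$localtime`, and `$cfgdir` placeholders. A short sketch of that expansion:

```python
import socket
import time
from string import Template

# How the [report] title is expanded in Report.__init__ above:
# known $placeholders are filled in from a dict of values.
title = '$hostname system events: $localtime'
titlevars = {
    'hostname': socket.gethostname(),
    'localtime': time.strftime('%c'),
}
print(Template(title).safe_substitute(titlevars))

# safe_substitute() leaves unknown placeholders intact instead of
# raising KeyError, so partially-expanded values survive unharmed.
assert Template('$cfgdir/report_template.html').safe_substitute({}) == \
    '$cfgdir/report_template.html'
```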
diff --git a/etc/epylog.conf.in b/etc/epylog.conf
similarity index 51%
rename from etc/epylog.conf.in
rename to etc/epylog.conf
index 0495214..1bbbaa3 100644
--- a/etc/epylog.conf.in
+++ b/etc/epylog.conf
@@ -2,31 +2,32 @@
# Main Epylog configuration file. See epylog.conf(5) for more info.
#
[main]
-cfgdir = %%pkgconfdir%%
-tmpdir = %%TEMP_DIR%%
-vardir = %%pkgvardir%%
+cfgdir = /etc/epylog
+tmpdir = /var/tmp
+vardir = /var/lib/epylog
+moduledir = /usr/share/epylog/modules
[report]
-title = @@HOSTNAME@@ system events: @@LOCALTIME@@
-template = %%pkgconfdir%%/report_template.html
+title = $hostname system events: $localtime
+template = $cfgdir/report_template.html
+publishers = mail
include_unparsed = yes
-publishers = mail
[mail]
-method = mail
-smtpserv = /usr/sbin/sendmail -t
-mailto = root
-format = html
-lynx = %%LYNX_BIN%%
+method = mail
+smtpserv = /usr/sbin/sendmail -t
+mailto = root
+format = html
+lynx = /usr/bin/w3m
include_rawlogs = no
-rawlogs_limit = 200
+rawlogs_limit = 200
##
# GPG encryption requires pygpgme installed
#
gpg_encrypt = no
# If gpg_keyringdir is omitted, we'll use the default ~/.gnupg for the
# user running epylog (/root/.gnupg, usually).
-#gpg_keyringdir = %%pkgconfdir%%/gpg/
+#gpg_keyringdir = $vardir/gpg/
# List key ids, can be emails or fingerprints. If omitted, we'll
# encrypt to all keys found in the pubring.
#gpg_recipients = admin1@example.com, admin2@example.com
@@ -35,12 +36,12 @@ gpg_encrypt = no
#gpg_signers = epylog@logserv.example.com
[file]
-method = file
-path = /var/www/html/epylog
-dirmask = %Y-%b-%d_%a
-filemask = %H%M
+method = file
+path = /var/www/epylog
+dirmask = %Y-%b-%d_%a
+filemask = %H%M
save_rawlogs = no
-expire_in = 7
-notify = root@localhost
-smtpserv = /usr/sbin/sendmail -t
-pubroot = http://localhost/epylog
+expire_in = 7
+notify = root
+smtpserv = /usr/sbin/sendmail -t
+pubroot = http://localhost/epylog
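The rewritten config above drops the build-time `%%VAR%%` autoconf substitution in favor of runtime `$name` interpolation (`$cfgdir`, `$hostname`, `$moduledir`). A minimal sketch of what that interpolation could look like — `expand()` and the settings dict are illustrative assumptions, not epylog's actual implementation:

```python
import re

# Hypothetical runtime "$name" interpolation, replacing the old
# build-time %%VAR%% substitution. Unknown names are left untouched.
def expand(value, settings):
    """Replace $name tokens with values from a settings dict."""
    return re.sub(r'\$(\w+)',
                  lambda mo: str(settings.get(mo.group(1), mo.group(0))),
                  value)

settings = {'cfgdir': '/etc/epylog', 'hostname': 'logserv'}
print(expand('$cfgdir/report_template.html', settings))
```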
diff --git a/etc/logsources.conf b/etc/logsources.conf
new file mode 100644
index 0000000..4aa47a6
--- /dev/null
+++ b/etc/logsources.conf
@@ -0,0 +1,19 @@
+[messages]
+source = /home/mricon/work/git/epylog/log/messages
+rotated = /home/mricon/work/git/epylog/log/messages-*
+tsformat = traditional
+
+[secure]
+source = /home/mricon/work/git/epylog/log/secure
+rotated = /home/mricon/work/git/epylog/log/secure-*
+tsformat = traditional
+
+[mail]
+source = /home/mricon/work/git/epylog/log/maillog
+rotated = /home/mricon/work/git/epylog/log/maillog-*
+tsformat = traditional
+
+[cron]
+source = /home/mricon/work/git/epylog/log/cron
+rotated = /home/mricon/work/git/epylog/log/cron-*
+tsformat = traditional
diff --git a/etc/modules.d/common_unparsed.conf.in b/etc/modules.d/common_unparsed.conf
similarity index 58%
rename from etc/modules.d/common_unparsed.conf.in
rename to etc/modules.d/common_unparsed.conf
index 71a0620..00cf3fa 100644
--- a/etc/modules.d/common_unparsed.conf.in
+++ b/etc/modules.d/common_unparsed.conf
@@ -1,10 +1,9 @@
[module]
desc = Common Unparsed Similar Strings Module
-exec = %%MODULES_DIR%%/common_unparsed_mod.py
-files = /var/log/messages[.#] /var/log/secure[.#]
-enabled = yes
-internal = yes
-outhtml = yes
+exec = $moduledir/common_unparsed_mod.py
+files = $messages, $secure
+# Needs adjusting to match 1.1
+enabled = no
priority = 10
[conf]
diff --git a/etc/modules.d/logins.conf.in b/etc/modules.d/logins.conf
similarity index 88%
rename from etc/modules.d/logins.conf.in
rename to etc/modules.d/logins.conf
index f4f7b26..c3b8c5d 100644
--- a/etc/modules.d/logins.conf.in
+++ b/etc/modules.d/logins.conf
@@ -1,10 +1,10 @@
[module]
-desc = Logins
-exec = %%MODULES_DIR%%/logins_mod.py
-files = /var/log/messages[.#], /var/log/secure[.#]
-enabled = yes
+desc = Logins
+exec = $moduledir/logins_mod.py
+files = $messages, $secure
+enabled = no
internal = yes
-outhtml = yes
+outhtml = yes
priority = 0
[conf]
@@ -12,13 +12,13 @@ priority = 0
# Only enable things useful for your configuration to speed things
# up. The more stuff you enable, the slower matching will be.
#
-enable_pam = 1
-enable_xinetd = 1
-enable_sshd = 1
+enable_pam = 1
+enable_xinetd = 1
+enable_sshd = 1
enable_uw_imap = 0
enable_dovecot = 0
enable_courier = 0
-enable_imp = 0
+enable_imp = 0
enable_proftpd = 0
##
diff --git a/etc/modules.d/mail.conf.in b/etc/modules.d/mail.conf
similarity index 69%
rename from etc/modules.d/mail.conf.in
rename to etc/modules.d/mail.conf
index fc98745..57e4579 100644
--- a/etc/modules.d/mail.conf.in
+++ b/etc/modules.d/mail.conf
@@ -1,10 +1,9 @@
[module]
desc = Mail Report
-exec = %%MODULES_DIR%%/mail_mod.py
-files = /var/log/maillog[.#]
-enabled = yes
-internal = yes
-outhtml = yes
+exec = $moduledir/mail_mod.py
+files = $maillog
+# Needs fixing to match 1.1
+enabled = no
priority = 3
[conf]
diff --git a/etc/modules.d/notices.conf.in b/etc/modules.d/notices.conf
similarity index 65%
rename from etc/modules.d/notices.conf.in
rename to etc/modules.d/notices.conf
index 93fccd3..8f8f0fe 100644
--- a/etc/modules.d/notices.conf.in
+++ b/etc/modules.d/notices.conf
@@ -1,22 +1,21 @@
[module]
desc = Notices
-exec = %%MODULES_DIR%%/notices_mod.py
-files = /var/log/messages[.#], /var/log/secure[.#], /var/log/maillog[.#]
-enabled = yes
-internal = yes
-outhtml = yes
+exec = $moduledir/notices_mod.py
+files = $ALL
+# Needs fixing in order to work at all
+enabled = no
priority = 9
[conf]
##
# Where is your notice_dist.xml file?
#
-notice_dist = %%pkgconfdir%%/notice_dist.xml
+notice_dist = $cfgdir/notice_dist.xml
##
# Add your own notices into notice_local.xml, not into notice_dist.xml!
# This way you don't risk missing future revisions to notice_dist.xml
#
-notice_local = %%pkgconfdir%%/notice_local.xml
+notice_local = $cfgdir/notice_local.xml
##
# You can list the ids of <notice> members from notice_dist.xml here
# namely, or you can use ALL to enable all of them. There is no need
diff --git a/etc/modules.d/ntp.conf b/etc/modules.d/ntp.conf
new file mode 100644
index 0000000..6ed5267
--- /dev/null
+++ b/etc/modules.d/ntp.conf
@@ -0,0 +1,7 @@
+[module]
+desc = NTP
+exec = $moduledir/sntp_mod.py
+files = $messages
+# Needs review and fixing
+enabled = no
+priority = 8
diff --git a/etc/modules.d/packets.conf.in b/etc/modules.d/packets.conf
similarity index 79%
rename from etc/modules.d/packets.conf.in
rename to etc/modules.d/packets.conf
index 24f89a7..673a79f 100644
--- a/etc/modules.d/packets.conf.in
+++ b/etc/modules.d/packets.conf
@@ -1,17 +1,16 @@
[module]
desc = Packet Filter
-exec = %%MODULES_DIR%%/packets_mod.py
-files = /var/log/messages[.#]
-enabled = yes
-internal = yes
-outhtml = yes
+exec = $moduledir/packets_mod.py
+files = $messages
+# Needs fixing in order to match 1.1
+enabled = no
priority = 1
[conf]
##
# Where to look for the trojans list.
#
-trojan_list = %%pkgconfdir%%/trojans.list
+trojan_list = $cfgdir/trojans.list
##
# If a remote host hits this many systems, then don't list them namely,
# but collapse them into a nice report, e.g.: [50 hosts]
diff --git a/etc/modules.d/rsyncd.conf.in b/etc/modules.d/rsyncd.conf
similarity index 67%
rename from etc/modules.d/rsyncd.conf.in
rename to etc/modules.d/rsyncd.conf
index a271f13..0e45f27 100644
--- a/etc/modules.d/rsyncd.conf.in
+++ b/etc/modules.d/rsyncd.conf
@@ -1,10 +1,9 @@
[module]
desc = Rsyncd
-exec = %%MODULES_DIR%%/rsyncd_mod.py
-files = /var/log/messages[.#]
+exec = $moduledir/rsyncd_mod.py
+files = $messages
+# Needs fixing and review for 1.1
enabled = no
-internal = yes
-outhtml = yes
priority = 7
[conf]
diff --git a/etc/modules.d/selinux.conf b/etc/modules.d/selinux.conf
new file mode 100644
index 0000000..212934b
--- /dev/null
+++ b/etc/modules.d/selinux.conf
@@ -0,0 +1,10 @@
+[module]
+desc = SELinux Report
+exec = $moduledir/selinux_mod.py
+files = $messages
+# Needs review and fixing for 1.1
+enabled = no
+priority = 5
+
+[conf]
+enable_selinux = 1
diff --git a/etc/modules.d/smart.conf b/etc/modules.d/smart.conf
new file mode 100644
index 0000000..e15983e
--- /dev/null
+++ b/etc/modules.d/smart.conf
@@ -0,0 +1,7 @@
+[module]
+desc = S.M.A.R.T.
+exec = $moduledir/smart_mod.py
+files = $messages
+# Needs review and fixing for 1.1
+enabled = no
+priority = 7
diff --git a/etc/modules.d/spamd.conf.in b/etc/modules.d/spamd.conf
similarity index 87%
rename from etc/modules.d/spamd.conf.in
rename to etc/modules.d/spamd.conf
index 107d87a..c72ae91 100644
--- a/etc/modules.d/spamd.conf.in
+++ b/etc/modules.d/spamd.conf
@@ -1,10 +1,9 @@
[module]
desc = Spamassassin
-exec = %%MODULES_DIR%%/spamd_mod.py
-files = /var/log/maillog[.#]
+exec = $moduledir/spamd_mod.py
+files = $maillog
+# Needs review and fixing for 1.1
enabled = no
-internal = yes
-outhtml = yes
priority = 4
[conf]
diff --git a/etc/modules.d/sudo.conf b/etc/modules.d/sudo.conf
new file mode 100644
index 0000000..e1c80f0
--- /dev/null
+++ b/etc/modules.d/sudo.conf
@@ -0,0 +1,9 @@
+[module]
+desc = Sudo Report
+exec = $moduledir/sudo_mod.py
+files = $secure
+enabled = yes
+priority = 5
+
+[conf]
+enable_sudo = 1
diff --git a/etc/modules.d/weeder.conf b/etc/modules.d/weeder.conf
new file mode 100644
index 0000000..8f8e134
--- /dev/null
+++ b/etc/modules.d/weeder.conf
@@ -0,0 +1,10 @@
+[module]
+desc = Weedeater
+exec = $moduledir/weeder_mod.py
+files = $ALL
+priority = 9
+enabled = yes
+
+[conf]
+weed_dist = $cfgdir/weed_dist.cf
+weed_local = $cfgdir/weed_local.cf
diff --git a/etc/modules.d/yum.conf b/etc/modules.d/yum.conf
new file mode 100644
index 0000000..f4c59cf
--- /dev/null
+++ b/etc/modules.d/yum.conf
@@ -0,0 +1,9 @@
+[module]
+desc = Yum Report
+exec = $moduledir/yum_mod.py
+files = $messages
+enabled = yes
+priority = 5
+
+[conf]
+enable_yum = 1
diff --git a/etc/notice_dist.yaml b/etc/notice_dist.yaml
new file mode 100644
index 0000000..e346a5e
--- /dev/null
+++ b/etc/notice_dist.yaml
@@ -0,0 +1,85 @@
+# CAUTION:
+# It is not advised to edit this file! You may miss any future
+# revisions made to it. Instead, create/edit notice_local.yaml and
+# add your rules to it following the same yaml format as presented in
+# this file.
+
+Gconfd locking errors:
+ tags: 'gconfd'
+ strings:
+ - 'Failed to get lock'
+ - 'Failed to create'
+ - 'Error releasing lockfile'
+ - 'Could not lock temporary file'
+ - 'another process has the lock'
+
+SFTP Activity:
+ tags: 'sshd'
+ strings:
+ - 'subsystem request for sftp'
+
+Misc floppy errors:
+ regexes:
+ - 'floppy0:|\(floppy\)'
+
+YPServ denied:
+ tags: 'ypserv'
+ regexes:
+ - 'refused\sconnect\sfrom\s(\S+):\d+\sto\sprocedure\s(\S+)'
+ report: 'ypserv: \1 denied for \2'
+
+Linux reboot:
+ critical: true
+ tags: 'kernel'
+ regexes:
+ - 'Linux\sversion\s(\S*)'
+ report: 'Rebooted with Linux kernel \1'
+
+SSH Scan:
+ tags: 'sshd'
+ regexes:
+ - 'Did not receive identification string from (\S*)'
+ report: 'SSH scan from \1'
+
+Dirty CDROM mount:
+ strings:
+ - 'VFS: busy inodes on changed media'
+
+Misc CDROM errors:
+ tags: 'kernel'
+ strings:
+ - 'cdrom: This disc doesn'
+ - 'Make sure there is a disc in the drive.'
+
+Dirty media mounts:
+ strings:
+ - 'attempt to access beyond end of device'
+ - 'kernel: bread in fat_access failed'
+ regexes:
+ - 'rw=\d+, want=\d+, limit=\d+'
+ - 'Directory sread .* failed'
+
+NFS Timeouts:
+ critical: true
+ tags: 'nfs'
+ regexes:
+ - 'server (\S+) not responding'
+ - 'server (\S+) OK'
+ report: 'NFS timeouts to server \1'
+
+insmod errors:
+ tags: 'insmod'
+ strings:
+ - 'Hint: insmod errors'
+
+Cron runs:
+ regexes:
+ - 'CROND\[\d+\]: \((\S+)\) CMD \(([^\)]+)\)'
+ - 'crond\[\d+\]: \((\S+)\) CMD \(([^\)]+)\)'
+ report: 'Crond: \1 => \2'
+
+Promiscuous mode:
+ tags: 'kernel'
+ regexes:
+ - 'device (\S+) entered promiscuous mode'
+ report: 'device \1 entered promiscuous mode'
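Each notice rule above pairs one or more regexes with an optional `report` template that uses `\1`-style group references. A sketch of how such a rule could be applied to a log line — the rule mirrors the "Linux reboot" entry, but the matching code itself is an assumption, not epylog's internals:

```python
import re

# One notice rule: a regex with a capture group, plus a report
# template that references that group.
rule_re = re.compile(r'Linux\sversion\s(\S*)')
report_tpl = r'Rebooted with Linux kernel \1'

line = 'kernel: Linux version 3.2.0-rc1 (gcc 4.6.2)'
mo = rule_re.search(line)
if mo:
    # Match.expand() substitutes \1-style group references.
    print(mo.expand(report_tpl))
```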
diff --git a/etc/notice_local.yaml b/etc/notice_local.yaml
new file mode 100644
index 0000000..4802cd6
--- /dev/null
+++ b/etc/notice_local.yaml
@@ -0,0 +1,18 @@
+#This is where you should put your own notice rules. The format is
+#simple:
+#
+#Description:
+# critical: true/false
+# tags: 'syslog tag of the string ("sshd" for "sshd[55512]:")'
+# strings:
+# - 'simple substring matches'
+# - 'one or many, depending on the need'
+# regexes:
+# - '.* regex (\S+) matches (\d+)'
+# - 'if more than one, (\S+) must have the same (\d+) number of groups'
+# - 'in the same (\S+) order (\d+)'
+# report:
+# - 'Include \1 group references \2.'
+# - 'If omitted, Description will be used'
+#
+#See notice_dist.yaml for examples.
diff --git a/etc/report_template.html b/etc/report_template.html
index 4d60b38..243cd55 100644
--- a/etc/report_template.html
+++ b/etc/report_template.html
@@ -1,22 +1,22 @@
<html>
<head>
- <title>@@TITLE@@</title>
+ <title>$title</title>
<style type="text/css">
h1 {color: gray; border-bottom: 3px double silver}
h2,h3 {color: gray; border-bottom: 1px solid silver}
</style>
</head>
<body>
- <h1>@@HOSTNAME@@</h1>
- <p>First event: <strong>@@STARTTIME@@</strong><br />
- Last event: <strong>@@ENDTIME@@</strong></p>
+ <h1>$hostname</h1>
+ <p>First event: <strong>$starttime</strong><br />
+ Last event: <strong>$endtime</strong></p>
<hr />
- @@MODULE_REPORTS@@
+ $module_reports
<hr />
<h2>Unparsed Strings:</h2>
- @@UNPARSED_STRINGS@@
+ $unparsed_strings
<hr />
<p align="right">Brought to you by
- <a href="http://linux.duke.edu/projects/epylog/">@@VERSION@@</a></p>
+ <a href="http://fedorahosted.org/epylog/">$version</a></p>
</body>
</html>
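The template's `$title`/`$hostname` placeholders follow Python's `string.Template` syntax; treating them that way here is an assumption about how epylog fills the report in:

```python
from string import Template

# $name placeholders filled in the string.Template style the
# report template's syntax suggests.
tpl = Template('<title>$title</title>\n<h1>$hostname</h1>')
html = tpl.safe_substitute(title='logserv system events', hostname='logserv')
print(html)
```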
diff --git a/modules/sudo_mod.py b/modules/sudo_mod.py
index d411b9f..d7c9776 100644
--- a/modules/sudo_mod.py
+++ b/modules/sudo_mod.py
@@ -8,23 +8,26 @@ Jeremy Kindy (kindyjd at wfu.edu), Wake Forest University
import sys
import re
-##
+import logging
+
+logger = logging.getLogger('epylog')
+
# This is for testing purposes, so you can invoke this from the
# modules directory. See also the testing notes at the end of the
# file.
-#
-sys.path.insert(0, '../py/')
+if __name__ == '__main__':
+ sys.path.insert(0, '../')
+
from epylog import Result, InternalModule
class sudo_mod(InternalModule):
- def __init__(self, opts, logger):
+ def __init__(self, opts):
InternalModule.__init__(self)
- self.logger = logger
- self.logger.put(2, 'initializing sudo')
+ logger.debug('initializing sudo')
rc = re.compile
- self.ignore = 0
- self.open = 1
+ self.ignore = 0
+ self.open = 1
self.not_allowed = 2
sudo_map = {
@@ -62,38 +65,37 @@ class sudo_mod(InternalModule):
#
def sudo(self, linemap):
action = self.open
- self.logger.put(2, 'sudo invoked')
sys, msg, mult = self.get_smm(linemap)
- self.logger.put(3, 'test sudo %d' % mult)
user = self._get_sudo_user(msg)
- self.logger.put(3, 'sudo user: %s' % user)
+ logger.debug('sudo user: {}'.format(user))
+
asuser = self._get_sudo_as_user(msg)
- self.logger.put(3, 'sudo asuser: %s' % asuser)
+ logger.debug('sudo asuser: {}'.format(asuser))
+
command_name = self._get_sudo_command_name(msg)
- self.logger.put(3, 'sudo command: %s' % command_name)
+ logger.debug('sudo command: {}'.format(command_name))
restuple = self._mk_restuple(sys, action, user, asuser, command_name, None)
- self.logger.put(2, 'sudo finished')
return {restuple: mult}
def sudo_na(self, linemap):
action = self.not_allowed
- self.logger.put(2, 'sudo_na invoked')
sys, msg, mult = self.get_smm(linemap)
- self.logger.put(3, 'test sudo %d' % mult)
user = self._get_sudo_user(msg)
- self.logger.put(3, 'sudo user: %s' % user)
+ logger.debug('sudo user: {}'.format(user))
+
asuser = self._get_sudo_as_user(msg)
- self.logger.put(3, 'sudo asuser: %s' % asuser)
+ logger.debug('sudo asuser: {}'.format(asuser))
+
command_name = self._get_sudo_command_name(msg)
- self.logger.put(3, 'sudo command: %s' % command_name)
+ logger.debug('sudo command: {}'.format(command_name))
+
error_message = self._get_sudo_error_message(msg)
- self.logger.put(3, 'sudo error_message: %s' % error_message)
+ logger.debug('sudo error_message: {}'.format(error_message))
restuple = self._mk_restuple(sys, action, user, asuser, command_name, error_message)
- self.logger.put(2, 'sudo finished')
return {restuple: mult}
def sudo_ignore(self, linemap):
@@ -135,7 +137,6 @@ class sudo_mod(InternalModule):
####
# Finalize the report
def finalize(self, rs):
- logger = self.logger
##
# Prepare report
#
@@ -147,7 +148,6 @@ class sudo_mod(InternalModule):
rep[action] = ''
flipper = ''
for user in rs.get_distinct((action,)):
- #logger.put(2, 'sudo user: %s' % user)
if flipper: flipper = ''
else: flipper = self.flip
service_rep = []
@@ -156,16 +156,13 @@ class sudo_mod(InternalModule):
for asuser in rs.get_distinct((action, user, command_name)):
for error_message in rs.get_distinct((action, user, command_name, asuser)):
mymap = rs.get_submap((action, user, command_name, asuser, error_message))
- #logger.put(2, 'sudo command_name: %s' % command_name)
key2s = []
for key2 in mymap.keys():
hostname = key2[0]
key2s.append('%s(%d)' % (hostname, mymap[key2]))
hostnames = ', '.join(key2s)
- #logger.put(2, 'sudo hostnames: %s' % hostnames)
service_rep.append([command_name, hostnames, asuser, error_message])
for svcrep in service_rep:
- #logger.put(2, 'sudo svcrep: %s' % svcrep)
if blank: user = ' '
else: blank = 1
if (action == self.open):
@@ -176,11 +173,9 @@ class sudo_mod(InternalModule):
if rep[self.open]:
report += self.subreport_wrap % (self.sudo_open_title, rep[self.open])
- logger.put(2, 'sudo report: self.open added')
if rep[self.not_allowed]:
report += self.subreport_na_wrap % (self.sudo_not_allowed_title, rep[self.not_allowed])
- logger.put(2, 'sudo report: self.not_allowed added')
report = self.report_wrap % report
return report
diff --git a/modules/weeder_mod.py b/modules/weeder_mod.py
index 3461c37..1ed73b4 100644
--- a/modules/weeder_mod.py
+++ b/modules/weeder_mod.py
@@ -3,7 +3,8 @@
Description will eventually go here.
"""
##
-# Copyright (C) 2003 by Duke University
+# Copyright (C) 2003-2005 by Duke University
+# Copyright (C) 2005-2012 by Konstantin Ryabitsev and contributors
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
@@ -20,10 +21,7 @@ Description will eventually go here.
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.
#
-# $Id$
-#
-# @Author Konstantin Ryabitsev <icon@linux.duke.edu>
-# @version $Date$
+# @Author Konstantin Ryabitsev <icon@mricon.com>
#
import sys
@@ -34,33 +32,43 @@ import re
# modules directory. See also the testing notes at the end of the
# file.
#
-sys.path.insert(0, '../py/')
+if __name__ == '__main__':
+ sys.path.insert(0, '../')
+
from epylog import InternalModule
+import logging
+
+logger = logging.getLogger('epylog')
+
class weeder_mod(InternalModule):
- def __init__(self, opts, logger):
+ def __init__(self, opts):
InternalModule.__init__(self)
- self.logger = logger
rc = re.compile
- weed_dist = opts.get('weed_dist', '/etc/epylog/weed_dist.cf')
- weed_local = opts.get('weed_local', '/etc/epylog/weed.local.cf')
+ weed_dist = opts.get('weed_dist', '/etc/epylog/weed_dist.cf')
+ weed_local = opts.get('weed_local', '/etc/epylog/weed_local.cf')
- weed_dist = weed_dist.strip()
+ weed_dist = weed_dist.strip()
weed_local = weed_local.strip()
- logger.put(5, 'weed_dist=%s' % weed_dist)
- logger.put(5, 'weed_local=%s' % weed_local)
+ logger.debug('weed_dist={}'.format(weed_dist))
+ logger.debug('weed_local={}'.format(weed_local))
+
weed = {}
- self.regex_map = {}
+
+ self.regex_map = {}
self.section_re = rc('^\s*\[(.*)\]\s*$')
self.comment_re = rc('^\s*#')
- self.empty_re = rc('^\s*$')
+ self.empty_re = rc('^\s*$')
for weedfile in [weed_dist, weed_local]:
- try: weed = self._read_weed(open(weedfile), weed)
- except: logger.put(5, 'Error reading %s' % weedfile)
- if not weed: return
+ try:
+ weed = self._read_weed(open(weedfile), weed)
+ except:
+ logger.debug('Error reading {}'.format(weedfile))
+ if not weed:
+ return
if 'REMOVE' in weed:
removes = weed['REMOVE']
@@ -71,34 +79,51 @@ class weeder_mod(InternalModule):
regexes = weed[key]
weed[key] = []
for regex in regexes:
- if regex != remove: weed[key].append(regex)
+ if regex != remove:
+ weed[key].append(regex)
enable = opts.get('enable', 'ALL').split(',')
- if 'ADD' in weed: enable.append('ADD')
- if enable[0] == 'ALL': enable = weed.keys()
+ if 'ADD' in weed:
+ enable.append('ADD')
+
+ if enable[0] == 'ALL':
+ enable = weed.keys()
+
for key in enable:
key = key.strip()
regexes = weed.get(key, [])
for regex in regexes:
- try: regex_re = rc(regex)
+ try:
+ regex_re = rc(regex)
except:
- logger.put(5, 'Error compiling regex "%s"' % regex)
+ logger.debug('Error compiling regex "{}"'.format(regex))
continue
+
self.regex_map[regex_re] = self.do_weed
def _read_weed(self, fh, weed):
section = 'default'
- while 1:
+
+ while True:
line = fh.readline()
- if not line: break
- if self.comment_re.search(line): continue
- if self.empty_re.search(line): continue
+ if not line:
+ break
+
+ if self.comment_re.search(line):
+ continue
+ if self.empty_re.search(line):
+ continue
+
mo = self.section_re.search(line)
- if mo: section = mo.group(1)
+ if mo:
+ section = mo.group(1)
else:
- try: weed[section].append(line.strip())
- except KeyError: weed[section] = [line.strip()]
+ try:
+ weed[section].append(line.strip())
+ except KeyError:
+ weed[section] = [line.strip()]
+
return weed
##
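The reworked `_read_weed()` above collects an INI-like weed file — `[section]` headers, `#` comments, one regex per line — into a dict of lists. A standalone restatement of that parsing logic, so the expected file shape is visible (condensed for illustration, not a drop-in replacement):

```python
import io
import re

# Same structure as weeder_mod's _read_weed(): section headers start
# new keys, comments and blank lines are skipped, everything else is
# appended to the current section's list.
section_re = re.compile(r'^\s*\[(.*)\]\s*$')
comment_re = re.compile(r'^\s*#')
empty_re = re.compile(r'^\s*$')

def read_weed(fh, weed):
    section = 'default'
    for line in fh:
        if comment_re.search(line) or empty_re.search(line):
            continue
        mo = section_re.search(line)
        if mo:
            section = mo.group(1)
        else:
            weed.setdefault(section, []).append(line.strip())
    return weed

sample = io.StringIO('# noise we ignore\n[sshd]\nConnection closed by \\S+\n')
print(read_weed(sample, {}))
```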
diff --git a/modules/yum_mod.py b/modules/yum_mod.py
index c6409fc..ca9ac20 100644
--- a/modules/yum_mod.py
+++ b/modules/yum_mod.py
@@ -11,40 +11,44 @@ Jeremy Kindy (kindyjd at wfu.edu), Wake Forest University
import sys
import re
-##
+import logging
+
+logger = logging.getLogger('epylog')
+
# This is for testing purposes, so you can invoke this from the
# modules directory. See also the testing notes at the end of the
# file.
-#
-sys.path.insert(0, '../py/')
+if __name__ == '__main__':
+ sys.path.insert(0, '../')
+
from epylog import Result, InternalModule
class yum_mod(InternalModule):
- def __init__(self, opts, logger):
+ def __init__(self, opts):
InternalModule.__init__(self)
- self.logger = logger
rc = re.compile
- self.ignore = 0
+ self.ignore = 0
self.installed = 1
- self.updated = 2
- self.erased = 3
+ self.updated = 2
+ self.erased = 3
yum_map = {
- rc('yum\: Installed\:'): self.pkg_installed,
- rc('yum\: Updated\:'): self.pkg_updated,
- rc('yum\: Erased\:'): self.pkg_erased
- }
+ rc('yum\S+: Installed:'): self.pkg_installed,
+ rc('yum\S+: Updated:'): self.pkg_updated,
+ rc('yum\S+: Erased:'): self.pkg_erased
+ }
do_yum = int(opts.get('enable_yum', '1'))
self.regex_map = {}
- if do_yum: self.regex_map.update(yum_map)
+ if do_yum:
+ self.regex_map.update(yum_map)
- self.name_re = rc('profile\: \[(.*)\]')
+ self.name_re = rc('profile\: \[(.*)\]')
self.installed_name_re = rc('Installed\: (.*)')
- self.updated_name_re = rc('Updated\: (.*)')
- self.erased_name_re = rc('Erased\: (.*)')
+ self.updated_name_re = rc('Updated\: (.*)')
+ self.erased_name_re = rc('Erased\: (.*)')
self.yum_title = '<font color="blue">Yum Changes Report</font>'
self.yum_installed_title = '<font color="blue">Packages Installed</font>'
@@ -63,37 +67,31 @@ class yum_mod(InternalModule):
# Line-matching routines
#
def pkg_updated(self, linemap):
- self.logger.put(3, 'entered pkg_updated...')
action = self.updated
sys, msg, mult = self.get_smm(linemap)
- self.logger.put(3, 'test yum %d' % mult)
name = self._get_updated_name(msg)
- self.logger.put(3, 'name: %s' % name)
+ logger.debug('name: {}'.format(name))
restuple = self._mk_restuple(action, sys, name)
return {restuple: mult}
def pkg_installed(self, linemap):
- self.logger.put(3, 'entered pkg_installed...')
action = self.installed
sys, msg, mult = self.get_smm(linemap)
- self.logger.put(3, 'test yum %d' % mult)
name = self._get_installed_name(msg)
- self.logger.put(3, 'name: %s' % name)
+ logger.debug('name: {}'.format(name))
restuple = self._mk_restuple(action, sys, name)
return {restuple: mult}
def pkg_erased(self, linemap):
- self.logger.put(3, 'entered pkg_erased...')
action = self.erased
sys, msg, mult = self.get_smm(linemap)
- self.logger.put(3, 'test yum %d' % mult)
name = self._get_erased_name(msg)
- self.logger.put(3, 'name: %s' % name)
+ logger.debug('name: {}'.format(name))
restuple = self._mk_restuple(action, sys, name)
return {restuple: mult}
@@ -130,7 +128,6 @@ class yum_mod(InternalModule):
return name
def finalize(self, rs):
- logger = self.logger
##
# Prepare report
#
@@ -141,7 +138,7 @@ class yum_mod(InternalModule):
rep[action] = ''
flipper = ''
for system in rs.get_distinct((action,)):
- self.logger.put(3, 'system: %s' % system)
+ logger.debug('system: {}'.format(system))
if flipper: flipper = ''
else: flipper = self.flip
service_rep = []
@@ -167,4 +164,4 @@ class yum_mod(InternalModule):
if __name__ == '__main__':
from epylog.helpers import ModuleTest
- ModuleTest(users_mod, sys.argv)
+ ModuleTest(yum_mod, sys.argv)
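The yum patterns above change from bare `yum\:` to `yum\S+:`, i.e. they now key on the full syslog tag (such as `yum[2412]:`) rather than a tag with no pid. A quick check against a representative line — the sample line is illustrative:

```python
import re

# The updated pattern requires something after "yum" before the colon,
# so a pid-bearing syslog tag matches while a bare "yum:" does not.
installed_re = re.compile(r'yum\S+: Installed:')
line = 'Feb 10 22:21:38 host yum[2412]: Installed: epylog-1.0.7-1.noarch'
print(installed_re.search(line) is not None)
```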