On Thu, 2011-05-05 at 11:51 -0400, James Laska wrote:
On Thu, 2011-05-05 at 11:47 -0400, Vitezslav Humpa wrote:
> > > Can I just compress the existing logs that are over 100M for now?
> > > Any script suggestions for that?
> >
> > I put together a simple bash script to find and zip big files.
> >
> > Run it with no args for help; you can use it, e.g., like this:
> > $ sh storelarge.sh /path/to/log/dir/ /where/store/ 100000
> >
>
> Modified it to compress the files in place (removing the originals):
>
> $ sh compresslarge /path/to/log/dir/ 100000
>
> This will search for all files bigger than 100000 KB and bzip2 them.
Awesome, thank you Vita... I'll run this on production once I untangle
myself from another activity. I'll follow up with results afterwards
(ETA ~2 hours).

*Huge* thanks to smooge from the Fedora infrastructure team. He was
able to locate additional disk space on the virt host where our AutoQA
production instance resides. Of course, thanks to autoqa-devel@ for
helping brainstorm ways around this issue. We now have ~33G total
capacity (previous was 17G). I'm running a slightly modified version of
the script Vita supplied to compress remaining large log files.
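
For the archives, the script boils down to something like this (a rough
sketch; Vita's original and my tweaks may differ in the details):

#!/bin/sh
# compresslarge.sh -- bzip2 any file under DIR larger than SIZE_KB, in place.
# Usage: sh compresslarge.sh /path/to/log/dir/ 100000

if [ $# -ne 2 ]; then
    echo "usage: $0 <dir> <size-kb>" >&2
    exit 1
fi

DIR=$1
SIZE_KB=$2

# find's -size +Nk matches files strictly larger than N kibibytes; skip
# anything already compressed. bzip2 replaces each file with file.bz2
# and removes the original.
find "$DIR" -type f -size +"${SIZE_KB}"k ! -name '*.bz2' -exec bzip2 -v {} \;
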
I've just re-enabled the autotest scheduler.

Per Tim's suggestion, I'll continue to monitor available disk space
every morning and run the compression script by hand. Once I'm happy
with (and trust) the compression results, I'll go ahead and add this
to cron.
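
When it does go into cron, something like this /etc/cron.d entry should
cover it (paths below are placeholders, not the real production layout):

# /etc/cron.d/autoqa-compress-logs -- compress oversized logs daily at 04:00
0 4 * * * root /usr/local/bin/compresslarge.sh /var/lib/autotest/results 100000
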
Thanks,
James