kparal at redhat.com
Tue Aug 16 12:11:33 UTC 2011
> > I don't know how Steve's tests behave, but:
> > 1. We can't run destructive tests (uninstalling packages, deleting
> > system files, stopping services).
> All are read only. I have several more that do stuff like start each
> service and scan the audit logs for AVCs, then stop all services. But
> this is disruptive and requires running apps. So, I have saved some of
> those for a later release.
> > 2. We can help Steve create the AutoQA wrappers
> > for those tests, but we can't maintain the very tests themselves,
> > obviously. He has to do that.
> If you look at the test source, you can pull them in and maintain them
> yourself. Most
> are very simple ideas.
No, we can't. We already maintain some tests we don't fully understand, and I regret it all the time. At least, that's my opinion; I'll discuss it with the team members to find a common stance.
We want to provide an architecture that other people (like you) can use to execute useful tests. Maintaining third-party tests is very time-consuming, especially when users (i.e. the recipients of test output) start to complain about something that's not our code, and it diverts us from our goals. You have to be the one to maintain them.
> > 3. Unfortunately we don't have an infrastructure for third-party
> > test maintainers. Currently the tests have to be in our git, which
> > means he has to send any changes as patches. We deploy a new version
> > only once every several weeks at best.
> > 4. I suppose these tests would run after each koji build. The only
> > way of reporting results right now is to send emails to those
> > maintainers who opted in, nothing else.
> > That said, we would love to execute more tests for Fedora. But until
> > the proper support is ready, it takes quite some effort. The first
> > approach is to go through the tests, select some appropriate ones
> > and do that now. The second approach is to wait until we are ready,
> > and then Steve can maintain these tests independently and we just
> > execute them. We will of course create a ticket about that and
> > follow up on it when the time comes.
> How do we pull some of these in?
I'll start the discussion in our team. I'll also create a ticket and CC you.
> I am interested in making sure we do not regress on the executable
> stack portion at a minimum. Between NX, FORTIFY_SOURCE, and
> stack-protector, we have pretty much won the stack overflow/inject
> shell code battle (excluding ROP possibilities), and I would hate to
> see that make a comeback.
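One low-level way to watch for executable-stack regressions (a sketch, not Steve's actual test; the `gnu_stack_flags` helper name is made up here) is to read a binary's GNU_STACK program header with readelf: "RW" is what we want, while "RWE" means the program requests an executable stack.

```shell
# Hypothetical helper: print the GNU_STACK flags of an ELF binary.
# In readelf -lW output the flags are the 7th field of the
# GNU_STACK line: "RW" is fine, "RWE" marks an executable stack.
gnu_stack_flags() {
    readelf -lW "$1" 2>/dev/null | awk '/GNU_STACK/ { print $7 }'
}

gnu_stack_flags /bin/sh
```

A packaging check could simply flag any binary for which this prints "RWE".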
> The find-hidden-exec test is also something people should run, but
> maybe during testing of a release. It has found some interesting
> problems over the years, usually due to file creation with very bad
> permissions.
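The script itself isn't shown in this thread, but the core idea can be approximated in a few lines of GNU find (a rough sketch, not the real find-hidden-exec; the function name is invented):

```shell
# Rough approximation of the find-hidden-exec idea: list dot-files
# that carry any execute bit.  GNU find's -perm /111 matches if any
# of the owner/group/other execute bits is set.
find_hidden_exec() {
    find "${1:-/usr}" -name '.*' -type f -perm /111 2>/dev/null
}
```

The real test presumably does more filtering; this only shows the basic filesystem scan.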
> The find_chroot test could run on each build. It recently found a
> security problem in libcap. We really don't want apps that can escape
> their intended chroot. There is also a python equivalent, which finds
> lots of problems. But I think that part of the problem is that python
> devs think python takes care of things, or maybe they don't know how
> to move all the modules into the chroot, and changing into it makes
> the program crash. Either way, there are some problems in python that
> need addressing.
> The find-sh4errors script can run on each build. The idea is that
> bash has a mode, -n, in which it simply parses the script and does
> not execute it. This finds many problems, and we really don't want to
> ship scripts that bash won't parse.
This sounds like a good starter. At least I can imagine what it does.
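The loop below sketches that idea (a hypothetical `check_scripts` helper, not the actual find-sh4errors script): `bash -n` parses each script without running it and exits non-zero on a syntax error.

```shell
# Parse shell scripts without executing them and report the ones
# bash cannot parse.  bash -n only runs the parser, so this is safe
# on untrusted scripts.
check_scripts() {
    rc=0
    for f in "$@"; do
        if ! bash -n "$f" 2>/dev/null; then
            echo "syntax error: $f"
            rc=1
        fi
    done
    return $rc
}
```

Run over a package's installed scripts, anything printed is a script that bash refuses to parse and should not ship.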
> The other scripts are harder to justify automating and running for
> each build. But I guarantee that if you want to find bugs in the
> distribution, start trying some of these scripts. There are lots of
> problems - enough glory to go around for everyone. :)