gmaxwell at gmail.com
Sun Mar 20 23:47:41 UTC 2005
On Sun, 20 Mar 2005 23:29:12 +0000, Mike Hearn <mike at navi.cx> wrote:
> Right. Actually I have a prototype SELinux "quarantine zone" policy file
> open in emacs right now. I've been writing a packaging/installer system
> for a while and the spyware question is common enough to be in the FAQ:
What would be neat is for someone to make a version of GLIBC that can
live inside a seccomp jail, a little loader that can prelink an
executable with that glibc and put it in the jail, and an interface
that lets you "yes / no" syscalls. :)
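For what it's worth, the "jail" half of that already exists in the kernel as strict-mode seccomp. An illustrative sketch (Linux-only; reaches prctl() through ctypes, with the constant values taken from <linux/prctl.h> and <linux/seccomp.h>):

```python
import ctypes, os, signal

PR_SET_SECCOMP = 22       # from <linux/prctl.h>
SECCOMP_MODE_STRICT = 1   # from <linux/seccomp.h>

libc = ctypes.CDLL(None, use_errno=True)

pid = os.fork()
if pid == 0:
    # Child: enter the jail.  From here on only read(), write(),
    # _exit() and sigreturn() are permitted; any other syscall draws
    # an immediate SIGKILL from the kernel.
    if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
        os._exit(1)                        # kernel without seccomp
    os.write(1, b"writing still works inside the jail\n")
    os.open("/etc/hostname", os.O_RDONLY)  # forbidden -> SIGKILL
    os._exit(0)                            # never reached

_, status = os.waitpid(pid, 0)
killed = os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGKILL
print("child killed for straying outside the jail:", killed)
```

The "yes / no" interface from the idea above doesn't exist in strict mode, which is exactly why you'd need the custom glibc and loader.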
> Not saying it's the right solution, but it's something I (we) have been
> thinking about a fair bit.
The commentary there on the lack of signing is a little weird... If
I set up a mirror repo, people need to trust *me* to use it unless the
packages are signed. Even if they are only signed by some package
site, it's far less likely that the files will be compromised there
than on my mirror: the site won't intentionally compromise them
because it has a reputation to uphold, and lots of people use its
output, so bad things are more likely to get noticed than in some
packages off my system.
It's not perfect security, but it is a big additional hurdle.
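A tiny sketch of why signing moves the trust off the mirror: the digest below stands in for one published in a manifest signed by the packaging site, so a mirror operator can't substitute files undetected. (Invented helper name; real package tools wrap this in their own formats.)

```python
import hashlib

def mirror_file_ok(file_bytes: bytes, trusted_digest_hex: str) -> bool:
    """Check a file fetched from an untrusted mirror against a digest
    taken from a manifest signed by the package site.  If the mirror
    tampered with the file, the digest won't match, no matter how
    little you trust the mirror operator."""
    return hashlib.sha256(file_bytes).hexdigest() == trusted_digest_hex

# The signed manifest says the package hashes to this:
good = hashlib.sha256(b"pristine package").hexdigest()
print(mirror_file_ok(b"pristine package", good))  # untampered copy
print(mirror_file_ok(b"trojaned package", good))  # tampered copy
```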
> > It's not even an arms race.. Once someone has gotten root priv code to
> > run on your system it's terribly difficult to remove it. There are
> > quite a few linux rootkits today that are harder than a reinstall to
> > remove, and even once you've done that you fundamentally can't be sure
> > that the system is secure.
> There are rootkits that can't be removed by a format/reinstall? How does
> that work?
Hah. No, I meant that you might as well reformat/reinstall because the
rootkit is so darn hard to remove, pointing to the futility of
antivirus type techniques (esp generic ones).
Of course, irremovable security breaches *are* possible... Flash the
BIOS to patch CPU microcode... It's not that unrealistic: you make
some FP-exception change pump the system into ring zero, then later
you attack again, make some jailed code trigger the exception and
start overwriting kernel memory. :) But that wasn't what I was
talking about.
> > ClamAV is a cross platform antivirus package that supports both server
> > scanning techniques (such as operating as a milter) and desktop style
> > virus scanner support (intercepting file IO). It has definitions for
> > the existing linux viruses and worms, in addition to all the windows
> > cruft. As I said, it's a solved problem.
> Ah interesting, I eat my words then. I guess you are right, solved problem
> (though it'd have to be installed by default I guess, with some GUI?)
Since it's already there, and it's clear that it will be much more
useful prior to the arms race (you argue that it will be somewhat
useful after the arms race; I argue that it will not be useful at
all... time will tell, I suppose), why don't we let those who need
the highest level of protection install it themselves now and get the
benefit of the full pre-arms-race gain, and only make it a default
once it's clear that it's needed... That maximizes whatever gain it
has, and gives more time for better security measures to be created.
> > Write software code that tracks changes to packages and detects changes
> > that might introduce security weaknesses. It's also a difficult
> The new GCC mudflap system might help here. I don't know how badly it hits
> performance but I seem to recall reading it was meant to be used during
> development only, so I guess a fair bit ...
It's not going to find very much, really, unless perhaps it's coupled
with a pretty good regression suite with random test cases. :) Source
code analysis has a lot more potential, especially if you're working
with the assumption that the original code is secure and you're just
looking for patches that break it.
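As a toy illustration of that patch-focused flavor of analysis (invented helper; only a pattern grep over added lines, nothing like a real analyzer):

```python
import re

# Classically unsafe C calls worth a closer look when they appear
# in newly *added* lines of a patch.
RISKY = re.compile(r"\b(strcpy|strcat|sprintf|gets|system|popen)\s*\(")

def flag_risky_additions(unified_diff: str):
    """Return (line_number, line) pairs for added lines in a unified
    diff that call one of the risky functions.  Assumes the original
    tree is already audited, so only the additions need a look."""
    hits = []
    for n, line in enumerate(unified_diff.splitlines(), 1):
        if line.startswith("+") and not line.startswith("+++"):
            if RISKY.search(line):
                hits.append((n, line))
    return hits

patch = """\
--- a/util.c
+++ b/util.c
@@ -10,2 +10,3 @@
 int n = len(src);
+strcpy(dst, src);
 return n;
"""
for n, line in flag_risky_additions(patch):
    print(n, line)
```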
> I think it'd be more interesting to try developing some kind of
> whitelist/trust system to counter spyware/malware. Still it's a good idea.
The problem is that you'd only want to whitelist audited code.. And
with the pace of development it's not clear that it's reasonable to
keep enough code well enough audited to warrant such a system....
Another approach would be to make strong jails the norm, greatly
reducing the amount of code that must be audited to produce a trusted
system.
Refactoring apps, APIs, and user expectations to accept running most
applications in a highly jailed environment... plus improving the
performance and scalability of our jails would be a good step in this
direction. E.g., why does gaim need to access any of the file system
outside of ~/.gaim? The only obvious exception is file transfers, and
the file transfer system could work by having another little program
that popped up, asked which file, and communicated over a socket back
to the more heavily jailed gaim... The file program could be simple
enough to be proven secure (if not for the fact that it must call
GTK).
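That split can be sketched with descriptor passing over a Unix socket (Python 3.9+ for socket.send_fds/recv_fds; the "broker" plays the little file-picker program, the other side plays the jailed gaim, and all names are invented):

```python
import os, socket, tempfile

# Stand-in for the file the user picked in the broker's dialog.
tmp = tempfile.NamedTemporaryFile(mode="w", delete=False)
tmp.write("hello from outside the jail\n")
tmp.close()

app_sock, broker_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

pid = os.fork()
if pid == 0:
    # Broker (trusted, unjailed): open the chosen file and pass the
    # *open descriptor* over the socket.  The jailed app then reads
    # through it without any filesystem access of its own.
    app_sock.close()
    fd = os.open(tmp.name, os.O_RDONLY)
    socket.send_fds(broker_sock, [b"ok"], [fd])
    os.close(fd)
    broker_sock.close()
    os._exit(0)

# "Jailed" side: receive the descriptor and read through it directly.
broker_sock.close()
msg, fds, _flags, _addr = socket.recv_fds(app_sock, 16, 1)
data = os.read(fds[0], 1024)
os.close(fds[0])
app_sock.close()
os.waitpid(pid, 0)
os.unlink(tmp.name)
print(data.decode(), end="")
```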
But getting developers and users to understand that they need to build
their apps this way when they can is a huge challenge... Getting
SELinux to scale to supporting several different fine-grained security
policies for every application on the system, much less getting the
SELinux-aware package maintainers to scale, will be a challenge as
well.
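To make the scaling concern concrete, here's roughly what one per-application policy looks like in SELinux type-enforcement terms (type names invented, not a loadable module; now multiply this by every app on the system):

```
# Confine gaim to its own dot-directory (hypothetical types).
type gaim_t;          # domain the gaim process runs in
type gaim_home_t;     # label applied to ~/.gaim
allow gaim_t gaim_home_t:dir  { search read write add_name remove_name };
allow gaim_t gaim_home_t:file { create read write append unlink };
# No rule mentions the rest of the user's home directory,
# so access to it is denied by default.
```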
> Thanks for correcting some of my misconceptions!