OT - Trusted Boot project in F16

Alan Cox alan at lxorguk.ukuu.org.uk
Thu Jun 23 14:51:54 UTC 2011


> After a bit of looking (I was very concerned after reading the first
> post) it seems this feature is primarily focused on providing
> information to satisfy an external "trusted system y/n" type query by
> matching a known kernel image hash which is expected by the requestor.

Which is potentially a very dangerous thing, because all of a sudden you
create an environment which encourages third party tools to do this and
to claim DMCA violations against anyone who so much as peeks at what
their box is doing.

Note that the 'trusted' is not 'by the user'; in most of these cases it is
'by big media companies who go around removing Linux from their platform
and suing people who put it back'. Also it's not 'trusted' as in secure,
it's 'trusted' as in 'the same as before'. One of the problems of
deploying it to create a secure setup is that the two are not quite the
same thing, and it is very, very hard to manage the difference well.

Which isn't to say it doesn't have uses, but to be useful to an end user
you need a setup where the end user holds the signing keys, not Fedora or
big media. It also has some interesting licensing issues if you don't do
this - in particular, the GPL says

1.	 The "source code" for a work means the preferred form of the
work for making modifications to it.

	Which would appear to mean 'with the keys so I can re-sign it'

2.	  The "Corresponding Source" for a work in object code form
means all the source code needed to generate, install, and (for an
executable work) run the object code and to modify the work, including
scripts to control those activities.

	Which would appear to imply any keys needed

GPL 3 covers all sorts of other stuff, so any mix of TC and GPLv3 ends up
horribly messy, because the user must be able to modify the material and,
thankfully, there are anti-DMCA provisions.


3.	It's been stated by several kernel developers that they believe
and intend the GPLv2 to be read to include such keys. In fact some kernel
files contain the following note

 * For the avoidance of doubt the "preferred form" of this code is one which
 * is in an open non patent encumbered format. Where cryptographic key signing
 * forms part of the process of creating an executable the information
 * including keys needed to generate an equivalently functional executable
 * are deemed to be part of the source code.

Yes, we saw this coming years ago, along with some of the fascinating
papers by various vendors, academics and others on how to use crypto
between proprietary apps on an open source OS and the firmware below it to
do stuff outside of OS control.

> If this is the way things are, this feature can be implemented unnoticed
> by the user, hence the lack of major discussion on the user list. On the

Which is deeply unfortunate because it has good and bad stuff users ought
to know about.

> PS: I have questions about this feature, but at this point they are
> mostly on the level of "how is it guaranteed to be secure without a
> third-party check against the hardware". The answer must be pretty

A sequence of things measuring, signing and trusting each other, all in
terms of a set of master keys you don't control. Remember though that this
is actually all about "was the same as before" (aka "no unauthorized
changes"). If you think about it, you do the same with a web browser and
https://. There are root keys whose private key is utterly secure (except
when they get stolen) and whose public keys ship in the browser. From those
you can build a chain of signatures and sign things in order to
authenticate a connection to a fair level of certainty. These root keys
also mean that, done carefully, you can virtualise key sharing and do
migrations without compromising the system.
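
To make the "chain of measurements" idea concrete, here is a rough Python
sketch of the measure-then-extend step a TPM PCR performs. The file names
and the single software register are made up for illustration; on real
hardware the PCR lives in the chip and the extend is done by the firmware
and kernel as each stage boots:

import hashlib

def extend(pcr, measurement):
    # PCR-style extend: new value = SHA-256(old value || measurement)
    return hashlib.sha256(pcr + measurement).digest()

def measure(path):
    # hash a boot component before handing control to it
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

pcr = bytes(32)                          # PCRs start out all zeroes
for stage in ("/boot/bootloader.img",    # illustrative paths only
              "/boot/vmlinuz",
              "/boot/initramfs.img"):
    pcr = extend(pcr, measure(stage))

print(pcr.hex())    # changing any measured stage changes this final value

A verifier that already knows the expected hashes can recompute that chain
and compare, which is all "the same as before" really amounts to.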

Roughly speaking there are two elements that are "useful". One is that the
TPM provides a place to put secrets and do crypto. It's basically a crude
smartcard on the motherboard. So for example a corporate setup could put
VPN-related keys there, and anything changing the trusted build would not
be able to read them. Unfortunately of course that means upgrades, config
changes etc. either break the keys or can't be measured properly. You can
use the TPM without TC more generally as somewhere safer to dump password
protected keys, and it's a pity gnome-keyring doesn't seem to do this, at
least on my box. Maybe I need to install other bits to make it do so, but
it's not obvious.
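
As a toy illustration of why sealing keys to the trusted build cuts both
ways (continuing from the pcr value in the sketch above, and nothing like
the real TPM interface): the secret is tied to the measured state, so any
change to that state - malware or a perfectly legitimate kernel update -
stops it unsealing. A real TPM keeps the wrapping key inside the chip and
does the PCR comparison itself; this only shows the control flow:

import hmac

def seal(secret, pcr):
    # a real TPM encrypts the secret under a key that never leaves the
    # chip; here we only record which platform state it was sealed against
    return {"sealed_to": pcr, "blob": secret}

def unseal(sealed, current_pcr):
    if not hmac.compare_digest(sealed["sealed_to"], current_pcr):
        raise PermissionError("platform measurements changed; not unsealing")
    return sealed["blob"]

vpn_key = seal(b"corporate VPN key material", pcr)  # pcr from the sketch
# after any unmeasured change to the boot chain, unseal() refuses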

The second, more interesting one is to create virtual machines that are
secure from the rest of the OS, so that for example a proprietary media
player could run its own secure VM where Linux, the administrator and the
user cannot get at it, and it can access and record secrets and the like
which the machine owner has no control over. TC PR people like to use a
banking app as an example of why it is "good"; anti-TC people like to give
examples like the old Sony CD rootkit as why it is bad if the bad guys
start hiding stuff under the OS this way.

The TC setups are actually valuable to system owners in some cases,
particularly in things like call centres and hotel front desks where the
system owners want to crack down on their users doing stuff they
shouldn't. They can also be useful for other things. Consider for example
a trusted environment which boots and performs security checks/virus
scans on the rest of the system. If you, the end user, control that and
the keys, then it's actually quite handy, because a compromise can't
modify those measured paths without being caught when the tests run.
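
Here is a minimal sketch of that end-user-controlled case, where the
machine owner rather than a vendor keeps the list of known-good
measurements; the paths and the hash values are placeholders you would
fill in yourself:

import hashlib, sys

EXPECTED = {                               # maintained by the machine owner
    "/boot/vmlinuz":       "put-your-known-good-sha256-here",
    "/boot/initramfs.img": "put-your-known-good-sha256-here",
}

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

bad = [p for p, want in EXPECTED.items() if sha256_of(p) != want]
if bad:
    sys.exit("measurement mismatch: " + ", ".join(bad))
print("all measured components match the owner's known-good list")

The interesting property isn't the hashing itself, it's who decides what
goes in that list.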

TC is a bit like a rifle. There are lots of evil things you can do with
it, but there are some good ones too, and above all what matters is that
you are on the trigger (key-owning) end of it.

Alan

