Fixing CSRF exploits in Infrastructure
a.badger at gmail.com
Wed Nov 26 17:47:06 UTC 2008
Till Maas wrote:
> On Wed November 26 2008, Toshio Kuratomi wrote:
>> Till Maas wrote:
>>> On Wed November 26 2008, Toshio Kuratomi wrote:
>>>> To be easy to code, require the token for every request of an
>>>> authenticated user.
>>> If I understand your proposal correctly, a user would need to log in again
>>> for every link he clicks from his bookmarks or from any mail he gets from a
>>> Fedora web application, e.g. the packagedb.
>> The con of this is that the consequences of not marking a method are a
>> security hole this way. One that we would have to audit the code to
>> discover. I'm not a big fan of this tradeoff but I'm open to comments
>> -- is this something that's so big a regression that we should bite the
>> bullet and do it? Can we come up with another method that protects the
>> users from a failure by the program's author to properly mark the methods?
> How big the regression is if users have to log in for every external link they
> click on depends on how often this happens. I believe that links to FAS are
> not exchanged very often, therefore it will not hurt very much. I guess there
> is also not so often a need to use FAS with tabs. But maybe there are people
> who have to use FAS more often. With Bodhi it is the opposite, because there
> it is normal to get mails with links if someone added a comment to a package
> or for testers to exchange links to Bodhi updates. Also links to Bodhi updates
> are used in Bugzilla comments. There it would have a much bigger impact on
> the efficiency of testing new package updates imho.
Pretty much agreed on this analysis. My one note is that in my usage,
at least, I already have to log in most of the time when clicking on a
link in Bugzilla or email, due to my session having already expired.
> Regarding the time needed for auditing applications: There may still be a lot
> of other vulnerabilities in these applications which cannot be fixed
> automatically. Therefore they still need to be written carefully. But maybe a
> compromise would be to require the token for all requests by default and then
> whitelist the ones that are not meant to change state, e.g. requests like:
I thought of doing this but it still allows things to be insecure if
what a method does gets changed and the whitelisting isn't updated.
With my proposal, the identity and the token are tied together. So you
can't have an identity unless you A) have a username and password or B)
have both a token and some form of alternate id (SSL Cert or cookie).
Since it's natural to protect a resource by looking for a specific
group, username, or just not being anonymous, you are protected from
CSRF at the same time.
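The tie between identity and token can be sketched roughly like this. The sha1-of-session-cookie derivation and all function names here are illustrative assumptions, not the actual FAS/TurboGears code:

```python
import hashlib


def csrf_token(visit_key: str) -> str:
    """Derive the CSRF token from the tg-visit session cookie
    (hypothetical scheme: sha1 of the visit key)."""
    return hashlib.sha1(visit_key.encode("utf-8")).hexdigest()


def is_authenticated(visit_key: str, token: str) -> bool:
    """Only grant an identity when both the session cookie and a
    matching token are present; a forged cross-site request carries
    the cookie but cannot know the token."""
    return token == csrf_token(visit_key)
```

Since every identity check also verifies the token, protecting a resource by group or username automatically protects it from CSRF as well.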
OTOH, if a whitelist of methods isn't updated when a form goes from
merely showing information to changing data, you lose the CSRF protection.
>>> And with every login a previous session is
>>> invalidated, which also includes links in another open browser tab, where
>>> the user logged in after he clicked the previous link.
>> So this is interesting. This is true since we wanted the token to be
>> derived from the current session. We could work around this in several ways:
>> 1) In addition to checking the hash against the tg-visit, check against
>> any non-expired session. If we did this check in the FAS server (all
>> authentication goes back to the FAS server already) then I think that
>> would work. So we'd send the FAS server the tg-visit cookie and the
>> token. Then the FAS server would do the comparison of the token to the
>> visit cookie; the comparison of the visit to the current active sessions;
>> and finally, if the token did not match the cookie, the comparison
>> between the token and other non-expired sessions owned by the user.
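That fallback check could look roughly like the following; the token derivation and function names are assumptions for illustration, not FAS internals:

```python
import hashlib


def token_for(visit_key: str) -> str:
    """Hypothetical token derivation: sha1 of the visit key."""
    return hashlib.sha1(visit_key.encode("utf-8")).hexdigest()


def verify_token(tg_visit, token, other_sessions):
    """Return the session the token belongs to, or None.

    other_sessions: the user's other non-expired visit keys,
    checked only when the token doesn't match the cookie itself.
    """
    # Common case: token matches the cookie's own session -- no
    # extra lookups needed.
    if token == token_for(tg_visit):
        return tg_visit
    # Fallback: token came from another still-valid session of the
    # same user (e.g. an older browser tab).
    for visit in other_sessions:
        if token == token_for(visit):
            return visit
    return None
```

This keeps a link copied from an older tab working as long as that tab's session has not expired.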
> Another way would be to not change the session id if a user needs to supply
> the username/password again only because the token was missing. It would
> probably be enough to ask the user only to click a link that contains the
> matching token in case the token is missing.
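The "click a link containing the matching token" recovery could be built as below; the `_csrf_token` parameter name is an assumption for illustration:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit


def add_token(url: str, token: str) -> str:
    """Append the CSRF token to a URL so the user can confirm the
    request with one click instead of re-entering a password."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = parse_qsl(query)
    params.append(("_csrf_token", token))
    return urlunsplit((scheme, netloc, path, urlencode(params), fragment))
```

The server would then compare the token in the confirmed link against the existing session rather than issuing a new session id.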
Not changing the session id can have security ramifications, as it allows
the browser to specify what the visit key will be once the user is
authenticated (session fixation). I don't know of a concrete attack here
ATM, but there could be something that I'm not thinking of, or browser
security holes could introduce something in this area.
> Nevertheless it seems to me that securing all requests against CSRF
> automatically makes it a little easier to write an application, because the
> author does not need to care whether a request changes state or not.
It not only makes it easier on the application author, it stays secure
even if the application author does something wrong. This is the main
advantage of this proposal.
> On the
> downside it has a high impact on usability
This is the one I don't know about. It will change my usage a bit since
I'll need to start clicking on links to the other apps within the present
app's pages to open new tabs. But when I open links from external
sources I'm already used to having to re-login due to the session
expiring. So we need feedback here: do you often click on multiple
things in close proximity, and would you be unable to switch to clicking
within the app?
> or makes the automatic CSRF
> protection a lot more complicated.
This one's untrue as seen in the revised proposal.
> Also securing all requests may cost a lot
> of performance, because more requests need to be made.
This one is probably not correct either. If the token and the tg-visit
match, there's no extra request. If the token and tg-visit do not
match, there's one extra request from FAS to the database. There are no
extra JSON calls.
> Last but not least is always more time spent on using an application than on
> writing it, therefore if the usability of an application is only enhanced a
> little, because of the many times it is used, there will be more manpower
> saved than is used to enhance the application.
This is true. But weighed against it is that usability takes a backseat
to security. So is the usability loss high enough that we just have to
be more vigilant in making sure that our list of secure/non-secure
methods is kept up to date, or is the security risk of people trying to
hack us low enough that we can deal with the occasional bug that opens
up a security vulnerability (which we won't find without auditing the code)?
Or, best of all, can we add some other check that allows us to preserve
the present usability and still refuse state-changing events if they
haven't been marked as such?