fedora hosted, sharding and openid

Seth Vidal skvidal at fedoraproject.org
Wed Feb 13 06:52:15 UTC 2013


Today Patrick got the trac plugin for openid working pretty well with the 
new openid service. This effectively breaks our tight bind to the fas db 
(and mod_auth_pgsql) from the hosted boxes.

A while back we discussed the possibility of scaling the hosted service 
out horizontally by breaking the projects up into chunks of data.

We said we'd need folks to refer to their sites with something like:

projectname.fedorahosted.org

so we could direct them to the right machine via dns on the backend. And 
we could then more easily add capacity as needed.
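As a rough sketch of what the zone data might look like (the backend 
hostnames here are just placeholders, not anything we actually run):

```
; hypothetical fedorahosted.org zone fragment -- hostnames are placeholders
projectfoo      IN  CNAME   hosted01.fedorahosted.org.
projectbar      IN  CNAME   hosted02.fedorahosted.org.
; adding capacity later is just pointing new project names at a new box:
projectbaz      IN  CNAME   hosted03.fedorahosted.org.
```

Moving a project to another machine then becomes a dns change plus a data 
copy, with no url change for the users.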

One thing we've been doing is running fedorahosted behind https. Part of 
that is b/c we were doing basic-auth through pgsql (mod_auth_pgsql) to 
authenticate against fas. With openid that won't be an issue anymore.

The second reason is for personal/private/confidential items in tickets or 
what-not - for example the board trac instance. For those I suggest we 
bottle up the board trac instance, stuff it somewhere we can put an ssl 
cert in front of it and move along.

For the rest we make them non-ssl'd. The openid login, of course would be 
ssl'd, but the rest of the site doesn't really need to be, does it?
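One way that split could look, as a sketch only (vhost names, paths and 
the openid endpoint below are placeholders, not our real config): the trac 
vhost stays plain http, and only the login step goes out to the ssl'd 
openid provider:

```
# hypothetical apache vhost -- all names/paths are placeholders
<VirtualHost *:80>
    ServerName projectfoo.fedorahosted.org
    # trac itself served over plain http
    WSGIScriptAlias / /srv/web/trac/projectfoo.wsgi
    # logins bounce to the ssl'd openid provider and come back with
    # an authenticated session; ticket browsing etc. stays http
    Redirect /login https://id.example.org/openid/
</VirtualHost>
```

The credentials never transit the non-ssl'd vhost, which is the property 
we actually care about.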

So we'd still need to get people to refer to the right urls. I don't think 
that would be likely to happen overnight but we can at least start doing 
so, right?

What we get:
1. we get the possibility of not having all of our eggs in the one basket 
of serverbeach and the two hosted instances running now
2. we can possibly gain performance by getting some of the data off of the 
one big gluster datastore.
3. we gain the ability to setup a 'newhosted' server, put a few 
trac/git/etc instances over there and try things out w/o breaking all the 
rest of them.
4. If we suddenly find we have a single, extremely popular project (good 
problem to have imo) then we can give it a dedicated instance and maintain 
performance.

Does anyone like this idea? Is anyone opposed to it? Got criticisms that 
should be addressed? Things I'm completely blanking on?

let me know.

Thanks,
-sv
