On Fri, 2008-09-19 at 21:50 -0500, Mike McGrath wrote:
Some of you have seen the disk alerts on app2. Looking more closely, it
seems the host was not built with enough disk space (as was app1). So
after the freeze is over I'll rebuild it.
It does raise a point about storage for Transifex, though. Basically each
host running Transifex (or Damned Lies, I can't quite remember which)
keeps a local copy of every SCM repository as part of its normal operation.
For performance reasons I don't think that will change, but it's something
we'll want to figure out long term. I haven't done the research, but in my
brain it seems like running something like git/hg/svn/bzr over NFS will
cause problems.
On the other hand, these aren't upstream repos but a local cache, so I'm
also curious what the harm would be; if they get borked, one could just
delete the cache and it would repopulate. Thoughts?
I'd like to propose a different strategy...
Based upon your original e-mail, this is Damned Lies at fault, not
Transifex. Now I remember a similar issue with the initial rebuild of the
app servers: Damned Lies wasn't working because the SQLite database
didn't exist, etc. My problem is that we have at least two copies of the
database, SO...
I've said this a couple of times before, BUT it'd REALLY be 'nice' to
have a machine in PHX that is dedicated to non-public-facing, yet
mission-critical tasks (things that happen in the background). This
would cover the Damned Lies checkouts and the MirrorManager crawls, to
name a few.
The database could then be shared over NFS (or rsynced) to the app servers
to keep it up-to-date. On the other hand, for Damned Lies it appears
we can use MySQL as the backend instead of SQLite (at least it's not
This way we can hopefully reduce some of our RAM etc. needs and also some
of our bandwidth needs (one set of regular checkouts vs. two).
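For what it's worth, since Damned Lies is a Django application, switching it
from SQLite to MySQL should mostly be a settings change. A minimal sketch,
assuming the flat database settings Django used in the 0.96/1.0 era; the
database name, user, and host below are hypothetical placeholders, not our
actual values:

```python
# settings.py fragment -- sketch only, all values hypothetical
DATABASE_ENGINE = 'mysql'      # previously 'sqlite3'
DATABASE_NAME = 'damnedlies'   # hypothetical database name
DATABASE_USER = 'damnedlies'   # hypothetical account
DATABASE_PASSWORD = 'CHANGEME'
DATABASE_HOST = 'db1'          # hypothetical shared DB host in PHX
DATABASE_PORT = ''             # empty means the default MySQL port
```

A shared MySQL instance would also sidestep the two-copies-of-the-database
problem, since every app server would be reading the same backend instead
of its own SQLite file.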
Nigel Jones <dev(a)nigelj.com>