L10N migration to transifex.net

Toshio Kuratomi a.badger at gmail.com
Fri Feb 25 06:12:40 UTC 2011

I'll answer some of this.  Smooge can feel free to answer differently,
from his perspective :-)  I'm more on  the developer side of
infrastructure and he's more on the sysadmin side so we do have
different ways of looking at things.  On the subject of  moving to
transifex.net we were both in agreement that transifex.net held
numerous advantages over our current situation so hopefully our
perspectives here won't be too different :-)

On Thu, Feb 24, 2011 at 8:21 PM, noriko <noriko at fedoraproject.org> wrote:
> Stephen John Smoogen wrote:
>> 1) It is a moving target. Development upstream is fast moving and
>> adding new features that translators want.
> It seems natural that every piece of software keeps improving and adding
> new features. However, if a newly added feature may interfere with
> existing functionality, then as a translator I would rather not have it added.
<nod>  This is always an issue.  For a while, Infrastructure was under
the impression that upgrading beyond 0.7 might bring workflow changes
to l10n that l10n needed to take time to figure out.  Then we were
told that upgrading beyond 0.9 would bring even more workflow changes
but that the benefits of getting to 0.9 outweighed the smaller
workflow changes.  The net result is that we've consistently been
targeting the deployment of a slightly outdated version of transifex.

>> 2) We are really really behind on upgrades. There have been multiple
>> groups who have stepped up, started to do it and then had real life
>> hit them with some sort of curve ball. So what started out as 0.7 ->
>> 0.8 became 0.7 -> 0.9 and now would have to be 0.7 -> 1.1. We had
>> multiple people who for the last 3-4 months have said "We would be
>> happy to upgrade, but we aren't really able to do more than that."
> I see. So, from a long-term view, not enough resources can be assigned
> to such frequent upgrade tasks.
> What I am understanding is that the problems we are encountering are
> caused by older version, and that the solution is to only upgrade. Is
> this correct understanding?
> Please let me make the point clear that L10n just wants stable working
> version, not necessarily keep upgraded.
> What if we upgrade less often?
So that doesn't really help.  As you point out, the ideal is to have a
stable, working version.  However, we all know that all software has
bugs.  Eventually the bugs, corner-case performance problems, etc.
reach a critical mass where you realize that you have to address them
or admit that you aren't able to provide a service good enough to
satisfy you, let alone the people who are consuming it.  I
think we all realize that we've hit that point with the current
transifex in infrastructure.  The question is whether we are able,
with present resources, to address this.

If we had a lot of system administrators and packagers, one way out
would be to upgrade *often*: we send our bugs upstream, upstream addresses
them in new versions, we package those up, deploy and test in staging,
then deploy into production.  This has failed for us for two reasons
-- 1) we don't have a plethora of system administrators right now.
This is partially addressed because beckerde stepped up to update the
transifex packages and has been working on migrating the current
instance to 0.9.  However, current transifex has already moved beyond
0.9 to 1.x which points at the second issue with this method for
infrastructure: 2) the newer versions of transifex require a different
workflow than previous versions.  Staying close to upstream comes into
conflict with keeping the same workflow for the translation teams.

Alright, so if that method doesn't work well, there is another method.
 Instead of staying close to upstream, we can settle onto a version of
upstream and depend on our own resources to take care of bugs,
performance issues, etc.  In order to make this work we'd need extra
developers.  People who know Django, deploying and scaling web apps,
are familiar with localizing software, and are willing to put that all
together into enhancing our version of transifex.  We fall down here
in several ways: we're just as short of developers as we are of
system admins, and the developers we have now aren't familiar with
Django.  Once we go down the road of creating local modifications to
transifex, we're going to be locking ourselves into maintaining our
codebase so we really need to have people we trust to be committed to
this for the long haul and we have to be conscious that we're
committing them to, essentially, maintaining a forked version of
transifex.  From my perspective, being on the developer side of
infrastructure and knowing how few developers we really have, this
looks even less inviting to me than the first option.  Infrastructure
is already in charge of too much code that it can barely keep up with.
 Becoming responsible for more code isn't a good idea in my opinion.

>> 3) We could have kept the old transifex going but this seemed to cause
>> more problems than it solved. There have been many items where people
>> were 'hamstrung' with the current software and wanted us to move to
>> something new.
> The old transifex above means v0.7 or v0.9.1?
> Who is 'people' and what 'items' make these people hamstrung?
> At least, v0.9.1 was actually up on the staging server and running,
> into which beckerde and lots of people put their time, blood, sweat
> and tears [1]. Many of the language team leaders who put themselves in
> CC had been watching this ticket and hoping/expecting the v0.9.1 system
> to be implemented onto production. What is the cause of this ticket
> being abandoned?
I think with beckerde's good work we might have been able to deploy
transifex 0.9.1 but it doesn't address the deeper problems that we're
experiencing which really center around not being able to run the
latest versions of the code at any given time.  For instance, some of
the performance problems that are stopping people from getting work
done in transifex are due to the nature of transifex < 1.x mirroring
the upstream repositories and pushing directly to them.  Getting
upstream transifex devs to work closely with us on our issues would
also require moving onto the latest upstream releases.  Both
of these require moving onto versions that require workflow changes.
Ironically, I think that moving to transifex.net now makes it easier
to make a case for later moving back to hosting on Fedora
Infrastructure: the workflow already had to change when we migrated
there, so it becomes easier to get onto the latest versions of the
transifex packages here (as long as we have the packagers and
sysadmins to keep up with updates from here on).

However, I would like to point out that if doing translations on
transifex.net works out for us now, there's a lot of value in
remaining there.  No matter what we do, Fedora Infrastructure will
always have finite resources and ever-growing desires.  There are
things that we must host on infrastructure because we're the only ones
who run them (for instance, koji, bodhi, smolt); there are things that
are valuable to run on Fedora Infrastructure because we can provide a
higher quality service than elsewhere or it's the only way to not lock
in the data or it makes life easier for our contributors (for
instance, our wiki and websites, mirrormanager, which configures our
mirroring system, and fedorapeople).  Then there are things we run in
Fedora Infrastructure only because it makes us feel more in control of
our own destiny....   If we're faced with finite resources, I'd love
to discard things in that latter category rather than things in the
second or (even worse) the first because removing things from that
category doesn't sacrifice quality or convenience or even freedom...
only the *feeling* that we're somehow more free.

I would argue that in the case of transifex.net, we're firmly in that
third category.  glezos is a member of the Fedora Community, transifex
was born out of a GSoC effort that he wrote with Fedora Infrastructure
people as mentors.  If he's running the transifex.net instance, it's
as close to having it run in infrastructure as we can get without
actually running on our hardware with our budget being spent on admins
to maintain it.  Furthermore, the quality of service that we'll get
from transifex.net is bound to be better than what we could get
from having Fedora Infrastructure people running it.  Keeping
transifex running is glezos's job.  It would only be our secondary or
tertiary or lower job in the crammed-full-of-issues world of Fedora
Infrastructure.  glezos and his company are intimately familiar with
the code and can work on bugs, performance bottlenecks, and missing
features as part of their day-to-day business.  Having lmacken or
ricky or me try to code a fix not only takes time from other projects,
it also requires a period where we gain a rudimentary understanding of
how the transifex code works and where the problem is being generated
before we can begin to dive into actual coding.

The only place that I can see where we may come into conflict at some
point is if transifex.net's paying customers need the software to
evolve in a way that doesn't work for us at all.  However, I don't see
that as an issue today and I think that glezos would be happy to help
us get our data out and onto our own systems if that were the case.
In other words, I think that's something we can easily evaluate in the
future when we start noticing that an issue exists.

Thanks for reading until the end :-)
