L10N tools: Increase quality of translations
by Dimitris Glezos
Hi all.
Some members of the L10N project have identified issues that need to be
solved to make sure the translations of Fedora are of high quality. Some of them
are infrastructure-related, and at today's (-admin) meeting it was suggested to
move the discussion here.
I'm sure the folks at Red Hat are doing their best to keep the quality of the
translations high. But the truth is that Fedora's image when it comes to
translations is not good; we hear that a lot, from current and prospective
members. We can and should work to improve it. Semi-translated applications are
U-G-L-Y and shout "low QA" in our ears.
Some possible ideas have been listed on the following page:
http://fedoraproject.org/wiki/L10N/Tasks
Please feel free to chip in and help us out: correct whatever doesn't seem
reasonable, break tasks into smaller ones, etc.
The bigger picture behind some of these problems is:
* L10N of "local" applications (those listed at [1]) is poor; releases and
package updates ship untranslated strings in many languages. This is
unacceptable for a fully localized desktop.
* The barrier to contributing to l10n (be it GUI or Docs) is higher than it
should be (compared to other projects).
* QA of the translations is difficult with the current tools.
Some more concrete ideas to discuss that might concern the InfrProj and the
RelEng team are:
* Better integrate the handling of translations into a "local" package's
lifecycle. Have a flag raised for a package update that introduces new strings,
so that translators can translate the new strings before the
repackaging/updating (see the sketch after this list). Include in the schedule
for each release a "string freeze date" and, a week later, a "translation
freeze date", and have all our packages rebuilt after the latter and before the
actual release.
* Move po files to their own cvsroot on cvs.fedoraproject.org to reduce
complexity and maintenance effort and to increase security (with a new group).
* Move the i18n status pages to Fedora servers (Plone/TurboGears?). Include a
direct link to the po files from there so that new members can have something to
work on before getting cvs access.
* In the future, use Plone to automate QA between team members (i.e.
coordinators can review translations, etc.).
* Start working on the complex and tricky path of upstreaming translations,
which no distribution has tackled successfully yet. Bring our translators
closer to the upstream projects.
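To make the "flag raised" idea above concrete, here is a minimal sketch of the
kind of check that could run before a rebuild. It assumes the third-party polib
library and a hypothetical file path; how it would hook into the actual build
system is left open.

    # Hypothetical sketch: flag untranslated strings in a package's po
    # file, e.g. before repackaging. Assumes the third-party polib
    # library; the file path comes from the command line.
    import sys

    import polib

    def untranslated_msgids(po_path):
        """Return the msgids in po_path that have no translation yet."""
        po = polib.pofile(po_path)
        return [entry.msgid for entry in po.untranslated_entries()]

    if __name__ == "__main__":
        missing = untranslated_msgids(sys.argv[1])
        if missing:
            print("%d untranslated strings; raise the flag:" % len(missing))
            for msgid in missing:
                print("  " + msgid)
            sys.exit(1)  # non-zero exit so a build script can act on it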
Hope some of the above make sense. :)
-d
[1]: http://i18n.redhat.com/cgi-bin/i18n-status
--
Dimitris Glezos
Jabber ID: glezos(a)jabber.org, GPG: 0xA5A04C3B
http://dimitris.glezos.com/
"He who gives up functionality for ease of use
loses both and deserves neither." (Anonymous)
Official mirror requirements
by Rahul Sundaram
Hi
What are the requirements, other than bandwidth, for the official mirrors
listed at http://fedora.redhat.com/Download/mirrors.html? In particular,
do we require them to carry a complete copy, including source images
and packages? If not, we should probably either enforce that or list the
mirrors which only have binary packages as partial mirrors. We could run
routine automated checks on a periodic basis to verify this (a rough
sketch follows).
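A minimal sketch of what such a periodic check could look like, assuming it
just probes a few paths that a complete mirror must carry; the mirror URL and
the path list here are hypothetical placeholders, not a real policy.

    # Probe a mirror for paths that a complete mirror must carry
    # (e.g. source packages and images). URL and paths are examples.
    import urllib2

    REQUIRED_PATHS = [
        "core/6/SRPMS/",     # source packages
        "core/6/i386/iso/",  # installation images
    ]

    def is_complete_mirror(base_url):
        """Return True if the mirror answers for every required path."""
        for path in REQUIRED_PATHS:
            try:
                urllib2.urlopen(base_url + path)
            except urllib2.URLError:
                return False
        return True

    if __name__ == "__main__":
        for mirror in ["http://mirror.example.org/fedora/"]:
            if is_complete_mirror(mirror):
                print("%s: complete" % mirror)
            else:
                print("%s: partial" % mirror)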
Rahul
16 years, 10 months
PackageDB progress
by Toshio Kuratomi
As a follow-up to yesterday's meeting, I've finished converting the
owners.py script that imports owners.list and cvs information into the
database. A sync against what's currently in owners.list has been added
to the postgres database on test3 (where we're doing the packagedb
testing).
For people who'd like to help out with the backend, here are the tasks we
have:
1) Look over the db schema posted last week and propose changes.
2) Enhance the owners.py script to convert RHL and EPEL branches as well
as FC branches.
3) c4chris has been working on getting review requests and approval
information into the database. There are a pair of files,
APPROVED_trim.txt and REQUESTS_trim.txt, on the server that have
information from the mailing list approvals at the beginning of Extras.
A build_list perl script that c4chris is working on transforms these
into a csv file. Someone can work on pushing this data into the
database (see the sketch after this list).
4) Storing Groups and Categories for comps generation could be worked
on. I have some ideas for the database schema but nothing solid yet.
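For the last step of task 3, here's a rough sketch of what the import could
look like. The csv layout, the table and column names, and the connection
details are guesses for illustration, not the real schema.

    # Push the csv file that build_list produces into the packagedb.
    # Column names and the csv layout are hypothetical.
    import csv

    import psycopg2

    def import_approvals(csv_path, conn):
        """Insert one row per approval record from the csv file."""
        cur = conn.cursor()
        for package, owner, approved_on in csv.reader(open(csv_path)):
            cur.execute(
                "INSERT INTO package_approvals (package, owner, approved_on)"
                " VALUES (%s, %s, %s)",
                (package, owner, approved_on))
        conn.commit()

    if __name__ == "__main__":
        conn = psycopg2.connect("dbname=pkgdb host=test3")
        import_approvals("approvals.csv", conn)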
I'm going to start working on a minimal TurboGears front end next, with
the idea of being able to take over from owners.list as a first
milestone.
-Toshio
my introduction
by Rachid Zarouali
Hi all,
My name is Rachid Zarouali; I'm a Linux sysadmin for the French ccTLD
registry.
I've been using Linux since 1996 (Red Hat 4.0, as far as I remember).
I've used several distributions like Red Hat, SuSE, and Debian; my personal
Linux server is on Ubuntu ;) (no no, there's no troll).
I have skills in system administration, networking, and security, and a
little in databases like MySQL and PostgreSQL ...
I've been a Linux network/sysadmin for almost 6 years now.
I've worked on infrastructures of different sizes, both homogeneous (*nix
only) and heterogeneous (win/*nix).
I've also been a Solaris sysadmin for 2 years (Solaris 7).
Right now I'm professionally focused on projects like wide-scale deployment
systems (win/linux), monitoring, and email infrastructure ...
I had a look at the schedule page and saw some tasks I could help with, like
xen, config management....
Well, I hope this little introduction is enough ;)
Sincerely,
Rachid Zarouali
May not be making the meeting
by Toshio Kuratomi
Hey all,
I may not be able to make the FESCo Meeting tomorrow as I have to drop
some family off at the airport. I'll probably be back by the time the
Infrastructure Meeting starts but I'm not 100% certain of that.
-Toshio
Web torture results
by Ahmed Kamal
Hi,
Paulo and I (kim0) have been working on testing a caching setup for the
wiki. A test migration to Moin 1.5 is complete, and squid is now configured as
a reverse caching proxy. We've run some stress tests on the current setup.
Attached are some extracted results that (I think) are of interest, mainly
the number of requests served per second and the average time for serving a
request. Also, Paulo pointed out that caching differs per file type, so the
tests have been done on three different file types (html, a png image, and
css).
Test Setup:
=========
1- All connections were initiated from proxy1
2- Proxy2 had squid caching turned on
3- Testing for html/png/css done, sweeping the number of concurrent
connections
4- Turn off squid caching on proxy2
5- Testing for html/png/css done again, sweeping the number of concurrent
connections (a sketch of how such a sweep can be scripted follows)
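For reference, a sweep like this can be scripted; here's a minimal sketch
using ApacheBench (ab), which prints a requests-per-second figure per run.
The target URL and concurrency values are examples, not the actual test
parameters.

    # Run ab at increasing concurrency levels and extract the
    # requests-per-second figure from its output.
    import subprocess

    URL = "http://proxy2.example.org/wiki/FrontPage"  # hypothetical target

    def requests_per_second(concurrency, total_requests=1000):
        """Run one ab pass and return the reported requests/second."""
        output = subprocess.Popen(
            ["ab", "-n", str(total_requests), "-c", str(concurrency), URL],
            stdout=subprocess.PIPE, universal_newlines=True).communicate()[0]
        for line in output.splitlines():
            if line.startswith("Requests per second"):
                return float(line.split(":")[1].split()[0])

    if __name__ == "__main__":
        for concurrency in [1, 10, 50, 100, 200, 300]:
            print("%3d concurrent: %s req/s"
                  % (concurrency, requests_per_second(concurrency)))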
Interesting notes:
============
1- Serving PNG is 10X faster than html
2- Serving CSS is 10X faster than PNG!
3- Serving html is really the bottleneck. Unfortunately, the Moin developers
acknowledge that the current version (1.5) is not cache friendly. Work on
making 1.6 cache friendly is ongoing
4- Using squid currently only seems to double our PNG serving rate, nothing
else
5- The application server hits swapping (about 0.5GB) at full load (~300
concurrent connections); for some reason the requests/second served is still
high!! (Are our cache disks that fast?)
6- The test did not stress the server badly enough to run out of swap space;
not sure if this is needed, though!
I can send the full results data if anyone is interested.
Best Regards
cfengine overview - pros and cons
by David Douthitt
I've used cfengine in a production environment, and found it to be very
useful and powerful. I'll just list the features (pro and con) below.
PROS
----
* Distributed operations
* Well-supported and open-source leader in its field
* Widely-used
* Supports many "selection criteria" such as hour of day, hostname, IP
address, network, cfengine version, operating system, and kernel version
* Battle-tested in environments numbering in the thousands (including that
most hostile of environments, the college campus)
* Integrates well with other systems such as CVS, RCS, et al
* Works well in isolation as well as in distributed fashion - and can keep a
system protected while the server is offline
* Extremely flexible
* Comprehensive documentation
* Can replace cron entirely (if one has a notion to...)
* Can keep excess files from cluttering up /tmp or /var/tmp
* Can keep unwanted files or processes from appearing at all (such as
.rhosts, etc).
* Can "edit" files as well as maintain complete files
* Utilizes public-key encryption to identify clients (encrypted links
available)
* "Selection criteria" (classes) can be set programmatically by scripts
* Can be used in place of samhain or tripwire (and *reacts*!)
* Works well with NFS-mounted home directories
* Works under Windows as well
* Can manage processes - including "must be present" and "must *not* be
present" and more
* Active mailing list for support
* Can be used to configure new systems from startup (using a minimal
configuration)
CONS
----
* Documentation - comprehensive, but it can be hard to know where to start
with new installations
* Configuration is unlike anything you've ever seen
* The "editfiles" section of the configuration is also unlike anything
you've ever seen - and is different than any other configuration section
(looks a lot like a computer language without reasonable syntax)
* The customizability of the configuration can be overwhelming
* Doesn't necessarily "play nice" with file integrity checkers like
samhain or tripwire - i.e., if cfengine restores a file to its original
state or changes its permissions, samhain may flag it as changed.
* Inclusion in configuration files ("include file") is
counter-intuitive: "included files" are actually concatenated to the
currently scanned file
* "Regexes" in the EditFiles configuration section match the entire
line, not a substring (unless using proper EditFiles command)
Most of the downside to cfengine revolves around the unique
configuration file syntax (the EditFiles section most of all) and
the comprehensive documentation (which does not provide the
oft-requested 1-2-3 steps to get started).
The latter problem will be solved with an upcoming book ;-)
--
David Douthitt
HP-UX, Unixware, Linux, FreeBSD
RHCE, SCSA, Linux+, LPIC-1
http://www.lulu.com/ssrat
puppet overview - pros and cons
by David Lutterkort
Since introductions are in vogue today, here's some background
about me: I work in Red Hat's Emerging Technologies group on systems
management things. A little over a year ago I got interested in
configuration management and started looking around for a tool that
could fill the gaps left by the current tool chain that people use
(well, a very short chain generally, mostly made up of package
management and a little bit of source control).
During that time, I looked at pretty much all the config mgmt tools out
there, and found that puppet has the most promise of the lot, both for
straight-up config mgmt and for pushing the envelope of what can be done
there (e.g., distributing detailed configurations in a reusable way [1]).
Before that, I worked on RHN for a while, and before that I did a lot of
consulting work for Red Hat, mostly around J2EE web applications. I used
to know TCL, but the rehab really helped.
In the interest of full disclosure, I actively contribute to puppet and
work on stuff building on it.
I will be out of town until Jan. 2nd, with unclear email access, so I
might be a little sluggish with responses - but you can always ask
questions on puppet-dev(a)reductivelabs.com or #puppet on freenode.
Puppet
======
References like [N] are at the end and lead to docs/additional info.
PROS
----
* Project lead (Luke Kanies) is an experienced sysadmin and
consultant in system administration, and makes his living
exclusively off consulting around puppet
* Designed and implemented in direct response to experiences with
other (and no) config mgmt systems like cfengine [5], isconf,
some proprietary ones, etc.
* Architecture
* Clients connect to central server (but all sane cfg mgmt
tools do that)
* Clients report facts about themselves (OS/kernel
version/release, MAC/IP address, basic HW info) to
central server, which uses them to make decisions about
client's config; the fact mechanism is pluggable and can
be easily extended with custom facts
* Server assembles config for client from sitewide
description (manifest)
* Can also be used standalone with cmd line tool for
testing (or dirt simple single machine setups)
* Uses 'native' tools for all config tasks in the backend
(e.g., yum for pkg mgmt on RH-derived systems)
* Security
* Thorough security model (each client has its own SSL
cert); puppet comes with tools to make basic SSL setup
and cert generation very painless (puppetca)
* Each client only gets to see the part of the site config
that applies to it, not the whole site config
* Builtin file server where file access can be secured
per-client (e.g. only hostX gets access to
hostX/ssh_host_key)
* Cross-platform, works on most flavors of Unix
(Fedora/RHEL/Debian/Gentoo, Solaris, OS X, some sort of *BSD
IIRC)
* Domain-specific language for manifest [2]
* Clean abstraction from messy details of changing config
* Describe desired config of system, puppet figures out
how to get there (e.g., you say 'need user X with
homedir /foo and uid N', puppet figures out appropriate
calls to useradd/usermod depending on whether user
exists and fixes attributes that are out of sync)
* Abstraction: describe config in high-level terms (user,
service, package, mount) for common config objects [3]
* Templating support for things that can't/don't need to
be described as objects; or distribute complete files
* Group config items logically with classes: can describe
that a webserver has to have latest httpd package,
service httpd enabled and running, and custom httpd.conf
file from location X (that's not possible with at least
one of the other config mgmt tools)
* Override mechanism for classes to allow for simple
one-off (or hundred-off) tweaks, e.g. to take webserver
class from above but use with different httpd.conf
* Clean definition of what inputs can influence a client's
config
* Language makes config easily readable and comprehensible
IMHO
* Emphasis on practical usability, not research
* Good set of unit tests
* No EditFiles ;)
* Cron-like support for scheduling actions during maintenance
windows (on a per-config object basis, if need be, though in
reality you want to keep that simple for your own sanity)
* Tie-in with kickstart: provision basic system with ks (including
puppet client), complete config with puppet [4]
* RH interested in furthering it for other reasons, too
* Active community, Luke is very responsive both with developer
and user issues/questions
* Beginnings of task-oriented user docs on a Wiki [6]
* GPL
CONS
----
* Not everybody is familiar with puppet's implementation language
(Ruby)
* Evolves rapidly
* Some of the more esoteric features (like comprehensive
reporting) are immature
* Need to learn puppet's language to describe site config
* Scalability in very large deployments unknown (there are
production deployments in the low hundreds of machines)
* Language is mostly declarative, but has 'exec' loophole for
running arbitrary commands on the client for practical reasons
More info
---------
Puppet's website (http://reductivelabs.com/projects/puppet/) has lots
more info; if you want to get more of an impression, I would start with
the following, in this order:
1. http://reductivelabs.com/projects/puppet/faq.html
2. Luke's BayLISA presentation from last year
(http://video.google.co.uk/videosearch?q=Kanies+puppet) - the
ones from August '06 are also very good but _long_
3. The high-level introduction
(http://reductivelabs.com/projects/puppet/documentation/introduction.html)
4. Luke's puppet/cfengine comparison
(http://reductivelabs.com/projects/puppet/documentation/notcfengine.html) and his blog post about BCFG2 (http://www.madstop.com/articles/2006/08/08/puppet-vs-bcfg2) - gives some more insight into the why's and how's of puppet and how the main author contrasts it with what's out there.
5. The language tutorial
http://reductivelabs.com/projects/puppet/documentation/languagetutorial.html
David
[1] http://people.redhat.com/dlutter/puppet-app.html
[2]
http://reductivelabs.com/projects/puppet/documentation/languagetutorial.html
[3] http://reductivelabs.com/projects/puppet/documentation/typedocs.html
[4]
http://watzmann.net/blog/index.php/2006/12/05/kickstarting_into_puppet
[5]
http://reductivelabs.com/projects/puppet/documentation/notcfengine.html
[6] http://reductivelabs.com/cookbook/
Package Database Schema v0.4
by Toshio Kuratomi
Here's the latest version of the package DB schema. Thanks to Karel,
Jeff, and Sopwith for the comments on the last version!
Some of the things that still need to be worked on:
- I've added several triggers. These need to be tested.
- The relationship between PackageBuild, PackageListing,
PackageBuildListing, and Package is ugly.
* PackageListing tells that a Package is present in a Collection
* PackageBuild is a specific build of a package (PackageId, EVR make
these records unique)
* PackageBuildListing combines these two (as a Build may belong to
more than one Collection.)
We want the PackageBuild to belong to one or more of the Collections that
the Package belongs to. I'm currently using a trigger to try to enforce
that (see the sketch after this list), but I have the nagging feeling I've
just designed something wrong.
- Package Groups (for collections) and Package Categories (for packages)
should now be possible. Need to implement them.
- Review grant statements once we've finalized the tables we'll be
providing.
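For discussion, here's a rough sketch of what that consistency trigger could
look like, as plpgsql DDL wrapped in a small Python/psycopg2 script. All table
and column names here are guesses at the schema, not taken from it.

    # Hypothetical trigger: reject a PackageBuildListing row whose
    # PackageBuild and PackageListing point at different Packages.
    import psycopg2

    TRIGGER = """
    CREATE FUNCTION check_buildlisting() RETURNS trigger AS '
    BEGIN
        IF (SELECT packageId FROM PackageBuild
            WHERE id = NEW.packageBuildId)
           <> (SELECT packageId FROM PackageListing
               WHERE id = NEW.packageListingId) THEN
            RAISE EXCEPTION ''PackageBuild and PackageListing disagree'';
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER buildlisting_consistent BEFORE INSERT OR UPDATE
        ON PackageBuildListing FOR EACH ROW
        EXECUTE PROCEDURE check_buildlisting();
    """

    if __name__ == "__main__":
        conn = psycopg2.connect("dbname=pkgdb host=test3")
        conn.cursor().execute(TRIGGER)
        conn.commit()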
Here's a short ChangeLog:
Move the StatusCodes into their own table to make translations easier.
This involves several new tables:
- StatusCode holds the status codes.
- StatusCodeTranslation holds the translated strings for each status
code. This is prefilled with the C translations.
- *StatusCode are tables that hold a subset of the StatusCode table.
These are used in foreign key relations to limit the status codes that
can be used on those tables. (For instance, Collection.status is a
foreign key of CollectionStatusCode.)
- *LogStatusCode are tables that hold all the status codes plus other
statuses that belong to logs. Since logs record status changes plus
"Added" and "Removed", these are generated from the StatusCode table.
There is also a trigger to keep the *LogStatusCode tables in sync with
their *StatusCode tables.
(Thanks to jcollie for the idea to do this; a rough sketch of this layout
appears after the ChangeLog.)
Branch: Make distTag and branchName unique values. (Thanks Karel)
Add on delete and on update clauses to all foreign keys. (Thanks Karel)
CollectionSet: Add a priority field to specify the search order when
overlaying collections. (Thanks f13)
Rename PackageVersion* to PackageBuild*
Trigger to make sure PackageBuildListing references PackageBuilds and
PackageListings with the same Package.
Restructure PackageACL, PersonPackageACL, and GroupPackageACL. The ACL
list can now have one record for every ACL-Package combination.
*PackageACL tables add users and groups to the relevant ACL. (Thanks
Karel)
Trigger to make changing the acl field illegal. This prevents possible
abuse.
Add a description field to Log for possible extra information.
Add some grant statements to give out permissions for pkgdbadmin to do
useful things in the db.
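As promised above, here's a rough sketch of the StatusCode layout as DDL, in
the same psycopg2 style as the trigger sketch; again, the table and column
names are guesses for illustration only.

    # Hypothetical StatusCode layout: one base table, per-language
    # translations, and a subset table used as a foreign-key target.
    import psycopg2

    DDL = """
    CREATE TABLE StatusCode (
        id serial PRIMARY KEY
    );

    CREATE TABLE StatusCodeTranslation (
        statusCodeId integer REFERENCES StatusCode ON DELETE CASCADE,
        language varchar(32) NOT NULL DEFAULT 'C',
        statusName text NOT NULL,
        PRIMARY KEY (statusCodeId, language)
    );

    -- Subset of StatusCode; Collection.status points here so that only
    -- collection-appropriate status codes can be used.
    CREATE TABLE CollectionStatusCode (
        statusCodeId integer PRIMARY KEY
            REFERENCES StatusCode ON DELETE CASCADE
    );
    """

    if __name__ == "__main__":
        conn = psycopg2.connect("dbname=pkgdb host=test3")
        conn.cursor().execute(DDL)
        conn.commit()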
-Toshio