I am not sure if anyone has let you know yet, but we have a
planned outage of all services in the DC tonight. It starts at 9:00 PM
Eastern and lasts for up to 6 hours.
This will affect www.redhat.com, rhn.redhat.com, and all of the fedora
I hope it will not really be 6 hours. I am applying a microcode upgrade
to some of the blades in the 65xx chassis as well as installing 2 new
Please be patient tonight. If you notice any problems after the window,
please let me know via email and I will take a look.
= Stacy J. Brandenburg Red Hat Inc. =
= Manager, Network Operations sbranden(a)redhat.com =
= 919-754-4313 http://www.redhat.com =
As per the meeting, these are the things I think we need to do to further the
Fedora Hosted Project.
1) Move the hosted git/hg/svn off to another system. Ideally we'd have a
dedicated system (perhaps in another colo) that is beefy enough to run a few
Xen guests: one for each SCM (so that sane configs can be done for web and
such), one for Trac itself, one for users to log in and fiddle with raw
webspace, and finally one to run Apache and serve up the raw webspace. The
same storage space could be used for all of these things, so that we don't
have to guess at disk sizes; just make directories and NFS-mount them at
appropriate places in the guests.
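The shared-storage idea above could look roughly like this; the hostnames and paths here are invented for illustration:

```
# /etc/exports on the storage host (one filesystem, one directory per guest)
/srv/hosted/scm    scm1.fedoraproject.org(rw,sync,no_root_squash)
/srv/hosted/trac   trac1.fedoraproject.org(rw,sync)
/srv/hosted/web    web1.fedoraproject.org(rw,sync)

# /etc/fstab inside the Trac guest: mount only the piece it needs
storage1:/srv/hosted/trac  /srv/trac  nfs  rw,hard,intr  0 0
```

Growing a project then just means making a directory on the storage host, with no repartitioning of guests.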
2) Get raw webspace working. Ideally there would be a guest that users log
into to fiddle with content in a subdir of their homedir or
something. Some way to keep folks locked out of each other's webspaces and the
rest of the system would be good; maybe we have to limit ssh to just sftp,
scp, and rsync over ssh to begin with, dunno. Another guest would actually serve
up the content, so that no user could log into that box. Quotas would be
in effect to keep a user from DoSing the box by running it out of space.
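For the locked-down login guest, an sshd_config fragment along these lines could limit users to sftp inside their own webspace. This is only a sketch: the group name and paths are made up, and it assumes a reasonably recent OpenSSH with Match-block and internal-sftp support:

```
# Members of the "webspace" group get chrooted sftp only, no shell
Match Group webspace
    ChrootDirectory /srv/web/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```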
Raw webspace could be in the flavor of "projectname.hosted.fedoraproject.org"
for ease of name-based virtual hosting. SourceForge does this too:
sf.net/projects/<projectname> for the sf interface, <projectname>.sf.net for
the webspace. Seems to work fairly well.
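That name-based hosting scheme maps nicely onto Apache's mod_vhost_alias, which derives the document root from the hostname; a sketch, where the document-root layout is an assumption:

```apache
<VirtualHost *:80>
    ServerAlias *.hosted.fedoraproject.org
    UseCanonicalName Off
    # %1 is the first hostname component, i.e. the project name
    VirtualDocumentRoot /srv/web/%1
</VirtualHost>
```

One new project then needs only a DNS entry and a directory, with no per-project Apache config.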
3) A web tool for admins to create a new project space. This would involve:
A) creating trac space and setting appropriate admin
B) creating fedora account group for SCM repo
C) creating SCM repo, setting permissions
D) creating raw webspace and DNS hostname
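As a rough sketch (the paths, group naming, and hostname scheme here are all invented), steps A-D might boil down to a plan like this that the tool then executes:

```python
def plan_project(name, scm="git"):
    """Return the resources steps A-D would create for a new project.
    Purely illustrative; the real layout would come from the admins."""
    return {
        "trac_env": "/srv/trac/%s" % name,                 # A) Trac space
        "fas_group": "scm-%s" % name,                      # B) Fedora account group
        "scm_repo": "/srv/%s/%s" % (scm, name),            # C) SCM repo
        "webspace": "/srv/web/%s" % name,                  # D) raw webspace
        "hostname": "%s.hosted.fedoraproject.org" % name,  # D) DNS hostname
    }
```

The web frontend would walk the returned plan, creating each resource and setting permissions as it goes.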
4) Some art love to create a nicer template for Trac to use than the
Trac default. Something with Fedora branding and nice-looking icons/colors
and all that fun stuff. There could be room here for "powered by" type
images if some company steps up to donate the system/storage for this, as
well as a University logo should we get hosting from a Uni.
Wild-ass guessing, I'd think that 1TB of storage would be more than plenty to
last us a while for all of the hosted SCM, webspace, and Trac data.
Eventually we could split things out, like the raw webspace or SCMs or
whatever, if something starts to eat more space. As for a single box running a
bunch of guests, I don't know how well that scales; it would be worth testing, I think.
Release Engineer: Fedora
I'm interested in adding deltarpm support to yum in the Fedora updates
infrastructure. There's already a yum plugin written by a Red Hat developer,
so we have something to start from. I have contacted the developer, and he
has no problem helping us move along.
1- Clients download updates faster. The README says a drpm-based update
infrastructure can reduce required bandwidth to about 20% of the original,
on average.
2- From the server side, this should also decrease our bandwidth
requirements and free the servers more quickly to handle other users.
1- The code currently expects the Red Hat Satellite server filesystem
hierarchy, so it will need some cleaning/polishing.
2- Currently, the server side stores *all* updates issued; when a client
requests an update, the server generates an *appropriate* drpm, which the
client can then download. This has the disadvantages of storing every update
ever released on the server, and of running active code on the server, which
might not be welcomed by mirrors, I imagine. This is a current problem!
3- As a solution to that problem, I am proposing we statically (via cron job)
generate drpms only for the newest updates. That way we will still be serving
drpms to the majority of users, and we get rid of having to generate drpms on
the fly. The only thing we lose is that someone who is slow to update their
system won't benefit much from the system. Practically, I think
the benefits outweigh the cons. What do you think?
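The cron-job idea in 3- could be sketched like this. The repository layout is assumed, and the code only builds the command lines; deltarpm's makedeltarpm tool would do the actual work:

```python
def delta_commands(updates_by_package):
    """Given {package: [rpm paths sorted oldest to newest]}, return the
    makedeltarpm command for each package's previous-to-newest pair.
    Older pairs are deliberately skipped, per the proposal."""
    cmds = []
    for pkg in sorted(updates_by_package):
        rpms = updates_by_package[pkg]
        if len(rpms) < 2:
            continue  # first update for this package, nothing to delta against
        old, new = rpms[-2], rpms[-1]
        drpm = new.replace(".rpm", ".drpm")
        cmds.append(["makedeltarpm", old, new, drpm])
    return cmds
```

A nightly job would run these commands and publish the resulting .drpm files next to the regular updates, so mirrors only ever carry static content.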
I'm trying to get up to speed with how the updates tool works and what it does.
I followed the instructions on this page http://fedoraproject.org/wiki/Infrastructure/UpdatesSystem and downloaded the source from CVS. After that I followed the instructions in the README to configure and install. The README was very detailed and helpful!
Is there more documentation on how to use the tool now that I have it installed and greater context on how it all goes together? The last part of the README says:
Running the updates system test suite
All tests are stored in the 'tests' module in this project, and can be
run by executing the command `nosetests` in top level of the project.
For more information on Nose unit tests, please see:
0) I confess I have limited knowledge of python and have never used 'nose.'
1) What is meant by a "project" when it says to execute the `nosetests` command in the top level of "the project"?
2) Is there a UI and if so how do I navigate to it?
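On questions 0 and 1: nose walks the checkout's tests/ module and runs any function or module whose name starts with "test". A minimal example of what such a file looks like (this is illustrative, not taken from the updates system itself):

```python
# tests/test_example.py -- nose discovers this file by its "test" prefix

def add(a, b):
    return a + b

def test_add():
    # plain assert statements are all nose needs
    assert add(2, 3) == 5
```

Running `nosetests` from the top level of the checkout (the directory containing the tests/ module) picks this up automatically.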
Hey guys, I'll be missing at least the first part of the meeting this
week and next. I'll catch up in the logs.
I'm working on connecting from the packagedb to the account system right
now. I'll send more details when I have something that works as I'm
sure lmacken will want some of it for the update system :-)
As many of you know I have been hired on to work on Fedora
Infrastructure full time. This will be a great thing for our group.
Initially I'll be spending a lot of time getting our house in order
and fixing a lot of those little things that we keep putting off.
Many of the mid term goals we've already talked about and are on the
I'd like to hear the community's ideas now that we have a dedicated
resource (me). I'm especially interested in things that will help
other teams do what they need to do, like how to not screw the docs
folks during the next release, or better empowering sponsors.
Encouraging new volunteers, and volunteer management /
coordination, is also something we need to address soon; we're growing
quite quickly. With these things in mind, try not to think
specifically about our infrastructure team but about the project as a
whole and how we can help the community thrive.
I'll officially be starting the first week in February and will be
looking for ways to expand our infrastructure both inside Red Hat and
outside of Red Hat, perhaps by finding a partnership with an additional
University. As long as we're smart about what we commit to, I have no
doubt that we'll be able to provide the community with everything they
need.
Just a quick status on where we're at with the mirror manager system.
Farshad, Mike, Luke, and I have all spent some time thinking about it.
Luke started, and I've added to, a page of what we'd like to see in a
mirror management system. Please review it and update/change as you like.
Farshad has made a good start implementing parts of this in
TurboGears. I've asked for a Trac project into which his source code
can be put so we can collaborate on it. I want to make sure we get the schema
right for what we need to accomplish before we spend too much time on
implementation. Before we unleash it, we'll need to tie this into the Fedora
Account System for user authentication, but that can be tackled in parallel
to the rest of the system development.
Dell Linux Solutions linux.dell.com & www.dell.com/linux
Linux on Dell mailing lists @ http://lists.us.dell.com
test5.fedora.phx.redhat.com has an instance of FDS running on it with
the current schemas and sample data that I've been working with. For a
primer on the schema, please see
http://fedoraproject.org/wiki/Infrastructure/AccountSystem2/Schema . Pretty screenshot attached.
I need to figure out the group situation still and hope to solidify the
schema so that development against it may commence. I have already
tested and verified apache authentication against it using
As always, if there's anything I can improve, let me know.
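For reference, HTTP basic auth against an FDS/LDAP directory can be wired up with Apache's mod_authnz_ldap along these lines. This is a hedged sketch: the base DN and protected location are made up, and I don't know which module was actually used in the test above:

```apache
<Location /protected>
    AuthType Basic
    AuthName "Fedora Account System"
    AuthBasicProvider ldap
    AuthLDAPURL "ldap://test5.fedora.phx.redhat.com/ou=People,dc=fedoraproject,dc=org?uid"
    Require valid-user
</Location>
```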
I've been recently invited to join this list, seeing as I'm the
original author of Glump (currently maintained by Seth), and also
because I'm currently in a similar situation trying to figure out what
to use for system config management here at McGill (I was recently
made sysadmin again after being a PHP monkey for over a year).
Now, I've not touched glump since mid-2005, and it's changed a bit
since then. I do hate cfengine (probably partly due to the way it's set
up here), and I do not consider puppet to be a good alternative,
mostly because it's "yet another config language" and because it's
written in Ruby. To use puppet, I'd have to learn "yet another config
language" and Ruby, which is prohibitively time-consuming.
I've not yet had time to investigate bcfg2, but I will in the near
future. However, I'm wondering if I could use and extend glump to do
all I need -- will probably be the simplest. :)
I'm currently wondering how much pain it would be to write a trac
plugin that would do the same thing glump currently does. That should
give me an infrastructure with an extensive access to SVN and db for
versioning purposes, and built-in documentation/ticketing system.
Since trac provides POST-handling, I could also do stuff like edit
config files on the system to the point where they are working, and
then "bless" them to be uploaded, committed to svn, and made immediately
available to all members of that system group (unless that file is
managed by a glump-like system).
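The "bless" step could be as little as staging the live file into an svn working copy; a sketch, where the paths and repo layout are invented and the svn commands are returned rather than executed:

```python
import os
import shutil

def bless(live_path, wc_dir):
    """Copy a working config file into the svn working copy and return
    the svn commands that would version and publish it."""
    dest = os.path.join(wc_dir, os.path.basename(live_path))
    shutil.copy2(live_path, dest)  # stage the blessed file
    return [
        ["svn", "add", "--force", dest],  # no-op if already under version control
        ["svn", "commit", "-m", "bless %s" % os.path.basename(live_path), dest],
    ]
```

A trac plugin's POST handler would call something like this, then let the normal distribution mechanism push the committed file out to the group.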
Now, this is pure vapourware. :) The only reason I'm interested in
implementing this is because we already use trac extensively, and
because writing plugins for it is pretty simple. I will probably start
out with just writing a glump plugin that will use the svn repository
to hold glump configs and sources, and then go on from there.
If you guys are interested in glump, would you also be interested in a
"trac-ified" glump? I plan to start working on this as soon as next
week.
Cheers and happy new year,