Folks,
I volunteer.
First off, I am the Linux guy at NC State University. I maintain a Red Hat based distribution called Realm Linux (http://linux.ncsu.edu) along with large Beowulf clusters, and I handle all the meetings and mess needed to spearhead a Linux movement at NCSU.
I can be found on irc.freenode.net as slack.
What I am interested in:
LPRng - NCSU can't use CUPS until I or someone else has the time to make CUPS use Kerberos authentication. I'm thinking this will appear in Alternatives RSN, but I'll keep an eye out for it.
OpenAFS - I maintain some pretty decent OpenAFS packages that I like a lot more than the RPMs from upstream. Mine are FHS compliant and build kernel modules properly. I would even volunteer to maintain OpenAFS packages for the community. However, I need some help to figure out how a default configuration should work and how site-specific configuration should be done. Namely, each site must specify what AFS cell it is in and will probably wish to change the list of cells that the client can see. I would also be willing to maintain an aklog package, since 1) it doesn't change much and 2) I assume that most folks (as I am) are using MIT Kerberos with OpenAFS.
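For the site-specific pieces, a minimal sketch of what that configuration might look like (the paths assume an FHS layout like the one described above; stock OpenAFS puts these files under /usr/vice/etc, and the cell names and addresses below are hypothetical):

```
# /etc/openafs/ThisCell -- the one cell this client belongs to
cell.example.edu

# /etc/openafs/CellServDB -- the cells the client can see; each site
# appends or trims entries here
>cell.example.edu       #Example University
192.0.2.10              #afsdb1.example.edu
192.0.2.11              #afsdb2.example.edu
```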
18 month lifetime - I need an 18-month or longer lifetime. Looks like this is where Fedora Legacy comes into play. I've been talking with several people along the same lines. I'm fortunate in that I've gotten most things taken care of, to the point that I'll only need to support RHL 9 and greater past Red Hat's EOL date. Though I think I may have a couple of 7.3 servers to take care of (binary Promise RAID drivers). I'd like to do what I can to help form the Fedora Legacy part of the project.
On Thu, 2003-09-25 at 13:01, Jack Neely wrote:
Folks,
I volunteer.
Cool. I'm up for maintenance too - but at first I think it will be on 7.3 and 9 backports and making yum suck less. :)
OpenAFS - I maintain some pretty decent OpenAFS packages that I like a lot more than the RPMs from upstream. Mine are FHS compliant, and build kernel modules properly. I would even volunteer to maintain OpenAFS packages for the community. However, I need some help to figure out how a default configuration should work and how site specific configuration should be done. Namely, each site must specify what AFS cell they are in and probably wish to make changes to the list of cells that the client can see. Also I would be willing to maintain an aklog package since 1) it doesn't change much and 2) I assume that most folks (I am) are using MIT kerberos with OpenAFS.
I use these packages here at duke (more or less)
The biggest problem to work around has to do with kernel updates and OpenAFS version updates.
The OpenAFS kernel modules are built against a specific kernel version and therefore must be install-only - not hard.
However, if you update the version of OpenAFS, the newer OpenAFS may not work with the older kernel module versions.
So a suggestion would be to backport patches to whatever version of OpenAFS gets released with the Fedora Core release.
or
do OpenAFS version-release-numbered kernel modules, which is pretty damned ugly.
-sv
On 25 Sep 2003, seth vidal wrote:
On Thu, 2003-09-25 at 13:01, Jack Neely wrote:
Folks,
I volunteer.
Cool. I'm up for maintenance too - but at first I think it will be on 7.3 and 9 backports and making yum suck less. :)
Right. To actually get up to speed on this (RHL73 and RHL9 backports), I think some kind of infrastructure etc. should be set up already (and try practising e.g. with RHL62 backports, even.)
On Thursday 25 September 2003 11:02, Pekka Savola wrote:
Right. To actually get up to speed on this (RHL73 and RHL9 backports), I think some kind of infrastructure etc. should be set up already (and try practising e.g. with RHL62 backports, even.)
That's what we're trying to do now with Fedora Legacy.
On Thu, 25 Sep 2003, Jesse Keating wrote:
On Thursday 25 September 2003 11:02, Pekka Savola wrote:
Right. To actually get up to speed on this (RHL73 and RHL9 backports), I think some kind of infrastructure etc. should be set up already (and try practising e.g. with RHL62 backports, even.)
That's what we're trying to do now with Fedora Legacy.
Right. Where?
My point exactly. I'd guess that a month or two from now we'll still be dickering around and talking about Fedora Legacy.
On Thu, 2003-09-25 at 14:34, Pekka Savola wrote:
On Thu, 25 Sep 2003, Jesse Keating wrote:
On Thursday 25 September 2003 11:02, Pekka Savola wrote:
Right. To actually get up to speed on this (RHL73 and RHL9 backports), I think some kind of infrastructure etc. should be set up already (and try practising e.g. with RHL62 backports, even.)
That's what we're trying to do now with Fedora Legacy.
Right. Where?
My point exactly. I guess a month or two from now we're still dickering around and speaking about Fedora Legacy.
well as long as you're going to be so optimistic about it... :)
-sv
On Thursday 25 September 2003 11:34, Pekka Savola wrote:
Right. Where?
My point exactly. I guess a month or two from now we're still dickering around and speaking about Fedora Legacy.
An IRC channel exists, #fedora-legacy, a wiki has been put up: http://www.fedora.us/wiki/FedoraLegacy
Discussions have begun as to what Legacy is, how it will operate, how it will interact with Fedora Core and fedora.us, what hardware is used, etc. Every project has to start somewhere; you can't just wake up and say "BAM! Here it is, infrastructure and everything!" We're starting from nothing and building it into something, with community input and participation. If you would like to give us a hand in getting the project off the ground, join the IRC channel and keep discussing it in here (until we're kicked off and told to get our own list...).
Right. To actually get up to speed on this (RHL73 and RHL9 backports), I think some kind of infrastructure etc. should be set up already (and try practising e.g. with RHL62 backports, even.)
I've been practicing 7.3 backports already - I have no interest in 6.2 backports and I've no 6.2 machines to practice on even if I had an interest.
But I agree some infrastructure should happen. Maybe what was already at fedora.us could be used.
-sv
On Thu, 2003-09-25 at 14:08, seth vidal wrote:
Right. To actually get up to speed on this (RHL73 and RHL9 backports), I think some kind of infrastructure etc. should be set up already (and try practising e.g. with RHL62 backports, even.)
I've been practicing 7.3 backports already - I have no interest in 6.2 backports and I've no 6.2 machines to practice on even if I had an interest.
but I agree some infrastructure should happen. maybe what was already at fedora.us could be used.
I would think that the fedora.us bugzilla could be used in the same manner for the QA, although a new Product should probably be added for "fedora legacy" so it is easy to separate it from the other items in QA.
As for the build server, I think Warren has probably scaled as far as he can for the time being, but it would be cool if someone else could provide a build server (possibly using mach?) and take over those duties on the legacy project.
Phil
On Thursday 25 September 2003 11:18, Phillip Compton wrote:
As for the build server, I think Warren has probably scaled as far as he can for the time being, but it would be cool if someone else could provide a build server (possibly using mach?) and take over those duties on the legacy project.
I've been talking to my company about that (Pogo Linux www.pogolinux.com). We'd like to help as much as possible, and one of the ways could be to provide hardware. I just need to talk to Warren about when/where it goes (; (oh and get approval from my boss)
On Thu, 25 Sep 2003, Phillip Compton wrote: [...]
As for the build server, I think Warren has probably scaled as far as he can for the time being, but it would be cool if someone else could provide a build server (possibly using mach?) and take over those duties on the legacy project.
This is one approach. If it doesn't provide the means and infrastructure well enough, there are other ways.
Such as: everybody (on the "backport ring", that is) builds on their own systems and signs with their own GPG keys. The RPMs are published on the folks' own servers. A set of central servers periodically polls the list of those websites and pulls the RPMs. If the GPG signature matches, put the package in the central repository and send an automatic heads-up message to everyone (if desirable).
There are a few difficult process problems here if done properly (such as _proper_ verification of GPG keys, because I doubt everyone is in the web of trust...), but one may be able to gloss over those. This kind of "poll, pull and push" model would be very simple, technically.
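As a rough illustration of that poll, pull and push loop (all server names here are hypothetical, and the signature check is delegated to `rpm -K`, whose output gets parsed):

```python
import subprocess

# Hypothetical list of contributors' servers to poll.
MIRRORS = ["http://builder1.example.org/rpms/",
           "http://builder2.example.org/rpms/"]

def signature_ok(rpm_check_output):
    """Accept a package only if 'rpm -K' reported a good GPG signature."""
    line = rpm_check_output.strip().lower()
    return line.endswith("ok") and "gpg" in line and "not ok" not in line

def pull_and_verify(rpm_path):
    """Run 'rpm -K' on a downloaded package; push it to the central
    repository only when the GPG signature checks out."""
    result = subprocess.run(["rpm", "-K", rpm_path],
                            capture_output=True, text=True)
    return signature_ok(result.stdout)
```

The hard part, as noted, is not this loop but deciding which keys the central server trusts in the first place.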
On Thursday 25 September 2003 14:41, Pekka Savola wrote:
Such as, everybody (on the "backport ring" that is) builds on their own systems, and signs w/ their own GPG keys. The RPMS are published on the folks own servers. A set of central servers polls periodically the list of those websites, and pulls the RPMs. If the GPG signature matches, put it in the central repository and send an automatic heads-up message to everyone (if desirable).
As the PostgreSQL RPM maintainer for the PostgreSQL Global Development Group, I do something similar to this using a loose group of volunteers. I need to get GPG signing working, though. I have usually been able to get RPMs out for RHL 9, 8.0, 7.3, and even 6.2 in pretty short order. I even have a fellow building for RHAS for me. I do the RHL 9, 8.0, and Aurora SPARC Linux 1.0 builds myself, though.
I as PostgreSQL RPM maintainer for the PostgreSQL Global Development Group do something similar to this using a loose group of volunteers.
<TROLL> Ahhh, so you're the one. Perhaps you could write a postgreSQL RPM with upgrade functionality that actually works? </TROLL>
-Chuck
On Saturday 27 September 2003 04:44 am, Chuck Wolber wrote:
I as PostgreSQL RPM maintainer for the PostgreSQL Global Development Group do something similar to this using a loose group of volunteers.
<TROLL> Ahhh, so you're the one. Perhaps you could write a postgreSQL RPM with upgrade functionality that actually works? </TROLL>
<TROLL feed=1, mode-nice> Visit www.postgresql.org, find the e-mail archives for pgsql-hackers, and search on the string 'Upgrading rant'.
It is not possible to upgrade PostgreSQL major versions within the RPM framework in a robust manner (or at least I've not yet found a way). It is an upstream, and not a packaging, issue. If you have ideas to the contrary, subscribe to pgsql-hackers@postgresql.org and contribute your brilliance. ;-) </TROLL>
I've been fighting that battle for over four years now, since I started maintaining the set in 1999 (PostgreSQL 6.5). At first, I thought I (with the help of Jeff Johnson, Cristian Gafton, and the excellent beta team at beta.redhat.com) had the problem mostly licked. But then the upstream package broke the pseudo upgrades. Trond Eivind Glomsrød worked on it, and he and I finally gave up trying to do it the way we were doing it after the upstream broke it again. It is a problem that really shouldn't be fixed in packaging, IMHO, since it isn't a problem just for RPM upgrades.
Just last week the item "Allow major upgrades without dump/reload, perhaps using pg_upgrade" was added to the TODO. It has been a long ride getting just that much out. Also search the threads 'State of beta 2' and 'need for inplace upgrading', and combine the terms RPM and upgrade in a search.
The short of it: PostgreSQL stores a vast amount of system configuration data in the system catalogs in the 'template1' database. Stuff like the functions to use for input and output conversions lives inside the system catalogs, coexisting in the same database as pointers to the user's data, the user's functions, custom types, operators, and the like. These catalogs change every major release. OIDs for the functions, types, operators, etc. all change. And then sometimes the actual page format for the data itself changes -- the most recent time was between 7.2.x and 7.3.x, but it had happened a handful of times before that. It is a hard problem that people far smarter than I am have been stumped by, or just not motivated to fix.
But I'm open to ideas as to how to make it less painful.
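One packaging-level mitigation (just a sketch of an idea, not how the PGDG packages actually work, and %{majorversion} is a hypothetical macro) would be a %pre scriptlet that refuses to upgrade over a data directory from a different major version, forcing the dump/reload to happen first:

```
%pre server
# PG_VERSION holds the major version the data directory was initdb'd with
if [ -f /var/lib/pgsql/data/PG_VERSION ]; then
    old=`cat /var/lib/pgsql/data/PG_VERSION`
    if [ "$old" != "%{majorversion}" ]; then
        echo "Data directory is from PostgreSQL $old." >&2
        echo "Dump it with the old pg_dumpall before upgrading." >&2
        exit 1
    fi
fi
```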
It is a problem that really shouldn't be fixed in packaging, IMHO, since it isn't a problem just for RPM upgrades.
Yeah, I sorta figured that. Thanks for responding though. It's good to see it finally make it onto the PostgreSQL radar map.
-Chuck
On 25 Sep 2003, seth vidal wrote:
Right. To actually get up to speed on this (RHL73 and RHL9 backports), I think some kind of infrastructure etc. should be set up already (and try practising e.g. with RHL62 backports, even.)
I've been practicing 7.3 backports already - I have no interest in 6.2 backports and I've no 6.2 machines to practice on even if I had an interest.
Right.. as have we..
but I agree some infrastructure should happen. maybe what was already at fedora.us could be used.
.. but this is the critical part, where whether we succeed or fail will be determined. If this stuff (coordinating backports etc.) were trivial, we wouldn't be in this situation in the first place.
On Thu, Sep 25, 2003 at 09:02:00PM +0300, Pekka Savola wrote:
Right. To actually get up to speed on this (RHL73 and RHL9 backports), I think some kind of infrastructure etc. should be set up already (and try practising e.g. with RHL62 backports, even.)
I have a bunch of RHL 6.2 and 7.0 backports of various things. I'll try to dig those up and post them somewhere this weekend. That might be a start...
-Barry K. Nathan barryn@pobox.com
seth vidal (skvidal@phy.duke.edu) said:
the biggest problem to work around has to do with kernel updates and openafs version updates.
the openafs kernel modules are built to a specific kernel version and therefore must be install-only - not hard.
Current up2date will do the install-not-upgrade dance for any package that 'Provides: kernel-modules' - I'm guessing similar capabilities could be added to apt and yum fairly quickly, and such packages ported to provide that...
Bill
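So a module package opting in to that behaviour would just carry something like this in its spec (a sketch of the convention Bill describes):

```
# In the kernel-module subpackage: tells up2date to install new builds
# alongside the old ones instead of upgrading in place
Provides: kernel-modules
```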
On Thu, 2003-09-25 at 16:46, Bill Nottingham wrote:
seth vidal (skvidal@phy.duke.edu) said:
the biggest problem to work around has to do with kernel updates and openafs version updates.
the openafs kernel modules are built to a specific kernel version and therefore must be install-only - not hard.
Current up2date will do the install-not-upgrade dance for any package that 'Provides: kernel-modules' - I'm guessing similar capabilities could be added to apt and yum fairly quickly, and such packages ported to provide that...
yum has an installonlypkgs config option that can be set, but it is not handled automagically - I'll have to do that.
That still doesn't get around the problem of the modules needing a specific version of the openafs client software.
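For reference, that knob is just a line in /etc/yum.conf (the package list here is only an example):

```
[main]
# packages listed here get installed side-by-side, never upgraded
installonlypkgs=kernel kernel-smp kernel-module-openafs
```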
-sv
On Thu, Sep 25, 2003 at 04:51:21PM -0400, seth vidal wrote:
On Thu, 2003-09-25 at 16:46, Bill Nottingham wrote:
seth vidal (skvidal@phy.duke.edu) said:
the biggest problem to work around has to do with kernel updates and openafs version updates.
the openafs kernel modules are built to a specific kernel version and therefore must be install-only - not hard.
Current up2date will do the install-not-upgrade dance for any package that 'Provides: kernel-modules' - I'm guessing similar capabilities could be added to apt and yum fairly quickly, and such packages ported to provide that...
yum has an installonlypkgs config that can be set but it is not handled automagically - I'll have to do that.
still doesn't get around the problem of the modules needing a specific version of the openafs client software.
-sv
So I have a question about the package naming guidelines.
I'm using yum.
A common thing is to push out a new kernel update for security reason 42 and, in the same push, push out a complete set of openafs-* packages. The new openafs-kernel packages work with the new kernel and require that kernel version.
With the Fedora naming scheme the openafs-kernel packages turn into kernel-module-openafs-2.4.20-19.9 and do not require a kernel version.
So unless there's another requires somewhere to require the provided kernel-module-openafs = %{epoch}:%{version}-%{release}, how does the new OpenAFS package get upgraded?
I guess in my case I can have openafs-client do the above require. (Right now it only requires %{version}.)
What if I was shipping something that was just a kernel module and not other accompanying packages?
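One way to express that coupling in the spec (a sketch; %{kernel_version} is a hypothetical macro standing in for the kernel version the module was built against):

```
# In openafs-client: pull in a module build of the exact same
# openafs version-release, so both sides upgrade together
Requires: kernel-module-openafs = %{epoch}:%{version}-%{release}

# For a bare kernel-module package with no userland counterpart,
# the module itself can require the kernel it was built for
Requires: kernel = %{kernel_version}
```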
Jack Neely
Folks,
So I'm still confused about this. Am I misunderstanding something? Do I need to clarify?
Jack Neely
So I have a question about the package naming guidelines.
I'm using yum.
A common thing is to push out a new kernel update for security reason 42 and, in the same push, push out a complete set of openafs-* packages. The new openafs-kernel packages work with the new kernel and require that kernel version.
With the Fedora naming scheme the openafs-kernel packages turn into kernel-module-openafs-2.4.20-19.9 and do not require a kernel version.
So unless there's another requires somewhere to require the provided kernel-module-openafs = %{epoch}:%{version}-%{release} how does the new OpenAFS package get upgraded?
I guess in my case I can have openafs-client do the above require. (Right now it only requires %{version}.)
What if I was shipping something that was just a kernel module and not other accompanying packages?
Jack Neely
-- Jack Neely slack@quackmaster.net Realm Linux Administration and Development PAMS Computer Operations at NC State University GPG Fingerprint: 1917 5AC1 E828 9337 7AA4 EA6B 213B 765F 3B6A 5B89
-- fedora-devel-list mailing list fedora-devel-list@redhat.com http://www.redhat.com/mailman/listinfo/fedora-devel-list
On Thu, 25 Sep 2003, Bill Nottingham wrote:
Current up2date will do the install-not-upgrade dance for any package that 'Provides: kernel-modules' - I'm guessing similar capabilities could be added to apt and yum fairly quickly, and such packages ported to provide that...
That's what configuration files are for, apt has had this capability for ages ;)
Allow-Duplicated {
    "^kernel[0-9]*$";
    "^kernel[0-9]*-smp$";
    "^kernel[0-9]*-enterprise$";
};
On Thu, Sep 25, 2003 at 11:45:16PM -0400, Rik van Riel wrote:
That's what configuration files are for, apt has had this capability for ages ;)
As does up2date (see the removeSkipList configuration lines in /etc/sysconfig/rhn/up2date).
-Barry K. Nathan barryn@pobox.com
Jack Neely wrote :
[...] Although, I think I may have a couple 7.3 servers to take care of (binary Promise RAID drivers).
FWIW, and although Promise RAID clearly s*cks, the FastTrak IDE RAID controllers I have in some Intel 1U servers work for me with the ataraid and pdcraid modules in recent 2.4 kernels. I've had data corruption with 2.4.20, so I'm still running 2.4.18 on them. Earlier kernels required those proprietary drivers, eesh.
Matthias