Just saw this on LWN (haven't even finished reading it), sorry if it has already been posted:
http://lwn.net/Articles/223038/
Marco
M. Fioretti wrote:
Just saw this on LWN (haven't even finished reading it), sorry if it has already been posted:
He's kind of slow on the draw if he just noticed that fedora is a bleeding edge distribution and not particularly stable. New development has to be tested somewhere.
On 2/21/07, Les Mikesell lesmikesell@gmail.com wrote:
M. Fioretti wrote:
Just saw this on LWN (haven't even finished reading it), sorry if it has already been posted:
http://lwn.net/Articles/223038/
He's kind of slow on the draw if he just noticed that fedora is a bleeding edge distribution and not particularly stable. New development has to be tested somewhere.
ESR has always been a lot of hot air. He's an attention whore.
On Wed, Feb 21, 2007 14:41:55 PM -0800, Lonni J Friedman (netllama@gmail.com) wrote:
ESR has always been a lot of hot air. He's an attention whore.
I have always wondered if, instead of this, he's just having a hell of a good laugh, making the rest of us yell with ravings that only he can afford to publish because he doesn't need to work anymore...
More seriously, now that I've read the whole piece: I'll just ignore the multimedia/codec part of the rant, as I think Fedora's stance on the matter is the right one, but I'd like feedback (not flames, really) on just these two bits:
* failure to maintain key repositories in a sane, consistent state from which upgrades might actually be possible.
* Adding another layer of complexity, bugs, and wretched performance with yum
Personally, I have not experienced these problems yet, so I'd like to know in which specific cases these kinds of problems occur.
Marco
On Wed February 21 2007 6:02:29 pm M. Fioretti wrote:
- failure to maintain key repositories in a sane, consistent
state from which upgrades might actually be possible.
- Adding another layer of complexity, bugs, and wretched
performance with yum
Well, here's an immediate 'hot-off-the-press' report for you: for some reason, Smart has been having a problem on this particular machine for two to three weeks. I hadn't had time to really dig in - the symptoms were that it would take forever to refresh itself, there would be very long delays in the process, so many CPU cycles were being used that screen redraws were being inhibited, and so forth. Effectively, I haven't been able to complete an update with Smart for nearly 3 weeks.
Today, I fired up Yumex and ran a system update - this hadn't been done in some 2-3 weeks due to the Smart problem. Yumex found 153 packages that needed updating, one of which was Smart, and went through the process without a hitch in about 35-45 minutes. Since today's update, Smart has started working properly again.
Generally, I upgrade my machines every day - I run four Fedora boxes right now. Apart from the above-described issue, I almost never experience problems with yum or with repos, except for the occasional delays in propagating to the mirrors. Now, I don't run the exact same mix of packages that ESR does, and so his experience could be different - but I would add that I'm one of those who fought for the "Everything" install option and tend to run my machines with a lot of software installed - so I'm not sure what he's talking about.
I've also tried 40+ other distros and have several running right now. They all have issues.
I tend to lean against the hard-nosed "only-free-software" position, but, I respect it and don't find it so difficult to work around.
- failure to maintain key repositories in a sane, consistent state from which upgrades might actually be possible.
That really doesn't seem to be a genuine gripe. There are occasional problems with mirrors being out of sync, but what Mr Raymond appears to have done is discover that rpm wasn't letting him do an update, decide to remove some files by hand, and then discover that rpm was right.
Then he did some crazy things to recover his files instead of inserting the rescue CD and using rpm --root to reinstall the package into the running system, as he would have been advised had he asked.
(As they say 'half clued is more dangerous than clueless')
- Adding another layer of complexity, bugs, and wretched performance with yum
Personally, I have not experienced these problems yet, so I'd like to know in which specific cases these kinds of problems occur.
Yum/rpm has performance problems in some cases. It's certainly slower than dpkg/apt in many cases. Work is occurring...
Alan
In article 20070222000106.77e0cb54@lxorguk.ukuu.org.uk, Alan fedora-list@redhat.com wrote:
- failure to maintain key repositories in a sane, consistent state from which upgrades might actually be possible.
That really doesn't seem to be a genuine gripe. There are occasional problems with mirrors being out of sync, but what Mr Raymond appears to have done is discover that rpm wasn't letting him do an update, decide to remove some files by hand, and then discover that rpm was right.
There have been many, *many*, times when yum update has failed for me. Today was one such event (and others have mentioned failures on this list). Many of the failures appear to be due to the dual-arch packages we get on x86_64.
On Thu, 2007-02-22 at 00:02 +0100, M. Fioretti wrote:
- failure to maintain key repositories in a sane, consistent state from which upgrades might actually be possible.
Some time ago it seemed rare for the repos to be in a usable state; now it's gone back to them being okay most of the time.
Les Mikesell writes:
M. Fioretti wrote:
Just saw this on LWN (haven't even finished reading it), sorry if it has already been posted:
He's kind of slow on the draw if he just noticed that fedora is a bleeding edge distribution and not particularly stable. New development has to be tested somewhere.
Actually, had you read the article, you'd see it's exactly the opposite: his main complaint is, basically, that over the years rpm slowly turned into crap:
"Red Hat/Fedora throw away what was at one time a near-unassailable lead in technical prowess..."
Five to seven years ago, rpm was, hands down, the technically superior package management system. But it failed to keep up with the times. Rather than fixing fundamental defects in rpm, instead they put a big layer of makeup on it: yum, and its plugins. But it's still a big piece of turd underneath, and it's getting to the point where no amount of makeup will help.
Now, I don't know what his actual major malfunction with rpm was. I've been updating FC 6 constantly, and I have not experienced any problems. At least not the usual ones. He probably is also pulling in some software from other external yum repos, and either it wasn't built properly by the external maintainer, or there was a temporary disconnect due to not everything being up to date at the same time -- as often happens with external repos carrying packages with dependencies on other packages in core or extras -- but the usual solution is to wait a day or two for everything to settle down.
But I agree with his main point. I no longer use rpm to keep tabs on all the additional stuff I install on my machines.
On Wed, Feb 21, 2007 18:29:32 PM -0500, Sam Varshavchik (mrsam@courier-mta.com) wrote:
Five to seven years ago, rpm was, hands down, the technically superior package management system. But it failed to keep up with the times. Rather than fixing fundamental defects in rpm
which fundamental defects, sorry?
Marco
On Thu, 22 Feb 2007 00:47:28 +0100 "M. Fioretti" mfioretti@mclink.it wrote:
which fundamental defects, sorry?
Well, whatever undocumented crap it does that allows both i386 and x86_64 rpms to install "the same" files (which are in fact obviously different) is clearly a wart about the size of the Titanic.
Tom Horsley wrote:
On Thu, 22 Feb 2007 00:47:28 +0100 "M. Fioretti" mfioretti@mclink.it wrote:
which fundamental defects, sorry?
Well, whatever undocumented crap it does that allows both i386 and x86_64 rpms to install "the same" files (which are in fact obviously different) is clearly a wart about the size of the Titanic.
No, that's a feature and it lets you run packages that haven't been rebuilt for x86_64 and need 32 bit libraries (what, you have a single source for software?).
The bug is that it won't easily let you build multiple versions of the same package for testing, etc. and keep them all installed - except for the kernels, where somebody noticed the problem and made it a special case.
Les Mikesell wrote:
Tom Horsley wrote:
On Thu, 22 Feb 2007 00:47:28 +0100 "M. Fioretti" mfioretti@mclink.it wrote:
which fundamental defects, sorry?
Well, whatever undocumented crap it does that allows both i386 and x86_64 rpms to install "the same" files (which are in fact obviously different) is clearly a wart about the size of the Titanic.
No, that's a feature and it lets you run packages that haven't been rebuilt for x86_64 and need 32 bit libraries (what, you have a single source for software?).
The bug is that it won't easily let you build multiple versions of the same package for testing, etc. and keep them all installed - except for the kernels, where somebody noticed the problem and made it a special case.
My experience is that I perform an install and then perform an upgrade. The upgrade fails because of the above-mentioned conflicts between i386 and x86_64 versions of the same packages. Whether you want to call that a bug or a feature, it's a bad user experience.
Andrew Robinson
On Wed, 21 Feb 2007 18:15:19 -0600 Les Mikesell lesmikesell@gmail.com wrote:
Well, whatever undocumented crap it does that allows both i386 and x86_64 rpms to install "the same" files (which are in fact obviously different) is clearly a wart about the size of the Titanic.
No, that's a feature and it lets you run packages that haven't been rebuilt for x86_64 and need 32 bit libraries (what, you have a single source for software?).
I'm talking far more insane stuff than that. Things like foobar.i386.rpm and foobar.x86_64.rpm both being installed at the same time and both "owning" the file /usr/bin/foobar when you can look at /usr/bin/foobar and see that it is clearly an x86_64 executable and did not under any circumstances come from foobar.i386.rpm.
This became far more obvious in FC6 when the default x86_64 install seemed to also install every single version of the corresponding i386 package as well (libraries I can understand, but this is junk like utility programs which couldn't possibly need to have a 32 bit version installed).
Tom Horsley wrote:
Well, whatever undocumented crap it does that allows both i386 and x86_64 rpms to install "the same" files (which are in fact obviously different) is clearly a wart about the size of the Titanic.
No, that's a feature and it lets you run packages that haven't been rebuilt for x86_64 and need 32 bit libraries (what, you have a single source for software?).
I'm talking far more insane stuff than that. Things like foobar.i386.rpm and foobar.x86_64.rpm both being installed at the same time and both "owning" the file /usr/bin/foobar when you can look at /usr/bin/foobar and see that it is clearly an x86_64 executable and did not under any circumstances come from foobar.i386.rpm.
This became far more obvious in FC6 when the default x86_64 install seemed to also install every single version of the corresponding i386 package as well (libraries I can understand, but this is junk like utility programs which couldn't possibly need to have a 32 bit version installed).
Not sure what's wrong there, but I don't think it is inherent in yum or rpm. I have CentOS x86_64 installs that seem to have the right things except for the accidental inclusion of perl.i386 in the initial release.
On Wed, 21 Feb 2007, Tom Horsley wrote:
On Thu, 22 Feb 2007 00:47:28 +0100 "M. Fioretti" mfioretti@mclink.it wrote:
which fundamental defects, sorry?
Well, whatever undocumented crap it does that allows both i386 and x86_64 rpms to install "the same" files (which are in fact obviously different) is clearly a wart about the size of the Titanic.
Well, just last week a co-worker here did a whole Debian testing amd64 installation (he didn't know that it was for that arch) on an Intel 32-bit dual-core processor, with no problem at all.
He found out what he had done when he tried to upgrade the kernel. ;-)
Just to state that bad things happen everywhere. ;-)
M. Fioretti writes:
On Wed, Feb 21, 2007 18:29:32 PM -0500, Sam Varshavchik (mrsam@courier-mta.com) wrote:
Five to seven years ago, rpm was, hands down, the technically superior package management system. But it failed to keep up with the times. Rather than fixing fundamental defects in rpm
which fundamental defects, sorry?
A detailed explanation would -- I'm afraid -- run for much longer than I'm willing to type, after a hard day at the office. But, a brief, concise summary:
• The way that the arch part of a package's "definition" is processed is fundamentally broken. Review the list's archives a few years back, when the kernel-source package switched arches (from arch to noarch, as I recall) and what barrels of fun everyone had, as a result. And, someone else already mentioned the massive, ugly, repulsive hack that multilib support is.
• Try a "yum update" when a dozen or so packages need to be updated. Press CTRL-C after half of them are installed. Enjoy spending the rest of the day undoing the resulting mess, and cleaning up all the puke in your filesystem and the rpm database (a recovery sketch appears after this message). This is completely U N A C C E P T A B L E for such a critical infrastructure element as the system package manager: leaving the system in a completely incoherent state that cannot be repaired automatically. It must _not_ vomit all over itself, like it does now. _No_ excuse for this, whatsoever, no matter what anyone tells me. Even if my package manager gets SIGKILLed past its equivalent of a "point of no return", the next time it runs it will simply finish whatever transaction it was in the middle of, installing and removing whatever didn't get installed or removed. This is not rocket science. If I can implement this level of error recovery in my package manager, there's no reason rpm couldn't. And I never have to deal with the wonderful dangling locks in the Berkeley DB layer.
• Speaking of that: Berkeley DB. Enough said.
• Kernel modules. Enough said.
• Epoch. Enough said.
• Bottom line: dependency resolution logic in rpm is too primitive, and simply cannot cope with many present-day cases. "Kernel modules, enough said" and "Epoch, enough said" are just two manifestations of this deeper, underlying shortcoming.
• rpm croaks if some dependency is missing. The solution was yum, a layer on top of rpm. But yum's functionality really belongs in rpm. Furthermore, if the missing dependency is for a package in a repo not already known to yum, yum will still croak, because it won't know what to do.
There's no technical reason why an rpm file cannot include the URL of any repositories that provide packages for any needed dependencies, together with the repositories' keys. The packager doesn't even need to compile such a list manually; it can be done automatically by rpmbuild. All that a package needs to do is identify its own yum repository. Then, when rpmbuild assembles a list of the new package's dependencies, it looks up which packages provide the new package's dependencies, pulls those packages' repository URLs, and adds them as the new package's "downstream" URLs.
Then, if a dependency cannot be immediately resolved from the rpm database, and it's not found in any of the existing repositories, and the package has a pointer to a repository URL that doesn't match any existing repo's URL, rpm could optionally prompt for permission to add a new external repository, and then use it to pull down the required packages.
There's more stuff I can bitch about, but these are what I believe are some of the main design defects/shortcomings in rpm.
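A minimal recovery sketch for the interrupted-transaction mess described above, assuming the stale-lock symptom posters describe later in this thread, and that yum-utils is installed for package-cleanup:

    # remove the stale Berkeley DB lock files the killed rpm left behind
    rm -f /var/lib/rpm/__db*
    # rebuild the rpm database indexes
    rpm --rebuilddb
    # list packages now installed more than once (the half-finished updates)
    package-cleanup --dupes
    # erase the older of each duplicate pair
    package-cleanup --cleandupes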
Sam Varshavchik wrote:
There's no technical reason why an rpm file cannot include the URL of any repositories that provide packages for any needed dependencies, together with the repositories' keys.
That sort of defeats the purpose of having keys unless you are prepared to trust anyone potentially downstream in such a cascading arrangement.
It would also add many more points that can change and make updates even less repeatable than they are now.
Les Mikesell writes:
Sam Varshavchik wrote:
There's no technical reason why an rpm file cannot include the URL of any repositories that provide packages for any needed dependencies, together with the repositories' keys.
That sort of defeats the purpose of having keys unless you are prepared to trust anyone potentially downstream in such a cascading arrangement.
It would also add many more points that can change and make updates even less repeatable than they are now.
If you trust a repo's maintainer, and you've imported the repo's keys, and the maintainer builds a package with a dependency on another third-party repo, the maintainer puts the third-party repo's URL and keys into the package and signs the package with his key. You already trust the key, because you're pulling packages from the repo already. So you're going to have to make a call: either reject the third-party repo's keys, in which case the update will be rejected since the dependency won't be satisfied, or accept them and pull in the rest of the dependencies.
Fundamentally, this is no different than the stock PGP web of trust mechanism. You are already trusting one third party repo that you're updating your packages from. A part of that trust, which you must understand, involves trusting whatever other third party repo the first repo itself is trusting.
On Wed, Feb 21, 2007 19:06:35 PM -0500, Sam Varshavchik (mrsam@courier-mta.com) wrote:
M. Fioretti writes:
which fundamental defects, sorry?
a brief, concise summary:
Thanks!
Try a "yum update" when a dozen or so packages need to be updated. Press CTRL-C after half of them are installed. Enjoy spending the rest of the day undoing the resulting mess, and cleaning up all the puke in your filesystem and the rpm database.
I've had similar experiences on CentOS, so yes, I agree with you on this.
There's no technical reason why an rpm file cannot include the URL of any repositories that provide packages for any needed dependencies, together with the repositories' keys.
I like the concept, but for some reason which I can't point out before sleeping I have the _feeling_ that there is some practical reason why this wouldn't work in real life. But maybe I'm just sleepy.
Thanks for your explanation,
Marco
M. Fioretti writes:
There's no technical reason why an rpm file cannot include the URL of any repositories that provide packages for any needed dependencies, together with the repositories' keys.
I like the concept, but for some reason which I can't point out before sleeping I have the _feeling_ that there is some practical reason why this wouldn't work in real life. But maybe I'm just sleepy.
I've thought about this -- there is one situation where this model breaks down. This model depends on everyone using different package names. If two repos build a different package and use the same name for both of them, this model is going to break down.
It is necessary to have some measure of self-discipline here; people need to keep within their own boundaries and not stick their noses where they don't belong. But I do not believe that it is a big concern. People running third-party repos right now already exhibit discipline. Everyone else depends on them, and basically gives them carte blanche to install arbitrary software on their own machines. That's a lot of trust, and over the past couple of years we haven't really had many instances of this trust being abused.
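For what it's worth, third-party repos already approximate half of this idea by shipping a *-release package whose payload is just a .repo file and a GPG key. A hypothetical sketch of such a file - every "example" name here is made up:

    # /etc/yum.repos.d/example.repo, as shipped by a hypothetical
    # example-release package
    [example]
    name=Example third-party repository
    baseurl=http://repo.example.org/fedora/6/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-example

What Sam proposes goes a step further: the pointer would travel inside each dependent package, rather than in a separate release package you install by hand.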
On Wed, Feb 21, 2007 at 07:06:35PM -0500, Sam Varshavchik wrote:
If I can implement this level of error recovery in my package manager, there's no reason rpm couldn't.
Is your package manager available for us to play with?
Charles Curley writes:
On Wed, Feb 21, 2007 at 07:06:35PM -0500, Sam Varshavchik wrote:
If I can implement this level of error recovery in my package manager, there's no reason rpm couldn't.
Is your package manager available for us to play with?
Yes - http://www.lpmtool.com
On FC6 there's a minor compilation error which I've already fixed in CVS, I'm just too lazy to bump the label and cut a new tarball. If you want to get it built on FC6, pull the source from Sourceforge CVS. Also, you'll need to find the Perl-RPM2 package somewhere. Perl-RPM2 is strangely absent from FC6 Extras, you'll need to hunt it down yourself.
A sample screenshot: http://www.lpmtool.com/gnome.html
All current Courier tarballs include an lpm specfile, in addition to an rpm specfile, so they can be played with, as sample lpm packages.
A couple of points about Fedora, YUM, and RPM.
1) Fedora was the outgrowth of RHL. However, Red Hat blundered when they stopped selling RHL, an entry-level product with by far the largest mindshare in the Linux and business community. If Fedora was to be an equivalent to, or replacement for, RHL, it should have been stable, usable, fast, and had a longer lifecycle. Customers were confused, and confused customers look elsewhere for solutions - hence the growth of Gentoo, Ubuntu, SuSE, etc.
Fedora has become the Red Hat beta (alpha) for RHEL. It has become far more difficult to support, update, upgrade, etc. than it should be based on its RHL ancestry.
This has really cost Red Hat mindshare, but not their bottom line which is supported by the Enterprise and RHEL.
2) Since Red Hat 8 or 9, RPM has had issues with Berkeley DB including lockups and __db* files that have to be manually deleted if updates crash or you CTRL-C while doing an RPM. This has apparently never been fixed.
3) Fedora supports upgrades, but this is difficult and often does not work. It took me nearly 20 HOURS to upgrade from FC5 x86_64 to FC6 x86_64 on my laptop.
4) Doing a query for packages shows ONE version of a program, but if you try to remove it, you find you have TWO. How many packages on an x86_64 platform do you need that support both architectures? Can this be minimized? Can the tools better support multiple packages instead of a default "fail" mode? (A query sketch that makes the second copy visible follows this list.)
5) Yum is a good updater, but has become slow and has many issues. My FC6 x86_64 boxes fail nearly every time I do an update because of missing packages:
- 1 missing package, and the entire update fails by default
- updates fail, but the packages are found on the update sites
- I have had yum crash with a core dump, leaving a mess to clean up
- if you run low on disk space, updates are a real pain
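On point 4 above: rpm's default query output hides the arch, which is what makes "one shown, two installed" so confusing. A query format that shows it - glibc is just an example package:

    # every installed instance, with its arch made explicit
    rpm -q --queryformat '%{name}-%{version}-%{release}.%{arch}\n' glibc
    # names installed more than once (multilib pairs, plus things like
    # kernel that legitimately keep several versions)
    rpm -qa --queryformat '%{name}\n' | sort | uniq -d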
My wife and kids use Fedora and now Knoppix. They like Linux, but I have to maintain over 6 computers (PII/366 to x86_64 Athlons) at home, two of which dual-boot with XP. Manually fixing yum on each of them every week becomes a real pain.
I understand ESR's frustration with Fedora as I often share it as well. I had a manager who said "perception is reality". I think this is the case with Fedora. It is getting a bad rap due to the perception of problems, mainly difficulty with MP3/multimedia and updates. Fedora is losing mindshare and marketshare (remember it WAS RHL at one time and to many, Red Hat Linux WAS Linux).
This has been a good thread. I hope it serves as a wake up call to the Fedora developers and their sponsors, especially Red Hat.
Wade Hampton writes:
- Yum is a good updater, but has become slow and has many issues. My FC6
I agree. yum is unacceptably slow. It's not really yum, but, going back to my earlier point, it's rpm's inherent design.
x86_64 boxes fail nearly every time I do an update because of
missing packages.
You need to run this down. You have something going on that needs to be fixed. I have had no issues with yum update on FC 6 x86_64.
Of course, after I upgraded from FC 4 to FC 6 (which took maybe 2 hours max on my x86_64 server), I went through the aftermath, carefully uninstalled all the i386 junk, and gingerly repaired the damage from the rpm bugs I was aware of beforehand -- the ones which manifest themselves when you remove multilib packages.
After I stabilized FC 6 x86_64 to a good state, I never had yum update fail going forward from that point on.
In article cone.1172118125.215471.3294.500@commodore.email-scan.com, Sam Varshavchik fedora-list@redhat.com wrote:
I have had no issues with yum update on FC 6 x86_64.
Well I have and quite often. This was a clean FC6 install too. Perhaps *you* aren't getting them because you don't have as many packages installed?
On Fri, Feb 23, 2007 at 01:43:53AM +0000, Rick wrote:
Sam Varshavchik fedora-list@redhat.com wrote:
I have had no issues with yum update on FC 6 x86_64.
Well I have and quite often. This was a clean FC6 install too. Perhaps *you* aren't getting them because you don't have as many packages installed?
Are you mixing in third-party repositories?
If not, what are the issues you've had?
In article 20070223014819.GA4339@jadzia.bu.edu, Matthew Miller fedora-list@redhat.com wrote:
If not, what are the issues you've had?
Most of the issues have been files that are in both the i386 and x86_64 packages. Often -devel packages are the culprits.
On Fri, 23 Feb 2007 02:22:07 +0000 (UTC) ellis@spinics.net (Rick) wrote:
If not, what are the issues you've had?
Most of the issues have been files that are in both the i386 and x86_64 packages. Often -devel packages are the culprits.
And most of the problems I have with this are usually solved by waiting a day or two for the i386 and x86_64 builds to get finished so the same versions of both are available.
The real problem comes when there is a surge of new stuff: by waiting a day, the first problem I had was solved, only to be replaced by another newly updated package where the two arches weren't in sync again.
The problem is so noticeable only because FC6 installed so many utterly pointless duplicate 32-bit versions of 64-bit things.
When I look at the conflicts I'll sometimes say, "What the heck do I have a 32-bit one of those for?", and just rpm -e the i386 version, thus reducing the chance for conflicts and making the update process run smoother.
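A sketch of that cleanup - rpm understands a .arch suffix on erase (at least in FC-era rpm), which disambiguates the multilib pair; foobar stands in for whatever package turns up:

    # confirm both arches really are installed
    rpm -q --queryformat '%{name}.%{arch}\n' foobar
    # erase only the 32-bit copy; the .i386 suffix picks it out
    rpm -e foobar.i386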
On Thu, Feb 22, 2007 at 10:07:18PM -0500, Tom Horsley wrote:
If not, what are the issues you've had?
Most of the issues have been files that are in both the i386 and x86_64 packages. Often -devel packages are the culprits.
And most of the problems I have with this are usually solved by waiting a day or two for the i386 and x86_64 builds to get finished so the same versions of both are available.
I think this'll be much improved by the Core/Extras merge, as currently, Extras acts a lot more like Rawhide than it should.
In article 20070222220718.78c6c4cb@zooty, Tom Horsley fedora-list@redhat.com wrote:
And most of the problems I have with this are usually solved by waiting a day or two for the i386 and x86_64 builds to get finished so the same versions of both are available.
That may be so but it still isn't good to have updates that fail so often no matter what the reason. Of course it gets even worse if you have packages from other repositories installed.
Rick writes:
In article cone.1172118125.215471.3294.500@commodore.email-scan.com, Sam Varshavchik fedora-list@redhat.com wrote:
I have had no issues with yum update on FC 6 x86_64.
Well I have and quite often. This was a clean FC6 install too. Perhaps *you* aren't getting them because you don't have as many packages installed?
[mrsam@commodore ~]$ rpm -q -a | wc -l
1137
I have 1137 packages installed. Is that enough?
In article cone.1172196857.693042.27664.500@commodore.email-scan.com, Sam Varshavchik fedora-list@redhat.com wrote:
I have 1137 packages installed. Is that enough?
No.
On Thu, 2007-02-22 at 21:14 -0500, Sam Varshavchik wrote:
Rick writes:
In article cone.1172118125.215471.3294.500@commodore.email-scan.com, Sam Varshavchik fedora-list@redhat.com wrote:
I have had no issues with yum update on FC 6 x86_64.
Well I have and quite often. This was a clean FC6 install too. Perhaps *you* aren't getting them because you don't have as many packages installed?
[mrsam@commodore ~]$ rpm -q -a | wc -l
1137
I have 1137 packages installed. Is that enough?
Not sure:
[wayward4now@iam broke]# rpm -q -a | wc -l
1704
<cackles> Ric
On Wed, 2007-02-21 at 22:44 -0500, Wade Hampton wrote:
A couple of points about Fedora, YUM, and RPM.
- Fedora was the outgrowth of RHL. However, Red Hat blundered when they
stopped selling RHL, an entry-level product with by far the largest mindshare in the Linux and business community.
Wade, I couldn't agree more. I was in chemical marketing for 26 years, and the one thing you never do is blow off your supporters. It made no sense, and it jerked around a bunch of people whose hearts and minds Red Hat practically owned. Whoever made that call needed to have their head examined.
This has been a good thread. I hope it serves as a wake up call to the Fedora developers and their sponsors, especially Red Hat.
I still believe in Red Hat; that's the hell of it. Fedora is a good concept, but it doesn't serve as a drop-in replacement for the olde 6.0-series boxed set for the Linux user. Yum beats the heck out of what we had back then - an over-burdened FTP site. Smart looks like a step in the right direction. RPM can be fixed. Right?
On Wed, 21 Feb 2007, Wade Hampton wrote:
A couple of points about Fedora, YUM, and RPM. [...]
- Since Red Hat 8 or 9, RPM has had issues with Berkeley DB
including lockups and __db* files that have to be manually deleted if updates crash or you CTRL-C while doing an RPM. This has apparently never been fixed.
Did I not read someplace that this turned out to be a kernel race bug that has finally been fixed? Are people running the latest kernels still experiencing this problem?
[...]
Matthew Saltzman wrote:
On Wed, 21 Feb 2007, Wade Hampton wrote:
A couple of points about Fedora, YUM, and RPM. [...]
- Since Red Hat 8 or 9, RPM has had issues with Berkeley DB
including lockups and __db* files that have to be manually deleted if updates crash or you CTRL-C while doing an RPM. This has apparently never been fixed.
Did I not read someplace that this turned out to be a kernel race bug that has finally been fixed?
Yup (for the most part, anyway).
-- Rex
Rex Dieter wrote:
Matthew Saltzman wrote:
On Wed, 21 Feb 2007, Wade Hampton wrote:
A couple of points about Fedora, YUM, and RPM. [...]
- Since Red Hat 8 or 9, RPM has had issues with Berkeley DB
including lockups and __db* files that have to be manually deleted if updates crash or you CTRL-C while doing an RPM. This has apparently never been fixed.
Did I not read someplace that this turned out to be a kernel race bug that has finally been fixed?
Yup (for the most part, anyway).
But isn't this about the 3rd or 4th time we've heard that? I just have to wonder what possessed someone to think that Berkeley DB was a suitable format without making a backup copy before every change.
Sam Varshavchik and a lot of other people wrote some thoughtful replies, but:
1) trust me, the Debian system is just as broken. I spend a lot of time sorting out both distros.
1a) regarding yum, what good is a distro without an upgrade system?
2) a few other rhetorical comments-
Isn't this a result of the torrent of new apps that has arrived since RH7-8 and Debian Woody days?
Isn't this fundamentally the same shared library disaster that they call dll-hell on Windows? Boy howdy that Windows repository was a great solution...
At bottom isn't this a rate-of-change issue? The Mac world solves this by welding the hood shut. As soon as you start adding uncommon software there, OSX has problems too.
I'll donate to an OS solution to the multimedia problem.
3) I guess if I could wave a wand, I'd have a set of common fundamental libraries that get shared and maintain compatibility between distro releases, and everything else would be handled by the applications themselves. Maybe this is plain dumb, but it sure would be easier for me...
John Fisher
John P. Fisher wrote:
- a few other rhetorical comments-
Isn't this a result of the torrent of new apps that has arrived since RH7-8 and Debian Woody days?
No, you've just forgotten how bad RH7 and 8 were. After many, many, updates RH7.3 became absolutely rock solid but RH8 was like starting from scratch and was never really fixed - they just moved on to 9.
Isn't this fundamentally the same shared library disaster that they call dll-hell on Windows? Boy howdy that Windows repository was a great solution...
It's a different version of the same problem. Linux shared libs have a versioning scheme that lets different apps run the versions each needs at the same time. Unfortunately, there is no coordination among repositories to make this work with anything that replaces a library that is also in core or extras, so RPM sees conflicting versions of the same thing.
At bottom isn't this a rate-of-change issue? The Mac world solves this by welding the hood shut. As soon as you start adding uncommon software there, OSX has problems too.
Have you experienced this with software obtained from the same source or compiled locally?
John P. Fisher:
- a few other rhetorical comments-
Isn't this a result of the torrent of new apps that has arrived since RH7-8 and Debian Woody days?
Les Mikesell:
No, you've just forgotten how bad RH7 and 8 were. After many, many, updates RH7.3 became absolutely rock solid but RH8 was like starting from scratch and was never really fixed - they just moved on to 9.
Hear, hear... And they've done the same, ever since, with Fedora. How long does it take for them to see the folly of re-inventing the wheel, over and over, rather than straightening the kinks? About 7 versions of Fedora, it seems, if we believe the comment about some changes in when and how the next Fedora release will be managed.
Isn't this fundamentally the same shared library disaster that they call dll-hell on Windows? Boy howdy that Windows repository was a great solution...
It's a different version of the same problem. Linux shared libs have a versioning scheme that lets different apps run the versions each needs at the same time. Unfortunately, there is no coordination among repositories to make this work with anything that replaces a library that is also in core or extras, so RPM sees conflicting versions of the same thing.
It seems to be a fundamental problem with Linux that all sorts of things depend on other separately installed things, instead of the features an application uses from its libraries being compiled into the application itself. I understand the benefits of using separate supporting libraries, but tend to appreciate the benefits of an all-in-one executable more.
On Thu, 22 Feb 2007 10:59:16 -0800 "John P. Fisher" john.fisher@znyx.com wrote:
- I guess if I could wave a wand, I'd have a set of common fundamental
libraries that get shared and maintain compatibility between distro releases, and everything else would be handled by the applications themselves. Maybe this is plain dumb, but it sure would be easier for me...
I'd just have every single app have its very own versions of every library it needs with a reaper that runs around at low priority hard-linking the ones that are identical :-).
On Thu, 22 Feb 2007, Tom Horsley wrote:
On Thu, 22 Feb 2007 10:59:16 -0800 "John P. Fisher" john.fisher@znyx.com wrote:
- I guess if I could wave a wand, I'd have a set of common fundamental
libraries that get shared and maintain compatibility between distro releases, and everything else would be handled by the applications themselves. Maybe this is plain dumb, but it sure would be easier for me...
I'd just have every single app have its very own versions of every library it needs with a reaper that runs around at low priority hard-linking the ones that are identical :-).
Then you've forgotten the zlib security issues of only 5 years ago. A security vulnerability was found in a compression library common to over 500 apps. Those that dynamically linked to zlib were patched with a single upgrade; however, large numbers of apps had to be recompiled because they statically linked to zlib. This was a *major* security crisis -- and *many* apps/utilities switched to dynamic linking of zlib (and other common libraries) to avoid this happening again.
Steve Friedman
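As an aside: it's easy to check which camp a given binary is in, since a statically linked library never shows up in the dynamic dependency list. A quick sketch - rpm is a handy test subject because it links zlib to decompress package payloads, and /sbin/ldconfig is traditionally built static:

    # a dynamically linked zlib shows up as libz.so.1
    ldd /bin/rpm | grep libz
    # a statically linked binary just reports "not a dynamic executable"
    ldd /sbin/ldconfig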
Steve Friedman wrote:
- I guess if I could wave a wand, I'd have a set of common fundamental
libraries that get shared and maintain compatibility between distro releases, and everything else would be handled by the applications themselves. Maybe this is plain dumb, but it sure would be easier for me...
I'd just have every single app have its very own versions of every library it needs with a reaper that runs around at low priority hard-linking the ones that are identical :-).
Then you've forgotten the zlib security issues of only 5 years ago. A security vulnerability was found in a compression library common to over 500 apps. Those that dynamically linked to zlib were patched with a single upgrade; however, large numbers of apps had to be recompiled because they statically linked to zlib. This was a *major* security crisis -- and *many* apps/utilities switched to dynamic linking of zlib (and other common libraries) to avoid this happening again.
Steve Friedman
Good point. Is it possible to draw some sort of line in the virtual sand and say these are shared, and those are up to the application? I've been griping for years about the way apps install themselves on all 3 desktop OSes. This zlib point clobbers one of my ideas for sure - but maybe they *all* have to be shared, so that's how we got into this mess.
On Thu, 22 Feb 2007 16:16:40 -0800 "John P. Fisher" john.fisher@znyx.com wrote:
this zlib point clobbers one of my ideas for sure
Nah, not really. The next time it will be the shared lib that has the security problem and two or three static linked programs survive intact, then everyone will rush back to static linking. I think the security thing is completely orthogonal.
On Thu, Feb 22, 2007 at 07:45:38PM -0500, Tom Horsley wrote:
this zlib point clobbers one of my ideas for sure
Nah, not really. The next time it will be the shared lib that has the security problem and two or three static linked programs survive intact, then everyone will rush back to static linking. I think the security thing is completely orthogonal.
Err, what? That doesn't make any sense. The point is that the shared lib requires one small update, instead of auditing to find all programs that linked against the static library, what version they used, whether that version is vulnerable, etc., and then making an updated version of each entire affected package.
On Thu, 22 Feb 2007 19:49:51 -0500 Matthew Miller mattdm@mattdm.org wrote:
Err, what? That doesn't make any sense. The point is that the shared lib requires one small update
One small update which could just as easily introduce a security problem into every dynamically linked app as fix one.
On Thu, Feb 22, 2007 at 09:07:15PM -0500, Tom Horsley wrote:
Err, what? That doesn't make any sense. The point is that the shared lib requires one small update
One small update which could just as easily introduce a security problem into every dynamically linked app as fix one.
Well, hopefully not "just as easily", as we (speaking in general) hopefully are more aware of good, secure programming practices now. Obviously new flaws do crop up, but in most cases, security problems tend to be "current version and all previous".
But even that aside, the potential impact of the scenario you describe is no worse than with static linking, and much easier to clean up after.
In article 20070222210715.41a25aec@zooty, Tom Horsley fedora-list@redhat.com wrote:
One small update which could just as easily introduce a security problem into every dynamically linked app as fix one.
If it did, the update to the update would still be one small fix.
Tom Horsley wrote:
On Thu, 22 Feb 2007 19:49:51 -0500 Matthew Miller mattdm@mattdm.org wrote:
Err, what? That doesn't make any sense. The point is that the shared lib requires one small update
One small update which could just as easily introduce a security problem into every dynamically linked app as fix one.
Good point!
I have seen a recent problem with zlib crippling a high number of packages (fixed by running ldconfig, and fixed quickly in the zlib package with the next update). Some other poster referred to a security flaw when the lib was static and within individual programs. If the flaw does not change the interface or the ability to use the dynamic library, it is easier to only have to fix the problematic library. If it changed the way programs need to interface with the library, static or dynamic would both be a nightmare for resolving the issues.
I don't think this issue has much to do with ESR (whoever he may be; I don't know him), though.
Jim
On Thu, 2007-02-22 at 17:06 -0500, Steve Friedman wrote:
On Thu, 22 Feb 2007, Tom Horsley wrote:
On Thu, 22 Feb 2007 10:59:16 -0800 "John P. Fisher" john.fisher@znyx.com wrote:
- I guess if I could wave a wand, I'd have a set of common fundamental
libraries that get shared and maintain compatibility between distro releases, and everything else would be handled by the applications themselves. Maybe this is plain dumb, but it sure would be easier for me...
I'd just have every single app have its very own versions of every library it needs with a reaper that runs around at low priority hard-linking the ones that are identical :-).
Then you've forgotten the zlib security issues of only 5 years ago. A security vulnerability was found in a compression library common to over 500 apps. Those that dynamically linked to zlib were patched with a single upgrade; however, large numbers of apps had to be recompiled because they statically linked to zlib. This was a *major* security crisis -- and *many* apps/utilities switched to dynamic linking of zlib (and other common libraries) to avoid this happening again.
As a non-programmer, I'm ignorant of many of the issues involved, but why can't you say: "if you link against an external library, do it dynamically" as a rule of thumb? That way you could replace the library without needing to recompile. Unless you want to state for sure that no-one else will use your library, and not place it in a shared location, that is. -Don
On Thu, Feb 22, 2007 at 08:47:15PM -0500, Don Levey wrote:
As a non-programmer, I'm ignorant of many of the issues involved, but why can't you say: "if you link against an external library, do it dynamically" as a rule of thumb? That way you could replace the library without needing to recompile. Unless you want to state for sure that
This is in fact the current Fedora policy.
But this is supposedly the point of /sbin - though (and I haven't looked for ages) it used to be that various entries in /sbin were actually dynamically linked.
Whatever you do, you can't win. I guess, with rescue disks being as good as they are now, in an ideal world you'd have everything dynamically linked that could be, and just use a rescue disk to dig yourself out of the mire when it all goes a bit pear-shaped.
John P. Fisher writes:
1a) regarding yum, what good is a distro without an upgrade system?
Not much, but that's not the point. yum is just a bandaid that tries to cover an ever-expanding, stagnating wound.
Isn't this a result of the torrent of new apps that has arrived since RH7-8 and Debian Woody days?
Yes. That's the point. The rpm-based infrastructure is bursting at the seams.
Isn't this fundamentally the same shared library disaster that they call dll-hell on Windows?
No.
Boy howdy that Windows repository was a great
solution...
Yes, and we have a Berkeley DB package database that is prone to corruption.
But the flip side of the coin is that if you keep the package repository metadata in flat files, it's going to take forever for simple install/uninstall transactions to get done. If you think yum is slow now, try to drive it off flat files…
Generally, storing the package repository in some kind of lightweight database is the right idea. It's the actual implementation we have here that's the problem.
On Thu, 2007-02-22 at 00:47 +0100, M. Fioretti wrote:
On Wed, Feb 21, 2007 18:29:32 PM -0500, Sam Varshavchik (mrsam@courier-mta.com) wrote:
Five to seven years ago, rpm was, hands down, the technically superior package management system. But it failed to keep up with the times. Rather than fixing fundamental defects in rpm
which fundamental defects, sorry?
It really should have some way to record the repository a package was pulled from in the rpm database itself.
Sam Varshavchik wrote:
But I agree with his main point. I no longer use rpm to keep tabs on all the additional stuff I install on my machines.
What do you use besides rpm now? I have both yum and rpm busted after an upgrade so the info might be helpful.
Jim
rpm -q kernel
rpm: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory
[root@cornette-dell-hdb packages]# yum list updates
There was a problem importing one of the Python modules required to run yum. The error leading to this problem was:
libz.so.1: cannot open shared object file: No such file or directory
Please install a package which provides this module, or verify that the module is installed correctly.
It's possible that the above module doesn't match the current version of Python, which is: 2.5 (r25:51908, Feb 13 2007, 09:13:49) [GCC 4.1.1 20070209 (Red Hat 4.1.1-57)]
If you cannot solve this problem yourself, please go to the yum faq at: http://wiki.linux.duke.edu/YumFaq
[root@cornette-dell-hdb packages]# locate libz.so.1
/usr/lib/libz.so.1
/usr/lib/libz.so.1.0.0
It's a broken link; try ls -l `locate libz`
Steve Hanselman wrote:
It's a broken link; try ls -l `locate libz`
The file was there. This confused me.
Fortunately the problem was corrected by running ldconfig. After running ldconfig, everything depending on libz.so seemed to work again.
The answer was given by Mathias on the Fedora-test list. Question: https://www.redhat.com/archives/fedora-test-list/2007-February/msg00559.html Answer: https://www.redhat.com/archives/fedora-test-list/2007-February/msg00560.html
And confirmation after his reply within the thread.
Whatever happened to cause the borkedness is a mystery to me, though.
Jim
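For anyone hitting the same symptom, a short diagnostic sketch using standard glibc tooling:

    # is libz in the runtime linker's cache at all?
    /sbin/ldconfig -p | grep libz
    # do the symlink and the real library both look sane?
    ls -l /usr/lib/libz.so.1 /usr/lib/libz.so.1.0.0
    # rebuild the cache (the fix that worked here)
    /sbin/ldconfig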
On Wed, 21 Feb 2007, Sam Varshavchik wrote:
"Red Hat/Fedora throw away what was at one time a near-unassailable lead in technical prowess..."
Five to seven years ago, rpm was, hands down, the technically superior package management system. But it failed to keep up with the times. Rather than fixing fundamental defects in rpm, instead they put a big layer of makeup on it: yum, and its plugins. But it's still a big piece of turd underneath, and it's getting to the point where no amount of makeup will help.
Tell me how you can install/upgrade packages, with their dependencies, using just rpm (no yum, smart or apt)?
Martin Marques writes:
On Wed, 21 Feb 2007, Sam Varshavchik wrote:
"Red Hat/Fedora throw away what was at one time a near-unassailable lead in technical prowess..."
Five to seven years ago, rpm was, hands down, the technically superior package management system. But it failed to keep up with the times. Rather than fixing fundamental defects in rpm, instead they put a big layer of makeup on it: yum, and its plugins. But it's still a big piece of turd underneath, and it's getting to the point where no amount of makeup will help.
Tell me how you can install/upgrade packages, with their dependencies, using just rpm (no yum, smart or apt)?
Not sure I understand what you're trying to say. If you want to know why seven years ago (or so) none of that was necessary, it was simply because things were much simpler back then. Red Hat Linux fit on a single CD. There were maybe 2-3 errata updates per week. There weren't really any external repositories. No Extras, no Livna. Interpackage dependencies were much simpler. 99.9% of the time, downloading the errata and running rpm -F was all that was needed.
Things became complicated over the years, and rpm simply has not kept pace.
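For readers who weren't around then, that workflow really was this short - rpm -F ("freshen") upgrades only packages that are already installed, so you could point it at a whole errata directory:

    # download the errata packages into a directory, then:
    rpm -Fvh *.rpm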
On Thu, 22 Feb 2007, Sam Varshavchik wrote:
Not sure I understand what you're trying to say. If you want to know why seven years ago (or so) none of that was necessary, it was simply because things were much simpler back then. Red Hat Linux fit on a single CD. There were maybe 2-3 errata updates per week. There weren't really any external repositories. No Extras, no Livna. Interpackage dependencies were much simpler. 99.9% of the time, downloading the errata and running rpm -F was all that was needed.
Two things:
Maybe not 7 years ago, but in the 7.x versions of RHL lots of people criticized the fact that installing rpms normally gave errors on dependencies, which then had to be fixed by looking for the missing package (if you happened to find out which one it was) and installing it, hoping this one would not in turn require a new package it depends on.
Second, rpm -F didn't always work.
On Thu, 2007-02-22 at 10:41 -0300, Martin Marques wrote:
Tell me how you can install/upgrade packages, with their dependencies, using just rpm (no yum, smart or apt)?
I was under the impression that the --aid option was supposed to aid you in adding required dependencies, though I suspect you'll need them in the current working directory, and I never got it to help me out. Admittedly, it's a long time since I tried to install something like that.
M. Fioretti wrote:
Just saw this on LWN (haven't even finished reading it), sorry if it has already been posted:
http://lwn.net/Articles/223038/
Marco
I read it, and most of the comments on it.
Mr. Raymond could have solved his problems by using a different package manager (e.g., smart) and being pro-active about using it. The best thing that Extras ever did was to build its own version of smart.
As an example of being pro-active with smart (rather than merely accepting whatever changes it recommends without further review): I've known for weeks now that the priorities system seems to be broken /in re/ libgcrypt, libtheora, and some jpackage libraries. For some reason, smart wants to upgrade some packages from ATrpms, and downgrade some other packages to jpackage versions, even though I already have versions originally installed from core, released-updates, and extras. I don't know what's going on (are these packages now gone from released-updates or extras?), but until I do, I always unmark those changes before I commit. Why couldn't Mr. Raymond do the same thing? What is he looking for?
And about those proprietary codecs: doesn't he realize that not all of us want to build multimedia machines? Some of us want to build database servers or Web servers, and you don't need proprietary /anything/ for that sort of work!
Does anyone here think that any of Mr. Raymond's suggestions have any merit? Or shall we write this off to sour grapes?
Temlakos
Temlakos wrote:
Does anyone here think that any of Mr. Raymond's suggestions have any merit? Or shall we write this off to sour grapes?
The one that matters is that Fedora isn't suitable for machines that need to be stable and reliable. I've always thought that a quick, easy solution to most surprises would be to let yum take a date/time option and ignore all updates after that time. That way you could stay almost up to date on your critical machines while watching the mail list for complaints by people with the newer changes. And, you could update a test machine and after testing, reliably update other boxes to the same versions that you tested even if new updates had gone in the repository.
solution to most surprises would be to let yum take a date/time option and ignore all updates after that time. That way you could stay almost up to date on your critical machines while watching the mail list for complaints by people with the newer changes.
File a request for enhancement in bugzilla so the idea doesn't get lost (assuming you haven't already done so)
Alan
On Wed, Feb 21, 2007 at 17:19:34 -0600, Les Mikesell lesmikesell@gmail.com wrote:
The one that matters is that Fedora isn't suitable for machines that need to be stable and reliable. I've always thought that a quick, easy solution to most surprises would be to let yum take a date/time option and ignore all updates after that time. That way you could stay almost up to date on your critical machines while watching the mail list for complaints by people with the newer changes. And, you could update a test machine and after testing, reliably update other boxes to the same versions that you tested even if new updates had gone in the repository.
You'd probably want the time specified as an interval to lag, rather than a date.
Bruno Wolff III wrote:
On Wed, Feb 21, 2007 at 17:19:34 -0600, Les Mikesell lesmikesell@gmail.com wrote:
The one that matters is that fedora isn't suitable for machines that need to be stable and reliable. I've always thought that a quick, easy solution to most surprises would be to let yum take a date/time option and ignore all updates after that time. That way you could stay almost up to date on your critical machines while watching the mailing list for complaints from people with the newer changes. And you could update a test machine and, after testing, reliably update other boxes to the same versions that you tested even if new updates had gone into the repository.
You'd probably want the time specified as an interval to lag, rather than a date.
That's trivial to compute, so it doesn't need to be part of the application. What I really want are reliable, repeatable updates once I've done one and tested it on a non-critical box, and I'd also like it to play nicely with a caching web proxy. Using a random pick from a mirrorlist on every run defeats both of those, even if you could pin the timestamp of the last update you want to consider.
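Both problems can be worked around in yum's own configuration; baseurl, mirrorlist, proxy, and keepcache are real options, though the URLs below are placeholders:

# /etc/yum.repos.d/updates.repo
[updates]
name=Fedora Core updates
# pin a single mirror instead of taking a random pick per run, so a
# caching proxy sees repeatable URLs; mirrorlist= stays commented out:
baseurl=http://mirror.example.com/fedora/core/updates/6/i386/

# and in /etc/yum.conf, [main] section:
# proxy=http://proxy.example.com:3128
# keepcache=1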
On Thu, 22 Feb 2007, Les Mikesell wrote:
Bruno Wolff III wrote:
On Wed, Feb 21, 2007 at 17:19:34 -0600, Les Mikesell lesmikesell@gmail.com wrote:
The one that matters is that fedora isn't suitable for machines that need to be stable and reliable. I've always thought that a quick, easy solution to most surprises would be to let yum take a date/time option and ignore all updates after that time. That way you could stay almost up to date on your critical machines while watching the mailing list for complaints from people with the newer changes. And you could update a test machine and, after testing, reliably update other boxes to the same versions that you tested even if new updates had gone into the repository.
You'd probably want the time specified as an interval to lag, rather than a date.
That's trivial to compute, so it doesn't need to be part of the application. What I really want are reliable, repeatable updates once I've done one and tested it on a non-critical box, and I'd also like it to play nicely with a caching web proxy. Using a random pick from a mirrorlist on every run defeats both of those, even if you could pin the timestamp of the last update you want to consider.
The workaround for this missing feature is trivial. We set up our own local repository (initially because updating a fresh install over the internet was so slow compared with ethernet speeds, but we now use it for installs as well and have eliminated swapping CDs). Just push approved updates into it (instead of blindly rsync'ing the part of the tree that interests you), and you're done.
Steve Friedman
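A minimal sketch of that local-repo approach, assuming createrepo is installed and all paths and hostnames are placeholders:

# on the repository host: push only the approved rpms, then rebuild
# the repodata:
cp /srv/staging/approved/*.rpm /srv/repo/fc6/i386/
createrepo /srv/repo/fc6/i386/

# on each client, /etc/yum.repos.d/local.repo:
# [local-approved]
# name=Locally approved updates
# baseurl=http://repo.example.com/fc6/i386/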
Steve Friedman wrote:
On Thu, 22 Feb 2007, Les Mikesell wrote:
Bruno Wolff III wrote:
On Wed, Feb 21, 2007 at 17:19:34 -0600, Les Mikesell lesmikesell@gmail.com wrote:
The one that matters is that fedora isn't suitable for machines that need to be stable and reliable. I've always thought that a quick, easy solution to most surprises would be to let yum take a date/time option and ignore all updates after that time. That way you could stay almost up to date on your critical machines while watching the mailing list for complaints from people with the newer changes. And you could update a test machine and, after testing, reliably update other boxes to the same versions that you tested even if new updates had gone into the repository.
You'd probably want the time specified as an interval to lag, rather than a date.
That's trivial to compute, so it doesn't need to be part of the application. What I really want are reliable, repeatable updates once I've done one and tested it on a non-critical box, and I'd also like it to play nicely with a caching web proxy. Using a random pick from a mirrorlist on every run defeats both of those, even if you could pin the timestamp of the last update you want to consider.
The workaround for this missing feature is trivial. We set up our own local repository (initially because updating a fresh install over the internet was so slow compared with ethernet speeds, but we now use it for installs as well and have eliminated swapping CDs). Just push approved updates into it (instead of blindly rsync'ing the part of the tree that interests you), and you're done.
That's always sounded fairly horrible to me as a workaround for something that should be really simple. My servers are widely distributed and not all of the same distribution/version, so building the infrastructure of a local repository for each with hand-picked rpms doesn't sound like fun. I'd probably try to automate something that made a list of installed rpm versions and fed that to another machine's yum as an easier approach. Most of the servers are CentOS, though, and I've had pretty good luck just trusting the repositories. The 3.x version even does something sensible when you use a proxy cache, so I haven't put much effort into a workaround.
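That automation could be as small as this sketch (hostnames are placeholders; yum accepts explicit name-version-release arguments, though a real script would also handle arch suffixes and per-box package differences):

# on the tested box, record exactly what is installed:
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}\n' | sort > tested-versions.txt

# push the list to each server and update to those exact versions:
scp tested-versions.txt server1:/tmp/
ssh server1 'xargs yum -y update < /tmp/tested-versions.txt'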
On Thu, 22 Feb 2007, Les Mikesell wrote:
Steve Friedman wrote:
On Thu, 22 Feb 2007, Les Mikesell wrote:
Bruno Wolff III wrote:
On Wed, Feb 21, 2007 at 17:19:34 -0600, Les Mikesell lesmikesell@gmail.com wrote:
And you could update a test machine and, after testing, reliably update other boxes to the same versions that you tested even if new updates had gone into the repository.
That's trivial to compute, so it doesn't need to be part of the application. What I really want are reliable, repeatable updates once I've done one and tested it on a non-critical box, and I'd also like it to play nicely with a caching web proxy.
The workaround for this missing feature is trivial. We set up our own local repository (initially because updating a fresh install over the internet was so slow compared with ethernet speeds, but we now use it for installs as well and have eliminated swapping CDs). Just push approved updates into it (instead of blindly rsync'ing the part of the tree that interests you), and you're done.
That's always sounded fairly horrible to me as a workaround for something that should be really simple. My servers are widely distributed and not all of the same distribution/version, so building the infrastructure of a local repository for each with hand-picked rpms doesn't sound like fun. I'd probably try to automate something that made a list of installed rpm versions and fed that to another machine's yum as an easier approach. Most of the servers are CentOS, though, and I've had pretty good luck just trusting the repositories. The 3.x version even does something sensible when you use a proxy cache, so I haven't put much effort into a workaround.
Your initial message said that you wanted to test the updates first. So, although hand-picking isn't necessary, some admin interaction is required: update the test machine, test it, then approve the updates. The first and last steps can be one-liners.
Steve Friedman
On Wed, 2007-02-21 at 17:47 -0500, Temlakos wrote: SNIP!
Does anyone here think that any of Mr. Raymond's suggestions have any merit? Or shall we write this off to sour grapes?
Temlakos
I have to agree on one point: user-type workstation support is a bit lacking. What I want is a series of systems that support the web, let me get content onto my machine and use it, aren't Microsoft, and are simple to maintain after install, reasonably well supported without a lot of bit-bashing of the read, do, reread, redo, and twiddle-until-it-works variety.
Yes, Fedora is "cutting edge", but the workstation portion still needs to be reliable, usable, and comprehensive enough to support further development. It must be the springboard if Fedora is to be a good, solid, usable platform for that cutting-edge development.
Regards, Les H
Today Les did spake thusly:
On Wed, 2007-02-21 at 17:47 -0500, Temlakos wrote: SNIP!
Does anyone here think that any of Mr. Raymond's suggestions have any merit? Or shall we write this off to sour grapes?
Temlakos
I have to agree on one point: user-type workstation support is a bit lacking. What I want is a series of systems that support the web, let me get content onto my machine and use it, aren't Microsoft, and are simple to maintain after install, reasonably well supported without a lot of bit-bashing of the read, do, reread, redo, and twiddle-until-it-works variety.
Yes, Fedora is "cutting edge", but the workstation portion still needs to be reliable, usable, and comprehensive enough to support further development. It must be the springboard if Fedora is to be a good, solid, usable platform for that cutting-edge development.
'Tis the one thing I miss about Debian: the ability to have a stable channel that only receives security updates, as well as a testing channel for everything fun.
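On Debian that split is just a matter of which suites are named in the sources list; a minimal example, with a placeholder mirror:

# /etc/apt/sources.list
deb http://ftp.debian.org/debian stable main
deb http://security.debian.org/ stable/updates main
# or, for everything fun:
# deb http://ftp.debian.org/debian testing main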
Scott van Looy wrote:
Today Les did spake thusly:
On Wed, 2007-02-21 at 17:47 -0500, Temlakos wrote: SNIP!
'Tis the one thing I miss about Debian: the ability to have a stable channel that only receives security updates, as well as a testing channel for everything fun.
We have fun with unstable :-D
The problem with Debian is that if you use stable, you are stuck with the versions of the programs there (for example, php4, apache 1.3, postgresql 7.4, etc.).
On the other hand, you get a very secure server. Something like CentOS. :-D
Martín
Martin A. Marques wrote:
The problem with Debian is that if you use stable, you are stuck with the versions of the programs there (for example, php4, apache 1.3, postgresql 7.4, etc.).
On the other hand, you get a very secure server. Something like CentOS. :-D
Except that they never have a schedule for the next release. So not only are you way out of date, you don't know how long you will be that way.
Martin A. Marques:
The problem with Debian is that if you use stable, you are stuck with the versions of the programs there (for example, php4, apache 1.3, postgresql 7.4, etc.).
Still Apache 1.3, after ALL THESE YEARS?!?
On the other hand, you get a very secure server. Something like CentOS. :-D
Les Mikesell:
Except that they never have a schedule for the next release. So not only are you way out of date, you don't know how long you will be that way.
I don't have a problem with that. I prefer systems that stay the same until there is something definitively better, not just updated at some point in time because of an arbitrary decision.
Les wrote:
On Wed, 2007-02-21 at 17:47 -0500, Temlakos wrote: SNIP!
Does anyone here think that any of Mr. Raymond's suggestions have any merit? Or shall we write this off to sour grapes?
Temlakos
I have to agree on one point: user-type workstation support is a bit lacking. What I want is a series of systems that support the web, let me get content onto my machine and use it, aren't Microsoft, and are simple to maintain after install, reasonably well supported without a lot of bit-bashing of the read, do, reread, redo, and twiddle-until-it-works variety.
There are points in time when fedora does these things as well as or better than anything else. By the end of any FC version's life it tends to be pretty good. But then you don't want to install it, because it takes many megabytes of updates to catch up, and you will soon have to reinstall to keep a version that still gets security fixes.
Yes, Fedora is "cutting edge", but the workstation portion still needs to be reliable, usable, and comprehensive enough to support further development. It must be the springboard if Fedora is to be a good, solid, usable platform for that cutting-edge development.
You can't be both a cutting edge platform running new and largely untested software versions and rock solid at the same time. That's just not possible given the state of the art in software development.
On Wed, 21 Feb 2007 23:37:57 +0100 "M. Fioretti" mfioretti@mclink.it wrote:
Just saw this on LWN (haven't even finished reading it), sorry if it has already been posted:
If he thinks fedora repositories have consistency problems, he should try suse for a while. Fedora is wonderful in comparison.
Tom Horsley wrote:
On Wed, 21 Feb 2007 23:37:57 +0100 "M. Fioretti" mfioretti@mclink.it wrote:
Just saw this on LWN (haven't even finished reading it), sorry if it has already been posted:
If he thinks fedora repositories have consistency problems, he should try suse for a while. Fedora is wonderful in comparison.
And I have a broken Ubuntu 6.10 box that was caused by updating from a non-ubuntu repository. They all suck! It must be a hard problem.
Regards,
John
On Wed, 21 Feb 2007 16:53:11 -0800 John Wendel john.wendel@metnet.navy.mil wrote:
And I have a broken Ubuntu 6.10 box that was caused by updating from a non-ubuntu repository. They all suck! It must be a hard problem.
I have often considered trying gentoo to see if building my own damn libs and programs from source would operate better, but I have the feeling that would just push the problem out to more esoteric kinds of dependency issues like new compilers refusing to build old source.
Tom Horsley wrote:
On Wed, 21 Feb 2007 16:53:11 -0800 John Wendel john.wendel@metnet.navy.mil wrote:
And I have a broken Ubuntu 6.10 box that was caused by updating from a non-ubuntu repository. They all suck! It must be a hard problem.
I have often considered trying gentoo to see if building my own damn libs and programs from source would operate better, but I have the feeling that would just push the problem out to more esoteric kinds of dependency issues like new compilers refusing to build old source.
Yeah: I think a lot of rpm-blaming goes on because rpm is the messenger delivering the bad news. But if there is bad news, it existed without rpm and was not caused by rpm; it is in the tarballs the stuff was built from anyway. Maybe they've sorted it now, but as of last week the current inkscape release couldn't be built for FC6, at least not by me, and extras had only the previous release, due to various dependencies on packages I spent an evening trying to build from source.
Over time I found that rpm was in fact increasing the amount of sanity in the packages, even when it tells me what I don't want to hear, because at least it formalized and captured the messy interrelationships that already existed. If rpm tells you there is a problem with what you are trying to do to libcom_err before it goes ahead and trashes it, that's something to be grateful for: it gives you an opportunity to avoid the breakage by warning you that you are leaving the path of sanity. Don't blame rpm for generating the situation.
-Andy
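For the record, that warning looks roughly like this; the output is illustrative, and the dependent package list will differ per system:

rpm -e libcom_err
# error: Failed dependencies:
#   libcom_err.so.2 is needed by (installed) krb5-libs-1.5-7
#   libcom_err.so.2 is needed by (installed) openssh-4.3p2-14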
Tom Horsley wrote:
On Wed, 21 Feb 2007 16:53:11 -0800 John Wendel john.wendel@metnet.navy.mil wrote:
And I have a broken Ubuntu 6.10 box that was caused by updating from a non-ubuntu repository. They all suck! It must be a hard problem.
I have often considered trying gentoo to see if building my own damn libs and programs from source would operate better, but I have the feeling that would just push the problem out to more esoteric kinds of dependency issues like new compilers refusing to build old source.
Here's my take on this. I've done exactly what you're considering: I built gentoo on my laptop, and I have to say it was _much_ easier in some ways, but _much harder_ in others. By that I mean I was always used to X 'just working' out of the box, but with gentoo I had to make it work. This is one of the good things about gentoo: it makes you learn more about the guts of the system than you might with a binary-package-based distro.
The downside is the increased time it takes to update primary packages like KDE and OpenOffice (although modular X has helped that a lot). But even then, the packages do work, and work well, once compiled. As for your concerns about new compilers, there's no law that says you have to use them when they are released; you can pin one (a sketch follows this message). In fact, I'm still compiling with GCC 3.4.6 on this laptop and am quite happy with it. I've heard stories of better performance in Gentoo with GCC 4.1.x, but my system screams as it is, so it's not that big a deal to me.
OTOH, it's been much more stable and usable for me in areas like mplayer. I've never been able to get mplayer to behave well in Fedora with RPMs; that may be due to lots of things, but I do know that mplayer 'just works' in Gentoo.
Fedora is a great OS and I use it on nearly everything else I own; I do like bleeding edge and I like the packaged updates. But I don't think your concerns about Gentoo are that big a deal, IMHO.
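For reference, pinning a compiler on Gentoo is a one-liner with gcc-config; the profile name below assumes a 3.4.6 toolchain is installed:

gcc-config -l                        # list installed compiler profiles
gcc-config i686-pc-linux-gnu-3.4.6   # select the 3.4.6 profile
source /etc/profile                  # refresh the environment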
On Wed, 2007-02-21 at 23:37 +0100, M. Fioretti wrote:
Just saw this on LWN (haven't even finished reading it), sorry if it has already been posted:
http://lwn.net/Articles/223038/
Marco
I have respect for ESR, but I disagree with a lot of what he says/does. I'm glad that *Fedora* isn't whoring itself to proprietary media. That needs to be solved not at the distribution level, but at the plugin level.
Fedora + Extras should never result in non-open software being installed on a system. The user needs to intentionally go and get that software if they want it.
I do think rpm should be statically linked.
As much as I respect ESR, I disagree with him on a lot of things - including the manner in which he left.
I hope he enjoys ubuntu.
On Wed, Feb 21, 2007 at 04:48:22PM -0800, Michael A Peters wrote:
I do think rpm should be statically linked.
At the time, there was a solid technical reason for not doing it. (NPTL transition growing pains.) At this point, it might be worth revisiting. However, it doesn't necessarily gain you much -- if your system is that screwed up, booting the install CD in rescue mode is usually a better choice.
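For anyone who hasn't done it, the rescue path looks roughly like this; Fedora's rescue mode conventionally mounts the installed system under /mnt/sysimage:

# boot the install CD/DVD, choose rescue mode, then:
chroot /mnt/sysimage
rpm --rebuilddb    # repair a corrupted rpm database
rpm -Va            # verify installed files against the database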
On Wed, 21 Feb 2007 19:57:02 -0500 Matthew Miller mattdm@mattdm.org wrote:
On Wed, Feb 21, 2007 at 04:48:22PM -0800, Michael A Peters wrote:
I do think rpm should be statically linked.
At the time, there was a solid technical reason for not doing it. (NPTL transition growing pains.) At this point, it might be worth revisiting. However, it doesn't necessarily gain you much -- if your system is that screwed up, booting the install CD in rescue mode is usually a better choice.
And actually there can be other problems as well. Many of the insane things glibc does have rendered static linking absolutely impossible. For instance, build an app that uses any of the getpw*() library functions, statically link it, then try to run it.
All the pam/NSS machinery down in the bowels winds up dlopen()ing various authentication libraries that depend on glibc, which drags in libc.so and gives you both static and dynamic versions of things like malloc() trying to run at the same time in the same program. It quickly falls down and goes boom :-).
(This may not actually be the exact thing I saw, but the idea was like this).
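A minimal demonstration of the getpw*() case, assuming gcc and glibc's static development libraries are installed; the exact warning text varies by glibc version:

cat > getpw_demo.c <<'EOF'
#include <pwd.h>
#include <stdio.h>

int main(void)
{
    /* getpwuid() goes through NSS, which dlopen()s shared glibc modules */
    struct passwd *pw = getpwuid(0);
    if (pw)
        printf("uid 0 is %s\n", pw->pw_name);
    return 0;
}
EOF

gcc -static -o getpw_demo getpw_demo.c
# the link step typically warns that using getpwuid in a statically
# linked application still requires the shared glibc at run time,
# which is exactly the dlopen() problem described above
./getpw_demo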
On Wednesday 21 February 2007, Matthew Miller wrote:
On Wed, Feb 21, 2007 at 04:48:22PM -0800, Michael A Peters wrote:
I do think rpm should be statically linked.
At the time, there was a solid technical reason for not doing it. (NPTL transition growing pains.) At this point, it might be worth revisiting. However, it doesn't necessarily gain you much -- if your system is that screwed up, booting the install CD in rescue mode is usually a better choice.
Statically linking rpm is 150% up to the distro that uses it, Matthew. For some reason RH has never seen fit to do so since about 5.1. I've had an unrelated (I thought) update hose rpm 3 times now, and that's 3 times too many, IMNSHO.
This seems similar to the situation that existed when I was trying to dual-boot an FC4 install and kubuntu-5.0x. I could cross-mount the other's filesystems maybe 5% of the time; the rest of the time I had to unmount them and run e2fsck on them before they would mount cleanly. Each was using e2fsck-1.35 at the time, but the executables themselves were nothing alike, and neither were the filesystems under them. I trashed the kubuntu filesystems so many times that I gave up on the dual boot and pulled the 2nd drive. Then I had to put that drive in a different box and dd /dev/zero over the whole 60GB drive before I could reinstall kubuntu-6.06 LTS on it, and it hasn't sneezed since. It's sitting out in the shop, ready to cut parts with emc2 on a minute's notice right now. Heck, if I had a TV camera so I could watch it, I could run it from here once the raw material is clamped to the table. But that would be a wee bit geeky now, don't you think?
M. Fioretti wrote:
Just saw this on LWN (haven't even finished reading it), sorry if it has already been posted:
http://lwn.net/Articles/223038/
Marco
I particularly liked the broken links on his self-aggrandizing site. It made up for vomiting over his gun policy (will he ever shoot himself in the CPU?).
[ia]
On Wednesday 21 February 2007, tokyoi@mac.com wrote:
M. Fioretti wrote:
Just saw this on LWN (haven't even finished reading it), sorry if it has already been posted:
http://lwn.net/Articles/223038/
Marco
I particularly liked the broken links on his self-aggrandizing site. It made up for vomiting over his gun policy (will he ever shoot himself in the CPU?).
I was waiting for somebody to bring that up, strictly because it hasn't got a damned thing to do with this. Tell ya what: I may not agree with his politics all the time, but I'll go to the rifle range with ESR anytime he wants to come to my 20; we have a 300-yard range available to the public locally. I may even be able to shoot a smaller group than he can, as I do 'keep a hand in'. I have a couple of '1 minute rifles'.
[ia]
On Wed, 2007-02-21 at 23:09 -0500, Gene Heskett wrote:
I may not agree with his politics all the time, but I'll go to the rifle range with ESR anytime he wants to come to my 20; we have a 300-yard range available to the public locally.
Just don't go quail hunting with a certain Vice President :D
On Thursday 22 February 2007, Michael A Peters wrote:
On Wed, 2007-02-21 at 23:09 -0500, Gene Heskett wrote:
I may not agree with his politics all the time, but I'll go to the rifle range with ESR anytime he wants to come to my 20; we have a 300-yard range available to the public locally.
Just don't go quail hunting with a certain Vice President :D
I don't think I'd ever be that desperate for the taste of quail, nuh-huh.
Wouldn't it be a good idea/time to finally say "goodbye" to ESR and start new threads for all of the topics this one has spawned?