Florian La Roche wrote:
Axel Thimm wrote:
I'd like to kindly request to set the release version to "10" or something higher than "9.0.94".
The decision about the version number is already done.
If by that, you mean that the decision is to stick with 0.94 without much discussion (that I've seen or heard) and despite its obvious shortcomings... that's a shame. Heading down this path will also lead to lots of, IMO, unnecessary Epoch inflation. One example: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=105746 I'm sure if nothing is done, many more will follow.
I don't know about you Axel, but until I see a better alternative, I'll personally be inflating Fedora X.Y to rh(X+10)Y in the release tag of packages I maintain. The only other alternative is to simply increment Epoch for everything, which is yucky, yucky.
-- Rex
On Tue, 2003-09-30 at 10:56, Rex Dieter wrote:
Florian La Roche wrote:
Axel Thimm wrote:
I'd like to kindly request to set the release version to "10" or something higher than "9.0.94".
The decision about the version number is already done.
If by that, you mean that the decision is to stick with 0.94 without much discussion (that I've seen or heard) and despite its obvious shortcomings... that's a shame. Heading down this path will also lead to lots of, IMO, unnecessary Epoch inflation. One example: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=105746 I'm sure if nothing is done, many more will follow.
I don't know about you Axel, but until I see a better alternative, I'll personally be inflating Fedora X.Y to rh(X+10)Y in the release tag of packages I maintain. The only other alternative is to simply increment Epoch for everything, which is yucky, yucky.
Could someone please explain why the distro version matters in any way, shape, or form for any packages save the two or three things like 'fedora-release'? If your package depends on the distro version, and not the actual components in the distro, your package is probably broken. Make your package depend on the actual components it uses, not the distro those components *might* be coming from.
-- Rex
-- fedora-devel-list mailing list fedora-devel-list@redhat.com http://www.redhat.com/mailman/listinfo/fedora-devel-list
On Tue, Sep 30, 2003 at 09:56:40AM -0500, Rex Dieter wrote:
Florian La Roche wrote:
Axel Thimm wrote:
I'd like to kindly request to set the release version to "10" or something higher than "9.0.94".
The decision about the version number is already done.
If by that, you mean that the decision is to stick with 0.94 without much discussion (that I've seen or heard) and despite its obvious shortcomings... that's a shame.
Yes, I also miss the "open discussion of development in these lists" ...
Heading down this path will also lead to lots of, IMO, unnecessary Epoch inflation. One example: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=105746 I'm sure if nothing is done, many more will follow.
I don't know about you Axel, but until I see a better alternative, I'll personally be inflating Fedora X.Y to rh(X+10)Y in the release tag of packages I maintain. The only other alternative is to simply increment Epoch for everything, which is yucky, yucky.
Exactly. As packagers we have been painfully taught not to use epochs unless WW3 is about to break out.
I'll also go with your suggestion, Rex. I'd call it the "it's written rh10, but it is pronounced Fedora Core 1" idiom ...
Currently anything else is a nightmare for repos with support for multiple RH releases. This does not only include existing repos, but also forthcoming repos with support for multiple releases.
Did anyone making this decision consider how "Fedora Legacy" is to be sanely constructed? By bumping all epochs of "Fedora Core" to ensure upgradability, and maintaining unnecessary multiple specfiles?
This decision wasn't/isn't well thought out, IMHO.
The alternative is to drop support for upgrading from RH <= 9 to FC, which is even uglier. Please review the release decision.
Axel Thimm (Axel.Thimm@physik.fu-berlin.de) said:
I'll also go with your suggestion, Rex. I'd call it the "it's written rh10, but it is pronounced Fedora Core 1" idiom ...
Now that's just patently misleading. It's *not* Red Hat Linux 10, it's Fedora Core 1. It's a shift in the development model, shifts in the goals of the release, and more. Hence, the new name, and new version.
By bumping all epochs of "Fedora Core" to ensure upgradability, and maintaining unnecessary multiple specfiles?
Huh? We aren't bumping all epochs of Fedora Core packages, and we don't have to in order to maintain upgradeability.
The alternative is to drop support for upgrading from RH <= 9 to FC, which is even uglier.
Upgrades work... there were a couple of hiccups in the test release, but by the time of the final release, I do believe there will only be epochs added to indexhtml and comps.
Bill
On Wed, 2003-10-01 at 09:17, Bill Nottingham wrote:
Axel Thimm (Axel.Thimm@physik.fu-berlin.de) said:
I'll also go with your suggestion, Rex. I'd call it the "it's written rh10, but it is pronounced Fedora Core 1" idiom ...
Now that's just patently misleading. It's *not* Red Hat Linux 10, it's Fedora Core 1. It's a shift in the development model, shifts in the goals of the release, and more. Hence, the new name, and new version.
Yeah, but the actual collection of RPMs and installer aren't going to be completely new; in almost all cases, they will just be upgrades. Versions ought to apply to file releases, not development models, goals, names, ..., especially when there won't be a radical break in the actual conventions used with existing releases. (I.e., package files aren't going to be forced into 8.3 names or AIX-style names.)
Maybe if it were a true fork--a clean break started by a completely different set of people--resetting the version counter would be reasonable. But as it is, Fedora Core is still going to be under the auspices of Red Hat, Inc.--still subject to the good taste that Red Hat, Inc. has generally shown. For all intents and purposes, it's going to look like what "Red Hat Linux 10" would have looked like had Fedora not happened.
Wil
On Wed, 2003-10-01 at 13:00, Wil Cooley wrote:
On Wed, 2003-10-01 at 09:17, Bill Nottingham wrote:
Axel Thimm (Axel.Thimm@physik.fu-berlin.de) said:
I'll also go with your suggestion, Rex. I'd call it the "it's written rh10, but it is pronounced Fedora Core 1" idiom ...
Now that's just patently misleading. It's *not* Red Hat Linux 10, it's Fedora Core 1. It's a shift in the development model, shifts in the goals of the release, and more. Hence, the new name, and new version.
Yeah, but the actual collection of RPMs and installer aren't going to be completely new; in almost all cases, they will just be upgrades. Versions ought to apply to file releases, not development models, goals, names, ..., especially when there won't be a radical break in the actual conventions used with existing releases. (I.e., package files aren't going to be forced into 8.3 names or AIX-style names.)
I'm still seriously confused what people are smoking. The only thing changing is the name of the distro, which isn't important for any package dependencies at all in any way except the distro-related packages like fedora-release, which third party packagers certainly aren't going to be providing as add-ons.
*no* normal packages are changing versions from 9->1.0, and dependencies/versioning have absolutely no reason to care about the release version. it's user information only, not stuff software should care about - if the software does, then the software is broken.
If your package needs to be compiled differently for different versions of Red Hat/Fedora, it's because packages in those releases are different - use *those* packages and *their* versions for your dependency tracking, *and* your release name - foo-rh8.0.i686.rpm is meaningless since those usually install and run on rh9 and fc1 anyhow. The real dependency is another package(-set), like gnome2.0 vs gnome2.4, or apache1.3 vs apache2 - depend on those, and use those in your package names if you need different packages (foo-gnome2.0.i686.rpm). Then you completely sidestep the distro versions, *which you should've done anyways*, since distro version is *completely meaningless* to anyone but a human. Or badly built packages.
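As a sketch of what that approach looks like in a spec file (the package names and version numbers below are made-up examples, not real dependencies of any particular app):

```spec
# Depend on the real component the software uses, not the distro release.
# "libgnome" and the versions here are hypothetical illustrations.
Requires:      libgnome >= 2.0
BuildRequires: libgnome-devel >= 2.0

# And name the package after the real dependency, not the build host:
#   foo-gnome2.0-1.2-3.i686.rpm   rather than   foo-rh8.0-1.2-3.i686.rpm
```

The same binary then installs anywhere the named components exist, which is exactly the point Sean is making.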
Packages depend on other packages, not the text in /etc/*-release. Fix your packages and move on to complain about real problems. ;-)
Maybe if it were a true fork--a clean break started by a completely different set of people--resetting the version counter would be reasonable. But as it is, Fedora Core is still going to be under the auspices of Red Hat, Inc.--still subject to the good taste that Red Hat, Inc. has generally shown. For all intents and purposes, it's going to look like what "Red Hat Linux 10" would have looked like had Fedora not happened.
Wil
On Wed, 01 Oct 2003 13:23:09 -0400, Sean Middleditch wrote:
I'm still seriously confused what people are smoking. The only thing changing is the name of the distro, which isn't important for any package dependencies at all in any way except the distro-related packages like fedora-release, which third party packagers certainly aren't going to be providing as add-ons.
*no* normal packages are changing versions from 9->1.0, and dependencies/versioning have absolutely no reason to care about the release version. it's user information only, not stuff software should care about - if the software does, then the software is broken.
[...]
Packages depend on other packages, not the text in /etc/*-release. Fix your packages and move on to complain about real problems. ;-)
Well, so far it has been easier to simply analyze /etc/redhat-release and add conditional code to spec files. Conditional code that toggles platform-specific patches, build requirements, source code configuration parameters, and things like that.
What you're asking for is that a packager puts much more work into analyzing build requirements directly in order to determine the build platform. Effectively that would duplicate the work of a software's "configure" script. There must be a cheap way to determine the build platform. /etc/redhat-release or /etc/fedora-release is one and hopefully will stay one.
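For concreteness, the "cheap" check Michael describes usually looks something like this in a spec file (the macro name and patch are hypothetical illustrations, not a standard):

```spec
# Derive a flag from /etc/redhat-release at build time.
# %is_fc is a made-up macro name; error handling is simplified.
%define is_fc %(grep -qi fedora /etc/redhat-release 2>/dev/null && echo 1 || echo 0)

%if %{is_fc}
Patch10: foo-fedora-paths.patch
%endif
```

One line of shell instead of per-component probing, which is why packagers reach for it.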
-- Michael, who doesn't reply to top posts and complete quotes anymore.
On Wed, 2003-10-01 at 15:16, Michael Schwendt wrote:
Packages depend on other packages, not the text in /etc/*-release. Fix your packages and move on to complain about real problems. ;-)
Well, so far it has been easier to simply analyze /etc/redhat-release and add conditional code to spec files. Conditional code that toggles platform-specific patches, build requirements, source code configuration parameters, and things like that.
What you're asking for is that a packager puts much more work into analyzing build requirements directly in order to determine the build platform. Effectively that would duplicate the work of a software's "configure" script. There must be a cheap way to determine the build platform. /etc/redhat-release or /etc/fedora-release is one and hopefully will stay one.
No, you need to actually do the work of the configure script (perhaps you should actually use the app's configure script) - detect the individual bits in the system. Otherwise your package is broken. It's a lie to say that a package is an "RH8.0 package," because that's where you built it, when it runs on RH9 and FC1. Which is a very common case, I might note.
If you take the cheap way out, then you have a poorly packaged application. You also completely defeat the entire purpose of using a package manager like RPM; you might as well just go to using Slackware .tgz files.
If it *is* so hard to do the proper feature detection at RPM build-time, then perhaps some fixes to RPM are in order. If any fixing is going to be done, let's fix the *real* problem, not fix the workaround to the problem. ;-)
Perhaps a comprehensive set of common system configuration checking macros, similar to how autoconf is built, would work well? So packagers can very very easily and cleanly detect which versions of major packages or subsystems are installed. If something like that existed and was well built, nobody would've even bothered using a broken hack like distro release checking to begin with. ^,^
Since Red Hat will need to do this throughout all their spec files anyway, what will be the ways that they check for dependencies and such in spec files in the future?
Sendmail is a common package that has a lot of dependency code in it with lines like:
%if %{errata} <= 70
%define sendmailcf usr/lib/sendmail-cf
%else
%define sendmailcf usr/share/sendmail-cf
%endif

%if %{errata} >= 72
%define smshell /sbin/nologin
%else
%define smshell /dev/null
%endif
There is also code in one RPM, I thought, that compiles differently for RHEL versus RHL and, I am guessing, would soon need a check for FC.
Since RH will need to maintain RPMS for RHEL, RHL < 9 until Dec 31, and RHL 9 until April, it might be a good idea to come up with a common syntax now that each of these RPMS could have at the top, allowing these sorts of defines to be done by the machine it is being built on.
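One shape such a common syntax could take (the %dist values below are guesses for illustration, not an agreed convention) is a single macro supplied by the build machine, with every spec branching on it in one consistent way:

```spec
# Hypothetical: the build system, or a site-wide rpm macros file,
# defines one tag identifying the target release, e.g.
#   %dist  fc1      (or rhl9, rhel3, ...)
# Each spec then tests only that one macro:
%if "%{?dist}" == "fc1"
%define smshell /sbin/nologin
%else
%define smshell /dev/null
%endif
```

That would at least concentrate the release-specific knowledge in one define instead of scattering %{errata}-style comparisons through every spec.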
On Wed, 01 Oct 2003 15:34:32 -0400, Sean Middleditch wrote:
No, you need to actually do the work of the configure script (perhaps you should actually use the app's configure script) - detect the individual bits in the system. Otherwise your package is broken.
What you describe is a maintenance nightmare. Assume an application wants aspell >= 0.50, but distribution B provides only aspell 0.30. A versioned build requirement on aspell >= 0.50 won't suffice, because the package won't build on B. But there is a configure switch to disable aspell support. So, what we can do is either examine aspell version somehow, e.g. with
$(rpm -q --qf "%{version}" aspell)
or check a file like redhat-release.
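The rpm-query variant of that check can be wired into a spec roughly like this (a sketch only: the macro names are invented, error handling is simplified, and %if's string comparison is lexical, which happens to work for these version strings but is not a general version compare):

```spec
# Query the installed aspell version at build time.
# Note the doubled %% needed inside %(...) shell expansion.
%define aspell_ver %(rpm -q --qf '%%{version}' aspell 2>/dev/null)

%if "%{aspell_ver}" >= "0.50"
%define with_aspell 1
%endif
```

This is cheaper than full configure-style probing, but it still only inspects one package, which is the trade-off under discussion.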
You're asking for completely generic rpms where the user, who has upgraded a component way beyond the version which was shipped with the original distribution, does not need to supply optional rpmbuild parameters (such as --define _with_aspell=1) for a src.rpm to build _with_ support for that optional component.
And what do you do when the user has upgraded aspell to a newer version that is incompatible? This is another unsupported case.
Or maybe you also want backwards compatibility? A src.rpm for a new distribution release which examines the installed compiler and applies patches when it finds an older compiler?
Another example, a bit closer to what you have in mind: Examining a build platform for decision on whether to apply patches or not. Fedora.us does that for a few packages with rpm -ql for openssl. It is checked whether openssl supports pkgconfig or not and in turn a patch is applied or not. Depending on how many things need to be checked, this can quickly result in nothing else than bloat and wasted effort. If a user has customized his installation a lot, there is no reason why he should not need to customize src.rpms at least a bit as well.
Not even addressing the feasibility of some tests. E.g. checking whether additional libraries must be available and linked. Do you want spec files to become as large as the average configure script? Such configure scripts are included. The build requirements in a spec file only make sure that the dependencies are pulled in roughly. The configure script performs more detailed checks on whether everything works.
It's a lie to say that a package is an "RH8.0 package," because that's where you built it, when it runs on RH9 and FC1. Which is a very common case, I might note.
Who cares? Who does the testing? A package is built for and tested on a set of platforms. Packages are built for specific distributions or a somewhat compatible family of distribution releases. If a different platform happens to meet the package's requirements, consider yourself lucky.
If you take the cheap way out, then you have a poorly packaged application. You also completely defeat the entire purpose of using a package manager like RPM; you might as well just go to using Slackware .tgz files.
This I don't understand at all, I'm afraid. Slackware pkgs don't know dependencies like RPM does. Part of creating a src.rpm is devoted to making sure the build is working on the target distribution.
If it *is* so hard to do the proper feature detection at RPM build-time, then perhaps some fixes to RPM are in order. If any fixing is going to be done, let's fix the *real* problem, not fix the workaround to the problem. ;-)
It is wasted effort to check build requirements beyond package names, package versions or distribution version. If spec files contained build platform detection code in addition to a tarball's configure script, who would maintain all that?
-- Michael, who doesn't reply to top posts and complete quotes anymore.
On Wed, 2003-10-01 at 17:46, Michael Schwendt wrote:
On Wed, 01 Oct 2003 15:34:32 -0400, Sean Middleditch wrote:
No, you need to actually do the work of the configure script (perhaps you should actually use the app's configure script) - detect the individual bits in the system. Otherwise your package is broken.
What you describe is a maintenance nightmare. Assume an application wants aspell >= 0.50, but distribution B provides only aspell 0.30. A versioned build requirement on aspell >= 0.50 won't suffice, because the package won't build on B. But there is a configure switch to disable aspell support. So, what we can do is either examine aspell version somehow, e.g. with
$(rpm -q --qf "%{version}" aspell)
or check a file like redhat-release.
(pre-note: if I sound harsh, it's not personal - this whole problem itself is just amazingly infuriating, and isn't a result of any one person. my apologies afore-hand for a semi-heated email.)
Which is broken. You are working around the problem instead of fixing it. Redhat-release doesn't tell you anything about the version of aspell installed. I quite often have tons of RPMs installed on a stock RH (or now Fedora) system; if I happen to upgrade aspell on my own, and your broken package is looking for my OS release instead of the actual aspell package... what then again is the point of actually having dependencies? Why not just make everything a "Distro-X.Y" package, and forget dependencies, since we know which software versions Distro-X.Y has?
And no, installing new packages doesn't make me an "advanced user." Or, at least it shouldn't - I guess I have to be thanks to packages being made using a broken packaging method. Users should just be able to grab any package online they want, and install it. A friend points them to MagicFishSoftware.com and the cool nifty Swordfish Spreadsheet app there, they should just be able to click on the "Download Linux Package" link, and watch it install. Oh, it needs the new aspell? Great, it's grabbed automatically for them, and *boom*, they've got an upgraded aspell, and redhat-release is useless in that regard.
Or, heck, the new Fedora policy states that security holes will often be fixed by providing a whole new version, not just backporting a fix. Is fedora-release now supposed to also list all the legitimate security patches it has so your packages can have -fc1, -fc1-openssl4, -fc1-db5, -fc1-openssl4-db5 variants and so on?
If the software can use aspell, but has a switch to disable it, the software is badly designed, because it creates an unfavorable situation. If aspell has different ABIs for different versions, but those versions can't be co-installed, then aspell is also broken. You, and other packagers, want to continue screwing over users with these insane/broken development and packaging policies, versus just fixing the problem once and being done with it; libraries with changing ABIs need to be co-installable, and apps shouldn't have wildly different feature sets that can't be changed/detected at runtime. If the app does depend on libraries made by uncaring developers that can't keep a stable ABI, then package the library with the app, and save the user the pain of dealing with it; sure, it bloats the package size a bit, but that's the price to pay for using poorly managed libraries.
If the app absolutely can't be packaged without breaking all these rules of sane design, then perhaps it shouldn't be packaged - if the only people who can install it easily are people who compile it, then it seems a bit of a waste of time to package it anyhow; I guess it's your time to waste building 10 versions of a package that should only ever need just 1.
If the app is compiled for an old aspell, but the user only has a newer, incompatible aspell, then the user should just install the old aspell. Some apps use the newer one, some the old - this is the wonder of versioned libraries, something we should be making use of, instead of banishing users to DLL^w.so hell.
Fedora is about new technology, not continuing in the same nightmarish unuserfriendly broken hack technology.
*please*, let's not do that to users, ok? :)
Sean Middleditch writes:
Redhat-release doesn't tell you anything about the version of aspell installed. ... if I happen to upgrade aspell on my own, and your broken package is looking for my OS release instead of the actual aspell package... what then again is the point of actually having dependencies? Why not just make everything a "Distro-X.Y" package, and forget dependencies, since we know which software versions Distro-X.Y has?
I agree that relying on redhat-release for anything seems rather dangerous. Not much would break if I removed the package altogether; a little piece in rc.sysinit would give an error message, that's all. Dependencies should accurately reflect what actually is needed, not the bundling of versions in a particular release.
If the software can use aspell, but has a switch to disable it, the software is badly designed, because it creates an unfavorable situation.
I want to disagree a bit there. There could be good reasons to make it possible to compile with and without a feature to support different environments. The software maybe uses aspell only for some marginal feature, and has a lot of uses without it.
In that case, it could make sense to have two different branches of packages, one with and one without aspell dependency. Then the package name should reflect the real difference, aspell or no aspell. If it does, I know that is the point, and I can make a decision if I want to upgrade aspell to a version from a more recent distribution, consider what other dependencies that would affect, or if I want to go for the version with a somewhat reduced functionality.
If the package branches instead point at two versions of RH/FC which just happen to be before and after aspell was upgraded, I won't know this. I won't know what the difference in functionality is, and I won't have the information for a logical decision.
On Thu, 2003-10-02 at 15:33, Göran Uddeborg wrote:
Sean Middleditch writes:
If the software can use aspell, but has a switch to disable it, the software is badly designed, because it creates an unfavorable situation.
I want to disagree a bit there. There could be good reasons to make it possible to compile with and without a feature to support different environments. The software maybe uses aspell only for some marginal feature, and has a lot of uses without it.
If it's a user-oriented feature, then it needs to be run-time configured, not compile-time configured. Users don't recompile apps, just admins and developers. (And those nuts using Gentoo ;-)
In the event of portability, i.e., the package is intended to work on platforms that can't use aspell... well, that's not an issue for RPM packages for Linux.
If the app can use aspell, and a user would expect that feature, or a user could feasibly want that feature, then it needs to be enabled, and the proper dependency made. If aspell can't have multiple incompatible versions simultaneously installed, package aspell with the app, or go help the upstream authors learn how to develop software properly. (Note: I'm not saying aspell is badly developed, as I don't know if it actually suffers from this problem; this discussion is hypothetical from my point of view.)
In that case, it could make sense to have two different branches of packages, one with and one without aspell dependency. Then the package name should reflect the real difference, aspell or no aspell. If it does, I know that is the point, and I can make a decision if I want to upgrade aspell to a version from a more recent distribution, consider what other dependencies that would affect, or if I want to go for the version with a somewhat reduced functionality.
This is still user hell. A user should never ever have to deal with this kind of non-sense. If a feature is optional, it needs to be run-time optional. Otherwise, we end up with a piece of software either having 20 different versions of the package for every set of possible options, or broken hacks like distro-release dependent packages. In situations like this, plugins are the way to go, or package add-ons, not two conflicting packages that are the exact same thing save one little piece.
Yes, there are probably broken poorly developed libraries and broken poorly developed apps that depend on those, and yes, there are people who probably want those packaged. I dare hope those are the exception and not the rule; otherwise, we're all pretty much screwed so far as usability goes. ~,^
If the package branches instead point at two versions of RH/FC which just happen to be before and after aspell was upgraded, I won't know this. I won't know what the difference in functionality is, and I won't have the information for a logical decision.
On Thu, Oct 02, 2003 at 04:17:40PM -0400, Sean Middleditch wrote:
If it's a user-oriented feature, then it needs to be run-time configured, not compile-time configured.
I'd like to follow up to say that this statement expresses very well the "Red Hat Way", a gestalt that we're still striving for in Fedora Core.
michaelkjohnson
"He that composes himself is wiser than he that composes a book." Linux Application Development -- Ben Franklin http://people.redhat.com/johnsonm/lad/
On Wed, 01 Oct 2003 22:24:48 -0400, Sean Middleditch wrote:
$(rpm -q --qf "%{version}" aspell)
or check a file like redhat-release.
(pre-note: if I sound harsh, it's not personal - this whole problem itself is just amazingly infuriating, and isn't a result of any one person. my apologies afore-hand for a semi-heated email.)
Which is broken. You are working around the problem instead of fixing it. Redhat-release doesn't tell you anything about the version of aspell installed. I quite often have tons of RPMs installed on a stock RH (or now Fedora) system; if I happen to upgrade aspell on my own, and your broken package is looking for my OS release instead of the actual aspell package... what then again is the point of actually having dependencies?
Predictable builds.
On the contrary, you aim at build sources which just pick up what's there, resulting in unpredictable, untested builds.
Why not just make everything a "Distro-X.Y" package, and forget dependencies, since we know which software versions Distro-X.Y has?
You would still need a list of build requirements, so missing dependencies can be installed in incomplete build environments. Not everyone installs everything.
And no, installing new packages doesn't make me an "advanced user." Or, at least it shouldn't - I guess I have to be thanks to packages being made using a broken packaging method. Users should just be able to grab any package online they want, and install it.
This seems to refer to prebuilt (binary) packages and is an entirely different matter.
Or, heck, the new Fedora policy states that security holes will often be fixed by providing a whole new version, not just backporting a fix. Is fedora-release now supposed to also list all the legitimate security patches it has so your can packages can have -fc1, -fc1-openssl4, -fc1-db5, -fc1-openssl4-db5 variants and so on?
No, fedora-release won't cover that. But build dependencies will need to deal with that. A security or bug-fix update, which is a new version instead of a backported fix, can require dependencies to be rebuilt and shipped as additional updates. Of course, all rebuilt packages will need to see new testing.
If the software can use aspell, but has a switch to disable it, the software is badly designed, because it creates an unfavorable situation.
Why? Assume aspell support is truly optional.
If aspell has different ABIs for different versions, but those versions can't be co-installed, then aspell is also broken.
Depends on how early in the development stage of aspell we are.
You, and other packagers,
Side-note: I don't consider myself a packager. ;)
want to continue screwing over users with these insane/broken development and packaging policies, versus just fixing the problem once and being done with; libraries with changing ABIs need to be co-installable,
Hardly anyone prevents co-installation like
libfoo 0.10 => package name "libfoo10", root directory /usr/lib/libfoo10
libfoo 0.20 => package name "libfoo20", root directory /usr/lib/libfoo20
when it is necessary and there are soname conflicts. It can be necessary when some dependencies stick to libfoo 0.10 while others require the newer libfoo 0.20.
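Sketched as spec fragments (names, versions, and paths are illustrative, and the specs are abbreviated to the parts that matter for parallel installation):

```spec
# libfoo10.spec -- packages the old branch under its own prefix
Name:    libfoo10
Version: 0.10
%files
/usr/lib/libfoo10/

# libfoo20.spec -- a separate package from its own spec, same idea:
#   Name: libfoo20, Version: 0.20, %files under /usr/lib/libfoo20/
```

Because the names and file trees never collide, dependents of each branch can coexist on one system.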
and apps shouldn't have wildly different feature sets that can't be changed/detected at runtime. If the app does depend on libraries made by uncaring developers that can't keep a stable ABI, then package the library with the app, and save the user the pain of dealing with it; sure, it bloats the package size a bit, but that's the price to pay for using poorly managed libraries.
It is clever to share components where components can be shared.
If the app is compiled for an old aspell, but the user only has a newer, incompatible aspell, then the user should just install the old aspell.
?? Automatic package dependencies enforce that.
Some apps use the newer one, some the old - this is the wonder of versioned libraries, something we should be making use of, instead of banishing users to DLL^w.so hell.
Same here. If a binary package requires libfoo.so.2, the user just needs a package which provides libfoo.so.2. It doesn't matter whether the package is called foolib or libfoo or foo. It must contain/provide libfoo.so.2 and neither libfoo.so.1 nor libfoo.so.3. What you seem to refer to are broken explicit package dependencies where the packager was overambitious and added lots of package requirements explicitly instead of letting RPM determine dependencies automatically. That's causing Linux newbies to fail.
-- Michael, who doesn't reply to top posts and complete quotes anymore.
On Thu, 2003-10-02 at 17:15, Michael Schwendt wrote:
On Wed, 01 Oct 2003 22:24:48 -0400, Sean Middleditch wrote:
$(rpm -q --qf "%{version}" aspell)
or check a file like redhat-release.
(pre-note: if I sound harsh, it's not personal - this whole problem itself is just amazingly infuriating, and isn't a result of any one person. my apologies afore-hand for a semi-heated email.)
Which is broken. You are working around the problem instead of fixing it. Redhat-release doesn't tell you anything about the version of aspell installed. I quite often have tons of RPMs installed on a stock RH (or now Fedora) system; if I happen to upgrade aspell on my own, and your broken package is looking for my OS release instead of the actual aspell package... what then again is the point of actually having dependencies?
Predictable builds.
On the contrary, you aim at build sources which just pick up what's there, resulting in unpredictable, untested builds.
Why not just make everything a "Distro-X.Y" package, and forget dependencies, since we know which software versions Distro-X.Y has?
You would still need a list of build requirements, so missing dependencies can be installed in incomplete build environments. Not everyone installs everything.
Right. But then you still can't detect errata packages and such when building based only on /etc/release.
And no, installing new packages doesn't make me an "advanced user." Or, at least it shouldn't - I guess I have to be, thanks to packages being made using a broken packaging method. Users should just be able to grab any package online they want and install it.
This seems to refer to prebuilt (binary) packages and is an entirely different matter.
This then might be a source of our disagreement - I couldn't care less about building packages, I only care about the actual users installing them. ;-)
Or, heck, the new Fedora policy states that security holes will often be fixed by providing a whole new version, not just backporting a fix. Is fedora-release now supposed to also list all the legitimate security patches it has, so your packages can have -fc1, -fc1-openssl4, -fc1-db5, -fc1-openssl4-db5 variants and so on?
No, fedora-release won't cover that. But build dependencies will need to deal with that. A security or bug-fix update, which is a new version instead of a backported fix, can require dependencies to be rebuilt and shipped as additional updates. Of course, all rebuilt packages will need to see new testing.
Which is silly. If a library/component is upgraded, it must either be binary compatible (and thus not need rebuilt applications) or be co-installable (in which case a backported fix for the old library would be mandatory for a secure OS, but that's life.)
If you really want to have to rebuild the system every time the user coughs, perhaps you should be using Gentoo instead of Fedora... ;-)
If the software can use aspell, but has a switch to disable it, the software is badly designed, because it creates an unfavorable situation.
Why? Assume aspell support is truly optional.
Because it's completely insane to have to reinstall a whole different version of the app to 'enable' a feature. It has a lot less to do with what's technically possible or code-correct, and a lot more to do with it being a completely stupid way to design anything. If it's optional, make it a plugin or add-on, not something that's conditionally built into or out of the app.
It's like the difference between a wholly monolithic kernel and a kernel with modules. *yes*, there are cases where the monolithic kernel is a better choice (embedded systems, say), but those aren't end-user systems like Fedora - they're built once and don't change. Apps aren't any different - if the app takes a monolithic approach, its design is broken.
If aspell has different ABIs for different versions, but those versions can't be co-installed, then aspell is also broken.
Depends on how early in the development stage of aspell we are.
This is true. But then, as a library author, if you know you have tons of apps depending on your ABI, it's rather good form to go through the extra effort to keep it stable. You can add newer APIs without removing old ones, flush the deprecated stuff out every several releases, and move the soversion up while you're at it. You can have a pre-1.0 app with a soversion of 23.4.6; the soversion has *nothing* to do with release version or stability, only with interface version.
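As a filesystem sketch of that convention (paths and names hypothetical), the usual symlink chain keeps the soversion entirely separate from the project's release version:

```shell
dir=$(mktemp -d)
# The real file carries the full soversion; 23.4.6 says nothing about the
# project's release version, which could still be 0.9.
touch "$dir/libfoo.so.23.4.6"
ln -s libfoo.so.23.4.6 "$dir/libfoo.so.23"  # soname link, used by the runtime loader
ln -s libfoo.so.23 "$dir/libfoo.so"         # dev link, used when linking new apps
readlink "$dir/libfoo.so.23"
rm -r "$dir"
```

Bumping the interface number only moves the libfoo.so.23 link; older apps keep loading the old soname if it is still installed.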
You, and other packagers,
Side-note: I don't consider myself a packager. ;)
Heheh, righty. On the same hand, I don't really consider myself a normal user, so we're evenly mismatched in this debate. ^,^
want to continue screwing over users with these insane/broken development and packaging policies, versus just fixing the problem once and being done with it; libraries with changing ABIs need to be co-installable,
Hardly anyone prevents co-installation like
libfoo 0.10 => package name "libfoo10", root directory /usr/lib/libfoo10
libfoo 0.20 => package name "libfoo20", root directory /usr/lib/libfoo20
when it is necessary and there are soname conflicts. It can be necessary when some dependencies stick to libfoo 0.10 while others require the newer libfoo 0.20.
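A filesystem sketch of that scheme (names and paths hypothetical, mirroring the libfoo10/libfoo20 example above): each incompatible generation gets its own package name and directory, so both install without file conflicts.

```shell
root=$(mktemp -d)   # stand-in for the real filesystem root
mkdir -p "$root/usr/lib/libfoo10" "$root/usr/lib/libfoo20"
touch "$root/usr/lib/libfoo10/libfoo.so.1"   # shipped by package "libfoo10"
touch "$root/usr/lib/libfoo20/libfoo.so.2"   # shipped by package "libfoo20"
ls "$root/usr/lib"   # both generations coexist side by side
rm -r "$root"
```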
Right... which is the solution to your "version of package depends on OS release" argument. If we both recognize the solution, what are we debating? ;-)
and apps shouldn't have wildly different feature sets that can't be changed/detected at runtime. If the app does depend on libraries made by uncaring developers that can't keep a stable ABI, then package the library with the app, and save the user the pain of dealing with it; sure, it bloats the package size a bit, but that's the price to pay for using poorly managed libraries.
It is clever to share components where components can be shared.
Where they *can*, yes. When they can't be shared, because sharing destabilizes the system, then it's clever not to break things by sharing. ;-)
If the app is compiled for an old aspell, but the user only has a newer, incompatible aspell, then the user should just install the old aspell.
?? Automatic package dependencies enforce that.
Right.
Some apps use the newer one, some the old - this is the wonder of versioned libraries, something we should be making use of, instead of banishing users to DLL^w.so hell.
Same here. If a binary package requires libfoo.so.2, the user just needs a package which provides libfoo.so.2. It doesn't matter whether the package is called foolib or libfoo or foo. It must contain/provide libfoo.so.2 and neither libfoo.so.1 nor libfoo.so.3. What you seem to refer to are broken explicit package dependencies where the packager was overambitious and added lots of package requirements explicitly instead of letting RPM determine dependencies automatically. That's causing Linux newbies to fail.
Right, I was going off a bit on a tangent from the original discussion. Sorry. ^^; (Once I start, I never realize when I need to stop again.)
Michael, who doesn't reply to top posts and complete quotes anymore.
-- fedora-devel-list mailing list fedora-devel-list@redhat.com http://www.redhat.com/mailman/listinfo/fedora-devel-list
On Thu, 02 Oct 2003 21:49:23 -0400, Sean Middleditch wrote:
Predictable builds.
On the contrary, you aim at build sources which just pick up what's there, resulting in unpredictable, untested builds.
Why not just make everything a "Distro-X.Y" package, and forget dependencies, since we know which software versions Distro-X.Y has?
You would still need a list of build requirements, so missing dependencies can be installed in incomplete build environments. Not everyone installs everything.
Right. But then you still can't detect errata packages and such when building based only on /etc/release.
Not necessary, because errata packages are backported fixes, not feature upgrades.
This seems to refer to prebuilt (binary) packages and is an entirely different matter.
This then might be a source of our disagreement - I couldn't care less about building packages, I only care about the actual users installing them. ;-)
Doesn't matter. It's the packager's (distributor's) responsibility to provide a working set of dependencies. Those people who send newbies to rpmseek.com, where the newbie chooses a prebuilt package for an arbitrary distribution and fails to locate missing dependencies, are nuts and not helpful. It is much more helpful to point newbies to repositories and tools like Synaptic.
Or, heck, the new Fedora policy states that security holes will often be fixed by providing a whole new version, not just backporting a fix. Is fedora-release now supposed to also list all the legitimate security patches it has, so your packages can have -fc1, -fc1-openssl4, -fc1-db5, -fc1-openssl4-db5 variants and so on?
No, fedora-release won't cover that. But build dependencies will need to deal with that. A security or bug-fix update, which is a new version instead of a backported fix, can require dependencies to be rebuilt and shipped as additional updates. Of course, all rebuilt packages will need to see new testing.
Which is silly. If a library/component is upgraded, it must either be binary compatible (and thus not need rebuilt applications) or be co-installable (in which case a backported fix for the old library would be mandatory for a secure OS, but that's life.)
That contradicts itself. Why would you leave around the old library version if there is security hole in it? No need to keep the old one and co-install the new one. The security hole must be fixed, no matter how. Whether fixes are backported or applied as version upgrades, is a matter of feasibility.
If you really want to have to rebuild the system every time the user coughs, perhaps you should be using Gentoo instead of Fedora... ;-)
You're exaggerating. Smiley noted.
If the software can use aspell, but has a switch to disable it, the software is badly designed, because it creates an unfavorable situation.
Why? Assume aspell support is truly optional.
Because it's completely insane to have to reinstall a whole different version of the app to 'enable' a feature.
Depends on whether this "aspell support" is a completely separate plug-in or whether it is tied into the application. In either case, it likely comes together with the source code for the application and is built from within the same src.rpm. If the target platform doesn't suffice, you can either disable optional features or not build the app at all.
It has a lot less to do with what's technically possible or code-correct, and a lot more to do with it being a completely stupid way to design anything. If it's optional, make it a plugin or add-on, not something that's conditionally built into or out of the app.
See above. You can only build that feature if build requirements are met.
[...] But then, as a library author, if you know you have tons of apps depending on your ABI, it's rather good form to go through the extra effort to keep it stable. You can add newer APIs without removing old ones, flush the deprecated stuff out every several releases, and move the soversion up while you're at it.
This doesn't change a thing. The case we're discussing is a new app which uses the new API and doesn't build with the old API. It requires the new app to be backwards compatible, too, i.e. to use the old deprecated API. And at some point in time, the API user needs to switch to the current interface, because the old API is dropped.
You can have a pre-1.0 app with a soversion of 23.4.6; the soversion has *nothing* to do with release version or stability, only with interface version.
Tell that the typical rpm-dependency-hell troll. ;o)
-- Michael, who doesn't reply to top posts and complete quotes anymore.
On Fri, 2003-10-03 at 09:41, Michael Schwendt wrote:
On Thu, 02 Oct 2003 21:49:23 -0400, Sean Middleditch wrote:
Right. But then you still can't detect errata packages and such when building based only on /etc/release.
Not necessary, because errata packages are backported fixes, not feature upgrades.
Not in Fedora. It's been mentioned several times on the list, and on the site iirc, that security updates will simply be new packaged versions of the upstream packages. If the only security update for a package involves a bunch of other changes as well, then you get those as well.
This seems to refer to prebuilt (binary) packages and is an entirely different matter.
This then might be a source of our disagreement - I couldn't care less about building packages, I only care about the actual users installing them. ;-)
Doesn't matter. It's the packager's (distributor's) responsibility to provide a working set of dependencies. Those people who send newbies to rpmseek.com, where the newbie chooses a prebuilt package for an arbitrary distribution and fails to locate missing dependencies, are nuts and not helpful. It is much more helpful to point newbies to repositories and tools like Synaptic.
The only reason it's "nuts" is because packages are made with no thought for real users. It shouldn't have to be nuts. It should just up and work.
It's definitely possible to do, if people bother putting forth the effort instead of taking the cheap way out in packaging. Take a look at the autopackage system; it's rather amazing, and is sort of working proof that it's possible. ~,^
Which is silly. If a library/component is upgraded, it must either be binary compatible (and thus not need rebuilt applications) or be co-installable (in which case a backported fix for the old library would be mandatory for a secure OS, but that's life.)
That contradicts itself. Why would you leave around the old library version if there is security hole in it? No need to keep the old one and co-install the new one. The security hole must be fixed, no matter how. Whether fixes are backported or applied as version upgrades, is a matter of feasibility.
My point is, if you need to get a new library, it should *never* break anything. It's either compatible, in which case it's a waste of time to rebuild packages for it, or it's not compatible, in which case you'd need to modify the code for the app and release a new version, which isn't a packaging issue.
If you really want to have to rebuild the system every time the user coughs, perhaps you should be using Gentoo instead of Fedora... ;-)
You're exaggerating. Smiley noted.
Yes, but only slightly :P
If the software can use aspell, but has a switch to disable it, the software is badly designed, because it creates an unfavorable situation.
Why? Assume aspell support is truly optional.
Because it's completely insane to have to reinstall a whole different version of the app to 'enable' a feature.
Depends on whether this "aspell support" is a completely separate plug-in or whether it is tied into the application. In either case, it likely comes together with the source code for the application and is built from within the same src.rpm. If the target platform doesn't suffice, you can either disable optional features or not build the app at all.
This is honestly something more in the realm of application design than packaging, I'll admit - packagers shouldn't encourage sloppy design, tho. ;-)
A single source set *can* be made into several RPMs. I'm not sure how well RPM supports file over-rides, but some clever packaging could avoid the problem for users even when the app was designed by someone who enjoys watching users cry. ;-)
It has a lot less to do with what's technically possible or code-correct, and a lot more to do with it being a completely stupid way to design anything. If it's optional, make it a plugin or add-on, not something that's conditionally built into or out of the app.
See above. You can only build that feature if build requirements are met.
As you've noted, users can always install dependencies. So can developers. Developers are probably more capable of it. So your package depends on something only in RH8+, but you want it to work on RH7 too? Just depend on the devel files for the newer version. People building on RH7 can grab it and build in bliss, and you as a packager don't need to waste time making silly hacks. ^,^
[...] But then, as a library author, if you know you have tons of apps depending on your ABI, it's rather good form to go thru the extra effort to keep it stable. You can add newer API's without removing old, and flush the deprecated stuff out every several releases, and move the soversion up while you're at it.
This doesn't change a thing. The case we're discussing is a new app which uses the new API and doesn't build with the old API. It requires the new app to be backwards compatible, too, i.e. to use the old deprecated API. And at some point in time, the API user needs to switch to the current interface, because the old API is dropped.
Eh? This is not at all how this works... take a look at the GNOME libraries. The GNOME 2.4 libs support the GNOME 2.0, 2.2, and 2.4 interfaces. An app made for GNOME 2.0 runs just fine on GNOME 2.4, although an app written for GNOME 2.4 may make use of the newer versions of the APIs, and thus not run on 2.0 or 2.2. Older APIs are deprecated in newer releases, meaning they are still available but generate warnings when used. When GNOME 3.0 comes out, all the deprecated APIs will be dropped - older apps can simply depend on the GNOME 2.x libs, which will be co-installable with GNOME 3 (just like GNOME 1.x apps can depend on the GNOME 1.x libs, which are co-installable with GNOME 2).
This is all just sane development practice. There is nothing stopping people from doing this other than ignorance and sloth. ;-)
You can have a pre-1.0 app with a soversion of 23.4.6; the soversion has *nothing* to do with release version of stability, only with interface version.
Tell that the typical rpm-dependency-hell troll. ;o)
RPM dependency hell stems from two things. The first is dependencies that never work out, because poorly made packages make it so some apps need one version and other apps need another, resulting in a situation where it's impossible to install both sets of apps without rebuilding everything.
The second is the lack of an easy way to get dependencies; RH/FC has up2date, with up2date servers, apt servers, and yum servers. Those solve the second problem, assuming they don't make the first problem worse - which can only be solved by good packaging policies.
This is something where I'd *love* to see an "official" RPM packaging policy similar to Debian's. Debian isn't perfect by a long shot, but even with their 3,000+ packagers and 10,000+ packages they manage to avoid a lot of these problems, and that includes not only Debian but all the highly modified Debian-based offspring as well. It's not because dpkg is magic, but just because the packages are (usually) very well built.
Michael, who doesn't reply to top posts and complete quotes anymore.
On Fri, 03 Oct 2003 10:11:46 -0400, Sean Middleditch wrote:
Right. But then you still can't detect errata packages and such when building based only on /etc/release.
Not necessary, because errata packages are backported fixes, not feature upgrades.
Not in Fedora. It's been mentioned several times on the list, and on the site iirc, that security updates will simply be new packaged versions of the upstream packages.
Deadlock. I've commented on that earlier. Do you realize that when you fix security bugs with the release of new software versions it can make it necessary to rebuild dependencies? Whether or not that will be required depends on how compatible the new software version is (with regard to API and ABI, but also wrt details such as userspace file locations).
It's the packager's (distributor's) responsibility to provide a working set of dependencies. Those people who send newbies to rpmseek.com, where the newbie chooses a prebuilt package for an arbitrary distribution and fails to locate missing dependencies, are nuts and not helpful. It is much more helpful to point newbies to repositories and tools like Synaptic.
The only reason it's "nuts" is because packages are made with no thought for real users. It shouldn't have to be nuts. It should just up and work.
Impossible mission. An individual package will never know *where* to get dependencies. It only knows *what* is missing. However, if the user makes use of package tools, the problem is solved. Back to your example of a package requiring libfoo.so.23.4.6 (or similar): it will be entertaining to make the newbie understand that the file is provided by package libfoo-1.0 and not libfoo-0.9 or libfoo-2.0.
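A toy sketch of the lookup a depsolver performs against repository metadata (all names and versions hypothetical): the tool, not the newbie, maps the missing soname to the one package that provides it.

```shell
# Hypothetical repo metadata: "soname  providing-package"
metadata="libfoo.so.1 libfoo-0.9
libfoo.so.2 libfoo-1.0
libfoo.so.3 libfoo-2.0"
missing="libfoo.so.2"
# Resolve the soname to a package name via the metadata table:
pkg=$(printf '%s\n' "$metadata" | awk -v so="$missing" '$1 == so { print $2 }')
echo "install $pkg"
```

Real tools (apt, yum, up2date) do exactly this soname-to-package resolution, just against downloaded repository indexes instead of an inline table.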
My point is, if you need to get a new library, it should *never* break anything. It's either compatible, in which case it's a waste of time to rebuild packages for it, or it's not compatible, in which case you'd need to modify the code for the app and release a new version, which isn't a packaging issue.
Returning to backporting security-fixes in libraries, eh? Another loop in this discussion. It all boils down to feasibility of doing version upgrades of core components.
A single source set *can* be made into several RPMs.
Happens often.
I'm not sure how well RPM supports file over-rides, but some clever packaging could avoid the problem for users even when the app was designed by someone who enjoys watching users cry. ;-)
Still, the app doesn't build if its build requirements aren't satisfied. Another loop in the discussion.
As you've noted, users can always install dependencies. So can developers. Developers are probably more capable of it. So your package depends on something only in RH8+, but you want it to work on RH7 too? Just depend on the devel files for the newer version. People building on RH7 can grab it and build in bliss, and you as a packager don't need to waste time making silly hacks. ^,^
With current tools and implementation of explicit versioned build requirements, that would cause the application to not build at all on an unmodified RH7 unless the src.rpm examines in detail what's there and what's not. Maybe we should go back to the early mails and restart there. ;)
This doesn't change a thing. The case we're discussing is a new app which uses the new API and doesn't build with the old API. It requires the new app to be backwards compatible, too, i.e. to use the old deprecated API. And at some point in time, the API user needs to switch to the current interface, because the old API is dropped.
Eh? This is not at all how this works... take a look at the GNOME libraries. The GNOME 2.4 libs support the GNOME 2.0, 2.2, and 2.4 interfaces. An app made for GNOME 2.0 runs just fine on GNOME 2.4, although an app written for GNOME 2.4 may make use of the newer versions of the APIs, and thus not run on 2.0 or 2.2.
Only the latter case is relevant to this discussion. Assume parts of the app require GNOME 2.4, but the main part of the app doesn't depend on GNOME 2.x at all. The app will build and run everywhere except for the part that requires GNOME 2.4. Now make a package for all known target platforms, which builds automatically and enables/disables optional features depending on what build requirements are available. The fun starts when this requires more than simple checks of installed package versions (as outlined earlier).
RPM dependency hell stems from two things. The first is dependencies that never work out, because poorly made packages make it so some apps need one version and other apps need another, resulting in a situation where it's impossible to install both sets of apps without rebuilding everything.
Bring it to the point: _poorly made packages_, not just due to versioned dependencies, but also due to custom and distribution specific package names.
This is something where I'd *love* to see an "official" RPM packaging policy similar to Debian's. Debian isn't perfect by a long shot, but even with their 3,000+ packagers and 10,000+ packages they manage to avoid a lot of these problems, and that includes not only Debian but all the highly modified Debian-based offspring as well. It's not because dpkg is magic, but just because the packages are (usually) very well built.
This would be good, but would require the major distributors to collaborate and adhere to such packaging policies, also at the level of what compiler version to use and when to package what software versions. Sounds like impossible mission again. ;) There are more major distributions which use RPM, and which are not derived from each other, than minor Debian derivatives.
--
On Fri, 2003-10-03 at 11:36, Michael Schwendt wrote:
On Fri, 03 Oct 2003 10:11:46 -0400, Sean Middleditch wrote:
Right. But then you still can't detect errata packages and such when building based only on /etc/release.
Not necessary, because errata packages are backported fixes, not feature upgrades.
Not in Fedora. It's been mentioned several times on the list, and on the site iirc, that security updates will simply be new packaged versions of the upstream packages.
Deadlock. I've commented on that earlier. Do you realize that when you fix security bugs with the release of new software versions it can make it necessary to rebuild dependencies? Whether or not that will be required depends on how compatible the new software version is (with regard to API and ABI, but also wrt details such as userspace file locations).
I agree backported fixes would be absolutely great, but if they're not available...
It really falls on upstream to handle this in any case. If you release a buggy library 0.1 and later have a 0.3 that fixes a security hole you found, and people are still using 0.1, it's upstream's responsibility to release 0.1b (or whatever). In which case, packaging goes on as before, with two versions of the library packaged, and apps never need to worry about it.
In the event the library is incompatible and no backported fixes are available, the app needs a *code change*, which is no longer a packaging problem, it's an upstream problem with the application itself. A new release of the app that compiles with the new lib would of course result in a whole new package and new version, which would then be available as an upgrade.
It's the packager's (distributor's) responsibility to provide a working set of dependencies. Those people, who send newbies to rpmseek.com where the newbies chooses a prebuilt package for an arbitrary distribution and fails to locate missing dependencies, are nuts and are not helpful. It is much more helpful to point newbies to repositories and tools like Synaptic.
The only reason it's "nuts" is because packages are made with no thought for real users. It shouldn't have to be nuts. It should just up and work.
Impossible mission. An individual package will never know *where* to get dependencies. It only knows *what* is missing. However, if the user makes use of package tools, the problem is solved. Back to your example of a package requiring libfoo.so.23.4.6 (or similar): it will be entertaining to make the newbie understand that the file is provided by package libfoo-1.0 and not libfoo-0.9 or libfoo-2.0.
Nonsense, package networks already exist today. I almost never manually download individual dependencies anymore. Let the package network tool handle the dependency fetching.
It's also useful to use user-visible names and not cryptic release names. If the packager had sane errors like "You need Version 2.0 of LibFoo", the user could easily find on the lib's website "Version 1.0 packages, Version 2.0 packages, and Version 3.0 packages." Certainly not always the case, but that doesn't mean the packager should throw up his hands and give up hope. ;-)
My point is, if you need to get a new library, it should *never* break anything. It's either compatible, in which case it's a waste of time to rebuild packages for it, or it's not compatible, in which case you'd need to modify the code for the app and release a new version, which isn't a packaging issue.
Returning to backporting security-fixes in libraries, eh? Another loop in this discussion. It all boils down to feasibility of doing version upgrades of core components.
I've gone over this already; if I haven't made my point here yet, then I give up.
A single source set *can* be made into several RPMs.
Happens often.
I'm not sure how well RPM supports file over-rides, but some clever packaging could avoid the problem for users even when the app was designed by someone who enjoys watching users cry. ;-)
Still, the app doesn't build if its build requirements aren't satisfied. Another loop in the discussion.
So satisfy it. The developer can fricken download the dependency. Problem solved. If the person building the package can't download the dependency, what in the nine hells is he doing building packages? ;-)
As you've noted, users can always install dependencies. So can developers. Developers are probably more capable of it. So your package depends on something only in RH8+, but you want it to work on RH7 too? Just depend on the devel files for the newer version. People building on RH7 can grab it and build in bliss, and you as a packager don't need to waste time making silly hacks. ^,^
With current tools and implementation of explicit versioned build requirements, that would cause the application to not build at all on an unmodified RH7 unless the src.rpm examines in detail what's there and what's not. Maybe we should go back to the early mails and restart there. ;)
The problem needs to be fixed in RPM then, and Fedora shouldn't let you continue hacking around the problem using /etc/release. Like I said the first time around, *fix the problem*, don't fix the *hack* for the problem.
This doesn't change a thing. The case we're discussing is a new app which uses the new API and doesn't build with the old API. It requires the new app to be backwards compatible, too, i.e. to use the old deprecated API. And at some point in time, the API user needs to switch to the current interface, because the old API is dropped.
Eh? This is not at all how this works... take a look at the GNOME libraries. The GNOME 2.4 libs support the GNOME 2.0, 2.2, and 2.4 interfaces. An app made for GNOME 2.0 runs just fine on GNOME 2.4, although an app written for GNOME 2.4 may make use of the newer versions of the APIs, and thus not run on 2.0 or 2.2.
Only the latter case is relevant to this discussion. Assume parts of the app require GNOME 2.4, but the main part of the app doesn't depend on GNOME 2.x at all. The app will build and run everywhere except for the part that requires GNOME 2.4. Now make a package for all known target platforms, which builds automatically and enables/disables optional features depending on what build requirements are available. The fun starts when this requires more than simple checks of installed package versions (as outlined earlier).
Again, optional features should *never* be compile-time. That's horrible, broken design, and any app doing that has no business being on a user's desktop to begin with. Compile the app with all features turned on, and the users can just install the dependencies. Otherwise, the app is broken from a user perspective. You really need to start looking at this from a user's point of view, and not an easily-packaged point of view. ;-)
RPM dependency hell stems from two things. The first is dependencies that never work out, because poorly made packages make it so some apps need one version and other apps need another, resulting in a situation where it's impossible to install both sets of apps without rebuilding everything.
Bring it to the point: _poorly made packages_, not just due to versioned dependencies, but also due to custom and distribution specific package names.
Which is also horrible, now that you mention it - see my comment below about policy. ^,^
This is something where I'd *love* to see an "official" RPM packaging policy similar to Debian's. Debian isn't perfect by a long shot, but even with their 3,000+ packagers and 10,000+ packages they manage to avoid a lot of these problems, and that includes not only Debian but all the highly modified Debian-based offspring as well. It's not because dpkg is magic, but just because the packages are (usually) very well built.
This would be good, but would require the major distributors to collaborate and adhere to such packaging policies also at the level of what compiler version to use and when to package what software versions. Sounds like impossible mission again. ;) There are more major distributions which use RPM, and which are not derived from each other, than minor Debian derivatives.
Don't just assume that's impossible. It hasn't been tried to my knowledge, which is the only reason anyone can claim so far that it's failed. ;-)
Other things can be glossed over to a degree. Compiler version really (mostly) only affected C++ stuff, and that's history now. Yes, it sucks, and yes, older distro releases are rather screwed, but then Fedora isn't about legacy cruft, it's about moving forward. If we worry about how screwed up the compiler ABI was, versus how it's now standards-conformant, we might as well also start worrying about the old libc5 stuff too - better make sure all our software compiles/runs with it! Someone might be using Red Hat 4! ;-)
(one might argue that if the ABI of core libs like libstdc++, or glibc, changes, it's no longer the same base platform. we're not just talking about 'linux', we're talking about the platform made by 'linux/processor/corelibs', such that linux on x86 w/ an old lib ABI is as different from linux on x86 w/ the new ABI as it is from Solaris on SPARC - there's no avoiding the need for different packages then, no matter how much it sucks. ~,^)
Le ven 03/10/2003 à 18:04, Sean Middleditch a écrit :
On Fri, 2003-10-03 at 11:36, Michael Schwendt wrote:
On Fri, 03 Oct 2003 10:11:46 -0400, Sean Middleditch wrote:
This is something where I'd *love* to see an "official" RPM packaging policy similar to Debian's. Debian isn't perfect by a long shot, but even with their 3,000+ packagers and 10,000+ packages they manage to avoid a lot of these problems, and that's including not only Debian, but all the highly modified Debian-based offsprings as well. It's not because dpkg is magic, but just because the packages are (usually) very well built.
On Fri, 03 Oct 2003 12:04:35 -0400, Sean Middleditch wrote:
It's the packager's (distributor's) responsibility to provide a working set of dependencies. Those people, who send newbies to rpmseek.com where the newbie chooses a prebuilt package for an arbitrary distribution and fails to locate missing dependencies, are nuts and are not helpful. It is much more helpful to point newbies to repositories and tools like Synaptic.
The only reason it's "nuts" is because packages are made with no thought for real users. It shouldn't have to be nuts. It should just up and work.
Impossible mission. An individual package will never know *where* to get dependencies. It only knows *what* is missing. However, if the user makes use of package tools, the problem is solved. Back to your example, of a package requiring libfoo.so.23.4.6 (or similar), it will be entertaining to make the newbie understand that the file is provided by package libfoo-1.0 and not libfoo-0.9 or libfoo-2.0.
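(On the *what* vs. *where* split: the package only declares *what* it needs, and a depsolving tool's repository metadata supplies the *where*. Commands are illustrative; "libfoo" is made up:)

```shell
# Which package provides a given soname, according to rpm:
rpm -q --whatprovides 'libfoo.so.23'

# A depsolver maps a missing dependency to the right package and version
# using its repository metadata, e.g. with apt-rpm:
apt-get install libfoo
```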
Nonsense, package networks already exist today. I almost never manually download individual dependencies anymore. Let the package network tool handle the dependency fetching.
Apparently, you need longer quotes in order to not kill the context and misunderstand comments. Notice the comment on _package tools_ compared with picking individual packages at rpmseek.com/rpmfind.net. You're creating endless loops in this discussion.
I'm not sure how well RPM supports file over-rides, but some clever packaging could avoid the problem for users even when the app was designed by someone who enjoys watching users cry. ;-)
Still, the app doesn't build if its build requirements aren't satisfied. Another loop in the discussion.
So satisfy it. The developer can fricken download the dependency. Problem solved. If the person building the package can't download the dependency, what in the nine hells is he doing building packages? ;-)
*sigh* He isn't. It is _you_ who wants the src.rpms to be much more flexible than what we have presently. My arbitrary example -- and even a very basic one -- with aspell and one src.rpm for multiple platforms still holds true. The "real user" (adapting your terminology) works with prebuilt binary packages and not src.rpms. Packagers aim at creating high-quality binary packages for "real users" while at the same time keeping the maintenance requirements low. Clean src.rpms are an added thing and allow the average user to rebuild stuff from source without big problems. Let the rare user with an exotic installation and configuration adjust the src.rpm if need be.
As you've noted, users can always install dependencies. So can developers. Developers are probably more capable of it. So your package depends on something only in RH8+, but you want it to work in RH7 too? Just depend on the devel files for the newer version. People building on RH7 can grab it and build on in bliss, and you as a packager don't need to waste time making silly hacks. ^,^
With current tools and implementation of explicit versioned build requirements, that would cause the application to not build at all on an unmodified RH7 unless the src.rpm examines in detail what's there and what's not. Maybe we should go back to the early mails and restart there. ;)
The problem needs to be fixed in RPM then, and not have Fedora let you continue hacking around the problem using /etc/release. Like I said the first time around, *fix the problem*, don't fix the *hack* for the problem.
I give up here. Sorry. Going on with this "discussion" is wasted time as long as you don't propose a solution. A sentence like "*fix the problem*, don't fix the *hack* for the problem" is written quickly. It wipes away everything which has been commented on earlier.
Maybe we should pick a single topic (like automated detection of build requirements) and beat that one to death. But the current discussion touches too many topics at once and is fruitless. You won't get any packagers to prepare src.rpms for unknown target platforms by duplicating the work of "configure" scripts where a straight-forward detection of available packages and package versions wouldn't suffice. You would deal with things like detecting inter-library dependencies, compiler features/bugs, linker tests, and things like that.
Again, optional features should *never* be compile time. That's horrible, broken design, and any app doing that has no business being on a user's desktop to begin with. Compile the app with all features turned on, and the users can just install the dependencies. Otherwise, the app is broken from a user perspective. You really need to start looking at this from a user's point of view, and not an easily-packaged point of view. ;-)
Usually, this requires build dependencies to be available beforehand and fails where optional build dependencies are unavailable, and e.g. a plug-in cannot be built. I've seen Michael K. Johnson's comment on run-time configuration as opposed to compile-time configuration, but that refers to something else. In order to build an optional run-time configurable part of an app, you need to build it from source code at some point in time. This requires build dependencies to be available.
and the users can just install the dependencies. Otherwise, the app is broken from a user perspective. You really need to start looking at this from a user's point of view, and not an easily-packaged point of view. ;-)
You fail to understand the implications of build requirements.
["official" RPM packaging policy ]
Other things can be glossed over to a degree. Compiler version really (mostly) only affected C++ stuff, and that's history now.
No one still using GCC 2.95.x or GCC 2.96 or an early GCC 3 release and needs patches? What about glibc, Perl, Python? Also, do SuSE still use RPM 3.x? ;)
-- Michael, who doesn't reply to top posts and complete quotes anymore.
On Fri, 2003-10-03 at 13:52, Michael Schwendt wrote:
On Fri, 03 Oct 2003 12:04:35 -0400, Sean Middleditch wrote:
Nonsense, package networks already exist today. I almost never manually download individual dependencies anymore. Let the package network tool handle the dependency fetching.
Apparently, you need longer quotes in order to not kill the context and misunderstand comments. Notice the comment on _package tools_ compared with picking individual packages at rpmseek.com/rpmfind.net. You're creating endless loops in this discussion.
This happens, yes - my apologies. My memory tends not to go back further than about 30 seconds. ~,^ Psychologists would call it goldfish syndrome if they knew about me. ;-)
Reading back a little ways, I'm now rather confused what your whole paragraph was about - we have the tools/networks, so newbies don't have to worry about libfoo-1.0 vs libfoo-0.9 and so on.
I'm not sure how well RPM supports file over-rides, but some clever packaging could avoid the problem for users even when the app was designed by someone who enjoys watching users cry. ;-)
Still, the app doesn't build if its build requirements aren't satisfied. Another loop in the discussion.
So satisfy it. The developer can fricken download the dependency. Problem solved. If the person building the package can't download the dependency, what in the nine hells is he doing building packages? ;-)
*sigh* He isn't. It is _you_ who wants the src.rpms to be much more flexible than what we have presently. My arbitrary example -- and even a very basic one -- with aspell and one src.rpm for multiple platforms still holds true. The "real user" (adapting your terminology) works with prebuilt binary packages and not src.rpms. Packagers aim at creating high-quality binary packages for "real users" while at the same time keeping the maintenance requirements low. Clean src.rpms are an added thing and allow the average user to rebuild stuff from source without big problems. Let the rare user with an exotic installation and configuration adjust the src.rpm if need be.
No, your packaging method wants to screw over users, by making packages that are not easy or sane to use, as the features and dependencies present end up being based on silly things like where the packager built them, versus the place the user is installing them. The "low maintenance" of the packages results in "low flexibility" for the user, since they can no longer "just install" an app, but then become dependent on knowing things like which errata they have installed, which distro they have (most Windows users aren't even sure which of 4 versions of Windows they have, as a point of how bad it is to make a user know this stuff), etc.
Instead of fixing the problem, you're arguing about Fedora breaking the packaging habits you've been forced to develop to hack around the problem.
As you've noted, users can always install dependencies. So can developers. Developers are probably more capable of it. So your package depends on something only in RH8+, but you want it to work in RH7 too? Just depend on the devel files for the newer version. People building on RH7 can grab it and build on in bliss, and you as a packager don't need to waste time making silly hacks. ^,^
With current tools and implementation of explicit versioned build requirements, that would cause the application to not build at all on an unmodified RH7 unless the src.rpm examines in detail what's there and what's not. Maybe we should go back to the early mails and restart there. ;)
The problem needs to be fixed in RPM then, and not have Fedora let you continue hacking around the problem using etc/release. Like I said the first around, *fix the problem*, don't fix the *hack* for the problem.
I give up here. Sorry. Going on with this "discussion" is wasted time as long as you don't propose a solution. A sentence like "*fix the problem*, don't fix the *hack* for the problem" is written quickly. It wipes away everything which has been commented on earlier.
Solution I've mentioned before - the applications are simply built one way, the way that makes most sense for the user, and not on local build location dependencies. Let the package system deal with both runtime and buildtime dependencies; don't just randomly apply patches or remove features because the person building the package is too lazy to download the dependency. There's your fix, as I've said before. If it is too hard to do this, then this maybe needs to move to the RPM list, instead of just saying "oh well" and continuing to screw over the users.
I've packaged for Debian before, but never an RPM distro - if it really is so much harder to do with spec files what is dead simple in a dpkg, then RPM needs work ("the problem"), versus ignoring it and using /etc/release to avoid use of real dependencies ("the hack").
Maybe we should pick a single topic (like automated detection of build requirements) and beat that one to death. But the current discussion touches too many topics at once and is fruitless. You won't get any
This I agree on. Sorry for my rambling nature. ^^;
packagers to prepare src.rpms for unknown target platforms by duplicating the work of "configure" scripts where a straight-forward detection of available packages and package versions wouldn't suffice. You would deal with things like detecting inter-library dependencies, compiler features/bugs, linker tests, and things like that.
Correct; which again needs to be brought up on the RPM list, not complaining about it on the fedora list. ~,^
Again, optional features should *never* be compile time. That's horrible, broken design, and any app doing that has no business being on a user's desktop to begin with. Compile the app with all features turned on,
Usually, this requires build dependencies to be available beforehand and fails where optional build dependencies are unavailable, and e.g. a plug-in cannot be built. I've seen Michael K. Johnson's comment on run-time configuration as opposed to compile-time configuration, but that refers to something else. In order to build an optional run-time
For a user-oriented app, enabling or disabling a feature is configuration. Different terms, exact same end-result tho.
configurable part of an app, you need to build it from source code at some point in time. This requires build dependencies to be available.
Which anyone building the RPMs is free to install if they want to build it, of course.
and the users can just install the dependencies. Otherwise, the app is broken from a user perspective. You really need to start looking at this from a user's point of view, and not an easily-packaged point of view. ;-)
You fail to understand the implications of build requirements.
Not sure what you mean here; a package has requirements, you install them before building. If you don't, it doesn't build. Just like any other piece of software. What "implications" am I missing? (seriously - I don't enjoy being clueless if I can help it ;-)
["official" RPM packaging policy ]
Other things can be glossed over to a degree. Compiler version really (mostly) only affected C++ stuff, and that's history now.
No one still using GCC 2.95.x or GCC 2.96 or an early GCC 3 release and needs patches? What about glibc, Perl, Python? Also, do SuSE still use RPM 3.x? ;)
Legacy cruft isn't really a part of Fedora's goals, so I don't see Fedora bending over backwards to support it. ;-) And, since there really isn't any kind of standard yet, we can gloss over legacy since it never claimed to be a part of the standard. ^,^
I'd say its perfectly acceptable to state that packages only work on "RPM Policy 1.0 compliant operating systems"; packages for legacy systems, which will fade out soon enough (speaking in computer time anyways) will still be a pain, but what is one to do? Moving forward sucks sometimes, but it's sure as hell better than never progressing. :P
Perl/Python are co-installable with different versions, and thus are a different issue. Distros based on 2.96 were inherently broken anyhow, and 2.95 is history.
In any event, this is off on a tangent from the original discussion. ;-) I'll have to bring this up somewhere more relevant. ^,^
On Fri, 03 Oct 2003 14:28:42 -0400, Sean Middleditch wrote:
Reading back a little ways, I'm now rather confused what your whole paragraph was about - we have the tools/networks, so newbies don't have to worry about libfoo-1.0 vs libfoo-0.9 and so on.
Let's see. Quoting you:
Users should just be able to grab any package online they want, and install it.
Do you see rpm-dep-hell complaints from apt/yum/up2date users? I don't. But rpmfind.net/rpmseek.com users complain regularly. This is the context of the reply to your comment.
No, your packaging method wants to screw over users, by making packages that are not easy or sane to use, as the features and dependencies present end up being based on silly things like where the packager built them, versus the place the user is installing them.
Far from it. The binary packages are easy to use, while the src.rpms may need a few adjustments (sort of a --define value) before they adapt to altered build environments.
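As an example of the sort of --define adjustment meant here (the macro name and package are purely illustrative):

```shell
# Rebuild the src.rpm with one knob flipped for the local build environment:
rpmbuild --rebuild --define '_with_gnome24 1' foo-1.0-1.src.rpm
```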
The "low maintenance" of the packages results in "low flexibility" for the user, since they can no longer "just install" an app, but then become dependent on knowing things like which errata they have installed, which distro they have (most Windows users aren't even sure which of 4 versions of Windows they have, as a point of how bad it is to make a user know this stuff), etc.
You're exaggerating again, a lot.
Instead of fixing the problem, you're arguing about Fedora breaking the packaging habits you've been forced to develop to hack around the problem.
Huh? Are you sure you don't confuse me with someone else?
I give up here. Sorry. Going on with this "discussion" is wasted time as long as you don't propose a solution. A sentence like "*fix the problem*, don't fix the *hack* for the problem" is written quickly. It wipes away everything which has been commented on earlier.
Solution I've mentioned before - the applications are simply built one way, the way that makes most sense for the user, and not on local build location dependencies. Let the package system deal with both runtime and buildtime dependencies; don't just randomly apply patches or remove features because the person building the package is too lazy to download the dependency. There's your fix, as I've said before.
Insufficient since the packager needs to adapt to the software, not vice versa. But as I've said, it is beyond my time to try to rephrase again and again in the hope that you'll understand my point of view.
and the users can just install the dependencies. Otherwise, the app is broken from a user perspective. You really need to start looking at this from a users' point of view, and not a easily-packaged point of view. ;-)
You fail to understand the implications of build requirements.
Not sure what you mean here; a package has requirements, you install them before building. If you don't, it doesn't build. Just like any other piece of software. What "implications" am I missing? (seriously
- I don't enjoy being clueless if I can help it ;-)
_Optional_ features have _optional_ build dependencies. You can't depend on stuff that _is simply not available_. You can't install it because it's not available anywhere unless someone provides it in the form of packages again. Now make the same unmodified src.rpm compile on multiple platforms, where a different set of build requirements is available and possibly a different set of patches may need to be applied. This is the scenario of my initial comments.
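The usual way a single src.rpm adapts to a build environment where an optional dependency may or may not exist is to probe at spec-parse time - a sketch, with a hypothetical baz-devel package:

```spec
# Probe the build host: is the optional dependency installed?
%define have_baz %(rpm -q baz-devel >/dev/null 2>&1 && echo 1 || echo 0)

%if %{have_baz}
BuildRequires: baz-devel
%endif

%build
%if %{have_baz}
%configure --enable-baz-plugin
%else
%configure --disable-baz-plugin
%endif
```

Note that the result then depends on what happened to be installed on the build host - precisely the non-determinism being objected to in this thread.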
Perl/Python are co-installable with different versions, and thus are a different issue.
Oh, great, a second Perl installation. As if Python/Python2 wouldn't be enough already.
Distros based on 2.96 were inherently broken anyhow,
http://www.redhat.com/advice/speaks_gcc.html
On Fri, 2003-10-03 at 16:11, Michael Schwendt wrote:
On Fri, 03 Oct 2003 14:28:42 -0400, Sean Middleditch wrote:
Reading back a little ways, I'm now rather confused what your whole paragraph was about - we have the tools/networks, so newbies don't have to worry about libfoo-1.0 vs libfoo-0.9 and so on.
Let's see. Quoting you:
Users should just be able to grab any package online they want, and install it.
Do you see rpm-dep-hell complaints from apt/yum/up2date users? I don't. But rpmfind.net/rpmseek.com users complain regularly. This is the context of the reply to your comment.
Ah, you mean tools that don't grab dependencies on install. Those are a pain, aren't they? Whether you do "apt-get install foo" or "apt-get install ./foo.i386.rpm", either way depends should be grabbed and the package installed. Should be, anyways, they probably don't, do they? (that 30 second memory is kicking in again ;-)
No, your packaging method wants to screw over users, by making packages that are not easy or sane to use, as the features and dependencies present end up being based on silly things like where the packager built them, versus the place the user is installing them.
Far from it. The binary packages are easy to use while the src.rpms may need a few adjustments (sort of --define value) before they adapt to altered build environments.
I have a nifty idea. ^,^ Can you give me an example package where build depends are different per platform, but the resultant package is more or less the exact same (exact same feature set, exact same runtime dependencies)? This hypothetical discussion is starting to lose focus on real issues versus imaginary ones. ;-)
The "low maintenance" of the packages results in "low flexibility" for the user, since they can no longer "just install" an app, but then become dependent on knowing things like which errata they have installed, which distro they have (most Windows users aren't even sure which of 4 versions of Windows they have, as a point of how bad it is to make a user know this stuff), etc.
You're exaggerating again, a lot.
You specifically said that packages are only intended to work on the platform they are built for, and working on anything else is just dumb luck. That's no fun.
Instead of fixing the problem, you're arguing about Fedora breaking the packaging habits you've been forced to develop to hack around the problem.
Huh? You you sure you don't confuse me with anyone else?
I don't think so - you're arguing for having Fedora avoid a perfectly legitimate change of the release version solely for the sake of package dependencies, yes?
I give up here. Sorry. Going on with this "discussion" is wasted time as long as you don't propose a solution. A sentence like "*fix the problem*, don't fix the *hack* for the problem" is written quickly. It wipes away everything which has been commented on earlier.
Solution I've mentioned before - the applications are simply built one way, the way that makes most sense for the user, and not on local build location dependencies. Let the package system deal with both runtime and buildtime dependencies, don't just randmly apply patches or remove features because the person building the package is too lazy to download the dependency. There's you fix, as I've said before.
Insufficient since the packager needs to adapt to the software, not vice versa. But as I've said, it is beyond my time to try to rephrase again and again in the hope that you'll understand my point of view.
Yes, I'm feeling the same way. Aren't opinions fun? You can never win. ~,^
and the users can just install the dependencies. Otherwise, the app is broken from a user perspective. You really need to start looking at this from a users' point of view, and not a easily-packaged point of view. ;-)
You fail to understand the implications of build requirements.
Not sure what you mean here; a package has requirements, you install them before building. If you don't, it doesn't build. Just like any other piece of software. What "implications" am I missing? (seriously
- I don't enjoy being clueless if I can help it ;-)
_Optional_ features have _optional_ build dependencies. You can't depend on stuff that _is simply not available_. You can't install it because it's not available anywhere unless someone provides it in the form of packages again. Now make the same unmodified src.rpm compile on
And what is the problem with getting the packages for the build dependency?
multiple platforms, where a different set of build requirements is available and possibly a different set of patches may need to be applied. This is the scenario of my initial comments.
Which if true is a serious problem, as I see it. Different flavours of Linux shouldn't need hugely different compilation options. Grab the stuff the package needs, build it, test it, done. If something else is needed based on something so silly as RH 8 vs RH 9 vs FC 1, something is wrong somewhere.
Perl/Python are co-installable with different versions, and thus are a different issue.
Oh, great, a second Perl installation. As if Python/Python2 wouldn't be enough already.
If that's what it takes to make things work, then that's what it takes. I didn't say it was perfect, just that it solves the problem that users shouldn't ever have to rebuild the software, and users shouldn't have to run around figuring out what their system is to find the right package and deal with that mess. In a truly ideal world, Perl/Python/etc. wouldn't keep breaking compatibility so often. ~,^ Since that's *not* reality, the only solution left for sane packages (from a user's point of view again) is to let any necessary versions be installed so the user's apps just work and the user doesn't even have to think about OS versions or dependencies.
Distros based on 2.96 were inherently broken anyhow,
As a developer who had to, and even recently still had to, explain to users why software didn't compile/work on RH7 (even tho it worked fine with both gcc 2.95 and gcc 3.2), that explanation has and still does fall very short. Even now, it's a pain, because it still kills intelligent packaging efforts for people who need to support those OS releases. Thank Red Hat. ;-)
Le ven 03/10/2003 à 22:39, Sean Middleditch a écrit :
On Fri, 2003-10-03 at 16:11, Michael Schwendt wrote:
Perl/Python are co-installable with different versions, and thus are a different issue.
Oh, great, a second Perl installation. As if Python/Python2 wouldn't be enough already.
If that's what it takes to make things work, then that's what it takes. I didn't say it was perfect, just that it solves the problem that users shouldn't ever have to rebuild the software, and users shouldn't have to run around figuring out what their system is to find the right package and deal with that mess. In a truly ideal world, Perl/Python/etc. wouldn't keep breaking compatibility so often. ~,^ Since that's *not* reality, the only solution left for sane packages (from a user's point of view again) is to let any necessary versions be installed so the user's apps just work and the user doesn't even have to think about OS versions or dependencies.
Don't make me laugh. The user cares about duplicate stuff too. Before we built a serious infrastructure that enabled us to modularise stuff, someone would complain every other week that we shipped java 1.3 jars with our tomcat rpm (and those jars were necessary to run it with a 1.3 jvm, and didn't hurt when using a 1.4 jvm - but for a 1.4 user they were redundant stuff and we got complaints).
Show me a repository with big fat packages that include all deps to be standalone and I'll show you a repository no one wants to use. Users may not all know the zen of packaging but it will only take a few long downloads or stuffed disks to enlighten them.
Cheers,
On Fri, 2003-10-03 at 17:43, Nicolas Mailhot wrote:
Le ven 03/10/2003 à 22:39, Sean Middleditch a écrit :
On Fri, 2003-10-03 at 16:11, Michael Schwendt wrote:
Perl/Python are co-installable with different versions, and thus are a different issue.
Oh, great, a second Perl installation. As if Python/Python2 wouldn't be enough already.
If that's what it takes to make things work, then that's what it takes. I didn't say it was perfect, just that it solves the problem that users shouldn't ever have to rebuild the software, and users shouldn't have to run around figuring out what their system is to find the right package and deal with that mess. In a truly ideal world, Perl/Python/etc. wouldn't keep breaking compatibility so often. ~,^ Since that's *not* reality, the only solution left for sane packages (from a user's point of view again) is to let any necessary versions be installed so the user's apps just work and the user doesn't even have to think about OS versions or dependencies.
Don't make me laugh. The user cares about duplicate stuff too. Before we built a serious infrastructure that enabled us to modularise stuff, someone would complain every other week that we shipped java 1.3 jars with our tomcat rpm (and those jars were necessary to run it with a 1.3 jvm, and didn't hurt when using a 1.4 jvm - but for a 1.4 user they were redundant stuff and we got complaints).
Are you talking about users, or sysadmins/hackers? I'd doubt a user would even know what a jar file is, or their installed version of Java. Certainly, every user I've dealt with recently (including a handful of friends and a number of coworkers) would have no clue; they can barely remember their OS is "Microsoft Windows" and not "Compaq Explorer." ;-) (and no, that isn't an implication users are stupid, merely that they often don't know much about computers. i can't keep the names of various parts of a car engine straight, but that's only because i'm not really familiar with it, and I don't really care in the least, so long as it moves. just like many users don't care how the computer runs, just that it works. and yes, that was a real example, not my usual satirical exaggeration ;-)
Show me a repository with big fat packages that include all deps to be standalone and I'll show you a repository no one wants to use. Users may not all know the zen of packaging but it will only take a few long downloads or stuffed disks to enlighten them.
Embedding all dependencies isn't at all needed. Just the ones people can't develop and/or package correctly. If things were developed using sane release and maintenance practices, you'd never have need to ship a dependency with an application. It's only when the dependency is released/maintained in the usual inexperienced 13-year-old style that you need to do that. ~,^
Cheers,
Le sam 04/10/2003 à 05:23, Sean Middleditch a écrit :
On Fri, 2003-10-03 at 17:43, Nicolas Mailhot wrote:
Le ven 03/10/2003 à 22:39, Sean Middleditch a écrit :
On Fri, 2003-10-03 at 16:11, Michael Schwendt wrote:
Perl/Python are co-installable with different versions, and thus are a different issue.
Oh, great, a second Perl installation. As if Python/Python2 wouldn't be enough already.
If that's what it takes to make things work, then that's what it takes. I didn't say it was perfect, just that it solves the problem that users shouldn't ever have to rebuild software, and users shouldn't have to run around figuring out what their system is to find the right package and deal with that mess. In a truly ideal world, Perl/Python/etc. wouldn't keep breaking compatibility so often. ~,^ Since that's *not* reality, the only solution left for sane packages (from a user's point of view again) is to let any necessary versions be installed, so the user's apps just work and the user doesn't even have to think about OS versions or dependencies.
Don't make me laugh. The user cares about duplicate stuff too. Before we built a serious infrastructure that enabled us to modularise stuff, someone would complain every other week that we shipped java 1.3 jars with our tomcat rpm (and those jars were necessary to run it with a 1.3 jvm, and didn't hurt when using a 1.4 jvm. But for a 1.4 user they were redundant stuff and we got complaints).
Are you talking about users, or sysadmins/hackers? I'd doubt a user would even know what a jar file is, or their installed version of Java.
Well, java is a bit special. Most upstream projects do not care about packaging at all (you know, big ugly system-dependent mess, not like their WORA nirvana) so people are used to struggling to install stuff. When they're fed up they find a nice packaging project like jpackage and are delighted; however, they can see if you've done something less optimal than what they did manually before, and will complain (loudly) in that case.
But even if the proportion of clueful users were lower for a mainstream project like Fedora, I strongly object to the idea that bad packaging should be allowed to ease some users' pain. If you drive out all the enlightened users, the contributor pool will dry up and Red Hat will be left alone supporting Fedora - not something they're ready for, I think.
Certainly, every user I've dealt with recently (including a handful of friends and a number of coworkers) would have no clue; they can barely remember their OS is "Microsoft Windows" and not "Compaq Explorer." ;-) (and no, that isn't an implication that users are stupid, merely that they often don't know much about computers. I can't keep the names of the various parts of a car engine straight, but that's only because I'm not really familiar with them, and I don't really care in the least, so long as it moves. Just like many users don't care how the computer runs, just that it works. And yes, that was a real example, not my usual satirical exaggeration ;-)
Sure. But do you see any of them installing Fedora by themselves? If there is someone clueful enough somewhere to install Linux, he will understand some packaging issues, so your example is not relevant (and he won't have to understand everything to point out at least a few mistakes).
Show me a repository with big fat packages that include all deps to be standalone and I'll show you a repository no one wants to use. Users may not all know the zen of packaging but it will only take a few long downloads or stuffed disks to enlighten them.
Embedding all dependencies isn't needed at all. Just the ones people can't develop and/or package correctly.
I like the "just" bit.
That was already the case for libgal for a long time. And it is a pita for normal users - package managers like apt do not like duplicate stuff so people have to largely handle this manually.
Don't open the gates if you're not prepared to handle the flood that *will* follow.
If things were developed using sane release and maintenance practices, you'd never need to ship a dependency with an application. It's only when the dependency is released/maintained in the usual inexperienced 13-year-old style that you need to do that. ~,^
I see you've never tried to package a large pool of inter-dependent projects. In 80%+ of the cases the problem lies upstream. If the packager accepts the deps upstream wants to force on him, the mess will only grow. Most of the upstream projects I work with would jump at the possibility to embed all the deps they need, because that matches their vision of their app as the center of the system.
Someone must say stop at one point. It might be a larger project (Gnome), a packaging project (Fedora) or the end-user (because in the end duplicating stuff is a maintenance burden and you're only exchanging upstream-level work with packager or user work by embedding stuff).
You're right that most users do not want to bother with package deps. You're wrong, however, in thinking they'll accept the mess a massive embedding policy would foist on them.
I'm perfectly happy with un-packageable projects (ie projects that do not address packager needs and can not work with the same deps as the rest of the system) being kicked out of Fedora. I'm also perfectly happy with old releases not getting the same kind of care as new ones. They have to die at some point - you can not support every single release ad vitam aeternam. Nobody here will knowingly make upgrades harder for older releases. Sacrificing the system's sanity to the golden cow of eternal upgradeability, however, will only produce a Windows-like mess where everything "sort-of" works for everyone. You have to do some spring cleanups, even in real life.
Cheers,
On Sat, 2003-10-04 at 04:55, Nicolas Mailhot wrote:
Le sam 04/10/2003 à 05:23, Sean Middleditch a écrit :
Are you talking about users, or sysadmins/hackers? I'd doubt a user would even know what a jar file is, or their installed version of Java.
Well, java is a bit special. Most upstream projects do not care about packaging at all (you know, big ugly system-dependent mess, not like their WORA nirvana) so people are used to struggling to install stuff. When they're fed up they find a nice packaging project like jpackage and are delighted; however, they can see if you've done something less optimal than what they did manually before, and will complain (loudly) in that case.
The package could have been split up nicely, with dependencies on either a 1.4 jvm or your 1.3 jar file addons, resulting in a small download for java 1.4 users and continued ease of use for java 1.3 users, no? (assuming you did something along these lines - I haven't used your packages)
But even if the proportion of clueful users were lower for a mainstream project like Fedora, I strongly object to the idea that bad packaging should be allowed to ease some users' pain. If you drive out all the enlightened users, the contributor pool will dry up and Red Hat will be left alone supporting Fedora - not something they're ready for, I think.
I'm not arguing for "bad packaging", I'm arguing for *correct* packaging. One package should always work; its dependencies just need to be correct to ensure this. Putting everything needed into one package is not the solution I've asked for at all, if you read this thread, with the sole exception of broken dependencies that can't be packaged any other way.
Certainly, every user I've dealt with recently (including a handful of friends and a number of coworkers) would have no clue; they can barely remember their OS is "Microsoft Windows" and not "Compaq Explorer." ;-) (and no, that isn't an implication that users are stupid, merely that they often don't know much about computers. I can't keep the names of the various parts of a car engine straight, but that's only because I'm not really familiar with them, and I don't really care in the least, so long as it moves. Just like many users don't care how the computer runs, just that it works. And yes, that was a real example, not my usual satirical exaggeration ;-)
Sure. But do you see any of them installing Fedora by themselves? If there is someone clueful enough somewhere to install Linux, he will understand some packaging issues, so your example is not relevant (and he won't have to understand everything to point out at least a few mistakes).
This is complete nonsense. I've had two friends in just the last couple weeks who got RH9; one I installed for him (since I helped build his machine) and the other was installed by the friend after I gave him CDs. Neither of them understands jack about the packaging, and I've already had to make tons of excuses for the insanity of it (along with other Linux stupidities, which are another topic entirely). Getting calls at 10pm because they can't figure out how to get OpenRPG installed (or whatever) is irritating.
Nobody should *have* to explain it to them, it should just work the way they'd expect - which is click on the package, and watch it install (possibly grabbing dependencies, or providing a very sane/clear message if they cannot be found, which is another RPM problem...)
Show me a repository with big fat packages that include all deps to be standalone and I'll show you a repository no one wants to use. Users may not all know the zen of packaging but it will only take a few long downloads or stuffed disks to enlighten them.
Embedding all dependencies isn't needed at all. Just the ones people can't develop and/or package correctly.
I like the "just" bit.
That was already the case for libgal for a long time. And it is a pita for normal users - package managers like apt do not like duplicate stuff so people have to largely handle this manually.
Libgal wasn't really a "public" library, either. Apps depending on a library like that need to realize what that means.
Don't open the gates if you're not prepared to handle the flood that *will* follow.
If things were developed using sane release and maintenance practices, you'd never need to ship a dependency with an application. It's only when the dependency is released/maintained in the usual inexperienced 13-year-old style that you need to do that. ~,^
I see you've never tried to package a large pool of inter-dependent projects. In 80%+ of the cases the problem lies upstream. If the packager accepts the deps upstream wants to force on him, the mess will only grow. Most of the upstream projects I work with would jump at the possibility to embed all the deps they need, because that matches their vision of their app as the center of the system.
Does anyone bring up these problems with the upstream in these cases, or just work around it?
I might also note that this is most important for user apps, not so much backend/server stuff; my friend Dan (very bright, but never had a computer until last week) isn't going to be installing Apache or a telephony system. ;-) A lot of backend apps aren't ever *intended* to be packaged in the first place.
There will *always* be upstream sources from hell, but a solid packaging policy and avoidance of the cheap route out should let packagers deal with them in a sane manner. If not, then they will at least be the *exception*, and not the *rule*.
I never claimed things will be 100% perfect, just that the majority of packages should work sanely. ~,^
Someone must say stop at one point. It might be a larger project (Gnome), a packaging project (Fedora) or the end-user (because in the end duplicating stuff is a maintenance burden and you're only exchanging upstream-level work with packager or user work by embedding stuff).
You're right that most users do not want to bother with package deps. You're wrong, however, in thinking they'll accept the mess a massive embedding policy would foist on them.
Again, *only* when embedding is the *only* choice. I don't like embedding any more than you, but when it's either embed or force the user to upgrade the entire system for one app...
Your other mail indicates you seem not to be following that. Embedding isn't the answer, it's the hack needed when upstream breaks things for us. The answer isn't always the extreme; there *are* shades of gray. ;-) (just hopefully mostly towards the bright side)
I'd imagine in almost all cases the embedding can be gone without. If nothing else, putting libraries/packages like that in different locations would help. A policy would be needed so people could know how to do that tho, and not end up with a bunhc of random, incompatible methods. ;-)
I'm perfectly happy with un-packageable projects (ie projects that do not address packager needs and can not work with the same deps as the rest of the system) being kicked out of Fedora. I'm also perfectly happy with old releases not getting the same kind of care as new ones. They have to die at some point - you can not support every single release ad vitam aeternam. Nobody here will knowingly make upgrades harder for older
Right ^,^
releases. Sacrificing the system's sanity to the golden cow of eternal upgradeability, however, will only produce a Windows-like mess where everything "sort-of" works for everyone. You have to do some spring cleanups, even in real life.
This, of course, is the purpose of deprecation. We have GNOME2 and GNOME1 libs in RH9, but as soon as all apps move off of GNOME1/GTK1, those can be removed. Apps that, 5 years later, still rely on 5 year old incompatible libraries, probably need to be removed or replaced. Or at least patched to work on new systems, if they're somehow super-mandatory. (which would be an ugly situation)
You only need the legacy cruft installed if you are using a legacy app. Windows' mess is due to the fact that it can't grab legacy cruft on app install, so it always has to have it lying there. Fedora can always ship its core CD (CDs, if we continue with the "stuff too much crap in the base OS" philosophy) with only the cutting-edge stuff, and put legacy support packages on the other CDs or online for fetching w/ up2date.
(It would also be nice if the system could detect when a "support" package, like a library, is no longer used and can be removed - though that would need to be easily overridable for those of us installing things from source ;-)
Cheers,
Le sam 04/10/2003 à 18:23, Sean Middleditch a écrit :
On Sat, 2003-10-04 at 04:55, Nicolas Mailhot wrote:
Le sam 04/10/2003 à 05:23, Sean Middleditch a écrit :
Are you talking about users, or sysadmins/hackers? I'd doubt a user would even know what a jar file is, or their installed version of Java.
Well, java is a bit special. Most upstream projects do not care about packaging at all (you know, big ugly system-dependent mess, not like their WORA nirvana) so people are used to struggling to install stuff. When they're fed up they find a nice packaging project like jpackage and are delighted; however, they can see if you've done something less optimal than what they did manually before, and will complain (loudly) in that case.
The package could have been split up nicely, with dependencies on either a 1.4 jvm or your 1.3 jar file addons, resulting in a small download for java 1.4 users and continued ease of use for java 1.3 users, no? (assuming you did something along these lines - I haven't used your packages)
All external deps spun out of the package, the 1.4 jvm providing the same virtual deps as the standalone 1.3 bits, and run-time shell scripts that pick up whatever jars are present on the system. The tomcat rpm is now lean and mean, and the jars can be reused by whatever other app (jboss...) may need them.
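A run-time script like the one described - picking up whatever jars are present rather than hardcoding a JVM version - can be sketched roughly as below. This is only an illustration, not the actual jpackage scripts; the `/usr/share/java` location and the launcher shape are assumptions:

```shell
#!/bin/sh
# Hypothetical sketch of a launcher that builds the classpath from
# whichever jars happen to be installed on the system, so the same
# package works against a 1.3 jar set or a bundling 1.4 jvm.
CLASSPATH=""
for jar in /usr/share/java/*.jar; do
    # The glob stays a literal pattern when the directory is empty,
    # so test that each match is a real file before adding it.
    [ -f "$jar" ] && CLASSPATH="$CLASSPATH:$jar"
done
# Strip any leading separator before handing the path to the JVM.
CLASSPATH="${CLASSPATH#:}"
export CLASSPATH
echo "classpath: $CLASSPATH"
```

The point of the design is that the rpm itself carries no jars at all; whichever provider package is installed, the wrapper finds its files at run time.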
The only problem is if one installs a partial 1.4 system and a partial 1.3 system, and the sum of them provides all the virtual deps tomcat needs, but neither the 1.3 nor the 1.4 set of rpms is complete enough to support tomcat (ie there is no real way to tell rpm that A+B is ok and C+D is ok too; you have to require X and Y, have A & C provide X and B & D provide Y - which breaks on an A+D system). Thankfully only developers ever try that.
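The virtual-provides arrangement being described would look roughly like the spec-file fragments below. All package and capability names here are invented for illustration; they are not the real jpackage packages:

```
# Hypothetical spec fragments, one per package (not real packages).

# Package A (standalone 1.3 add-on jars):
Provides: foo-api

# Package C (a 1.4 jvm that bundles the same functionality):
Provides: foo-api

# Package B (more 1.3 jars) and package D (more 1.4 jvm bits):
Provides: bar-api

# tomcat then requires only the virtual capabilities:
Requires: foo-api
Requires: bar-api
```

As the message notes, rpm resolves each `Requires:` independently, so a mixed A+D install satisfies both capabilities even though neither jar set is actually complete - the limitation being complained about.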
But even if the proportion of clueful users were lower for a mainstream project like Fedora, I strongly object to the idea that bad packaging should be allowed to ease some users' pain. If you drive out all the enlightened users, the contributor pool will dry up and Red Hat will be left alone supporting Fedora - not something they're ready for, I think.
I'm not arguing for "bad packaging", I'm arguing for *correct* packaging. One package should always work; its dependencies just need to be correct to ensure this. Putting everything needed into one package is not the solution I've asked for at all, if you read this thread, with the sole exception of broken dependencies that can't be packaged any other way.
Then no one will argue with you here. I'm sure we'll always find a way not to package stuff in a single package ;)
Certainly, every user I've dealt with recently (including a handful of friends and a number of coworkers) would have no clue; they can barely remember their OS is "Microsoft Windows" and not "Compaq Explorer." ;-) (and no, that isn't an implication that users are stupid, merely that they often don't know much about computers. I can't keep the names of the various parts of a car engine straight, but that's only because I'm not really familiar with them, and I don't really care in the least, so long as it moves. Just like many users don't care how the computer runs, just that it works. And yes, that was a real example, not my usual satirical exaggeration ;-)
Sure. But do you see any of them installing Fedora by themselves? If there is someone clueful enough somewhere to install Linux, he will understand some packaging issues, so your example is not relevant (and he won't have to understand everything to point out at least a few mistakes).
This is complete nonsense. I've had two friends in just the last couple weeks who got RH9; one I installed for him (since I helped build his machine) and the other was installed by the friend after I gave him CDs. Neither of them understands jack about the packaging, and I've already had to make tons of excuses for the insanity of it (along with other Linux stupidities, which are another topic entirely). Getting calls at 10pm because they can't figure out how to get OpenRPG installed (or whatever) is irritating.
So would standalone apps have changed anything? No. They'd have called you because your integrated app used its own config files and didn't pick up the system settings they chose in the control panel.
What could have changed something is the app being properly packaged in Fedora (for example) and your friends being taught to use apt/yum/whatever.
Nobody should *have* to explain it to them, it should just work the way they'd expect - which is click on the package, and watch it install (possibly grabbing dependencies, or providing a very sane/clear message if they cannot be found, which is another RPM problem...)
ROTFL. The way they expect it is you doing the install for them. I get questions about MS Office every other week even though I'm the only person at work who *never* uses it, 'cos I have a Linux desktop (I get questions about Office from people who use it all day!). The problem is not following "what the user expects". The problem is having a working framework, so people won't assume they shouldn't bother learning it because it's as broken as the other ones and asking their resident guru is easier anyway.
[...]
I see you've never tried to package a large pool of inter-dependant projects. In 80%+ of the cases the problem lies upstream. If the packager accepts the deps upstream wants to force on him the mess will only grow. Most of the upstream projects I work with would jump to the possibility to embed all the deps they need because that matches their vision of their app as the center of the system.
Does anyone bring up these problems with the upstream in these cases, or just work around it?
Both. Strangely enough, users understand readily enough what they win in a nicely modular system, while upstream developers more often than not strongly object to someone even suggesting that their massive bundle of straight-from-cvs binaries is not proper packaging.
I might also note that this is most important for user apps, not so much backend/server stuff; my friend Dan (very bright, but never had a computer until last week) isn't going to be installing Apache or a telephony system. ;-) A lot of backend apps aren't ever *intended* to be packaged in the first place.
A sysadmin likes their stuff properly packaged like anyone else.
[...]
I'd imagine in almost all cases the embedding can be done without.
Then why bring it up now? When you find an app that you feel absolutely needs embedding, then it will be time to argue. Doing it now over a hypothetical case is pointless.
I'm sure someone on the list will always find a way to avoid embedding stuff.
releases. Sacrificing the system's sanity to the golden cow of eternal upgradeability, however, will only produce a Windows-like mess where everything "sort-of" works for everyone. You have to do some spring cleanups, even in real life.
This, of course, is the purpose of deprecation. We have GNOME2 and GNOME1 libs in RH9, but as soon as all apps move off of GNOME1/GTK1, those can be removed. Apps that, 5 years later, still rely on 5 year old incompatible libraries, probably need to be removed or replaced. Or at least patched to work on new systems, if they're somehow super-mandatory. (which would be an ugly situation)
Just freeze the repository at release time and have all new packages go into an unstable branch (or at least require them to go into unstable before hitting stable). If something didn't get packaged in unstable by the time it's time for another release - drop it.
The announced release frequency is high enough that people can wait for the next release for new stuff.
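The stable/unstable split proposed here could be expressed as two repository channels, with only the frozen one enabled by default. The file path, repo ids, and URLs below are invented for illustration; they are not real Fedora repositories:

```ini
# Hypothetical /etc/yum.repos.d/fedora-channels.repo sketch.

[fedora-stable]
name=Fedora stable (frozen at release time)
baseurl=http://example.org/fedora/stable/
enabled=1

[fedora-unstable]
name=Fedora unstable (new packages land here first)
baseurl=http://example.org/fedora/unstable/
enabled=0
```

Users who want new packages between releases would opt in to the unstable channel; everyone else keeps the frozen set.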
Regards,
On Sat, 2003-10-04 at 13:15, Nicolas Mailhot wrote:
Le sam 04/10/2003 à 18:23, Sean Middleditch a écrit :
This is complete nonsense. I've had two friends in just the last couple weeks who got RH9; one I installed for him (since I helped build his machine) and the other was installed by the friend after I gave him CDs. Neither of them understands jack about the packaging, and I've already had to make tons of excuses for the insanity of it (along with other Linux stupidities, which are another topic entirely). Getting calls at 10pm because they can't figure out how to get OpenRPG installed (or whatever) is irritating.
So would standalone apps have changed anything? No. They'd have called you because your integrated app used its own config files and didn't pick up the system settings they chose in the control panel.
No offense, but... where the hell are you getting this stuff from? I haven't said *anything* about system settings, and even if I did, "integrated" rather means it *would* use them.
What could have changed something is the app being properly packaged in Fedora (for example) and your friends being taught to use apt/yum/whatever.
I doubt Fedora is ever going to package all useful software, and users shouldn't need to learn to open up a command line and type cryptic names of software when they are on the software's website with a fricken link to it right there.
What should happen is they click on the software, *that* package is installed, and apt/yum is used for dependency resolution (without the user needing to type or see cryptic things like libfoo2-common-1:4.5.i386.rpm and so on.)
Nobody should *have* to explain it to them, it should just work the way they'd expect - which is click on the package, and watch it install (possibly grabbing dependencies, or providing a very sane/clear message if they cannot be found, which is another RPM problem...)
ROTFL. The way they expect it is you doing the install for them. I get questions about MS Office every other week even though I'm the only person at work who *never* uses it, 'cos I have a Linux desktop (I get questions about Office from people who use it all day!). The
I see no correlation between either of those statements... the user expects not to need to ask anyone about it, since it should just work the way they expect, not have cryptic/unobvious behaviour.
problem is not following "what the user expects". The problem is having a working framework, so people won't assume they shouldn't bother learning it because it's as broken as the other ones and asking their resident guru is easier anyway.
Right. A working framework is needed, not just a mass of packages that are recompiled every time someone sneezes so they keep working on the exact snapshot version of some random Linux OS.
Fix the system so it's not broken, and users won't have to learn much (why the hell should they need to care about apt/yum/rpm at all?) *and* it'll work like they expect.
[...]
I see you've never tried to package a large pool of inter-dependent projects. In 80%+ of the cases the problem lies upstream. If the packager accepts the deps upstream wants to force on him, the mess will only grow. Most of the upstream projects I work with would jump at the possibility to embed all the deps they need, because that matches their vision of their app as the center of the system.
Does anyone bring up these problems with the upstream in these cases, or just work around it?
Both. Strangely enough, users understand readily enough what they win in a nicely modular system, while upstream developers more often than not strongly object to someone even suggesting that their massive bundle of straight-from-cvs binaries is not proper packaging.
Totally lost by that last sentence, sorry.
I might also note that this is most important for user apps, not so much backend/server stuff; my friend Dan (very bright, but never had a computer until last week) isn't going to be installing Apache or a telephony system. ;-) A lot of backend apps aren't ever *intended* to be packaged in the first place.
A sysadmin likes their stuff properly packaged like anyone else.
Yes, but sysadmins are expected to understand the differences between apache, libapache-modssl, libapr, and evolution. Users, on the other hand, would know about Evolution, and probably wouldn't care at all about Apache.
[...]
I'd imagine in almost all cases the embedding can be done without.
Then why bring it up now? When you find an app that you feel absolutely needs embedding, then it will be time to argue. Doing it now over a hypothetical case is pointless.
I still have absolutely no clue what you are talking about. I haven't once argued that any specific dependencies must be embedded. I said that *if* a dependency is so broken it can't be sanely coinstalled, at all, then it needs to be embedded, versus the current practice of only letting the user have one version installed at a time (and thus forcing them to use either package set A or B). That's it. That's all I said, ever, about the embedding topic. You're going on and on about something nobody's even arguing against you about. *applause*
I'm sure someone on the list will always find a way to avoid embedding stuff.
releases. Sacrificing the system's sanity to the golden cow of eternal upgradeability, however, will only produce a Windows-like mess where everything "sort-of" works for everyone. You have to do some spring cleanups, even in real life.
This, of course, is the purpose of deprecation. We have GNOME2 and GNOME1 libs in RH9, but as soon as all apps move off of GNOME1/GTK1, those can be removed. Apps that, 5 years later, still rely on 5 year old incompatible libraries, probably need to be removed or replaced. Or at least patched to work on new systems, if they're somehow super-mandatory. (which would be an ugly situation)
Just freeze the repository at release time and have all new packages go into an unstable branch (or at least require them to go into unstable before hitting stable). If something didn't get packaged in unstable by the time it's time for another release - drop it.
The announced release frequency is high enough that people can wait for the next release for new stuff.
This is about as dumb as it can get. I have to upgrade an entire fricken OS for maybe one piece of software I want? *bzzt* Let's try to move forward, not backward, here.
Regards,
Le dim 05/10/2003 à 08:08, Sean Middleditch a écrit :
On Sat, 2003-10-04 at 13:15, Nicolas Mailhot wrote:
Le sam 04/10/2003 à 18:23, Sean Middleditch a écrit :
This is complete nonsense. I've had two friends in just the last couple weeks who got RH9; one I installed for him (since I helped build his machine) and the other was installed by the friend after I gave him CDs. Neither of them understands jack about the packaging, and I've already had to make tons of excuses for the insanity of it (along with other Linux stupidities, which are another topic entirely). Getting calls at 10pm because they can't figure out how to get OpenRPG installed (or whatever) is irritating.
So would standalone apps have changed anything? No. They'd have called you because your integrated app used its own config files and didn't pick up the system settings they chose in the control panel.
No offense, but... where the hell are you getting this stuff from? I haven't said *anything* about system settings, and even if I did, "integrated" rather means it *would* use them.
I'm taking this from the last integrated solutions we've got: Mozilla with its own xft stack, the JVMs we have to package, and so on.
The fact is, when you start defending against system library conflicts by shipping your own "tested working" private versions, you also ship your own versions of the configuration files they use, because even though you might be able to read the system versions of those files at first, you know there's a big risk that once the system updates its own backend, their format may change and your private libraries will no longer grok them.
In my experience, the kind of defensive packaging where all deps are bundled with the app always leads to configuration file duplication for this reason, with the direct result that the user has to do the same operation twice or more on the system, because the standalone components won't talk to each other or to the system.
What could have changed something is the app being properly packaged in Fedora (for example) and your friends being taught to use apt/yum/whatever.
I doubt Fedora is ever going to package all useful software,
They won't have to. Unlike old Red Hat, this is a community project now, and people can submit packages here that work with the same repository, instead of the semi-working stuff projects used to propose alongside their tar downloads.
It only needs to reach critical mass, so that enough stuff is packaged that users can ignore unpackaged stuff (or shame upstream projects into packaging their own mess).
At this point I doubt much useful software will be left packageless.
and users shouldn't need to learn to open up a command line
There are beautiful clicky frontends, you know.
and type cryptic names of software when they are on the software's website with a fricken link to it right there.
'Cos they didn't have to type cryptic names in their navigation bar to get their software, right? And isn't the fricken link system you find on sites like sf (to name it) more annoying than an apt system that does everything in the background?
The answer to your problem is something like the profile I've discussed with Havoc, not some sort of standalone monstrosity (which will inevitably be installed user-side and not be shared among the system's users, since unix rights are something else you won't want to teach your ex-MS users).
What should happen is they click on the software, *that* package is installed, and apt/yum is used for dependency resolution (without the user needing to type or see cryptic things like libfoo2-common-1:4.5.i386.rpm and so on.)
And here we agree at last. But take it any further and you'll see there's no need to have them download the core package at all either, since it can be apted like the rest.
[...]
problem is not following "what the user expects". The problem is having a working framework so people won't assume they should not bother learning it because it's as broken as the other ones and asking their resident guru is easier anyway.
Right. A working framework is needed, not just a mass of packages that are recompiled every time someone sneezes so they keep working on the exact snapshot version of some random Linux OS.
Here you introduce your own technical bias again. The user won't care if it's rebuilt every hour, as long as it works.
Fix the system so it's not broken, and users won't have to learn much (why the hell should they need to care about apt/yum/rpm at all?)
Why the hell should they need to care about your installshield-like setup ? It's not even as if it were working as well as apt/yum/rpm.
[...]
I see you've never tried to package a large pool of interdependent projects. In 80%+ of cases the problem lies upstream. If the packager accepts the deps upstream wants to force on him, the mess will only grow. Most of the upstream projects I work with would jump at the possibility to embed all the deps they need, because that matches their vision of their app as the center of the system.
Does anyone bring up these problems with the upstream in these cases, or just work around it?
Both. Strangely enough, users readily understand what they gain from a nicely modular system, while upstream developers more often than not strongly object to someone even suggesting their massive bundle of straight-from-CVS binaries is not proper packaging.
Totally lost by that last sentence, sorry.
I.e., most of the time upstreams are outraged that we do not want to use the binary bundle they've assembled from their own code and the binary dependencies they lovingly extracted from their own CVS. It works for them, why should it not work for us? Don't we trust the binaries they put into their CVS after applying who knows how many hastily hacked patches, changing the file names so they're sure no one will know where to download the dependencies' original sources? (Short answer: we don't.) Don't we know that if they use two-year-old versions of those very same binaries, that's because we should? (Not because they lacked the manpower to follow the projects they depend on.)
I might also note that this is most important for user apps, not so much backend/server stuff; my friend Dan (very bright, but never had a computer until last week) isn't going to be installing Apache or a telephony system. ;-) Half the time, backend apps aren't even *intended* to be packaged.
A sysadmin likes his stuff properly packaged like anyone else.
Yes, but sysadmins are expected to understand the differences between apache, libapache-modssl, libapr, and evolution. Users, on the other hand, would know about Evolution, and probably wouldn't care at all about Apache.
Sysadmins can be asked to put systems online at very short notice. They'd rather spend what little time they have tuning the system than following an ages-long manual procedure (which doesn't help much at crash time anyway).
[...]
I'd imagine in almost all cases the embedding can be done without.
Then why bring it up now? When you find an app that you feel absolutely needs embedding, then it will be time to argue. Doing it now over a hypothetical case is pointless.
I still have absolutely no clue what you are talking about. I haven't once at all argued any specific dependencies must be embedded. I said that *if* a dependency is so broken it can't be sanely coinstalled, at all, then it needs to be embedded, versus the current practice of only letting the user have one version installed at a time (and thus forcing them to use either package set A or B). That's it. That's all I said, ever, about the embedding topic. You're going on and on about something nobody's even arguing against you about. *applause*
But what you argue for is already current practice (see how the xft port of mozilla was done). And it should be limited, not extended because it has real costs for the end-user.
I'm sure someone on the list will always find a way to avoid embedding s
releases. Sacrificing the system's sanity to the golden calf of eternal upgradeability, however, will only produce a Windows-like mess where everything "sort-of" works for everyone. You have to do some spring cleaning, even in real life.
This, of course, is the purpose of deprecation. We have GNOME2 and GNOME1 libs in RH9, but as soon as all apps move off of GNOME1/GTK1, those can be removed. Apps that, 5 years later, still rely on 5-year-old incompatible libraries probably need to be removed or replaced. Or at least patched to work on new systems, if they're somehow super-mandatory. (Which would be an ugly situation.)
Just freeze the repository at release time and have all new packages go into an unstable branch (or at least require them to go into unstable before hitting stable). If something didn't get packaged in unstable by the time it's time for another release - drop it.
The announced release frequency is high enough that people can wait for the next release for new stuff.
This is about as dumb as it can get. I have to upgrade an entire fricken OS for maybe one piece of software I want? *bzzt* Let's try to move forward, not backward, here.
Really, I don't see why you oppose a clean upgrade every few months when your solution would result in massive code duplication. In my scenario at least the on-disk system size wouldn't grow exponentially over time.
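For what it's worth, the stable/unstable split proposed above could be sketched as a pair of repository definitions in yum-style configuration - repo names, paths, and URLs here are purely illustrative:

```
# Hypothetical yum repository definitions (names and URLs are made up).
# New packages land in "unstable" first; the "stable" branch is frozen
# at release time, so the installed system stays a known quantity.
[fedora-stable]
name=Fedora stable (frozen at release)
baseurl=http://mirror.example.org/fedora/1/stable/i386/
enabled=1

[fedora-unstable]
name=Fedora unstable (new packages go here first)
baseurl=http://mirror.example.org/fedora/1/unstable/i386/
enabled=0
```

A user wanting something newer than the frozen branch would then enable the unstable repository explicitly, rather than having it pulled in by default.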
Regards,
On Sun, 2003-10-05 at 04:23, Nicolas Mailhot wrote:
On Sun, 05/10/2003 at 08:08, Sean Middleditch wrote:
I still have absolutely no clue what you are talking about. I haven't once at all argued any specific dependencies must be embedded. I said that *if* a dependency is so broken it can't be sanely coinstalled, at all, then it needs to be embedded, versus the current practice of only letting the user have one version installed at a time (and thus forcing them to use either package set A or B). That's it. That's all I said, ever, about the embedding topic. You're going on and on about something nobody's even arguing against you about. *applause*
But what you argue for is already current practice (see how the xft port of mozilla was done). And it should be limited, not extended because it has real costs for the end-user.
I'm not even sure you know what I'm arguing for. Why are you going on and on about code duplication when that was one teensy little minor point I made just for *very* rare cases where there is *no* other solution? Seriously, *what* are you going on about that I'm disagreeing with?
I'm sure someone on the list will always find a way to avoid embedding s
releases. Sacrificing the system's sanity to the golden calf of eternal upgradeability, however, will only produce a Windows-like mess where everything "sort-of" works for everyone. You have to do some spring cleaning, even in real life.
This, of course, is the purpose of deprecation. We have GNOME2 and GNOME1 libs in RH9, but as soon as all apps move off of GNOME1/GTK1, those can be removed. Apps that, 5 years later, still rely on 5-year-old incompatible libraries probably need to be removed or replaced. Or at least patched to work on new systems, if they're somehow super-mandatory. (Which would be an ugly situation.)
Just freeze the repository at release time and have all new packages go into an unstable branch (or at least require them to go into unstable before hitting stable). If something didn't get packaged in unstable by the time it's time for another release - drop it.
The announced release frequency is high enough that people can wait for the next release for new stuff.
This is about as dumb as it can get. I have to upgrade an entire fricken OS for maybe one piece of software I want? *bzzt* Let's try to move forward, not backward, here.
Really, I don't see why you oppose a clean upgrade every few months when your solution would result in massive code duplication. In my scenario at least the on-disk system size wouldn't grow exponentially over time.
Because when people want software, they want it *now*. If you tell them, "well, you have to wait two months to install that software, even tho the software is already out now", then they're going to (rightly) switch to an OS that doesn't suffer from a complete lack of backward or forward compatibility.
And, again, the above has *ABSOLUTELY* nothing to do with embedded dependencies. Just good packaging habits. Disk size only grows if the user *needs* multiple versions of something. Good packaging would *avoid* that as much as possible.
Take a look at Debian - they quite often have a couple of versions of certain components *available*, altho they usually aren't ever *installed*. If the user needed, say, an older version of Python for a piece of software, it's usually there for them. But, most of the time, the user doesn't actually need it.
So long as the dependency is available, then both older and newer software can be installed. The dependencies are *only* needed when an app *actually* uses them. And even if you install version 1.2 of an app that needs dependency foo 1.0, you're still completely allowed to make a release of the app, 1.2-1, that uses the more up-to-date foo 1.02. Users aren't forced to update their app, but those that do don't have to deal with the "legacy" dependency. Users of an OS release that only had foo 1.0, upon install of the new 1.2-1 version of the app, would have foo 1.02 pulled in as the dependency.
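The scenario above can be sketched as RPM spec fragments; all names and version numbers here are illustrative, not from any real package:

```
# Hypothetical spec fragments (package and dependency names invented).
# The original build of app 1.2 is satisfied by the foo the OS shipped:
Name:     app
Version:  1.2
Requires: foo >= 1.0

# A later rebuild, app-1.2-1, moves to the newer library. Installing it
# pulls foo 1.02 in automatically, while users who stay on the older
# build keep working against foo 1.0:
Release:  1
Requires: foo >= 1.02
```

The point is that the depsolver, not the user, decides which foo gets installed, as long as both versions stay *available* in the repository.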
Regards,
On Sun, 05/10/2003 at 17:15, Sean Middleditch wrote:
Because when people want software, they want it *now*. If you tell them, "well, you have to wait two months to install that software, even tho the software is already out now", then they're going to (rightly) switch to an OS that doesn't suffer from a complete lack of backward or forward compatibility.
I can assure you the time between a release being dogfoodable and it hitting end-users' disks is much more than two months for those other OSs.
Two months is quite acceptable for a beta period. In fact it's quite fast - why do you think Oracle and friends were screaming at Red Hat for a slower release cycle?
Regards,
On Sun, 2003-10-05 at 11:33, Nicolas Mailhot wrote:
On Sun, 05/10/2003 at 17:15, Sean Middleditch wrote:
Because when people want software, they want it *now*. If you tell them, "well, you have to wait two months to install that software, even tho the software is already out now", then they're going to (rightly) switch to an OS that doesn't suffer from a complete lack of backward or forward compatibility.
I can assure you the time between a release being dogfoodable and it hitting end-users' disks is much more than two months for those other OSs.
But they never have to wait to install new software, and older software just works. Even if behind the scenes it's ugly as hell, which definitely sucks for developers, the users don't really care. ;-)
Two months is quite acceptable for a beta period. In fact it's quite fast - why do you think Oracle and friends were screaming at Red Hat for a slower release cycle?
Yes, that's great if we're talking an OS release. I'm not, tho - I'm talking users who want to install some piece of software right now, and don't want to be told they have to wait 2 months for a new version of their OS to come out; not when "other" OSs can run apps from 10 years ago, and even the newest apps coming out still run on at least several releases back, which covers a decent number of years (so far as computers go).
Fedora doesn't have to slow down releases, nor does it have to stop being cutting edge - it just has to make sure necessary dependencies are *available* (not necessarily installed, if the user doesn't need them) to cover at least a somewhat acceptable number of years of backwards compatibility.
There's also the commercial software people try to release, which can't play the "crap, a month went by and a new OS is out, we need to recompile!" game and offer 67 slightly-different RPMs on their install CD for users. The way things are now, that software *does* ship all of its dependencies embedded, *even when it shouldn't*, because the OSs don't provide them; plus the software almost always uses shell scripts and other hacks that no user should *ever* need to use just to install something. But, they don't have much choice, at the moment.
Regards,
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
On Sun, 05 Oct 2003 11:44:06 -0400, Sean Middleditch wrote:
I'm talking users who want to install some piece of software right now, and don't want to be told they have to wait 2 months for a new version of their OS to come out; not when "other" OSs can run apps from 10 years ago, and even the newest apps coming out still run on at least several releases back, which covers a decent number of years (so far as computers go).
There's one of those "other" OSs, where a friend points me to a versatile media player I should check out. The only download option is an .EXE file, several MiB big. Great. That should be easy. But wait, it unpacks files into a temporary directory, then refuses to proceed and tells me I need a newer version of DirectX. No idea where to get that. I search the web and download the latest version which is even newer than what is required. I fail to find a place where to get exactly the version that is required. Hopefully it doesn't matter.
The installer seems to be happy about the new version. Then I'm told I need an update to DCOM and a couple of similar packages. Somehow I manage to find and install all this stuff. The application's installer finishes. But the application doesn't work completely. The developers tell me some of my system software is too new and not yet supported. Something would be special about my system, but they don't know what. I should get the latest version of the OS where the necessary stuff is preinstalled. Or I could also install the previous version where they've tested the application, too.
I then notice that some of the updates overwrote important system files, and now other applications don't work anymore. When asked, my friend admits he has had a few problems, too. But he thought the problems were due to his own mistakes, and that everywhere else it would work flawlessly. Thank God I have a backup of my installation.
Before restoring my system, I take the chance to try a commercial application which I have been given on CD for evaluation. It says explicitly that it supports my version of the OS. Its installer is smarter. After all, it's a commercial application. It offers to download missing components from the Internet. Unfortunately, after asking me half a dozen times whether I would let it overwrite mysterious .DLL system files, it refuses to proceed. It wants a specific version of DirectX, an older version than what I've updated to. I get the chance to downgrade to a version on the CD, although I don't have the slightest idea whether that might affect any other applications, such as the partially working media player which is still installed.
At the end of the installation, most parts of my system are still working; the application disables one feature due to insufficient system features. So far so good, but a few parts of my OS are now in English and no longer in German. I ask a person who considers himself an expert on that operating system. He shakes his head and says I should reinstall from scratch or from backup. It could take hours to fix the mess manually. I should try a custom installation. Once I had figured out how to complete a working installation, everything would be fine.
He tells me about a message board where I should ask for help prior to installing from scratch. The people in that forum try to help. Half of them ask questions which seem to address a completely different OS. Some try to analyze my system with detailed questions about what options I see in the menus and what tools are installed. Then they apologize for not being helpful and write that they run the latest version of the OS, where things are different. The other half blame me for not having a backup. I didn't tell them I do have backups. I just mentioned I would not want to reinstall. Seems that's the only option.
One wizard steps up in private mail, accusing the other board members of not knowing their stuff. He/she suggests I should uninstall the application and let it revert its changes. Sounds good. Unfortunately, the uninstaller doesn't do the job automatically. Instead -- as if it doesn't know better -- it asks me whether to keep or erase each of maybe 50 modified system files. Guess what I did. I reinstalled from backup. Seems to be widespread and common practice.
-- Michael, who doesn't reply to top posts and complete quotes anymore.
On Sun, 2003-10-05 at 14:06, Michael Schwendt wrote:
On Sun, 05 Oct 2003 11:44:06 -0400, Sean Middleditch wrote:
I'm talking users who want to install some piece of software right now, and don't want to be told they have to wait 2 months for a new version of their OS to come out; not when "other" OSs can run apps from 10 years ago, and even the newest apps coming out still run on at least several releases back, which covers a decent number of years (so far as computers go).
[long story about one package for "some OS"]
Yes, that's what we'd also call Bad Packaging(tm). ;-) That is the *exception*, not the *rule*, on that OS. There are always exceptions, just like I expect there to be in Linux packaging for the rest of its existence.
The common case doesn't have to suffer (and actually doesn't, depending on OS) because of those rarities, however. ^,^
Michael, who doesn't reply to top posts and complete quotes anymore.
-- fedora-devel-list mailing list fedora-devel-list@redhat.com http://www.redhat.com/mailman/listinfo/fedora-devel-list
On Sun, 05 Oct 2003 14:23:56 -0400, Sean Middleditch wrote:
[long story about one package for "some OS"]
Actually, it was a story about two packages and about unavailability of dependencies.
Yes, that's what we'd also call Bad Packaging(tm). ;-) That is the *exception*, not the *rule*, on that OS.
If that is your personal experience, "Consider Yourself Lucky"(tm). :-P
-- Michael, who doesn't reply to top posts and complete quotes anymore.
-----Original Message----- From: fedora-devel-list-admin@redhat.com [mailto:fedora-devel-list- admin@redhat.com] On Behalf Of Michael Schwendt Sent: Sunday, October 05, 2003 1:06 PM To: fedora-devel-list@redhat.com Subject: Re: Kind request: fix your packages
On Sun, 05 Oct 2003 11:44:06 -0400, Sean Middleditch wrote:
I'm talking users who want to install some piece of software right now, and don't want to be told they have to wait 2 months for a new version of their OS to come out; not when "other" OSs can run apps from 10 years ago, and even the newest apps coming out still run on at least several releases back, which covers a decent number of years (so far as computers go).
There's one of those "other" OSs, where a friend points me to a versatile media player I should check out. The only download option is an .EXE file, several MiB big. Great. That should be easy. But wait, it unpacks files into a temporary directory, then refuses to proceed and tells me I need a newer version of DirectX. No idea where to get that. I search the web and download the latest version which is even newer than what is required. I fail to find a place where to get exactly the version that is required. Hopefully it doesn't matter.
The installer seems to be happy about the new version. Then I'm told I need an update to DCOM and a couple of similar packages. Somehow I manage to find and install all this stuff. The application's installer finishes. But the application doesn't work completely. The developers tell me some of my system software is too new and not yet supported. Something would be special about my system, but they don't know what. I should get the latest version of the OS where the necessary stuff is preinstalled. Or I could also install the previous version where they've tested the application, too.
I then notice that some of the updates overwrote important system files, and now other applications don't work anymore. When asked, my friend admits he has had a few problems, too. But he thought the problems were due to his own mistakes, and that everywhere else it would work flawlessly. Thank God I have a backup of my installation.
Before restoring my system, I take the chance to try a commercial application which I have been given on CD for evaluation. It says explicitly that it supports my version of the OS. Its installer is smarter. After all, it's a commercial application. It offers to download missing components from the Internet. Unfortunately, after asking me half a dozen times whether I would let it overwrite mysterious .DLL system files, it refuses to proceed. It wants a specific version of DirectX, an older version than what I've updated to. I get the chance to downgrade to a version on the CD, although I don't have the slightest idea whether that might affect any other applications, such as the partially working media player which is still installed.
At the end of the installation, most parts of my system are still working; the application disables one feature due to insufficient system features. So far so good, but a few parts of my OS are now in English and no longer in German. I ask a person who considers himself an expert on that operating system. He shakes his head and says I should reinstall from scratch or from backup. It could take hours to fix the mess manually. I should try a custom installation. Once I had figured out how to complete a working installation, everything would be fine.
He tells me about a message board where I should ask for help prior to installing from scratch. The people in that forum try to help. Half of them ask questions which seem to address a completely different OS. Some try to analyze my system with detailed questions about what options I see in the menus and what tools are installed. Then they apologize for not being helpful and write that they run the latest version of the OS, where things are different. The other half blame me for not having a backup. I didn't tell them I do have backups. I just mentioned I would not want to reinstall. Seems that's the only option.
One wizard steps up in private mail, accusing the other board members of not knowing their stuff. He/she suggests I should uninstall the application and let it revert its changes. Sounds good. Unfortunately, the uninstaller doesn't do the job automatically. Instead -- as if it doesn't know better -- it asks me whether to keep or erase each of maybe 50 modified system files. Guess what I did. I reinstalled from backup. Seems to be widespread and common practice.
Michael, who doesn't reply to top posts and complete quotes anymore.
As with all development, it sounds as though you have the same problems that should be addressed by all developers. First, as in the medical world, the rule is "Thou shalt do no harm": the developer should be aware of what his package needs and check for those things. Second, if a new version of a common routine is needed, the package should broadcast that fact and proceed no further. Third, if a replacement routine is added, that routine should support all previous versions or make this fact widely known. Common courtesy should always be observed, no matter the OS or the existing system requirements.

Some of the problems with open source software are obvious, though, and the community needs to address those concerns. The main one is that the development of the software continues after the product is introduced, and because of that there will always be problems. The next one is that manufacturers will only release interfaces to the open source community after the proprietary period expires or there is great demand for them. Given these facts, the open source community needs to make damn sure that what it puts out is software that works and will cover most situations, or users will be reluctant to use it even though it is free - and I think that is what has already happened.
-----Original Message----- From: fedora-devel-list-admin@redhat.com [mailto:fedora-devel-list- admin@redhat.com] On Behalf Of Nicolas Mailhot Sent: Friday, October 03, 2003 4:44 PM To: fedora-devel-list@redhat.com Subject: Re: Kind request: fix your packages
On Fri, 03/10/2003 at 22:39, Sean Middleditch wrote:
On Fri, 2003-10-03 at 16:11, Michael Schwendt wrote:
Perl/Python are co-installable with different versions, and thus are a different issue.
Oh, great, a second Perl installation. As if Python/Python2 wouldn't be enough already.
If that's what it takes to make things work, then that's what it takes. I didn't say it was perfect, just that it solves the problem: users shouldn't ever have to rebuild the software, and users shouldn't have to run around figuring out what their system is to find the right package and deal with that mess. In a truly ideal world, Perl/Python/etc. wouldn't keep breaking compatibility so often. ~,^ Since that's *not* reality, the only solution left for sane packages (from a user's point of view again) is to let any necessary versions be installed, so the user's apps just work and the user doesn't even have to think about OS versions or dependencies.
Don't make me laugh. The user cares about duplicate stuff too. Before we built a serious infrastructure that enabled us to modularise stuff, someone would complain every other week that we shipped Java 1.3 jars with our Tomcat rpm (and those jars were necessary to run it with a 1.3 JVM, and didn't hurt when using a 1.4 JVM - but for a 1.4 user they were redundant stuff, and we got complaints).
Show me a repository with big fat packages that include all deps to be standalone, and I'll show you a repository no one wants to use. Users may not all know the zen of packaging, but it will only take a few long downloads or stuffed disks to enlighten them.
Cheers,
-- Nicolas Mailhot
Here's my $.02. The user shouldn't have to worry about how, what, or why it works or doesn't work. The object of good development is that it "always works". So what does that mean? It means that all packages should contain everything necessary to work at that instant, no matter what happened before they arrive. When you obtain a package, freely or for pay, you shouldn't need to dig up everything else that the package requires to work. We developers like to troubleshoot, but it's a waste of time and effort to troubleshoot the needs and intentions of another developer. That's my opinion and I'm stuck with it.
On Sat, 04/10/2003 at 12:54, Otto Haliburton wrote:
Here's my $.02. The user shouldn't have to worry about how, what, or why it works or doesn't work. The object of good development is that it "always works". So what does that mean? It means that all packages should contain everything necessary to work at that instant, no matter what happened before they arrive. When you obtain a package, freely or for pay, you shouldn't need to dig up everything else that the package requires to work. We developers like to troubleshoot, but it's a waste of time and effort to troubleshoot the needs and intentions of another developer. That's my opinion and I'm stuck with it.
Dear user,
As we all know, you're dumb and you can never learn to use a package manager. To ease your pain we've decided not to inflict the so-called "dependency hell" on you, and to provide you with an autonomous application instead. Just click on its auto-update menu and it will download everything you need, we swear it.
Of course, when you're done you'll just have to do a CPAN refresh, tell XEmacs to auto-update itself, your movie player to download new codecs, Mozilla to download a language pack and a few themes - and by the way, did you know about this wonderful OpenOffice macro that will help you update your spellchecker?
To help you keep up with updates we've thoughtfully provided a "time to update" popup that will remind you every few weeks that you need to keep up with us. To avoid any misunderstanding, we designed it to be completely different from the swarm of update popups you deal with every day.
Since another application might update some libraries we depend on at an inconvenient time, we've provided you with a private copy of all the dependencies we need. You can be sure no one will mess with their configuration files or change their format, since we also use a private copy of those files. God forbid they were trampled on by one of those nasty control panel thingies! The system has no business telling us what fonts to use.
We swear we'll keep up with security fixes for all this peripheral, uninteresting stuff, at least as long as it doesn't interfere with our application's wellbeing - but you won't mind, because we know best and you trust us. To avoid migration breakage we even use different configuration files for each version. You just have to re-enter the few scores of options our app uses and you're done! No risk of automated conversion mess-ups whatsoever! The app will even refuse to start till you've filled in the four screens of configuration, to prevent mess-ups!
You might need to get yourself a new disk or pay for a broadband link, but that's truly trivial, and you should thank us for helping you keep up with the times. We're pleased to inform you we're working on a bootable standalone CD that will drop you in your preferred application without having to worry about the system altogether. For the time being we've implemented a 100% user-side install, so you won't have to worry about nasty Unix permissions. Just click the setup icon and the whole app will install itself in Wonderful Technologies\Wonderful Technologies Application Plus Professional\6.6 SR6\ in your workspace. And it only takes 300 MB per user!
We stay committed to providing products that work now, no matter what happened before they arrive,
P.S.
Regarding your question on why you can't print from Wonderful Technologies Application Plus Professional: we're sorry to inform you that the update of the print library is not scheduled before the release of Wonderful Technologies Application Plus Professional Enterprise 7 in the first quarter of next year. That it works with the rest of the system is purely a fluke of your imagination, due to your system provider bundling an unstable, untested printing backend too early. You're better off with the Obsolete Corp backend we provide now - it has been tested a full two days with my nephew's inkjet printer. Hopefully our app-on-CD system will take care of those regrettable incidents soon. In the meanwhile we advise you to buy an Expensive Systems LaserColourWriter 9000000000000, which is known to work with Wonderful Technologies Application Plus Professional (at least the American models). They even come bundled with vouchers for other great Wonderful Technologies products!
P.P.S.
If you can't afford an Expensive Systems LaserColourWriter 9000000000000, my nephew just wrote me that he wants to sell his inkjet printer. JUST CALL!
On Sat, 2003-10-04 at 08:11, Nicolas Mailhot wrote:
On Sat, 04/10/2003 at 12:54, Otto Haliburton wrote:
Here's my $.02. The user shouldn't have to worry about how, what, or why it works or doesn't work. The object of good development is that it "always works". So what does that mean? It means that all packages should contain everything necessary to work at that instant, no matter what happened before they arrive. When you obtain a package, freely or for pay, you shouldn't need to dig up everything else that the package requires to work. We developers like to troubleshoot, but it's a waste of time and effort to troubleshoot the needs and intentions of another developer. That's my opinion and I'm stuck with it.
Dear user,
As we all know, you're dumb and you can never learn to use a package manager. To ease your pain we've decided not to inflict the so-called "dependency hell" on you, and to provide you with an autonomous application instead. Just click on its auto-update menu and it will download everything you need, we swear it.
Of course, when you're done you'll just have to do a CPAN refresh, tell XEmacs to auto-update itself, your movie player to download new codecs, Mozilla to download a language pack and a few themes - and by the way, did you know about this wonderful OpenOffice macro that will help you update your spellchecker?
To help you keep up with updates we've thoughtfully provided a "time to update" popup that will remind you every few weeks that you need to keep up with us. To avoid any misunderstanding, we designed it to be completely different from the swarm of update popups you deal with every day.
Since another application might update some libraries we depend on at an inconvenient time, we've provided you with a private copy of all the dependencies we need. You can be sure no one will mess with their configuration files or change their format, since we also use a private copy of those files. God forbid they be trampled on by one of those nasty control-panel thingies! The system has no business telling us what fonts to use.
We swear we'll keep up with security fixes for all this peripheral, uninteresting stuff, at least as long as it doesn't interfere with our application's wellbeing - but you won't mind, because we know best and you trust us. To avoid migration breakage we even use different configuration files for each version. You just have to re-enter the few score options our app uses and you're done! No risk of automated conversion mess-ups whatsoever! The app will even refuse to start until you've filled in the four screens of configuration, to prevent mess-ups!
You might need to get yourself a new disk or pay for a broadband link, but that's truly trivial and you should thank us for helping you keep up with the times. We're pleased to inform you we're working on a bootable standalone CD that will drop you into your preferred application without having to worry about the system at all. For the time being we've implemented a 100% user-side install, so you won't have to worry about nasty unix permissions. Just click the setup icon and the whole app will install itself into Wonderful Technologies\Wonderful Technologies Application Plus Professional\6.6 SR6\ in your workspace. And it only takes 300 MB per user!
We stay committed to providing products that work now, no matter what happened before they arrive,
P.S.
Regarding your question on why you can't print from Wonderful Technologies Application Plus Professional, we're sorry to inform you that an update of the print library is not scheduled before the release of Wonderful Technologies Application Plus Professional Enterprise 7 in the first quarter of next year. That it works with the rest of the system is purely a fluke of your imagination, due to your system provider bundling an unstable, untested printing backend too early. You're better off with the Obsolete Corp backend we provide now - it has been tested a full two days with my nephew's inkjet printer. Hopefully our app-on-cd system will take care of those regrettable incidents soon. In the meanwhile we advise you to buy an Expensive Systems LaserColourWriter 9000000000000, which is known to work with Wonderful Technologies Application Plus Professional (at least the American models). They even come bundled with vouchers for other great Wonderful Technologies products!
P.P.S.
If you can't afford an Expensive Systems LaserColourWriter 9000000000000, my nephew just wrote me that he wants to sell his inkjet printer. JUST CALL!
So what's the problem? The developer wants his product used. If you don't fix your software/hardware/programs the user will rebel and not update it at all - then where's your problem? The user's instinct is not to make changes because "it works for what I want it to do". It may not be the best or have all the functions I want, but if I've got to go troubleshoot all the problems then f__ it!!!!!!
O
So what's the problem? The developer wants his product used. If you don't fix your software/hardware/programs the user will rebel and not update it at all - then where's your problem? The user's instinct is not to make changes because "it works for what I want it to do". It may not be the best or have all the functions I want, but if I've got to go troubleshoot all the problems then f__ it!!!!!!
Sorry about the back-to-back post. The above statement is exactly what Microsoft has found out with its constant updates to fix its system. The user will not make the change because "it will break something else". So they are not keeping their systems up to date with what MS calls critical updates. Think about it!!!!!
Le sam 04/10/2003 à 16:37, Otto Haliburton a écrit :
O
So what's the problem? The developer wants his product used. If you don't fix your software/hardware/programs the user will rebel and not update it at all - then where's your problem? The user's instinct is not to make changes because "it works for what I want it to do". It may not be the best or have all the functions I want, but if I've got to go troubleshoot all the problems then f__ it!!!!!!
Sorry about the back-to-back post. The above statement is exactly what Microsoft has found out with its constant updates to fix its system. The user will not make the change because "it will break something else". So they are not keeping their systems up to date with what MS calls critical updates. Think about it!!!!!
Oh, but I think about it. I can assure you each time I get a new swen virus in my mailbox I think bloody hard about it.
Read my post. Sleep on it. Then read it again.
Somehow you seem to assume dependencies are the worst thing that can happen to a user. I've given you a mail full of real examples of people who thought the same thing and ended up screwing the user big time, because their solution was at best 80% of the cost of using a full-featured system package manager - and they conveniently forgot that 0.8*100 >> 1: the saving per application is small, but the duplicated cost multiplies across every application on the system.
We're no longer talking about single-user, single-application dedicated systems. Any user nowadays will interact with multiple apps, and if integration between apps is broken because they each use their own library set, he will be immensely pissed off.
And this is only when you purposefully ignore the upgrade problems. Updates are a fact of life; live with it. You need to upgrade because there are nasty people who will attack every single hole in your apps. You need to upgrade because your hardware will change, and that means the software handlers must keep up with it. And, last and least of all, you need to upgrade because you want the app enhancements like everyone else.
I can only ROTFL when you attack dependencies, then write a few posts later about the user refusing updates because MS taught him they constantly break things.
I'll give you another hint. Once. MS happens to sell a system with no dependency checks. It's also a system that cannot get updates right.
rpm and deb use dependencies. rpm- and deb-based systems have been seamlessly updated by users for years.
The truth is that the user is not hopelessly dumb, as people seem so fond of writing. The user is L.A.Z.Y. The user will happily use whatever tech helps him get rid of the broken application-centric update processes the MS and proprietary world have forced upon him over the last years. It is false to think he'll reject every approach that is new (to him). It is false to write that he needs standalone apps because he's used to them (conveniently forgetting that this standalone approach is what got us into the current mess, and that computer systems have managed to replace VCRs as the most hated piece of tech in our everyday lives). Integrated solutions like unified print systems, DirectX, etc. have been a huge success. Modern package managers are nothing more than an integrated upgrade system. That mainstream operating systems don't use them yet is no reason to reject them - quite the contrary, since those very same systems have failed miserably to solve the update problem.
The average user will try apt/yum/up2date and find it saves him hours of update-procedure time he can spend doing more interesting stuff. The average user, as you wrote, couldn't care less whether his package manager uses dependencies, cold fusion or fuzzy logic. The average user cares that it works without sucking up his precious time. This is the single success criterion.
Today Linux's modular package installations pass this test hands down. When I read someone advocating doing it the windows or mac standalone way, I read someone who wants to save himself some packaging work at the expense of the end user. The funny thing is it's always advocated to spare the user the dreadful rpm experience.
rpm is not dreadful. With apt/yum/up2date/urpmi it's a lifesaver.
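What apt/yum/up2date automate is, conceptually, just the transitive closure of a package's dependencies: fetch everything the package needs, in an order where each dependency lands before the thing that needs it. A toy sketch of that idea (the package names and the repository dict are hypothetical, not a real repository format):

```python
# Toy model of what a depsolver (apt/yum/up2date) automates: walking the
# dependency graph so the user never sees "failed dependency" errors.
# REPO maps a package to the packages it requires; all names are made up.

REPO = {
    "sylpheed": ["gtk2", "openssl"],
    "gtk2": ["glib2"],
    "glib2": [],
    "openssl": [],
}

def install_closure(pkg, installed=None):
    """Return the ordered list of packages to install so that every
    dependency is present before the package that needs it."""
    if installed is None:
        installed = set()
    order = []
    for dep in REPO[pkg]:
        if dep not in installed:
            order.extend(install_closure(dep, installed))
    if pkg not in installed:
        installed.add(pkg)
        order.append(pkg)
    return order

print(install_closure("sylpheed"))
# -> ['glib2', 'gtk2', 'openssl', 'sylpheed']
```

The user asks for one package; everything else is an implementation detail he never has to see - which is the whole point of the argument above.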
On Sat, 2003-10-04 at 09:11, Nicolas Mailhot wrote:
Dear user,
As we all know you're dumb and you can never learn to use a package manager. To ease your pain we've decided not to inflict the so-called "dependency hell" on you, and to provide you with an autonomous application instead. Just click on its auto-update menu and it will download everything you need, we swear.
*bzzt* It has *nothing* to do with users being "dumb." *nothing* It has to do with the fact the user doesn't care, doesn't need to care, and shouldn't be forced to care. I don't have a fricken clue how a jet engine works, don't really care how it works, but that doesn't mean I'm not allowed to fly on a plane. Indeed, the usage of an airplane is completely independent of knowing how the engine works, including the pilots.
Computers aren't any different. There is no reason a user should *have* to understand the inner workings of it. That doesn't mean they *can't* know it, only that they don't need to.
Don't confuse those two ideas.
To help you keep up with updates we've thoughtfully provided a "time to update" popup that will remind you every few weeks that you need to keep up with us. To avoid any misunderstanding, we designed it to be completely different from the swarm of update popups you deal with every day.
Red Hat/Fedora offers the up2date applet, which does just this, and is a good thing. ^,^
On Sat, 2003-10-04 at 18:19, Sean Middleditch wrote:
On Sat, 2003-10-04 at 09:11, Nicolas Mailhot wrote:
Dear user,
As we all know you're dumb and you can never learn to use a package manager. To ease your pain we've decided not to inflict the so-called "dependency hell" on you, and to provide you with an autonomous application instead. Just click on its auto-update menu and it will download everything you need, we swear.
*bzzt* It has *nothing* to do with users being "dumb." *nothing* It
I think Nicolas forgot the irony tags.
has to do with the fact the user doesn't care, doesn't need to care, and shouldn't be forced to care. I don't have a fricken clue how a jet engine works, don't really care how it works, but that doesn't mean I'm not allowed to fly on a plane. Indeed, the usage of an airplane is
To turn around your comparison, the pilot would be the person pushing around the mouse and pushing the keys for you (and everyone would be clapping hands when the document comes out of the printer ;-). Doesn't sound right? I thought so.
completely independent of knowing how the engine works, including the pilots.
They too have their specialists ("flight mechanics") for the engines, but have at least basic knowledge of how they work (e.g. the kernel) and intimate knowledge about flight mechanics (e.g. how the system boots, what package dependencies are and how to deal with them). If you look at small planes, the pilots often are the flight mechanics.
As I see it, the flight mechanics represent developers, the pilot represents an administrator, the passengers represent normal users. At the moment, passengers are restricted to using their seats, watching TV and going to the bathroom during flight -- with much assistance and hand-holding they could possibly emergency-land a machine if the pilots are sick or whatever, but what are the odds that they succeed?
I don't say that I don't wish that users could easily do complicated tasks on a computer, aided by carefully crafted tools (auto-pilot ;-) that take care of the innards they don't want to know about.
Computers aren't any different. There is no reason a user should *have* to understand the inner workings of it. That doesn't mean they *can't* know it, only that they don't need to.
A computer is a very complex machine and I don't see that today's tools are intelligent enough to hide all that from the user and still give them all the potential the machines have. I don't see the majority of the people having their private planes/gliders/etc. in the next years either ;-).
Just some thoughts...
Nils
On Sun, 2003-10-05 at 17:25, Nils Philippsen wrote:
On Sat, 2003-10-04 at 18:19, Sean Middleditch wrote:
has to do with the fact the user doesn't care, doesn't need to care, and shouldn't be forced to care. I don't have a fricken clue how a jet engine works, don't really care how it works, but that doesn't mean I'm not allowed to fly on a plane. Indeed, the usage of an airplane is
To turn around your comparison, the pilot would be the person pushing around the mouse and pushing the keys for you (and everyone would be clapping hands when the document comes out of the printer ;-). Doesn't sound right? I thought so.
The absolute worst thing about analogies is that people take parts of them that don't apply to the situation, and go with them. ~,^ Use an analogy of how an angry person is like a hungry bear, and someone will think that you are saying angry people get furry and crave meat...
The pilot vs. passenger distinction is irrelevant; both are airplane users. Perhaps I should've used cars as a cleaner analogy. ;-) I don't know how most of my Ford Ranger works, and I couldn't care less - I just get in and drive the thing. (oh, no, wait, now someone is going to take the analogy to another extreme and start bringing in proper oil changes and driver's licenses or something - that's not what this analogy is about! misusing a computer doesn't cause deadly accidents, and lack of proper software maintenance doesn't cause the hardware to rust and break. not relevant!)
completely independent of knowing how the engine works, including the pilots.
They too have their specialists ("flight mechanics") for the engines, but have at least basic knowledge of how they work (e.g. the kernel) and intimate knowledge about flight mechanics (e.g. how the system boots, what package dependencies are and how to deal with them). If you look at small planes, the pilots often are the flight mechanics.
Totally beyond the scope of my original analogy. My mother has a computer she uses just by herself (single pilot system), but she certainly doesn't know anything about the mechanics. Granted, she hasn't needed to know about mechanics to get her work done in the almost 20 years she's been using various definitely-not-Linux operating systems. ;-)
As I see it, the flight mechanics represent developers, the pilot represents an administrator, the passengers represent normal users. At the moment, passengers are restricted to using their seats, watching TV and going to the bathroom during flight -- with much assistance and hand-holding they could possibly emergency-land a machine if the pilots are sick or whatever, but what are the odds that they succeed?
I don't say that I don't wish that users could easily do complicated tasks on a computer, aided by carefully crafted tools (auto-pilot ;-) that take care of the innards they don't want to know about.
Eek. Double negatives, my English parser just got thrown off. :( You *do* say that you *do* wish users could do complicated tasks...? (sorry, I've always had trouble with double negatives ;-)
Computers aren't any different. There is no reason a user should *have* to understand the inner workings of it. That doesn't mean they *can't* know it, only that they don't need to.
A computer is a very complex machine and I don't see that today's tools are intelligent enough to hide all that from the user and still give them all the potential the machines have. I don't see the majority of the people having their private planes/gliders/etc. in the next years either ;-).
A lot of the potential a machine has, a user doesn't want. Computers can remain just as flexible and complex for users with the knowledge as our beautiful systems are today; we can still make them work efficiently (least amount of time and effort to complete a task) for users who see the computer as a tool for simpler tasks like writing a paper, browsing a website, playing a game, or chatting with friends/family. Not everyone is a hacker or sysadmin-wannabe. ~,^
I *definitely* don't want Fedora/Linux "dumbed down" to be easier - that's the wrong approach. It just needs the stuff that's too complex for some, and needlessly cumbersome for others, cleaned up. I often compile software from source, have spent tons of time rebuilding RPMs and manually fixing dependencies, and develop software - that doesn't mean I enjoy having to go through all the trouble that often requires. Simple != less flexible, and complex != powerful. The current situation's correlation between the latter doesn't imply causality in the least. ~,^
To go with a (hopefully) relevant analogy, look at GNOME2. Tons of uninformed fools whined on and on about it being "dumbed down", yet it's just as powerful as before. Whole components can be replaced if necessary (available flexibility that doesn't require other users to understand the mechanism), and it's become a lot cleaner and easier for both the newbie *and* the expert; it pulled the crap out of the way to let *everyone*, no matter their experience level, use it as a tool to get work done, versus being a desktop that gets in the way of getting work done. Simplified, with *needless* complexity stripped out, yet fully functional for just about everyone save those with truly special needs. And of course, those who don't like it can just use something besides GNOME. But a normal GNOME user doesn't need to know about that, or even what makes that possible, to be able to use GNOME. My friend Dan definitely wouldn't understand (again, not because he's stupid, just because he hasn't bothered to learn how a Linux system operates), yet he uses GNOME mostly happily (save for the questions I get from him about getting software installed, and other irrelevant-to-this-thread annoyances like media mounting.)
Cleaning up and purifying the package system and development methodologies doesn't have to sacrifice anything, assuming they are cleaned up *right*. (just removing external dependencies and unconditionally embedding them is definitely not *right*)
Just some thoughts...
Nils
-----Original Message----- From: fedora-devel-list-admin@redhat.com [mailto:fedora-devel-list- admin@redhat.com] On Behalf Of Nils Philippsen Sent: Sunday, October 05, 2003 4:25 PM To: fedora-devel-list@redhat.com Subject: RE: Kind request: fix your packages
On Sat, 2003-10-04 at 18:19, Sean Middleditch wrote:
On Sat, 2003-10-04 at 09:11, Nicolas Mailhot wrote:
Dear user,
As we all know you're dumb and you can never learn to use a package manager. To ease your pain we've decided not to inflict the so-called "dependency hell" on you, and to provide you with an autonomous application instead. Just click on its auto-update menu and it will download everything you need, we swear.
*bzzt* It has *nothing* to do with users being "dumb." *nothing* It
I think Nicolas forgot the irony tags.
has to do with the fact the user doesn't care, doesn't need to care, and shouldn't be forced to care. I don't have a fricken clue how a jet engine works, don't really care how it works, but that doesn't mean I'm not allowed to fly on a plane. Indeed, the usage of an airplane is
To turn around your comparison, the pilot would be the person pushing around the mouse and pushing the keys for you (and everyone would be clapping hands when the document comes out of the printer ;-). Doesn't sound right? I thought so.
completely independent of knowing how the engine works, including the pilots.
They too have their specialists ("flight mechanics") for the engines, but have at least basic knowledge of how they work (e.g. the kernel) and intimate knowledge about flight mechanics (e.g. how the system boots, what package dependencies are and how to deal with them). If you look at small planes, the pilots often are the flight mechanics.
As I see it, the flight mechanics represent developers, the pilot represents an administrator, the passengers represent normal users. At the moment, passengers are restricted to using their seats, watching TV and going to the bathroom during flight -- with much assistance and hand-holding they could possibly emergency-land a machine if the pilots are sick or whatever, but what are the odds that they succeed?
I don't say that I don't wish that users could easily do complicated tasks on a computer, aided by carefully crafted tools (auto-pilot ;-) that take care of the innards they don't want to know about.
Computers aren't any different. There is no reason a user should *have* to understand the inner workings of it. That doesn't mean they *can't* know it, only that they don't need to.
A computer is a very complex machine and I don't see that today's tools are intelligent enough to hide all that from the user and still give them all the potential the machines have. I don't see the majority of the people having their private planes/gliders/etc. in the next years either ;-).
Just some thoughts...
Nils
Nils, I think you have forgotten what computers are. Computers are a tool. There are two phases to the tool: the user phase and the developer phase. The developer phase is the OS and all the things that go with making the computer work and do things. The user is the person who operates the tool and performs the tasks that the developer phase created. The user doesn't need to know anything about the developer phase; it's a waste of his time and energy. Once these relationships are understood, then I think everybody will be happy. Think of it as a team: part of the team keeps the engine running and the other part does the driving.
On Sun, 2003-10-05 at 23:59, Otto Haliburton wrote:
-----Original Message----- From: fedora-devel-list-admin@redhat.com [mailto:fedora-devel-list- admin@redhat.com] On Behalf Of Nils Philippsen Sent: Sunday, October 05, 2003 4:25 PM To: fedora-devel-list@redhat.com Subject: RE: Kind request: fix your packages
On Sat, 2003-10-04 at 18:19, Sean Middleditch wrote:
On Sat, 2003-10-04 at 09:11, Nicolas Mailhot wrote:
Dear user,
As we all know you're dumb and you can never learn to use a package manager. To ease your pain we've decided not to inflict the so-called "dependency hell" on you, and to provide you with an autonomous application instead. Just click on its auto-update menu and it will download everything you need, we swear.
*bzzt* It has *nothing* to do with users being "dumb." *nothing* It
I think Nicolas forgot the irony tags.
has to do with the fact the user doesn't care, doesn't need to care, and shouldn't be forced to care. I don't have a fricken clue how a jet engine works, don't really care how it works, but that doesn't mean I'm not allowed to fly on a plane. Indeed, the usage of an airplane is
To turn around your comparison, the pilot would be the person pushing around the mouse and pushing the keys for you (and everyone would be clapping hands when the document comes out of the printer ;-). Doesn't sound right? I thought so.
completely independent of knowing how the engine works, including the pilots.
They too have their specialists ("flight mechanics") for the engines, but have at least basic knowledge of how they work (e.g. the kernel) and intimate knowledge about flight mechanics (e.g. how the system boots, what package dependencies are and how to deal with them). If you look at small planes, the pilots often are the flight mechanics.
As I see it, the flight mechanics represent developers, the pilot represents an administrator, the passengers represent normal users. At the moment, passengers are restricted to using their seats, watching TV and going to the bathroom during flight -- with much assistance and hand-holding they could possibly emergency-land a machine if the pilots are sick or whatever, but what are the odds that they succeed?
I don't say that I don't wish that users could easily do complicated tasks on a computer, aided by carefully crafted tools (auto-pilot ;-) that take care of the innards they don't want to know about.
Computers aren't any different. There is no reason a user should *have* to understand the inner workings of it. That doesn't mean they *can't* know it, only that they don't need to.
A computer is a very complex machine and I don't see that today's tools are intelligent enough to hide all that from the user and still give them all the potential the machines have. I don't see the majority of the people having their private planes/gliders/etc. in the next years either ;-).
Just some thoughts...
Nils
Nils, I think you have forgotten what computers are. Computers are a
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ I don't think so, but see below.
tool. There are two phases to the tool: the user phase and the developer phase. The developer phase is the OS and all the things that go with making the computer work and do things. The user is the person who operates the tool and performs the tasks that the developer phase created. The user doesn't need to know anything about the developer phase; it's a waste of his time and energy. Once these relationships are understood, then I think everybody will be happy. Think of it as a team: part of the team keeps the engine running and the other part does the driving.
I think we differ in how we define "driving". To me, "driving" includes running installed programs and running the installer or updater, but anything beyond that is clearly "off-road" for a mere driver. Fiddling with your engine or tuning the car with stuff from various sources is not "driving"; it requires knowledge to do properly. The "monolithic approach" I've seen in this thread (brings all the stuff it needs with it, allegedly always works) might work, but then this is basically an appliance where the OS is just the kernel - or, in the (stretched into oblivion) car analogy: you can keep the engine, but everything else will be added to the car again. Can you imagine driving a car with more than one steering wheel? I hate unnecessary redundancy, as it often brings more problems than it solves.
Nils
Le dim 05/10/2003 à 23:59, Otto Haliburton a écrit :
On Sat, 2003-10-04 at 18:19, Sean Middleditch wrote:
On Sat, 2003-10-04 at 09:11, Nicolas Mailhot wrote:
Dear user,
As we all know you're dumb and you can never learn to use a package manager. To ease your pain we've decided not to inflict the so-called "dependency hell" on you, and to provide you with an autonomous application instead. Just click on its auto-update menu and it will download everything you need, we swear.
*bzzt* It has *nothing* to do with users being "dumb." *nothing* It
I think Nicolas forgot the irony tags.
How gross, mailman seems to have eaten part of my first reply. Trying again:
If I remember correctly, the irony punctuation mark died a quick death because people (of that time) were smart enough to perceive irony without crutches. Though in our current unicode mail society it may need to be reintroduced - it looks like it would be more useful than Klingon. <irony/>
Le dim 05/10/2003 à 23:25, Nils Philippsen a écrit :
On Sat, 2003-10-04 at 18:19, Sean Middleditch wrote:
On Sat, 2003-10-04 at 09:11, Nicolas Mailhot wrote:
Dear user,
As we all know you're dumb and you can not learn to use a package manager ever. To ease your pain we've decided not to inflict you the so-called "dependency hell" and provide you an autonomous application
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
On Mon, 06 Oct 2003 10:42:57 +0200, Nicolas Mailhot wrote:
As we all know you're dumb and you can not learn to use a package manager ever. To ease your pain we've decided not to inflict you the so-called "dependency hell" and provide you an autonomous application--
fedora-devel-list mailing list fedora-devel-list@redhat.com http://www.redhat.com/mailman/listinfo/fedora-devel-list
This posting looks as if you've been bitten by a bug in mailman, where it terminates postings at lines which contain a single dot character - as if it treated such a line like the SMTP end-of-message-body delimiter.
In September I posted about this to the mailman-developers list, but there hasn't been any response. The posting included additional observations on mailman escaping "From" at the beginning of a line with a ">" character. It should not do that before redistributing messages to subscribers either: it breaks the formatting (">" is the quote prefix) as well as GPG signatures. They have a bugzilla system, but writing good bug reports takes more time, and I hate spending time on bug reports when developers are too lazy to comment briefly on a mail.
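For reference, the lone-dot behaviour described above mirrors the SMTP end-of-data convention: RFC 2821 section 4.5.2 ends the DATA section at a line containing only ".", and requires senders to "dot-stuff" body lines so a literal dot survives. A minimal sketch of the two transparency rules (illustrative only - mailman's bug would be applying receiver-side semantics where none belong):

```python
# SMTP dot transparency (RFC 2821 section 4.5.2): a sender prepends an
# extra '.' to any body line that starts with '.', and the receiver
# strips it again after DATA. A line containing only '.' terminates
# the message body, which is presumably what bit mailman here.

def dot_stuff(lines):
    """Sender side: escape leading dots before transmission."""
    return ["." + l if l.startswith(".") else l for l in lines]

def dot_unstuff(lines):
    """Receiver side: undo the escaping after DATA."""
    return [l[1:] if l.startswith(".") else l for l in lines]

body = ["Hello,", ".", "that dot should survive."]
assert dot_unstuff(dot_stuff(body)) == body
```

Applied correctly, the round trip is lossless; dropping either half of the pair is exactly the kind of truncation seen in this thread.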
Maybe someone with insight reads this and posts a followup to my message on the mailman-developers list.
- -- Michael, who doesn't reply to top posts and complete quotes anymore.
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
On Fri, 03 Oct 2003 16:39:48 -0400, Sean Middleditch wrote:
Let's see. Quoting you:
Users should just be able to grab any package online they want, and install it.
Do you see rpm-dep-hell complaints from apt/yum/up2date users? I don't. But rpmfind.net/rpmseek.com users complain regularly. This is the context of the reply to your comment.
Ah, you mean tools that don't grab dependencies on install. Those are a pain, aren't they?
You do know what rpmfind.net and rpmseek.com are, don't you?
I have a nifty idea. ^,^ Can you give me an example package where build depends are different per platform, but the resultant package is more or less the exact same (exact same feature set, exact same runtime dependencies) ?
No, because I've been referring to "different build deps and different feature set" all the time.
Package examples for that [different] scenario could be Bluefish (aspell), Sylpheed (aspell, gpgme, gnupg), Apt as well as packages that depend on NPTL, openssl and packages which set a distribution-specific package release tag themselves. And let me repeat -- might be necessary ;) -- that detection of some build platform features can be performed without examining /etc/redhat-release, but can get ugly.
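To illustrate the "probe the feature, not the release file" idea: a build can ask the running system directly what it provides, instead of inferring it from /etc/redhat-release. This is a hedged sketch - the specific probes below are illustrative examples, not what any of the packages named above actually do:

```python
# Sketch of build-platform feature detection without reading
# /etc/redhat-release. Probes are illustrative, not from a real spec.
import ctypes.util
import os

def threading_impl():
    """Ask glibc which pthread implementation is in use (e.g. NPTL).
    Returns None where this confstr is unavailable or undefined."""
    try:
        return os.confstr("CS_GNU_LIBPTHREAD_VERSION")
    except (ValueError, OSError, AttributeError):
        return None

def has_library(name):
    """True if the dynamic linker can locate the library (e.g. 'ssl')."""
    return ctypes.util.find_library(name) is not None

print(threading_impl())       # e.g. 'NPTL 2.x' on a modern glibc
print(has_library("ssl"))     # openssl present on the build host?
```

Such probes answer "is NPTL/openssl actually here?" rather than "which distribution claims to be installed?" - though, as noted above, doing this for every feature a package cares about can get ugly.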
This hypothetical discussion is starting to lose focus of the real issues versus the imaginary ones. ;-)
Wonder why? Not sure what I've been dragged into. :)
You specifically said that packages are only intended to work on the platform they are built for, and working on anything else is just dumb luck. That's no fun.
Doesn't matter. Packages are created for a known set of distributions. You cannot make sure that a package which compiles against older software will still compile against newer software. Such configurations are unsupported.
Instead of fixing the problem, you're arguing about Fedora breaking the packaging habits you've been forced to develop to hack around the problem.
Huh? Are you sure you don't confuse me with anyone else?
I don't think so - you're arguing for having Fedora avoid a perfectly legitimate change of the release version solely for the sake package dependencies, yes?
No. Point me to the message in the archives, please.
_Optional_ features have _optional_ build dependencies. You can't depend on stuff that _is simply not available_. You can't install it because it's not available anywhere, unless someone provides it in the form of packages again. Now make the same unmodified src.rpm compile on
And what is the problem with getting the packages for the build dependency?
Lack of human resources to package the latest software for old distributions? Maybe even incompatibility due to insufficient compiler versions? Maybe conflicts with other components?
- -- Michael, who doesn't reply to top posts and complete quotes anymore.
On Fri, 2003-10-03 at 18:09, Michael Schwendt wrote:
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
On Fri, 03 Oct 2003 16:39:48 -0400, Sean Middleditch wrote:
Let's see. Quoting you:
Users should just be able to grab any package online they want, and install it.
Do you see rpm-dep-hell complaints from apt/yum/up2date users? I don't. But rpmfind.net/rpmseek.com users complain regularly. This is the context of the reply to your comment.
Ah, you mean tools that don't grab dependencies on install. Those are a pain, aren't they?
You do know what rpmfind.net and rpmseek.com are, don't you?
Somewhat - big links of packages, yes? Good examples of sickeningly horrible site/UI design, if nothing else. ~,^
I have a nifty idea. ^,^ Can you give me an example package where build depends are different per platform, but the resultant package is more or less the exact same (exact same feature set, exact same runtime dependencies) ?
No, because I've been referring to "different build deps and different feature set" all the time.
I'm guessing the feature set argument is pointless, then, since neither of us agree which is more important.
Package examples for that [different] scenario could be Bluefish (aspell), Sylpheed (aspell, gpgme, gnupg), Apt as well as packages that depend on NPTL, openssl and packages which set a distribution-specific package release tag themselves. And let me repeat -- might be necessary ;) -- that detection of some build platform features can be performed without examining /etc/redhat-release, but can get ugly.
Right, which is the "fix the problem, not the hack" I've been mentioning a lot - if it *is* so ugly, then *that* needs to be fixed.
This hypothetical discussion is starting to lose focus of real issues versus imaginary ones. ;-)
Wonder why? Not sure what I've been dragged into. :)
:P
You specifically said that packages are only intended to work on the platform they are built for, and working on anything else is just dumb luck. That's no fun.
Doesn't matter. Packages are created for a known set of distributions. You cannot make sure that a package compiles with older software and that it will still compile with newer software. Such configurations are unsupported.
I've never said otherwise.
Instead of fixing the problem, you're arguing about Fedora breaking the packaging habits you've been forced to develop to hack around the problem.
Huh? Are you sure you're not confusing me with someone else?
I don't think so - you're arguing for having Fedora avoid a perfectly legitimate change of the release version solely for the sake of package dependencies, yes?
No. Point me to the message in the archives, please.
http://www.redhat.com/archives/fedora-devel-list/2003-October/msg00061.html
Hmm, looks like I did get you confused with someone else there. My apologies. ^^;
_Optional_ features have _optional_ build dependencies. You can't depend on stuff that _is simply not available_. You can't install it because it's not available anywhere unless someone provides it in the form of packages again. Now make the same unmodified src.rpm compile on
And what is the problem with getting the packages for the build dependency?
Lack of human resources to package the latest software for old distributions? Maybe even incompatibility due to insufficient compiler versions? Maybe conflicts with other components?
but the latest software shouldn't *need* repackaging for older distributions, of course.
Michael, who doesn't reply to top posts and complete quotes anymore.
-- fedora-devel-list mailing list fedora-devel-list@redhat.com http://www.redhat.com/mailman/listinfo/fedora-devel-list
On Fri, Oct 03, 2003 at 07:31:31PM -0400, Sean Middleditch wrote:
You do know what rpmfind.net and rpmseek.com are, don't you?
Somewhat - big links of packages, yes? Good examples of sickeningly horrible site/UI design, if nothing else. ~,^
I assume I have a right to defend myself :-)
Well, the design is from '97, and I spend nearly no time working on it. "If it ain't broken, don't fix it": I very seldom receive this kind of complaint (which might be perfectly justified :-), yet I see a lot of use, so I don't change it. Another key point is that Cool URIs don't change [1]. The trend of breaking all the links on a site, losing all the references and searches, and pissing off the userbase every year or so may apparently be much appreciated by a lot of web designers, but it is against my own standards.
It's like an old truck: it's not shiny, it's a bit rusty, it smells and leaks oil, but when you need it, it's there and it gets you and your stuff where you want to go. You won't use it to attract girls by driving downtown, though; it's not pretty.
Daniel
[1] http://www.w3.org/Provider/Style/URI.html
Le ven 03/10/2003 à 22:11, Michael Schwendt a écrit :
On Fri, 03 Oct 2003 14:28:42 -0400, Sean Middleditch wrote:
Reading back a little ways, I'm now rather confused what your whole paragraph was about - we have the tools/networks, so newbies don't have to worry about libfoo-1.0 vs libfoo-0.9 and so on.
Let's see. Quoting you:
Users should just be able to grab any package online they want, and install it.
Do you see rpm-dep-hell complaints from apt/yum/up2date users? I don't.
I do, but that's for the stupid non-free JVM stuff we decided to distribute in nosrc.rpm form only at JPackage.
A working gcj with the right Provides would actually mean all the Jakarta stuff could be cleanly distributed via apt/yum/whatever.
Cheers,
Michael Schwendt wrote:
On Wed, 01 Oct 2003 15:34:32 -0400, Sean Middleditch wrote:
No, you need to actually do the work of the configure script (perhaps you should actually use the app's configure script) - detect the individual bits in the system. Otherwise your package is broken.
What you describe is a maintenance nightmare. Assume an application wants aspell >= 0.50, but distribution B provides only aspell 0.30. A versioned build requirement on aspell >= 0.50 won't suffice, because the package won't build on B. But there is a configure switch to disable aspell support. So, what we can do is either examine the aspell version somehow, e.g. with
$(rpm -q --qf "%{version}" aspell)
or check a file like redhat-release.
You're asking for completely generic rpms where the user, who has upgraded a component way beyond the version which was shipped with the original distribution, does not need to supply optional rpmbuild parameters (such as --define _with_aspell=1) for a src.rpm to build _with_ support for that optional component.
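A minimal sketch of the rpm-query approach described above, assuming a build script that picks the configure switch itself. The `version_ge` helper and the `configure_flags` variable are invented for illustration; only the `rpm -q --qf` query comes from the thread, and `sort -V` assumes GNU/BSD coreutils:

```shell
#!/bin/sh
# Query the installed aspell version via rpm (the query from the
# thread).  If rpm or the aspell package is absent, the variable
# simply stays empty.
aspell_ver=$(rpm -q --qf "%{VERSION}" aspell 2>/dev/null) || aspell_ver=""

# version_ge A B: true if A >= B in version-sort order.
# (Helper name invented here; not part of any rpm tooling.)
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n 1)" = "$1" ]
}

# Enable aspell support only when a new-enough aspell is installed.
if [ -n "$aspell_ver" ] && version_ge "$aspell_ver" "0.50"; then
    configure_flags="--enable-aspell"
else
    configure_flags="--disable-aspell"
fi
```

This is exactly the kind of ad-hoc detection Michael calls ugly: it works, but the result depends on the build host rather than on anything declared in the spec file.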
Add macros near the beginning of the spec for the distros/versions _you_ support. Then include usage in the instructions for building your package, so they need only '--define RHL7.3=1', '--define RHEL3=1', or '--define MDK9=1'. That gives you the same result as using redhat-release (the package or the file), plus the ability to support more distributions at the same time.
Each of the above defines sets others, such as _with_aspell_0.3 or _with_aspell_0.5. These can then easily be overridden by the user as well, so you could more easily build for an older distro with a newer version of a library.
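A hedged sketch of what such a spec fragment might look like; all macro names here are illustrative, not Thomas's exact ones:

```spec
# Hypothetical per-distro defines near the top of the spec.  Building
# with "rpmbuild --define 'RHEL3 1' ..." switches on the feature
# macros appropriate for that target; a user can still override the
# feature macros individually.
%if 0%{?RHL73}
%define _with_aspell_03 1
%endif
%if 0%{?RHEL3}
%define _with_aspell_05 1
%endif

# The build requirement then follows the macro instead of the build
# host's /etc/redhat-release:
BuildRequires: %{?_with_aspell_05:aspell >= 0.50}%{?_with_aspell_03:aspell >= 0.30}
```

The design choice is that the target is an explicit input to the build rather than something sniffed from the host, which is what makes cross-target builds (RHL7.3 packages on a RHL9 box) possible.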
I think Mike Harris does a really good job of this for XFree86, and the kernel spec is getting there.
The spec file should not be doing RPM queries or checking files. Perhaps I want to build a RHL7.3 RPM on my RHL9 box? If /etc/redhat-release is used, I get RHL9 versions regardless. If macros are used, I can easily tell rpm to default to RHL7.3, use a different gcc, and alternate libs as desired.
-Thomas
On Thu, 02 Oct 2003 12:42:37 -0500, Thomas Dodd wrote:
Add macros near the beginning of the spec for the distros/versions _you_ support. Then include usage in the instructions for building your package, so they need only '--define RHL7.3=1', '--define RHEL3=1', or '--define MDK9=1'. That gives you the same result as using redhat-release (the package or the file), plus the ability to support more distributions at the same time.
The spec file should not be doing RPM queries or checking files. Perhaps I want to build a RHL7.3 RPM on my RHL9 box?
With the added problem of how to automate the builds? And what if a user doesn't define any variable at all? Would the build fail or default to the wrong target platform? I still think the better approach is to move conditional code as much as possible out of the spec file. Instead of building a binary rpm with --define RHEL3=1, you would set BUILD_RHEL3=1 and "rpmbuild --rebuild foo.src.rpm" as usual. The spec file would call an external script that either evaluates the BUILD_RHEL3 variable or falls back to evaluating /etc/redhat-release as a default.
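A sketch of that external script, under the assumption that it prints a target token for the spec to consume. `BUILD_RHEL3` is the variable from the paragraph above; the token names and the fallback behaviour are invented for illustration:

```shell
#!/bin/sh
# detect-target.sh -- called from the spec file.  An explicit
# BUILD_RHEL3=1 in the environment wins; otherwise fall back to
# reading /etc/redhat-release; otherwise report "unknown" so the
# build can decide whether to abort or pick a default.
detect_target() {
    if [ "${BUILD_RHEL3:-0}" = "1" ]; then
        echo "rhel3"
    elif grep -qs "Red Hat Linux release 9" /etc/redhat-release; then
        echo "rhl9"
    else
        echo "unknown"
    fi
}
```

This keeps the conditional logic out of the spec file while still answering the automation question: an unattended build simply exports the variable, and an interactive rebuild gets a sensible default from the host.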
-- Michael, who doesn't reply to top posts and complete quotes anymore.
On Wed, Oct 01, 2003 at 12:17:48PM -0400, Bill Nottingham wrote:
Axel Thimm (Axel.Thimm@physik.fu-berlin.de) said:
I'll also go with your suggestion, Rex. I'd call it the "it's written rh10, but it is pronounced Fedora Core 1" idiom ...
Now that's just patently misleading. It's *not* Red Hat Linux 10, it's Fedora Core 1. It's a shift in the development model, shifts in the goals of the release, and more. Hence, the new name, and new version.
By bumping all epochs of "Fedora Core" to ensure upgradability, and by unnecessarily maintaining multiple specfiles?
Huh? We aren't bumping all epochs of Fedora Core packages, and we don't have to to maintain upgradeability.
The alternative is to drop support for upgrading from RH <= 9 to FC, which is even uglier.
Upgrades work... there were a couple of hiccups in the test release, but by the time of the final release, I do believe epochs will only be added to indexhtml and comps.
I think you lost the context; maybe I should have put Fedora Legacy in the subject.
It is not about Fedora Core, where the affected packages are only a few, but about the Fedora Legacy project, which will host the same package in different Legacy repos and will have to ensure upgradability.
On Mon, Oct 06, 2003 at 10:53:46AM +0200, Axel Thimm wrote:
It is not about Fedora Core, where the affected packages are only a few, but about the Fedora Legacy project, which will host the same package in different Legacy repos and will have to ensure upgradability.
But still, why should the release be numbered 10 then, as this only relates to the redhat-release (or whatever) package (and maybe a few more)?
On Mon, Oct 06, 2003 at 10:58:51AM +0200, Jos Vos wrote:
On Mon, Oct 06, 2003 at 10:53:46AM +0200, Axel Thimm wrote:
It is not about Fedora Core, where the affected packages are only a few, but about the Fedora Legacy project, which will host the same package in different Legacy repos and will have to ensure upgradability.
But still, why should the release be numbered 10 then, as this only relates to the redhat-release (or whatever) package (and maybe a few more)?
See the "Fedora and Fedora Legacy package versioning schemes" thread.