Hello,
Here is a paragraph explaining why some libs may be useful in their static version:
* in the case of user-compiled programs doing numerical computations or data analysis, using static libraries may be useful. Indeed, it allows building static executables that have a better chance of running on platforms other than the box they were compiled on -- platforms with different dynamic library versions, or without the library installed at all. At the same time those applications, in general, don't need the features brought in by shared libraries (no need for nss, no security issue, no need for iconv...). Therefore it may be acceptable or even desirable to ship static libraries for numerical and data-processing libraries, to help users who need to statically link their locally compiled executables. The static libraries still need to be in separate sub-packages.
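(To make this concrete, a minimal sketch with illustrative package and file names -- the exact names depend on the library:
  yum install gsl-devel gsl-static   # the -static subpackage would carry only the .a archives
  cc -static -o mymodel mymodel.c -lgsl -lgslcblas -lm
  ldd mymodel                        # reports "not a dynamic executable"
Only the build box needs the -static subpackage; the resulting binary carries the library code with it.)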
-- Pat
On Sat, 2007-05-26 at 12:41 +0200, Patrice Dumas wrote:
Hello,
Here is a paragraph explaining why some libs may be useful in their static version:
- in the case of user-compiled programs doing numerical computations or data analysis, using static libraries may be useful. Indeed, it allows building static executables that have a better chance of running on platforms other than the box they were compiled on -- platforms with different dynamic library versions, or without the library installed at all. At the same time those applications, in general, don't need the features brought in by shared libraries (no need for nss, no security issue, no need for iconv...). Therefore it may be acceptable or even desirable to ship static libraries for numerical and data-processing libraries, to help users who need to statically link their locally compiled executables. The static libraries still need to be in separate sub-packages.
This all can be summarized as:
Some people try to achieve cross-distro packaging by linking their applications statically. In cases where the target distros are similar enough, there is a chance this will work in trivial cases (such as some subclass of numerical applications).
IMO, a) technically:
- static linkage between Fedora packages is a maintenance nightmare (version tracking etc.) and a security risk to Fedora. As such, static linkage in Fedora shall be avoided whenever possible.
- packaging static libs bloats the distro.
- if distros are similar enough, similar portability between distros can be achieved by dynamic linkage.
- static linkage does not achieve portability, except in very trivial cases.
b) politically:
- cross-distro packaging is not an objective of the Fedora project.
I don't know what you are trying to achieve with this posting, whether this was meant to be a proposal for an addition to the FPG or if you were just agitating your position for the n-th time.
Ralf
On Sun, May 27, 2007 at 08:33:24AM +0200, Ralf Corsepius wrote:
This all can be summarized as:
Some people try to achieve cross-distro packaging by linking their applications statically. In cases where the target distros are similar enough, there is a chance this will work in trivial cases (such as some subclass of numerical applications).
The other point is that shared linking is not needed for those apps.
IMO, a) technically:
- static linkage between Fedora packages is a maintenance nightmare
I only advocate shipping static libraries, not packages statically linked against those libraries. These libs are for use by locally compiled programs, not by packages shipped with Fedora. So no maintenance issue.
(version tracking etc.) and a security risk to Fedora. As such, static linkage in Fedora shall be avoided whenever possible.
Maybe I was not clear, but I didn't advocate static linkage in Fedora, only shipping some static libraries.
- packaging static libs bloats the distro.
Sure, but they are in separate -static subpackages, so they only bloat the mirrors and repos, not users' computers -- unless users want the static libs.
- if distros are similar enough, similar portability between distros can
be achieved by dynamic linkage.
My experience is that this is not true, even between CentOS and Fedora.
Static linkage doesn't solve everything, since today the limit of portability is the need for a 2.6.9 kernel, but it is already a big improvement.
- static linkage does not achieve portability, except in very trivial
cases.
This is important in my opinion for scientists doing numerical models. They may not be a large part of the Fedora userbase, but in my opinion they are users worth the attention of community contributors. I won't develop this extensively here, but in general scientists (contrary to sysadmins) tend to avoid participating in the free software community and complain afterward that everything they need is broken -- at least that's what my colleagues do ;-) and they are in general under-represented (and free-riding). Another symptom of this issue is that they write fine code but do very bad packaging in general.
b) politically:
- cross-distro packaging is not an objective of the Fedora project.
Even between different Fedora versions, or between Fedora and CentOS?
I don't know what you are trying to achieve with this posting, whether this was meant to be a proposal for an addition to the FPG or if you were just agitating your position for the n-th time.
One thing is certain: I am agitating my position once again because I promised, in a response to Toshio, to try to state it as clearly as possible in a way that could be submitted for ratification. So I would like to have it somewhere; it seems to me that it would fit in
http://fedoraproject.org/wiki/PackagingDrafts/StaticLibraryChanges
at the end of 'Packaging Static Libraries'.
-- Pat
On Sunday, 27 May 2007 at 10:15 +0200, Patrice Dumas wrote:
b) politically:
- cross-distro packaging is not an objective of the Fedora project.
Even between different Fedora versions, or between Fedora and CentOS?
They all get their own branch in Fedora. We don't build one package to fit all releases. We don't even use one source package to fit all releases. So yes cross-distro packaging is not an objective of the Fedora project, even for distributions we support directly.
On Sun, May 27, 2007 at 11:22:21AM +0200, Nicolas Mailhot wrote:
On Sunday, 27 May 2007 at 10:15 +0200, Patrice Dumas wrote:
b) politically:
- cross-distro packaging is not an objective of the Fedora project.
Even between different Fedora versions, or between Fedora and CentOS?
They all get their own branch in Fedora. We don't build one package to
That's not for packaged software, but for locally compiled executables.
fit all releases. We don't even use one source package to fit all releases. So yes cross-distro packaging is not an objective of the Fedora project, even for distributions we support directly.
It is not for Fedora packages but for locally built executables. Say you build a program on a Fedora distro and want to run it on a CentOS box. (My use case is a numerical model that uses some libraries shipped in Fedora, gsl, lapack or cernlib for example.)
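(A rough illustration of that workflow, with hypothetical file and host names, and assuming the static archives are installed on the build box:
  gfortran -static -o model model.f90 -llapack -lblas
  file model                       # should report "statically linked"
  scp model centos4-box:
  ssh centos4-box './model < input > output'   # no lapack/blas install needed there
The CentOS box only needs a kernel recent enough to run the binary, not the libraries the model was built against.)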
-- Pat
On Sun, 2007-05-27 at 10:15 +0200, Patrice Dumas wrote:
I only advocate shipping static libraries, not packages statically linked against those libraries. These libs are for use by locally compiled programs, not by packages shipped with Fedora. So no maintenance issue.
The obvious question is, if this is only for locally compiled programs, why not compile the necessary static libraries locally as well? Why should we carry that burden?
On 27/05/07, Matthias Clasen mclasen@redhat.com wrote:
On Sun, 2007-05-27 at 10:15 +0200, Patrice Dumas wrote:
I only advocate shipping static libraries, not packages statically linked against those libraries. These libs are for use by locally compiled programs, not by packages shipped with Fedora. So no maintenance issue.
The obvious question is, if this is only for locally compiled programs, why not compile the necessary static libraries locally as well? Why should we carry that burden?
+10,0000.
I regard myself as falling into the niche of scientific/numerical programming. However, I see no advantage to myself in being able to compile statically linked binaries in the name of portability. It doesn't really gain much, and actually I have seen doing such things give rather bizarre results.
Besides which, if you were to want to statically link a binary and send it to run elsewhere, Fedora isn't the platform to be doing it on. If you're looking for that sort of portability, you should be using a consistent and reliable platform for the calculations, like RHEL.
The right fix here is to educate scientific programmers as to why statically linking in libraries doesn't actually get them what they want, and that it is broken.
Jonathan.
On Sun, May 27, 2007 at 04:27:02PM +0100, Jonathan Underwood wrote:
I regard myself as falling into the niche of scientific/numerical programming. However, I see no advantage to myself in being able to compile statically linked binaries in the name of portability. It doesn't really gain much, and actually I have seen doing such things give rather bizarre results.
I have exactly the opposite experience. I have issues with g77/gfortran incompatibilities, for example. Or missing libraries on platforms where I am not the administrator. Or libraries with different sonames. In what case did you have bizarre results?
Besides which, if you were to want to statically link a binary and send it to run elsewhere, Fedora isn't the platform to be doing it on.
Why not?
If you're looking for that sort of portability, you should be using a consistent and reliable platform for the calculations, like RHEL.
What a bizarre suggestion. Fedora should be good for numerical models. If Fedora isn't good for that, RHEL won't be either.
The right fix here is to educate scientific programmers as to why statically linking in libraries doesn't actually get them what they want, and that it is broken.
I don't want to give false ideas; in many real-life cases statically linking numerical models gave a binary that produced similar results on all the platforms. I would rather educate the people who believe that static linking doesn't bring portability.
-- Pat
On 27/05/07, Patrice Dumas pertusus@free.fr wrote:
I have exactly the opposite experience. I have issues with g77/gfortran incompatibilities, for example. Or missing libraries on platforms where I am not the administrator. Or libraries with different sonames. In what case did you have bizarre results?
A program statically linked with GMP running on two different platforms giving different answers.
Besides which, if you were to want to statically link a binary and send it to run elsewhere, Fedora isn't the platform to be doing it on.
Why not?
Fedora is usually based on the latest glibc. Segfaults seem to ensue when running a statically linked binary compiled on Fedora with a system with an earlier glibc.
If you're looking for that sort of portability, you should be using a consistent and reliable platform for the calculations, like RHEL.
What a bizarre suggestion. Fedora should be good for numerical models. If Fedora isn't good for that, RHEL won't be either.
Yeah, I phrased that poorly. I use Fedora daily for numerical work. What I meant was, if you're really looking for a reliable and reproducible setup for running numerical calcs based on static linking and distributing binaries, you wouldn't want to use a distro with rapid ABI turnover.
The right fix here is to educate scientific programmers as to why statically linking in libraries doesn't actually get them what they want, and that it is broken.
I don't want to give false ideas; in many real-life cases statically linking numerical models gave a binary that produced similar results on all the platforms. I would rather educate the people who believe that static linking doesn't bring portability.
Right - that's what I meant by the last sentence - so why go to the bother of making statically linkable libraries available if it doesn't actually achieve the goal of producing portable binaries, but rather gives people false hope? That's what I'd call wasting people's time, or giving them a noose to hang themselves with.
It seems to me that the use case doesn't justify what you're asking for though. The solution you're proposing (allowing users to link statically to system wide libraries) doesn't achieve the goal (producing "run anywhere" binaries). As Matthias pointed out, if someone is hell bent on producing statically linked binaries, then they can download the source for the libraries they need and build and link statically against them.
J.
What I should add though is that, what you're proposing for statically linkable library subpackages seems entirely sensible if you leave all of the questions about the utility to end users of doing so to one side.
On Sun, May 27, 2007 at 07:59:44PM +0100, Jonathan Underwood wrote:
A program statically linked with GMP running on two different platforms giving different answers.
That's not surprising. I never made the assumption that the results of the binaries would be exactly the same.
Fedora is usually based on the latest glibc. Segfaults seem to ensue when running a statically linked binary compiled on Fedora with a system with an earlier glibc.
I never experienced that.
Yeah, I phrased that poorly. I use Fedora daily for numerical work. What I meant was, if you're really looking for a reliable and reproducible setup for running numerical calcs based on static linking
I don't want that, not at all. I just want to be able to run the numerical model on the other platform, not to get the same results. For some models I also expect the results to be the same, but not in all cases.
and distributing binaries, you wouldn't want to use a distro with rapid ABI turnover.
Certainly more important is the hardware.
Right - that's what I meant by the last sentence - so why go to the bother of making statically linkable libraries available if it doesn't actually achieve the goal of producing portable binaries, but rather gives people false hope? That's what I'd call wasting people's time, or giving them a noose to hang themselves with.
I am not saying that the results will be the same. I don't want to achieve reproducibility, I just want a binary that runs on that platform. I don't expect a chaotic model to give the same results, for example.
It seems to me that the use case doesn't justify what you're asking for though. The solution you're proposing (allowing users to link statically to system wide libraries) doesn't achieve the goal (producing "run anywhere" binaries). As Matthias pointed out, if
It does. At least for me, binaries linked on Fedora can be run on CentOS 4 with a different set of libraries/compilers.
someone is hell bent on producing statically linked binaries, then they can download the source for the libraries they need and build and link statically against them.
Of course, but it would be way better for them if we could help them.
-- Pat
On 27/05/07, Patrice Dumas pertusus@free.fr wrote:
someone is hell bent on producing statically linked binaries, then they can download the source for the libraries they need and build and link statically against them.
Of course, but it would be way better for them if we could help them.
Actually, having read through your and Axel's emails I can see why you're so keen to see static library packages available. Another use case is when you have a cluster of Fedora machines set up with a front-end node where users do their work, and a load of back-end compute nodes with a heavily stripped-down installation; you may want statically linked binaries for executing under MPI on those nodes. This is a situation I have come across and used, and it has nothing to do with the portability-to-different-platforms thing. So, yeah, I can see why you might want static libs packages, even though more often than not they're abused by users.
J.
On Sun, May 27, 2007 at 04:27:02PM +0100, Jonathan Underwood wrote:
On 27/05/07, Matthias Clasen mclasen@redhat.com wrote:
On Sun, 2007-05-27 at 10:15 +0200, Patrice Dumas wrote:
I only advocate shipping static libraries, not packages statically linked against those libraries. These libs are for use by locally compiled programs, not by packages shipped with Fedora. So no maintenance issue.
Besides which, if you were to want to statically link a binary and send it to run elsewhere, Fedora isn't the platform to be doing it on. If you're looking for that sort of portability, you should be using a consistent and reliable platform for the calculations, like RHEL.
Whatever policies Fedora follows here, RHEL will not be different, so if static libs don't exist on Fedora, they are unlikely to do so on a RHEL release of the same timeframe.
The right fix here is to educate scientific programmers as to why statically linking in libraries doesn't actually get them what they want, and that it is broken.
Actually the situation in scientific camps is not that easy. There are tons of situations where having a statically linked binary saves the day. You typically have a complete mix of very heterogeneous Linux distributions and releases thereof with semi-bogus libs under /usr/local as a bonus. At least that's what larger phys/chem institutions and educational facilities look like in de/uk/fr/ru. Institutions that have enough budget to hire a large IT staff look a bit different and are located in the US and Japan. :)
And you also have aged distros; for example, a project I was working on until recently is Fedora-loyal, but is still running FC4. But I need a gfortran from at least FC5 to build my code.
It is also quite common for scientific networks (networks as in networked institutions, not as in byte traffic ones) to share binaries for 100% reproducibility as well. Sometimes even the slightest optimization in normal operations or slight ieee754 violations in libs will change the result in the range of 10^-16 and validation will fail. Just feed some Lanczos code with it and see *every* Fedora release giving you a different result.
It's a situation where you either demand that the user adapt to your distro or your distro adapts to the users. And believe me, the users in scientific camps take the line of least resistance: if Fedora doesn't deliver they will look around for what does. I've lost many former academic RHL "customers" to Gentoo and Ubuntu (not sure how they deal with static libs, though).
On Sunday 27 May 2007 16:01:36 Axel Thimm wrote:
Whatever policies Fedora follows here, RHEL will not be different, so if static libs don't exist on Fedora, they are unlikely to do so on a RHEL release of the same timeframe.
Or more to the point, RHEL may not build the static libs at all as RHEL may not want to support having static libs on the system.
On Sun, May 27, 2007 at 07:56:50PM -0400, Jesse Keating wrote:
On Sunday 27 May 2007 16:01:36 Axel Thimm wrote:
Whatever policies Fedora follows here, RHEL will not be different, so if static libs don't exist on Fedora, they are unlikely to do so on a RHEL release of the same timeframe.
Or more to the point, RHEL may not build the static libs at all as RHEL may not want to support having static libs on the system.
You'd want to do that even if they are useful for some users?
In any case the point wasn't about choices you may make in RHEL, but that one cannot say something doesn't work on Fedora but will on RHEL. Of course you may choose the reverse (that is, something that works on Fedora doesn't work on RHEL) -- even for good reasons (I don't think removing static libs that are in Fedora would be a good thing, given that they should be carefully chosen so that only the static libs that may have a use are shipped).
-- Pat
Axel Thimm (Axel.Thimm@ATrpms.net) said:
The right fix here is to educate scientific programmers as to why statically linking in libraries doesn't actually get them what they want, and that it is broken.
Actually the situation in scientific camps is not that easy. There are tons of situations where having a statically linked binary saves the day. You typically have a complete mix of very heterogeneous Linux distributions and releases thereof with semi-bogus libs under /usr/local as a bonus. At least that's what larger phys/chem institutions and educational facilities look like in de/uk/fr/ru.
So, my reading of this is 'larger phys/chem institutions are crazy and don't understand sane systems management'. Am I reading this wrong?
If you want consistent results, run a consistent platform.
Bill
On Monday 28 May 2007 05:41:04 Bill Nottingham wrote:
So, my reading of this is 'larger phys/chem institutions are crazy and don't understand sane systems management'. Am I reading this wrong?
Don't downplay their efforts; most of their problems run for months and they are very careful about the results, after all it is their reputation that is at stake. :-)
Since the computational environment can be so heterogeneous, people follow the path of least resistance and use tricks/techniques like this.
If you want consistent results, run a consistent platform.
Come on, that is not always practical or even possible. It really helps that we have Linux now, because the different proprietary Unices before made this even worse -- a nightmare. :-)
Bill
José Matos (jamatos@fc.up.pt) said:
On Monday 28 May 2007 05:41:04 Bill Nottingham wrote:
So, my reading of this is 'larger phys/chem institutions are crazy and don't understand sane systems management'. Am I reading this wrong?
Don't downplay their efforts; most of their problems run for months and they are very careful about the results, after all it is their reputation that is at stake. :-)
Yes, but... describing a situation where results are run on whatever machine of whatever OS, with whatever random libs happen to be in /usr/local? Doesn't sound like an environment that intends to be replicable, or easily managed. What happens when a disk goes out? Or someone replaces one of those libraries?
Bill
On Mon, May 28, 2007 at 03:34:22AM -0400, Bill Nottingham wrote:
José Matos (jamatos@fc.up.pt) said:
On Monday 28 May 2007 05:41:04 Bill Nottingham wrote:
So, my reading of this is 'larger phys/chem institutions are crazy and don't understand sane systems management'. Am I reading this wrong?
Don't downplay their efforts; most of their problems run for months and they are very careful about the results, after all it is their reputation that is at stake. :-)
Yes, but... describing a situation where results are run on whatever machine of whatever OS, with whatever random libs happen to be in /usr/local? Doesn't sound like an environment that intends to be replicable, or easily managed. What happens when a disk goes out? Or someone replaces one of those libraries?
Nothing, because the code was statically linked. That's where you get deterministic, non-environment-dependent results from, no matter what the environment looks like (within sensible limits, of course): either a too-old distro (running FC6 builds on FC5), a cross-distro build (running FC6 builds on SLES clusters), or in general missing libs, old libs, old compilers, broken parts under /usr/local and so on. With a static build, you don't care and get the same results as in your local tests.
To be completely fair, the real number crunchers like clustered systems or mpp systems do have more careful setups with no self-built debris lying in /usr/local (but old or missing libs nonetheless). But the users developing the code do need to build/install bits under their /usr/local.
Therefore submitting a job sometimes means you have to statically link your code, do a final test on your system, upload it and queue it into whatever queueing system is supported. Or you apply for a sysadmin to build the required libs for you, so you don't have to statically link and you find out that your contract expires before all parts are in place. Not to speak about asking the admin a year later to upgrade the libs.
On Mon, May 28, 2007 at 12:41:04AM -0400, Bill Nottingham wrote:
So, my reading of this is 'larger phys/chem institutions are crazy and don't understand sane systems management'. Am I reading this wrong?
Different levels of funding lead to different management strategies. Besides, why not use this kind of solution when it works fine?
If you want consistent results, run a consistent platform.
Right, results may not be 100% consistent on different platforms even with statically linked executables, but given that there are different platforms it helps a lot, and even though the results are not exactly the same they may be close enough -- just as if the same program were run on 2 different computers.
-- Pat
On Mon, 2007-05-28 at 00:41 -0400, Bill Nottingham wrote:
Axel Thimm (Axel.Thimm@ATrpms.net) said:
The right fix here is to educate scientific programmers as to why statically linking in libraries doesn't actually get them what they want, and that it is broken.
Actually the situation in scientific camps is not that easy. There are tons of situations where having a statically linked binary saves the day. You typically have a complete mix of very heterogeneous Linux distributions and releases thereof with semi-bogus libs under /usr/local as a bonus. At least that's what larger phys/chem institutions and educational facilities look like in de/uk/fr/ru.
So, my reading of this is 'larger phys/chem institutions are crazy and don't understand sane systems management'. Am I reading this wrong?
Yes.
In most cases, these people are non-IT people with little to no skill in, or interest in, program development.
To them, programming (and IT in general) is an unloved, unavoidable duty they are not actually interested in, nor interested in getting deeper into.
They use Linux/Unix because "somebody told them so", they program in Fortran, Cobol, Algol or Modula, because "somebody told them so", they do something "this way" because they don't know better and don't "want to know better".
Ralf
On Mon, May 28, 2007 at 09:34:20AM +0200, Ralf Corsepius wrote:
They use Linux/Unix because "somebody told them so", they program in Fortran, Cobol, Algol or Modula, because "somebody told them so", they do something "this way" because they don't know better and don't "want to know better".
That's a bit of oversimplification. In general scientists do coding just fine but don't want to do more nor even think about it (no packaging, no thoughts on system administration...). However there are IT people working together with scientists who do system administration well.
Still the needs are specific and very different from other environments.
-- Pat
On Mon, 2007-05-28 at 09:44 +0200, Patrice Dumas wrote:
On Mon, May 28, 2007 at 09:34:20AM +0200, Ralf Corsepius wrote:
They use Linux/Unix because "somebody told them so", they program in Fortran, Cobol, Algol or Modula, because "somebody told them so", they do something "this way" because they don't know better and don't "want to know better".
That's a bit of oversimplification.
I've worked in such an environment for many years, I know what I am talking about.
An anecdote: I once met an EE-professor, who, when being asked why they were using Fortran answered: "Because our simulations are based on the Fortran punch cards I wrote during my PhD thesis 25 years ago". Consequently, his students and employees were programming Fortran.
In general scientists do coding just fine but don't want to do more nor even think about it (no packaging, no thoughts on system administration...).
Well, in 90% of all such cases, "their coding" goes into implementing complex algorithms, while their programs complexity is not much different from "hello world".
However there are IT people working together with scientists who do system administration well.
Still the needs are specific and very different from other environments.
I can not disagree more.
These guys' relation to programming / sys-administration is not much different from that of a 14-year-old kid whose IT skills are "browsing the web, running games, playing mp3s and using word processors", who had a course in "programming in C" at school, and then starts to discover the subtleties of programming afterwards.
Ralf
On Mon, May 28, 2007 at 10:15:01AM +0200, Ralf Corsepius wrote:
On Mon, 2007-05-28 at 09:44 +0200, Patrice Dumas wrote:
On Mon, May 28, 2007 at 09:34:20AM +0200, Ralf Corsepius wrote:
They use Linux/Unix because "somebody told them so", they program in Fortran, Cobol, Algol or Modula, because "somebody told them so", they do something "this way" because they don't know better and don't "want to know better".
That's a bit of oversimplification.
I've worked in such an environment for many years, I know what I am talking about.
An anecdote: I once met an EE-professor, who, when being asked why they were using Fortran answered: "Because our simulations are based on the Fortran punch cards I wrote during my PhD thesis 25 years ago". Consequently, his students and employees were programming Fortran.
In general scientists do coding just fine but don't want to do more nor even think about it (no packaging, no thoughts on system administration...).
Well, in 90% of all such cases, "their coding" goes into implementing complex algorithms, while their programs complexity is not much different from "hello world".
This sounds quite arrogant.
However there are IT people working together with scientists who do system administration well.
Still the needs are specific and very different from other environments.
I can not disagree more.
These guys' relation to programming / sys-administration is not much different from that of a 14-year-old kid whose IT skills are "browsing the web, running games, playing mp3s and using word processors", who had a course in "programming in C" at school, and then starts to discover the subtleties of programming afterwards.
And in their spare time they invented the web including the first implementation of web servers and clients.
On Mon, 2007-05-28 at 11:47 +0200, Axel Thimm wrote:
On Mon, May 28, 2007 at 10:15:01AM +0200, Ralf Corsepius wrote:
On Mon, 2007-05-28 at 09:44 +0200, Patrice Dumas wrote:
On Mon, May 28, 2007 at 09:34:20AM +0200, Ralf Corsepius wrote:
They use Linux/Unix because "somebody told them so", they program in Fortran, Cobol, Algol or Modula, because "somebody told them so", they do something "this way" because they don't know better and don't "want to know better".
That's a bit of oversimplification.
I've worked in such an environment for many years, I know what I am talking about.
An anecdote: I once met an EE-professor, who, when being asked why they were using Fortran answered: "Because our simulations are based on the Fortran punch cards I wrote during my PhD thesis 25 years ago". Consequently, his students and employees were programming Fortran.
In general scientists do coding just fine but don't want to do more nor even think about it (no packaging, no thoughts on system administration...).
Well, in 90% of all such cases, "their coding" goes into implementing complex algorithms, while their programs complexity is not much different from "hello world".
This sounds quite arrogant.
Feel free to think what you want - these number-cruncher guys' apps condense down to:
READ STDIN
CALL ALGORITHM
PRINT STDOUT
Their typical usage:
./myapp < inputdata > output
... wait <couple of days> ...
lpr output
However there are IT people working together with scientists who do system administration well.
Still the needs are specific and very different from other environments.
I can not disagree more.
These guys' relation to programming / sys-administration is not much different from that of a 14-year-old kid whose IT skills are "browsing the web, running games, playing mp3s and using word processors", who had a course in "programming in C" at school, and then starts to discover the subtleties of programming afterwards.
And in their spare time they invented the web including the first implementation of web servers and clients.
May-be some of them ... The others were busy keeping their machines hot.
Ralf
On Mon, May 28, 2007 at 01:46:03PM +0200, Ralf Corsepius wrote:
Well, in 90% of all such cases, "their coding" goes into implementing complex algorithms, while their programs complexity is not much different from "hello world".
This sounds quite arrogant.
Feel free to think what you want - These number cruncher guys apps condense down to a
READ STDIN CALL ALGORITHM PRINT STDOUT
Their typical usage:
./myapp < inputdata >output ... wait <couple of days> ... lpr output
Hm, you have never seen any mpp code. I/O is the most complex part of large scale numerics.
On Mon, 2007-05-28 at 14:01 +0200, Axel Thimm wrote:
On Mon, May 28, 2007 at 01:46:03PM +0200, Ralf Corsepius wrote:
Well, in 90% of all such cases, "their coding" goes into implementing complex algorithms, while their programs complexity is not much different from "hello world".
This sounds quite arrogant.
Feel free to think what you want - These number cruncher guys apps condense down to a
READ STDIN CALL ALGORITHM PRINT STDOUT
Their typical usage:
./myapp < inputdata >output ... wait <couple of days> ... lpr output
Hm, you have never seen any mpp code.
No I haven't, but I have seen many matlab, lapack, octave and reduce users.
I/O is the most complex part of large scale numerics.
May be in your case, but not in the cases I am familiar with (Optimization theory)
Ralf
On Mon, 28 May 2007 13:46:03 +0200 Ralf Corsepius wrote:
On Mon, 2007-05-28 at 11:47 +0200, Axel Thimm wrote:
On Mon, May 28, 2007 at 10:15:01AM +0200, Ralf Corsepius wrote:
On Mon, 2007-05-28 at 09:44 +0200, Patrice Dumas wrote:
On Mon, May 28, 2007 at 09:34:20AM +0200, Ralf Corsepius wrote:
In general scientists do coding just fine but don't want to do more nor even think about it (no packaging, no thoughts on system administration...).
Well, in 90% of all such cases, "their coding" goes into implementing complex algorithms, while their programs complexity is not much different from "hello world".
This sounds quite arrogant.
Feel free to think what you want - These number cruncher guys apps condense down to a
READ STDIN CALL ALGORITHM PRINT STDOUT
Their typical usage:
./myapp < inputdata >output ... wait <couple of days> ... lpr output
Oh Ralf, you're such a sweetheart! Bursting with optimism about all of human-kind! :-)
Yes, some scientists/engineers are dreadful coders and should never be allowed to admin any systems, not even small one-button toaster ovens. Some are brilliant coders and/or admins who develop novel algorithms/ approaches and build/manage their own clusters. Many fall somewhere between those two extremes.
Your "condense down" comments are just silly. Really... silly...
Ed
On Mon, 2007-05-28 at 09:49 -0400, Ed Hill wrote:
On Mon, 28 May 2007 13:46:03 +0200 Ralf Corsepius wrote:
On Mon, 2007-05-28 at 11:47 +0200, Axel Thimm wrote:
On Mon, May 28, 2007 at 10:15:01AM +0200, Ralf Corsepius wrote:
On Mon, 2007-05-28 at 09:44 +0200, Patrice Dumas wrote:
On Mon, May 28, 2007 at 09:34:20AM +0200, Ralf Corsepius wrote:
In general scientists do coding just fine but don't want to do more nor even think about it (no packaging, no thoughts on system administration...).
Well, in 90% of all such cases, "their coding" goes into implementing complex algorithms, while their programs complexity is not much different from "hello world".
This sounds quite arrogant.
Feel free to think what you want - These number cruncher guys apps condense down to a
READ STDIN CALL ALGORITHM PRINT STDOUT
Their typical usage:
./myapp < inputdata >output ... wait <couple of days> ... lpr output
Oh Ralf, you're such a sweetheart! Bursting with optimism about all of human-kind! :-)
Yes, some scientists/engineers are dreadful coders and should never be allowed to admin any systems, not even small one-button toaster ovens. Some are brilliant coders and/or admins who develop novel algorithms/ approaches and build/manage their own clusters. Many fall somewhere between those two extremes.
Your "condense down" comments are just silly. Really... silly...
Thank you, very much - I must have been living on a different planet than you for the last 15 years. I must have been watching a different kind of scientists, blocking networks and machines with their number crunching jobs over all these years ...
Ralf
On Mon, May 28, 2007 at 04:17:39PM +0200, Ralf Corsepius wrote:
On Mon, 2007-05-28 at 09:49 -0400, Ed Hill wrote:
On Mon, 28 May 2007 13:46:03 +0200 Ralf Corsepius wrote:
On Mon, 2007-05-28 at 11:47 +0200, Axel Thimm wrote:
On Mon, May 28, 2007 at 10:15:01AM +0200, Ralf Corsepius wrote:
On Mon, 2007-05-28 at 09:44 +0200, Patrice Dumas wrote:
On Mon, May 28, 2007 at 09:34:20AM +0200, Ralf Corsepius wrote:
In general scientists do coding just fine but don't want to do more nor even think about it (no packaging, no thoughts on system administration...).
Well, in 90% of all such cases, "their coding" goes into implementing complex algorithms, while their programs complexity is not much different from "hello world".
This sounds quite arrogant.
Feel free to think what you want - These number cruncher guys apps condense down to a
READ STDIN CALL ALGORITHM PRINT STDOUT
Their typical usage:
./myapp < inputdata >output ... wait <couple of days> ... lpr output
Oh Ralf, you're such a sweetheart! Bursting with optimism about all of human-kind! :-)
Yes, some scientists/engineers are dreadful coders and should never be allowed to admin any systems, not even small one-button toaster ovens. Some are brilliant coders and/or admins who develop novel algorithms/ approaches and build/manage their own clusters. Many fall somewhere between those two extremes.
Your "condense down" comments are just silly. Really... silly...
Thank you, very much - I must have been living on a different planet than you for the last 15 years.
From the way you describe the scientific community this seems quite possible.
I must have been watching a different kind of scientists, blocking networks and machines with their number crunching jobs over all these years ...
Well ... the networks and machines are there for doing number crunching and not mp3, games, i18n and whatnot.
On Mon, 2007-05-28 at 16:44 +0200, Axel Thimm wrote:
On Mon, May 28, 2007 at 04:17:39PM +0200, Ralf Corsepius wrote:
On Mon, 2007-05-28 at 09:49 -0400, Ed Hill wrote:
On Mon, 28 May 2007 13:46:03 +0200 Ralf Corsepius wrote:
On Mon, 2007-05-28 at 11:47 +0200, Axel Thimm wrote:
On Mon, May 28, 2007 at 10:15:01AM +0200, Ralf Corsepius wrote:
On Mon, 2007-05-28 at 09:44 +0200, Patrice Dumas wrote:
On Mon, May 28, 2007 at 09:34:20AM +0200, Ralf Corsepius wrote:
In general scientists do coding just fine but don't want to do more nor even think about it (no packaging, no thoughts on system administration...).
Well, in 90% of all such cases, "their coding" goes into implementing complex algorithms, while their programs complexity is not much different from "hello world".
This sounds quite arrogant.
Feel free to think what you want - These number cruncher guys apps condense down to a
READ STDIN CALL ALGORITHM PRINT STDOUT
Their typical usage:
./myapp < inputdata >output ... wait <couple of days> ... lpr output
Oh Ralf, you're such a sweetheart! Bursting with optimism about all of human-kind! :-)
Yes, some scientists/engineers are dreadful coders and should never be allowed to admin any systems, not even small one-button toaster ovens. Some are brilliant coders and/or admins who develop novel algorithms/ approaches and build/manage their own clusters. Many fall somewhere between those two extremes.
Your "condense down" comments are just silly. Really... silly...
Thank you, very much - I must have been living on a different planet than you for the last 15 years.
From the way you describe the scientific community this seems quite possible.
I am talking about non-IT/CS scientists: biologists, medics, mathematicians, physicists, electrical/mechanical/construction engineers, etc. at all levels, from 1st semester students to professors.
Many of them were brilliant scientists, but more or less illiterate in programming and system administration. Some of them just knew enough to be able to launch a shell, use an editor and run the scripts/makefiles and similar that others wrote for them.
Of course there were others, who actively "learned by doing" and got deeper "into the matters".
I must have been watching a different kind of scientists, blocking networks and machines with their number crunching jobs over all these years ...
Well ... the networks and machines are there for doing number crunching and not mp3, games, i18n and whatnot.
Right, but these persons' skills stemmed from "using PCs" at home and from courses at school.
A prototypical situation had been: A math student writing his Diploma thesis on "Comparison of different algorithms on application xyz using matlab".
Ralf
On Mon, May 28, 2007 at 09:34:20AM +0200, Ralf Corsepius wrote:
On Mon, 2007-05-28 at 00:41 -0400, Bill Nottingham wrote:
Axel Thimm (Axel.Thimm@ATrpms.net) said:
The right fix here is to educate scientific programmers as to why statically linking in libraries doesn't actually get them what they want, and that it is broken.
Actually the situation in scientific camps is not that easy. There are tons of situations where having a statically linked binary saves the day. You typically have a complete mix of very heterogeneous Linux distributions and releases thereof with semi-bogus libs under /usr/local as a bonus. At least that's what larger phys/chem institutions and educational facilities look like in de/uk/fr/ru.
So, my reading of this is 'larger phys/chem institutions are crazy and don't understand sane systems management'. Am I reading this wrong?
Yes.
In most cases, these people are non-IT people with little to no skill in, or interest in, program development.
Well, they are not really idiots. Some of them produce great code. But they don't care to autoconf it, or to spend more time on polishing it. If it runs with good performance (which is the key element in numerical code) and they can easily deploy it on the available hardware, then they can turn to solving actual problems from their field with this tool.
To them, programming (and IT in general) is an unloved, unavoidable duty they are not actually interested in, nor interested in getting deeper into.
IT yes, programming I disagree.
They use Linux/Unix because "somebody told them so", they program in Fortran, Cobol, Algol or Modula, because "somebody told them so", they do something "this way" because they don't know better and don't "want to know better".
Nah, that's not the case, and you will not really find phys/chem doing Cobol, Algol or Modula, that's reserved for cs/eco - Phys/chem does 90% Fortran, 10% C/C++ (with emphasis on C).
The reason is not "somebody told them so", but that Fortran has a very simple language interface which still offered language elements like complex numbers ages before C/C++ did, as well as advanced libraries to do the number crunching. Furthermore there is vast knowledge of Fortran in these camps, if a rookie starts doing his work in C++ he's rather isolated and needs to work through it by himself. Yes, I did C++.
Anyway, we're getting off-topic; the facts are that there are camps that rightfully use static libs a lot. Either Fedora cares about these camps, or they are considered a minority whose needs are not really catered for.
On Monday, 28 May 2007 at 11:45 +0200, Axel Thimm wrote:
Well, they are not really idiots. Some of them produce great code. But they don't care to autoconf it, or to spend more time on polishing it. If it runs with good performance (which is the key element in numerical code) and they can easily deploy it on the available hardware, then they can turn to solving actual problems from their field with this tool.
Which sort-of hints that shipping static libs is a short-term workaround, and energy would be better spent making tools like eclipse integrate our best-of-breed infrastructure (autoconf, rpm, yum, mock, whatever) transparently so these people don't have to spend time on plumbing to produce modern software.
That would help adoption with the huge Visual Basic & Java crowd too BTW.
On Mon, May 28, 2007 at 12:00:42PM +0200, Nicolas Mailhot wrote:
Which sort-of hints that shipping static libs is a short-term workaround, and energy would be better spent making tools like eclipse integrate our best-of-breed infrastructure (autoconf, rpm, yum, mock, whatever) transparently so these people don't have to spend time on plumbing to produce modern software.
I can't see the connection between eclipse and the issues at stake here... Shipping static libs is not necessarily a short-term workaround; dynamic libs keep changing ABI and one cannot necessarily install the libs every time on all the computers where one may want to run a program. How is it related to eclipse???
-- Pat
On Monday, 28 May 2007 at 13:16 +0200, Patrice Dumas wrote:
On Mon, May 28, 2007 at 12:00:42PM +0200, Nicolas Mailhot wrote:
Which sort-of hints that shipping static libs is a short-term workaround, and energy would be better spent making tools like eclipse integrate our best-of-breed infrastructure (autoconf, rpm, yum, mock, whatever) transparently so these people don't have to spend time on plumbing to produce modern software.
I can't see the connection between eclipse and the issues at stake here... Shipping static libs is not necessarily a short-term workaround; dynamic libs keep changing ABI and one cannot necessarily install the libs every time on all the computers where one may want to run a program. How is it related to eclipse???
Replace "one cannot" with "it's too much hassle to" and you'll see where I was going. Reduce the hassle factor of doing things right and suddenly static libs will become less attractive.
On Mon, May 28, 2007 at 05:12:55PM +0200, Nicolas Mailhot wrote:
Replace "one cannot" with "it's too much hassle to" and you'll see where I was going. Reduce the hassle factor of doing things right and suddenly static libs will become less attractive.
Ok. What will be replacing them? Automatically adding shared libs and a wrapper to use the added libs or system libs based on sonames? It could indeed help some users, especially if it is also attractive to those (like me) who write their Makefiles and shell scripts themselves (I would never ever use a GUI to develop, but I guess eclipse can do portability stuff on the command line).
In any case I doubt it can be as simple as what we have with static libs, where statically linked executables are created by adding -static to the link command line...
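(As a small sketch of what I mean, with example library names -- either link everything statically, or pick only selected static archives and keep glibc dynamic:
  cc -o prog prog.c -static -lgsl -lgslcblas -lm
  cc -o prog prog.c -Wl,-Bstatic -lgsl -lgslcblas -Wl,-Bdynamic -lm
Either way it stays a one-line change to the link command.)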
-- Pat
On Monday, 28 May 2007 at 17:24 +0200, Patrice Dumas wrote:
On Mon, May 28, 2007 at 05:12:55PM +0200, Nicolas Mailhot wrote:
Replace "one cannot" with "it's too much hassle to" and you'll see where I was going. Reduce the hassle factor of doing things right and suddenly static libs will become less attractive.
Ok. What will be replacing them?
Help create properly autotooled rpms transparently for people who don't care about infrastructure stuff. You already have cluster managers that use rpm as a payload. That takes care of the deployment, of the interfering stuff in /usr/local, etc.
In any case I doubt it may be as simple as what we have with static libs, with statically linked executables created by adding -static to the link command line...
You focus too much on the current technical solution and not enough on user needs. The problem is not to replicate the same old & broken solution ad vitam eternam but to make the correct technical solution attractive enough for users to switch.
I won't share nuggets of ass-backwards common wisdom here, that would strike too close to my employer's systems, but sometimes you need to re-assess why a particular solution was chosen at the time and whether you cannot achieve the original goals better now with stuff that was not available a decade ago.
On Mon, May 28, 2007 at 06:10:36PM +0200, Nicolas Mailhot wrote:
Help create properly autotooled rpms transparently for people who don't care about infrastructure stuff. You already have cluster managers that use rpm as a payload. That takes care of the deployment, of the interfering stuff in /usr/local, etc.
Ok, I am not really knowledgeable about these matters. Is there something viewable, usable?
You focus too much on the current technical solution and not enough on user needs. The problem is not to replicate the same old & broken solution ad vitam eternam but to make the correct technical solution attractive enough for users to switch.
I am very open to new solutions, but the use of static libs is not necessarily broken in all cases.
I won't share nuggets of ass-backwards common wisdom here,
I don't understand that sentence; what would it be in French?
that would strike too close to my employer's systems, but sometimes you need to re-assess why a particular solution was chosen at the time and whether you cannot achieve the original goals better now with stuff that was not available a decade ago.
Once again I am open to new stuff, but I haven't seen anything that would be as simple and effective as building statically (in the case of specific scientific apps I am referring to, of course). And the long thread on fedora-devel only reinforces that view, since people complained about statically compiling but only proposed very complicated alternatives. If those alternatives can be automated and give the same advantages, it could become usable, but currently this is not the case (or nobody pointed to the right solution).
-- Pat
On Monday, 28 May 2007 at 20:04 +0200, Patrice Dumas wrote:
On Mon, May 28, 2007 at 06:10:36PM +0200, Nicolas Mailhot wrote:
Help create properly autotooled rpms transparently for people who don't care about infrastructure stuff. You already have cluster managers that use rpm as a payload. That takes care of the deployment, of the interfering stuff in /usr/local, etc.
Ok, I am not really knowledgeable about these matters. Is there something viewable, usable?
I'm sorry, I haven't kept the bookmark, and I can't find it with 30s of googling
You focus too much on the current technical solution and not enough on user needs. The problem is not to replicate the same old & broken solution ad vitam eternam but to make the correct technical solution attractive enough for users to switch.
I am very open to new solutions, but the use of static libs is not necessarily broken in all cases.
I won't share nuggets of ass-backwards common wisdom here,
I don't understand that sentence; what would it be in French?
A very rough translation that doesn't sound as good: "nuggets of absurdities taken as self-evident truths".
Once again I am open to new stuff, but I haven't seen anything that would be as simple and effective as building statically (in the case of specific scientific apps I am referring to, of course).
You have two missing bits:
1. a nice create-autotooled-rpm-for-dummies environment
2. a deployment framework
You have only to look at koji to realise the technical basis for 2. is already there, and IIRC there are products on the market that do the package as payload thing.
1. is harder and is in its infancy today. That's why package systems with broken dependency engines sell and rpm/deb don't.
But if you don't get a package feed up people do manual deployments, slowly rot the cluster and make OS upgrades impossible. static is just a way to partially hide the rot.
On Mon, May 28, 2007 at 08:53:58PM +0200, Nicolas Mailhot wrote:
A very rough translation that doesn't sound as good: "nuggets of absurdities taken as self-evident truths".
It's pretty in any case ;-). Not necessarily in substance, but at least in form ;-)
Once again I am open to new stuff, but I haven't seen anything that would be as simple and effective as building statically (in the case of specific scientific apps I am referring to, of course).
You have two missing bits:
- a nice create-autotooled-rpm-for-dummies environment
- a deployment framework
You have only to look at koji to realise the technical basis for 2. is
It is not what koji looks like from the perspective of a Fedora contributor, but maybe once there is more documentation it will appear more clearly...
already there, and IIRC there are products on the market that do the package as payload thing.
1. is harder and is in its infancy today. That's why package systems
with broken dependency engines sell and rpm/deb don't.
It is certainly something different from rpm/deb since it should be doable as a user.
But if you don't get a package feed up people do manual deployments, slowly rot the cluster and make OS upgrades impossible. static is just a way to partially hide the rot.
That is not the way I see the usefulness of statically linked apps. They are for immediate consumption in my use case.
-- Pat
On Monday, 28 May 2007 at 21:39 +0200, Patrice Dumas wrote:
But if you don't get a package feed up people do manual deployments, slowly rot the cluster and make OS upgrades impossible. static is just a way to partially hide the rot.
That is not the way I see the usefulness of statically linked apps. They are for immediate consumption in my use case.
This is not the only use case. Sometimes the funding used to write numeric models is dependent on the result being runnable on the clusters of the BU that earns the money.
On Mon, 2007-05-28 at 11:45 +0200, Axel Thimm wrote:
On Mon, May 28, 2007 at 09:34:20AM +0200, Ralf Corsepius wrote:
On Mon, 2007-05-28 at 00:41 -0400, Bill Nottingham wrote:
Axel Thimm (Axel.Thimm@ATrpms.net) said:
The right fix here is to educate scientific programmers as to why statically linking in libraries doesn't actually get them what they want, and that it is broken.
Actually the situation in scientific camps is not that easy. There are tons of situations where having a statically linked binary saves the day. You typically have a complete mix of very heterogeneous Linux distributions and releases thereof with semi-bogus libs under /usr/local as a bonus. At least that's what larger phys/chem institutions and educational facilities look like in de/uk/fr/ru.
So, my reading of this is 'larger phys/chem institutions are crazy and don't understand sane systems management'. Am I reading this wrong?
Yes.
In most cases, these people are non-IT people with little to no skill in, or interest in, program development.
Well, they are not really idiots.
Nobody said that.
Most of these guys are just beginners to everything but "algorithms", which to them means "math".
They never heard nor thought about "sockets", "ipc", i18n, threading, ABIs/APIs, system-integration, ... etc.
Anyway, we're getting off-topic; the facts are that there are camps that rightfully use static libs a lot. Either Fedora cares about these camps, or they are considered a minority whose needs are not really catered for.
IMO, this is not a matter of "minorities" nor of demands. I feel this to be a matter of "limitation of knowledge" trying to push their mistakes into the distro.
Ralf
On Mon, May 28, 2007 at 01:54:33PM +0200, Ralf Corsepius wrote:
Most of these guys are just beginners to everything but "algorithms", which to them means "math".
They never heard nor thought about "sockets", "ipc", i18n, threading, ABIs/APIs, system-integration, ... etc.
Ehm, I'm not sure what you are describing, ipc is a very big deal especially cross-processor-wise. Threading? mpi/openmp was developed out of these group's needs.
Anyway, we're getting off-topic; the facts are that there are camps that rightfully use static libs a lot. Either Fedora cares about these camps, or they are considered a minority whose needs are not really catered for.
IMO, this is not a matter of "minorities" nor of demands. I feel this to be a matter of "limitation of knowledge" trying to push their mistakes into the distro.
I think the lack of knowledge is perhaps on the other side of the fence.
On Mon, 2007-05-28 at 14:10 +0200, Axel Thimm wrote:
On Mon, May 28, 2007 at 01:54:33PM +0200, Ralf Corsepius wrote:
Most of these guys are just beginners to everything but "algorithms", which to them means "math".
They never heard nor thought about "sockets", "ipc", i18n, threading, ABIs/APIs, system-integration, ... etc.
Ehm, I'm not sure what you are describing, ipc is a very big deal especially cross-processor-wise. Threading? mpi/openmp was developed out of these group's needs.
Anyway, we're getting off-topic; the facts are that there are camps that rightfully use static libs a lot. Either Fedora cares about these camps, or they are considered a minority whose needs are not really catered for.
IMO, this is not a matter of "minorities" nor of demands. I feel this to be a matter of "limitation of knowledge" trying to push their mistakes into the distro.
I think the lack of knowledge is perhaps on the other side of the fence.
Now that's arrogant - PLONK.
Ralf
On Mon, May 28, 2007 at 12:41:04AM -0400, Bill Nottingham wrote:
Axel Thimm (Axel.Thimm@ATrpms.net) said:
The right fix here is to educate scientific programmers as to why statically linking in libraries doesn't actually get them what they want, and that it is broken.
Actually the situation in scientific camps is not that easy. There are tons of situations where having a statically linked binary saves the day. You typically have a complete mix of very heterogeneous Linux distributions and releases thereof with semi-bogus libs under /usr/local as a bonus. At least that's what larger phys/chem institutions and educational facilities look like in de/uk/fr/ru.
So, my reading of this is 'larger phys/chem institutions are crazy and don't understand sane systems management'. Am I reading this wrong?
Yes, it's very wrong. Read it as "real life". If you have enough cash to pay a large IT staff to do your system deployment *and* to evaluate the system requirements of the given numerical problem, then you can talk about "systems management".
Even the very big players like Fermi or DESY, which do have large IT staffs to deploy many-year scenarios, can only go as far as providing a RHEL clone with some additional IT infrastructural elements like openafs, but not really providing all the necessary numerical and scientific libraries (a lot of them are not open source).
And I only mentioned that the Linux part is homogeneous. Ever wondered why the majority of Unix admins with skills in managing heterogeneous Unix systems have a physicist's background? It is far more important to have good mips/$ and some scientists on salary than to spend the whole budget on the IT staff's system management.
If you want consistent results, run a consistent platform.
So you rule out Fedora? Consistency means even more than a stable API/ABI; RHEL comes close to that, but switching to RHEL just because a distro does not want to offer static libs is not reason enough, especially in light of the development of key components like gfortran, which is reflected in RHEL only a couple of years after it makes it into the non-enterprise platforms.
The loss of static libs and similar issues is moving this share of users to Ubuntu/Gentoo these days. That's not speculation; it's what I see.
Axel Thimm (Axel.Thimm@ATrpms.net) said:
And I only mentioned that the Linux part is homogeneous. Ever wondered why the majority of Unix admins with skills in managing heterogeneous Unix systems have a physicist's background? It is far more important to have good mips/$ and some scientists on salary than to spend the whole budget on the IT staff's system management.
If you are spending the whole budget on IT staff to do system management, you're doing it wrong; there's no reason that systems management should be at the level you're talking about. There are places that run hundreds to thousands of machines with a single administrator. Honestly? It sounds like a vicious cycle of "we don't think we have the time to set up a consistent platform, so we don't, so we have to spend too much time managing it, so we don't have the time to set up a new platform..."
If you want consistent results, run a consistent platform.
So you rule out Fedora? Consistency means even more than a stable API/ABI; RHEL comes close to that, but switching to RHEL just because a distro does not want to offer static libs is not reason enough, especially in light of the development of key components like gfortran, which is reflected in RHEL only a couple of years after it makes it into the non-enterprise platforms.
RHEL doesn't even *ship* this scientific stuff, for a large part.
All I'm saying is that we shouldn't continue to support this sort of fundamentally-unsupportable setup ad nauseam - it's time to think about how to solve this in a sane manner, rather than continuing to paper over the problem. I don't see how, at a minimum, moving the static libraries to -static packages changes things - if, as you say, everyone just chucks libraries manually in /usr/local, then how is this making anything worse for them?
Bill
On Mon, May 28, 2007 at 05:22:48PM -0400, Bill Nottingham wrote:
Axel Thimm (Axel.Thimm@ATrpms.net) said:
And I only mentioned that the Linux part is homogeneous. Ever wondered why the majority of Unix admins with skills in managing heterogeneous Unix systems have a physicist's background? It is far more important to have good mips/$ and some scientists on salary than to spend the whole budget on the IT staff's system management.
If you are spending the whole budget on IT staff to do system management, you're doing it wrong; there's no reason that systems management should be at the level you're talking about. There are places that run hundreds to thousands of machines with a single administrator. Honestly? It sounds like a vicious cycle of "we don't think we have the time to set up a consistent platform, so we don't, so we have to spend too much time managing it, so we don't have the time to set up a new platform..."
And how did that single admin get 1000 systems that are obviously similar enough to be managed by a single person? Maybe because the budget allowed buying a rather homogeneous pile of hardware?
You have large hardware histories in large institutions, partly due to vendor availability at a given time. It's not a school, where you can throw out a couple hundred $500 PCs and replace them with the same every 2-3 years, so you can keep a sane hardware base.
If you want consistent results, run a consistent platform.
So you rule out Fedora? Consistency means even more than a stable API/ABI; RHEL comes close to that, but switching to RHEL just because a distro does not want to offer static libs is not reason enough, especially in light of the development of key components like gfortran, which is reflected in RHEL only a couple of years after it makes it into the non-enterprise platforms.
RHEL doesn't even *ship* this scientific stuff, for a large part.
It's not about shipping scientific applications, but about the base OS allowing you to build/use them on your own.
All I'm saying is that we shouldn't continue to support this sort of fundamentally-unsupportable setup ad nauseam - it's time to think about how to solve this in a sane manner, rather than continuing to paper over the problem. I don't see how, at a minimum, moving the static libraries to -static packages changes things - if, as you say, everyone just chucks libraries manually in /usr/local, then how is this making anything worse for them?
No problem at all with moving static libs away into their own subpackage! But the thread went on to claim that static libs are not useful in general, and some people, including myself, just showed typical use cases where it makes very much sense to have static libs around.
On Mon, 2007-05-28 at 23:32 +0200, Axel Thimm wrote:
And how did that single admin get 1000 systems that are obviously similar enough to be managed by a single person? Maybe because the budget allowed buying a rather homogeneous pile of hardware?
I call crap on that. I was a single systems person for a bunch of systems for a long time. You keep your sanity by enforcing policy and you can do that by using the tools available to you to simplify the tasks. You don't need homogeneous hw, you need LARGELY homogeneous hw. So you limit the processor archs and you try to focus things down as much as possible.
It took me 2 years to get rid of all the sun crap in the physics department but I did. I still had wacky AXP crap after that but then another few years and that was gone, too.
My point is it didn't have much to do with the distro; it had more to do with me finding and putting working policy in place where it needed to happen. And THAT is how a sysadmin survives and thrives: by learning where they can engineer around a solution and where they need to use policy to help themselves.
In this case Bill is right; this is where we realize that we can use engineering to help get rid of years of bad policy.
-sv
On Mon, May 28, 2007 at 10:14:21PM -0400, seth vidal wrote:
On Mon, 2007-05-28 at 23:32 +0200, Axel Thimm wrote:
And how did that single admin get 1000 systems that are obviously similar enough to be managed by a single person? Maybe because the budget allowed buying a rather homogeneous pile of hardware?
I call crap on that. I was a single systems person for a bunch of systems for a long time. You keep your sanity by enforcing policy and you can do that by using the tools available to you to simplify the tasks. You don't need homogeneous hw, you need LARGELY homogeneous hw.
Well, I used the phrase RATHER homogeneous hw, so we're not that far apart.
So you limit the processor archs and you try to focus things down as much as possible.
It took me 2 years to get rid of all the sun crap in the physics department but I did.
Which implies that you had enough resources (or cheap enough hardware) to replace these machines with a two-year turnaround. That's what I call a high budget. Any MPP system I have used had a turnaround of at least 5-7 years; they do have 8-digit $/€ costs, after all.
I still had wacky AXP crap after that but then another few years and that was gone, too.
My point is it didn't have much to do with the distro,
No, the discussion is not about the distro at all: it is about acknowledging that phys/chem institutions and academia do not have the luxury of a flat same-hw-same-OS infrastructure, for various reasons, be that the project's constraints (at DESY we've been developing our own set of VLIW processors), the separation of development and number-crunching systems/clusters, exchange across institutional borders in scientific networks, different distro ages, and whatever else has been said in this thread.
The US/Japan have better budgeting for IT infrastructure; that's an ongoing joke among European/Russian physicists. So Duke may have given you a different impression.
it had more to do with me finding and putting working policy in place where it needed to happen. And THAT is how a sysadmin survives and thrives: by learning where they can engineer around a solution and where they need to use policy to help themselves.
In this case Bill is right; this is where we realize that we can use engineering to help get rid of years of bad policy.
The reality is that you will not change these structures and that these users will pick whatever distro suits them. If it becomes harder and harder to run their codes on Fedora and RHEL then they will check for alternatives, and not wait for the next-generation solution to show up.
On Mon, May 28, 2007 at 10:14:21PM -0400, seth vidal wrote:
I call crap on that. I was a single systems person for a bunch of systems for a long time. You keep your sanity by enforcing policy and you can do that by using the tools available to you to simplify the tasks. You don't need homogeneous hw, you need LARGELY homogeneous hw. So you limit the processor archs and you try to focus things down as much as possible.
It isn't necessarily enough. Do you consider a mix of centos4 and fedora to be homogeneous? This is what I use and, mainly due to gfortran/g77 differences, I have to statically build on fedora to run on centos4.
-- Pat
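As a rough sketch of the workflow being described here (the program name, source file and libraries are only illustrative, and it assumes the matching .a archives, e.g. from -static subpackages, are installed on the Fedora box):

  # Build a fully static binary on the Fedora box, so it does not depend on
  # the g77-era runtime libraries present on the CentOS 4 cluster.
  gfortran -O2 -static -o model model.f90 -llapack -lblas

  # Check the result before copying it over.
  file model    # should report "statically linked"
  ldd model     # should report "not a dynamic executable"

The same binary can then be copied to the CentOS 4 nodes and run there without any gfortran runtime installed.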
On Wed, 2007-05-30 at 09:09 +0200, Patrice Dumas wrote:
On Mon, May 28, 2007 at 10:14:21PM -0400, seth vidal wrote:
I call crap on that. I was a single systems person for a bunch of systems for a long time. You keep your sanity by enforcing policy and you can do that by using the tools available to you to simplify the tasks. You don't need homogeneous hw, you need LARGELY homogeneous hw. So you limit the processor archs and you try to focus things down as much as possible.
It isn't necessarily enough. Do you consider a mix of centos4 and fedora to be homogeneous?
No ...
This is what I use and, mainly due to gfortran/g77 differences, I have to statically build on fedora to run on centos4.
... and this is the proof.
Ralf
On Wed, May 30, 2007 at 09:09, Patrice Dumas wrote:
On Mon, May 28, 2007 at 10:14:21PM -0400, seth vidal wrote:
I call crap on that. I was a single systems person for a bunch of systems for a long time. You keep your sanity by enforcing policy and you can do that by using the tools available to you to simplify the tasks. You don't need homogeneous hw, you need LARGELY homogeneous hw. So you limit the processor archs and you try to focus things down as much as possible.
It isn't necessarily enough. Do you consider a mix of centos4 and fedora to be homogeneous?
Once you have a largely homogeneous hw base you can deploy a homogeneous sw platform. That's the main point of having homogeneous hw. If one could sanely deploy the same sw platform on heterogeneous hw, having homogeneous hw would not matter as much.
On Wed, May 30, 2007 at 10:03:55AM +0200, Nicolas Mailhot wrote:
On Wed, May 30, 2007 at 09:09, Patrice Dumas wrote:
On Mon, May 28, 2007 at 10:14:21PM -0400, seth vidal wrote:
I call crap on that. I was a single systems person for a bunch of systems for a long time. You keep your sanity by enforcing policy and you can do that by using the tools available to you to simplify the tasks. You don't need homogeneous hw, you need LARGELY homogeneous hw. So you limit the processor archs and you try to focus things down as much as possible.
It isn't necessarily enough. Do you consider a mix of centos4 and fedora to be homogeneous?
Once you have a largely homogeneous hw base you can deploy a homogeneous sw platform.
But the systems may serve different purposes, like number crunchers (stable and secure, long-lived and low-maintenance components, aka RHEL/CentOS/SL) vs desktops (bigger choice, fresher content, aka Fedora). I strongly assume this is Patrice's need for separation.
That's the main point of having homogeneous hw. If one could sanely deploy the same sw platform on heterogeneous hw, having homogeneous hw would not matter as much.
On Thu, May 31, 2007 at 00:02, Axel Thimm wrote:
On Wed, May 30, 2007 at 10:03:55AM +0200, Nicolas Mailhot wrote:
On Wed, May 30, 2007 at 09:09, Patrice Dumas wrote:
On Mon, May 28, 2007 at 10:14:21PM -0400, seth vidal wrote:
I call crap on that. I was a single systems person for a bunch of systems for a long time. You keep your sanity by enforcing policy and you can do that by using the tools available to you to simplify the tasks. You don't need homogeneous hw, you need LARGELY homogeneous hw. So you limit the processor archs and you try to focus things down as much as possible.
It isn't necessarily enough. Do you consider a mix of centos4 and fedora to be homogeneous?
Once you have a largely homogeneous hw base you can deploy a homogeneous sw platform.
But the systems may serve different purposes, like number crunchers (stable and secure, long-lived and low-maintenance components, aka RHEL/CentOS/SL) vs desktops (bigger choice, fresher content, aka Fedora). I strongly assume this is Patrice's need for separation.
Well either they have different purposes and you don't have the problem of deploying the same fortran modules on them, or they don't and the whole argument fails.
On Thu, May 31, 2007 at 09:51:31AM +0200, Nicolas Mailhot wrote:
On Thu, May 31, 2007 at 00:02, Axel Thimm wrote:
On Wed, May 30, 2007 at 10:03:55AM +0200, Nicolas Mailhot wrote:
On Wed, May 30, 2007 at 09:09, Patrice Dumas wrote:
On Mon, May 28, 2007 at 10:14:21PM -0400, seth vidal wrote:
I call crap on that. I was a single systems person for a bunch of systems for a long time. You keep your sanity by enforcing policy and you can do that by using the tools available to you to simplify the tasks. You don't need homogeneous hw, you need LARGELY homogeneous hw. So you limit the processor archs and you try to focus things down as much as possible.
It isn't necessarily enough. Do you consider a mix of centos4 and fedora to be homogeneous?
Once you have a largely homogeneous hw base you can deploy a homogeneous sw platform.
But the systems may serve different purposes, like number crunchers (stable and secure, long-lived and low-maintenance components, aka RHEL/CentOS/SL) vs desktops (bigger choice, fresher content, aka Fedora). I strongly assume this is Patrice's need for separation.
Well either they have different purposes and you don't have the problem of deploying the same fortran modules on them,
Why not? The scientists develop on their desktop PC, test whether the code actually gives numbers that make sense for their model, and once the bits and bolts are in place run the large jobs on the real rack-mounted X-less number crunchers.
That's the most typical work model of theoretical physicists. Sometimes the number crunching is cross-institution or even cross-country. European science networks sharing common access to large MPP resources are more and more common these days.
or they don't and the whole argument fails.
It's not a theoretical argument, it's real life.
I've been on both sides of the equation, as a physicist doing numerics and as an admin catering for others doing so (with even an overlap of duties). So please believe me that I'm talking about real situations, not some academic example.
The question is just whether we care or not. From a market-share perspective we should, because if these institutions move further away from Fedora/RHEL, the students who go through them end up as Linux consultants and want to deploy what they are familiar with. The predominance of Debian in German universities leads to more German Linux consultants doing Debian/Ubuntu later on.
On Thu, May 31, 2007 at 10:10, Axel Thimm wrote:
On Thu, May 31, 2007 at 09:51:31AM +0200, Nicolas Mailhot wrote:
Well either they have different purposes and you don't have the problem of deploying the same fortran modules on them,
Why not? The scientists develop on their desktop PC, test whether the code actually gives numbers that make sense for their model, and once the bits and bolts are in place run the large jobs on the real rack-mounted X-less number crunchers.
At that point they can and should take the time to rebuild it for the large number cruncher. Just dumping the developer/scientist local builds on the expensive production systems is a very bad idea (and that's not a problem limited to numeric stuff).
On Thu, May 31, 2007 at 10:32:34AM +0200, Nicolas Mailhot wrote:
On Thu, May 31, 2007 at 10:10, Axel Thimm wrote:
On Thu, May 31, 2007 at 09:51:31AM +0200, Nicolas Mailhot wrote:
Well either they have different purposes and you don't have the problem of deploying the same fortran modules on them,
Why not? The scientists develop on their desktop PC, test whether the code actually gives numbers that make sense for their model, and once the bits and bolts are in place run the large jobs on the real rack-mounted X-less number crunchers.
At that point they can and should take the time to rebuild it for the large number cruncher. Just dumping the developer/scientist local builds on the expensive production systems is a very bad idea (and that's not a problem limited to numeric stuff).
Well, climb up the thread and see that the centos-running cluster at Patrice's site does not carry the proper Fortran. Even further up the thread you'll find references to the fact that you don't even have all libraries on the target system, or not the ones you really need.
FWIW, even without needing any other special Fortran features, building with FC6's Fortran and running on RHEL4 or FC4 was improving some code's performance by around 30%. So instead of publishing your paper in 3 months you can save a month of numerics, produce more papers, get a higher budget (budgets depend on the amount and quality of publications almost everywhere), and use the budget to hire more physicists. That's an unbeatable argument.
(Well, it's simplified, as the above could imply that all numerical physicists do is fire up number crunchers, although some bad-mouthed people will say they always knew that ;)
On 30/05/07, Patrice Dumas pertusus@free.fr wrote:
On Mon, May 28, 2007 at 10:14:21PM -0400, seth vidal wrote:
I call crap on that. I was a single systems person for a bunch of systems for a long time. You keep your sanity by enforcing policy and you can do that by using the tools available to you to simplify the tasks. You don't need homogeneous hw, you need LARGELY homogeneous hw. So you limit the processor archs and you try to focus things down as much as possible.
It isn't necessarily enough. Do you consider a mix of centos4 and fedora to be homogeneous? This is what I use and, mainly due to gfortran/g77 differences, I have to statically build on fedora to run on centos4.
Sounds like an argument for packaging a later, parallel installable gcc for EPEL to me.
On Wed, May 30, 2007 at 10:33:17AM +0100, Jonathan Underwood wrote:
Sounds like an argument for packaging a later, parallel installable gcc for EPEL to me.
It is more or less done (although I am not sure the version isn't lagging, in which case it may not be so useful), but then you also need to have the libraries compiled for both compilers; it becomes unmanageable and I don't think people at Red Hat would accept doing that. I can repackage it in private repos (I already do that sort of thing) but all in all it is much more complicated.
(As a side note, I have submitted a review for the cernlib compiled with g77 now that the main cernlib is compiled with gfortran, but I don't think I will maintain a gfortran-compiled cernlib on centos4.)
-- Pat
Axel Thimm (Axel.Thimm@ATrpms.net) said:
All I'm saying is that we shouldn't continue to support this sort of fundamentally-unsupportable setup ad nauseam - it's time to think about how to solve this in a sane manner, rather than continuing to paper over the problem. I don't see how, at a minimum, moving the static libraries to -static packages changes things - if, as you say, everyone just chucks libraries manually in /usr/local, then how is this making anything worse for them?
No problem at all with moving static libs away into their own subpackage! But the thread went on to claim that static libs are not useful in general, and some people, including myself, just showed typical use cases where it makes very much sense to have static libs around.
They aren't useful *in general*. It's supporting an outmoded, inefficient mode of use (shuffling libraries and binaries around between machines and OSes), and it's no different than various other outmoded, inefficient, past UNIX-isms. We don't support every app parsing the password file (or more) - we support authenticating via PAM. We don't support making cdrecord setuid - we support fixing the kernel to DTRT. We don't encourage logging in as root to do all tasks - we support consolehelper, and moving to things like consolekit and separated helpers from their UI frontends. We don't support creating specific groups to own devices - we support pam_console and then ACLs added via ConsoleKit.
We don't support every single usage case that people want in Fedora - it's about trying to solve the problems in the right ways that scale going forward.
Bill
On Mon, May 28, 2007 at 11:18:43PM -0400, Bill Nottingham wrote:
They aren't useful *in general*. It's supporting an outmoded, inefficient mode of use (shuffling libraries and binaries around between machines and OSes), and it's no different than various other outmoded, inefficient, past UNIX-isms.
It is efficient, but not general. What we are asking is to leave the user the possibility of doing this when it makes sense.
We don't support every app parsing the password file (or more) - we support authenticating via PAM. We don't support making
But you still haven't replaced /etc/passwd with something that couldn't be parsed by the user.
cdrecord setuid - we support fixing the kernel to DTRT. We don't
But people can still set the setuid bit.
encourage logging in as root to do all tasks - we support consolehelper,
Still it is possible to log in as root if one wants.
and moving to things like consolekit and separated helpers from their UI frontends. We don't support creating specific groups to own devices - we support pam_console and then ACLs added via ConsoleKit.
Once again, a user can use groups to own devices by changing the configuration (at least I hope so...). Regarding the use of ConsoleKit, it is too new for me to have an opinion.
The fact that it isn't supported doesn't mean that it should be prevented. At least I hope that's not what you do with RHEL customers (and I guess that you cannot legally). Of course the support could be void in those cases.
We don't support every single usage case that people want in Fedora - it's about trying to solve the problems in the right ways that scale going forward.
So what is 'the right way that scales going forward' for this issue? Once again it is not about linking statically within Fedora, but about leaving this possibility to the user, especially in cases where it could be useful -- you don't have to support the user doing this. I hope that shipping something in RHEL doesn't mean that you support every use of that piece of code.
-- Pat
On Mon, May 28, 2007 at 11:18:43PM -0400, Bill Nottingham wrote:
Axel Thimm (Axel.Thimm@ATrpms.net) said:
All I'm saying is that we shouldn't continue to support this sort of fundamentally-unsupportable setup ad nauseam - it's time to think about how to solve this in a sane manner, rather than continuing to paper over the problem. I don't see how, at a minimum, moving the static libraries to -static packages changes things - if, as you say, everyone just chucks libraries manually in /usr/local, then how is this making anything worse for them?
No problem at all with moving static libs away into their own subpackage! But the thread went on to claim that static libs are not useful in general, and some people, including myself, just showed typical use cases where it makes very much sense to have static libs around.
They aren't useful *in general*.
When I wrote that the claim that they are not useful in general is false, I didn't mean "they are always useful"; what I meant is that "there are many cases where statically linking makes very much sense".
It's supporting an outmoded, inefficient mode of use (shuffling libraries and binaries around between machines and OSes), and it's no different than various other outmoded, inefficient, past UNIX-isms. We don't support every app parsing the password file (or more) - we support authenticating via PAM. We don't support making cdrecord setuid - we support fixing the kernel to DTRT. We don't encourage logging in as root to do all tasks - we support consolehelper, and moving to things like consolekit and separated helpers from their UI frontends. We don't support creating specific groups to own devices - we support pam_console and then ACLs added via ConsoleKit.
IMHO you're comparing apples and oranges. Statically linking has nothing to do with being modern or outmoded; we're not in the fashion business ;)
Statically linking means closely (and efficiently!) bundling all the bits that are needed to run together at a given time. No worries that an update of the gsl or lapack will influence the numerical precision due to ieee754 shortcuts, no worries if the other machine has a different set of runtime libs (like missing some). That has nothing to do with modernism.
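To make the point concrete, a minimal sketch (the library choice and file names are arbitrary examples; it assumes the corresponding static archives are installed):

  # Dynamic build: results depend on whichever libgsl the target box resolves at run time.
  gcc -O2 -o solver solver.c -lgsl -lgslcblas -lm
  ldd solver            # lists libgsl.so.*, libgslcblas.so.*, libm, libc

  # Static build: the exact library code used for the numbers travels inside the binary.
  gcc -O2 -static -o solver-static solver.c -lgsl -lgslcblas -lm
  ldd solver-static     # "not a dynamic executable"

Updating the shared gsl on the target machine can silently change what the dynamically linked binary computes; the statically linked one keeps the exact library code it was validated with.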
We don't support every single usage case that people want in Fedora
Sure, that's why I asked previously in this thread whether the scientific groups are considered worth supporting or not.
- it's about trying to solve the problems in the right ways that scale going forward.
The moment you present a better alternative to statically linking, people will listen.
On Mon, May 28, 2007 at 23:32, Axel Thimm wrote:
And how did that single admin get 1000 systems that are obviously similar enough to be managed by a single person? Maybe because the budget allowed buying a rather homogeneous pile of hardware?
Planning. You only buy systems from the same shortlist several years in a row because you know the maintenance burden associated with new configs outweighs their short-term advantages (that also gives you the volumes to negotiate prices with hardware vendors).
On Tue, May 29, 2007 at 10:05:20AM +0200, Nicolas Mailhot wrote:
On Mon, May 28, 2007 at 23:32, Axel Thimm wrote:
And how did that single admin get 1000 systems that are obviously similar enough to be managed by a single person? Maybe because the budget allowed buying a rather homogeneous pile of hardware?
Planning. You only buy systems from the same shortlist several years in a row because you know the maintenance burden associated with new configs outweighs their short-term advantages (that also gives you the volumes to negotiate prices with hardware vendors).
But in serious number crunching the decision of hardware planning is not left to the IT staff, but to the project managers themselves.
We're talking about two kinds of hw here: simple desktops, which you can recycle every second or third year, and clusters/MPP systems that do the hard work.
On Tue, May 29, 2007 at 10:40, Axel Thimm wrote:
On Tue, May 29, 2007 at 10:05:20AM +0200, Nicolas Mailhot wrote:
On Mon, May 28, 2007 at 23:32, Axel Thimm wrote:
And how did that single admin get 1000 systems that are obviously similar enough to be managed by a single person? Maybe because the budget allowed buying a rather homogeneous pile of hardware?
Planning. You only buy systems from the same shortlist several years in a row because you know the maintenance burden associated with new configs outweighs their short-term advantages (that also gives you the volumes to negotiate prices with hardware vendors).
But in serious number crunching the decision of hardware planning is not left to the IT staff, but to the project managers themselves.
So what? You complain about overly small IT budgets, and then you say the people who choose the hardware don't consult the people that know how much it'll cost to run it? Do I need to draw you a picture?
On Tue, May 29, 2007 at 10:58:54AM +0200, Nicolas Mailhot wrote:
On Tue, May 29, 2007 at 10:40, Axel Thimm wrote:
On Tue, May 29, 2007 at 10:05:20AM +0200, Nicolas Mailhot wrote:
On Mon, May 28, 2007 at 23:32, Axel Thimm wrote:
And how did that single admin get 1000 systems that are obviously similar enough to be managed by a single person? Maybe because the budget allowed buying a rather homogeneous pile of hardware?
Planning. You only buy systems from the same shortlist several years in a row because you know the maintenance burden associated with new configs outweighs their short-term advantages (that also gives you the volumes to negotiate prices with hardware vendors).
But in serious number crunching the decision of hardware planning is not left to the IT staff, but to the project managers themselves.
So what? You complain about overly small IT budgets, and then you say the people who choose the hardware don't consult the people that know how much it'll cost to run it? Do I need to draw you a picture?
Well, for one I'm just quoting facts, no more, no less. And yes, the people evaluating mips/$ are not the IT staff (because they usually don't even understand what the code does and how to find out where the mips could be improved). You get scientists benchmarking the systems with tuned gcc or assembly to see how the platform fits their numerical calculations.
And once the system is found that will provide the numerics for the next 5-7 years, the IT staff has to support it. At that point in time it doesn't matter whether it will be running FC4, RHEL, SLES, Altix or a non-Linux environment.
So, sitting as the IT staff manager on the other side and planning to deploy F7 everywhere is not going to work. You only get to choose the desktops and perhaps a smallish cluster. It's different in experimental groups - there you get more power as an IT admin.
I'm not making this up; that's what I've been seeing over the last 18 years in phys/chem (and it certainly wasn't better before that).
The question is whether Fedora will just say "we don't care, your IT management model sucks", or whether it will try to keep this group happy. I think it sounds more like the former, but whatever it is, we don't really have to fight over their IT management system: whether it actually fits the surroundings or could be improved, we're not going to change it; we're just either supporting it or not.
On Tue, May 29, 2007 at 11:28, Axel Thimm wrote:
On Tue, May 29, 2007 at 10:58:54AM +0200, Nicolas Mailhot wrote:
On Tue, May 29, 2007 at 10:40, Axel Thimm wrote:
But in serious number crunching the decision of hardware planning is not left to the IT staff, but to the project managers themselves.
So what? You complain about overly small IT budgets, and then you say the people who choose the hardware don't consult the people that know how much it'll cost to run it? Do I need to draw you a picture?
Well, for one I'm just quoting facts, no more, no less. And yes, the people evaluating mips/$ are not the IT staff (because they usually don't even understand what the code does and how to find out where the mips could be improved).
It's different in experimental groups - there you get more power as an IT admin.
Well, here projects get to evaluate/choose the hardware/software they want, but they need the IT entities' sign-off before any purchase is unblocked, so if their choice is not on the existing shortlist they have the choice between going back to the drawing board or agreeing to cough up the associated maintenance budget extension.
Does wonders to avoid gratuitous platform inflation. And no, this is not an experimental-group context - quite the contrary.
On Mon, May 28, 2007 at 05:22:48PM -0400, Bill Nottingham wrote:
RHEL doesn't even *ship* this scientific stuff, for a large part.
All I'm saying is that we shouldn't continue to support this sort of fundamentally-unsupportable setup ad nauseam - it's time to think about how to solve this in a sane manner, rather than continuing to paper over the problem. I don't see how, at a minimum, moving the static libraries to -static packages changes things - if, as you say, everyone just chucks libraries manually in /usr/local, then how is this making anything worse for them?
The static libs are needed on the fedora box to be able to link statically on that platform and then use the binary on machines where the libraries are installed in /usr/local, without needing to relink.
-- Pat
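A sketch of that partial-static variant (the library name and archive path are only an example; on x86_64 the archive would typically live under /usr/lib64): instead of going fully static, only the numeric library is linked from its .a archive, so the binary never resolves it against whatever copy sits under /usr/local on the target machine.

  # Link fftw3 from its static archive, keep libc/libm dynamic.
  gfortran -O2 -o fit fit.f90 /usr/lib/libfftw3.a -lm

  # ldd now lists libm/libc but no libfftw3; the /usr/local copies on the
  # target box are simply never consulted.
  ldd fit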
On Mon, May 28, 2007 at 23:22, Bill Nottingham wrote:
Honestly? It sounds like a vicious cycle of "we don't think we have the time to set up a consistent platform, so we don't, so we have to spend too much time managing it, so we don't have the time to set up a new platform..."
+10. Letting platforms lag & diverge is the quickest path to IT maintenance overspending. You win a bit short-term and lose a lot mid-term.
On Sun, May 27, 2007 at 11:10:22AM -0400, Matthias Clasen wrote:
On Sun, 2007-05-27 at 10:15 +0200, Patrice Dumas wrote:
I only advocate shipping static libraries, no statically linked packages against those libraries. These libs are for use for locally compiled programs, not for packages shipped with fedora. So no maintainance issue.
The obvious question is, if this is only for locally compiled programs, why not compile the necessary static libraries locally as well? Why should we carry that burden?
Because we care about users?
-- Pat