#1886: Should update openmpi in F11 prior to final
Fedora Release Engineering
rel-eng at fedoraproject.org
Tue May 26 23:28:11 UTC 2009
#1886: Should update openmpi in F11 prior to final
------------------------------+---------------------------------------------
Reporter: dledford | Owner: rel-eng at lists.fedoraproject.org
Type: task | Status: new
Milestone: Fedora 11 Final | Component: koji
Resolution: | Keywords:
------------------------------+---------------------------------------------
Comment (by dledford):
Replying to [comment:9 spot]:
> I'm not making that assumption. Up to recently, scalapack had
> BuildRequires: openmpi-devel (which no longer exists after you reverted
> my changes), and a detected Requires on openmpi's shared libraries. It
> is specific to the mpi stack it was built against.
After having had the time to sort a few things out via code inspection, I
would just like to point out these facts:
1) scalapack has no direct dependency on MPI, only an indirect one via
blacs
2) the only uses of MPI in scalapack are in the REDIST/TESTING directory,
and in all those cases the test app is a proper MPI app
3) this is backed up by the mpiblacs_issues.ps file in blacs, which points
out that the user-space app linked against it must call MPI_Init for the
library, because the library can't do it itself
4) the situation used to be one where the application was linked against
openmpi and ran fine even if openmpi wasn't set up to be usable; now the
application must do something in order to use openmpi
So, from what I can tell, a user needs a *working* MPI stack to use the
MPI aspect of scalapack/blacs. Just having the application find the
library it links against isn't good enough. That being the case, the
argument that it "just works" out of the box is incorrect. It runs, but
it doesn't work until the user has a properly set up MPI environment.
And it's not at all certain that every other user avoids a performance
penalty from the checks that test whether MPI is initialized and, if so,
map certain blacs functions to MPI variables.
To me, that means the real issue is that you want it to run out of the
box, linked against an MPI stack that isn't necessarily set up, so that
if a person happens to want to use MPI, they can.
However, I find it somewhat incongruous that we support multiple MPI
stacks, yet we link only against this one MPI stack, and without it our
application doesn't run. That makes the claim that we support multiple
MPI stacks somewhat deceptive: really, we support one MPI stack out of
the box (sort of), and all the other MPI stacks require recompiling
certain libraries to work against them.
So, why special-case openmpi? Why not just build blacs without MPI
support and treat all the MPI stacks equally? If you want to run blacs
on a given MPI stack, compile against it. Or create subpackages that are
different builds of blacs, each supporting a given MPI stack, so that a
user could install scalapack/blacs to use them without MPI, scalapack-
openmpi/blacs-openmpi to use them via the openmpi stack, etc. That would
seem the most even-handed treatment of all. And it really shouldn't be
that big a deal for users who want MPI support, considering that *real*
MPI support does *not* work out of the box and does require
configuration. For those people, compiling blacs against their MPI stack
or doing a yum install of the scalapack-openmpi package is just one
simple step among the others they already have to do because things
*don't* work out of the box.
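The subpackage scheme suggested above might look like this from the
user's side (the -openmpi subpackage names are the proposal here, not
packages that exist in Fedora today):

```shell
# Serial use: no MPI stack required, works out of the box.
yum install scalapack blacs

# MPI use via openmpi: proposed parallel builds of the same libraries,
# linked against that one stack.
yum install scalapack-openmpi blacs-openmpi

# A user on a different MPI stack would instead rebuild blacs/scalapack
# against it, as part of the setup they already have to do.
```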
--
Ticket URL: <https://fedorahosted.org/rel-eng/ticket/1886#comment:10>
Fedora Release Engineering <http://fedorahosted.org/rel-eng>
Release Engineering for the Fedora Project