Matthew Miller wrote:
> Rust tends to be more fine-grained. I don't think this is necessarily
> rust-specific _really_ — I think it's a trend as people get more used to
> this way of doing things.
> And this is inherently a PITA to package, unfortunately.
It is indeed not Rust-specific, other new fancy languages are a similar
mess, see Node.js (with NPM), Go, etc. Another reason why GNU/Linux
developers should stick to C and/or C++.
That said, both KDE and GNOME have also gone through a phase of splitting
craze whose consequences we are still suffering from: The software from the
KDE project used to be a dozen packages that could be updated and built by
one person by hand in a day. Now we have hundreds of packages released on 3
different release cycles (4 if you include the new Plasma Mobile Gear) that
need scripts to update, take days to build even with scripts, and lead to
updates whose sheer number of packages frequently makes Bodhi and OpenQA choke.
And this applies also, and especially, to the libraries: kdelibs used to be one
package, now there are dozens of kf5-* packages, likewise Qt. And it is no
better in GNOME land, they started splitting stuff even before KDE did.
But the situation is worse in all those "modern" languages like Rust or Go.
(And also that language that calls itself "modern" even though it has been
designed decades ago to allow for pointlessly jumping text on websites. NPM
is infamous for containing "packages" with a single one-line function.)
> I have not tried this with any Rust package. My experience in the past is
> that many upstreams find this the kind of thing that makes them go on long
> blog rants about distro packaging -- they picked a version, it works fine,
> they don't need the distraction of being told they must update it.
That unhelpfulness (and complete lack of understanding of, and consideration
for, how distributions work) by upstreams is a big issue in those parallel
ecosystems.
> But even when this doesn't happen, it gets into the matter of expertise.
> If I need to update a dependency for a newer version of the
> sub-dependency, and I don't know enough about either code base to do
> anything other than file a "please update" bug, then everything is blocked
> on that.
Normally, it should just be a matter of changing the version number. If that
fails to build, IMHO, the dependency (the library) is broken. Library
upstream "maintainers" with a complete disregard for backwards compatibility
are a PITA, no matter in what language.
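In the Rust ecosystem, such an update is usually a one-line change to the
dependency requirement. A minimal sketch (crate name and versions here are
hypothetical, assuming the library follows semantic versioning):

```toml
# Cargo.toml of the dependent crate (names and versions are made up)
[dependencies]
# was: foo = "1.4"
foo = "2.0"  # if 2.x kept the API backwards-compatible, a rebuild should just work
```

If a rebuild fails after a bump like that, it is exactly the kind of
gratuitous API break complained about above.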
>> We only maintain compat packages where porting to the new version (and
>> submitting the changes upstream) is not feasible. Again, isn't that
>> how Fedora is supposed to work?
> I guess it depends on how broadly one reads "feasible". :)
We normally try pretty aggressively to port packages to new library versions
where the incompatibilities are not too bad. I do not see why it should be
any different when the library happens to be written in Rust.
>> Examples of that might be:
>> - wasmtime: I ultimately abandoned the attempt to package it "because
>>   Fedora Legal", but the packages themselves worked fine
> An aside, but: did I miss something with this on the Legal list? The only
> thing I'm finding is a question about how to phrase `Apache-2.0 WITH
> LLVM-exception`.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=2130953
>> We have talked about this multiple times, but it won't work.
>> I think this was tried with first-class maven artifact support in
>> koji, but we all know how the Java packaging fiasco ended.
> I would rather see it as: we learned some lessons from that approach and
> can do it better.
Without a concrete proposal on how you want to "do it better", there is
really nothing to discuss, because as it stands, the only thing we can talk
about is the approach that we know did not work. So, suggest a new approach and
we can analyze whether it has any chance of working any better or not.
My guess is that any working approach to allow foreign artifact types in
Koji, and also reliably deliver them to users (including ones that want to
build or rebuild software), would ultimately be more work than just using
RPMs.
>> - we change build flags to default to dynamically linking to system
>>   libraries instead of statically linking against vendored copies
> This too.
> Mostly, at least. Assuming this isn't _prebuilt binaries_ or similar,
> upstream may or may not have a good reason or strong opinion.
Why would we care about a "strong opinion"? Either there is a good reason or
there is not. Irrational demands by unreasonable, uncooperative upstreams
ought to be just ignored. Free Software means we can adapt the software to
our needs. If upstream will not allow that, it is not Free Software.
> I really hope we can look at these and learn how to do it better, instead
> of deciding that better isn't possible. And — while I'm not really up on
> node — I have pretty good hindsight on what went wrong with modularity.
> (Not enough to try modularity _again_ just yet... but that's a different
> thing. A whole talk for next year's Nest/Flock, maybe....)
Why would trying modularity ever again even be up for discussion? Can we not
finally stop beating that dead horse?
Modularity failed exactly because of the fundamental design issue I had
warned about from day one: Modules can contain non-leaf packages, other
modules can depend on those, typically on one specific version, and those
specific versions cannot coexist on the same system. That
necessarily leads to conflicts. And in the non-modular world, we already
have ways to solve exactly those conflicts (e.g., compat packages).
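For comparison, the compat-package approach avoids the conflict entirely,
because the old version gets its own package name and its own versioned
soname. A rough sketch, with hypothetical package and library names:

```
# Hypothetical spec fragments, not real Fedora packages:
# compat-foo1 ships /usr/lib64/libfoo.so.1.*
# foo-libs    ships /usr/lib64/libfoo.so.2.*
# No file names overlap, so both install in parallel, and each
# dependent package links against the soname it was built for.
Name:    compat-foo1   # frozen on the old 1.x API for unported packages
Name:    foo           # tracks the current upstream version
```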
>> alternatives, all attempts at trying different approaches (maven
>> artifacts in koji, vendoring NodeJS dependencies, Java Modules, etc.)
>> have *failed* and ultimately made things worse instead of improving
>> the situation - the only thing that has proven to be sustainable (for
>> now) is ... maybe surprisingly, plain RPM packages.
> I'll take "for now". :)
I do not expect that to change, ever.
Kevin Kofler