On 11/15/19 11:27 AM, Petr Pisar wrote:
> No. Modularity solves this combination problem with "stream expansion".
> Sources for such a module exist only once; you submit them for building
> with fedpkg only once, but the build system computes all combinations
> (this is the stream expansion) and schedules a build for each
> combination. That will result in multiple module builds with the same
> module name, stream, and version, but differing in a special
> discriminator called "context".
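To make the mechanism concrete, here is a minimal sketch of what such an expansion could look like. The module names, streams, and the context derivation are illustrative stand-ins, not MBS's actual implementation:

```python
import hashlib
from itertools import product

def expand_streams(deps):
    """Enumerate every combination of dependency streams.

    deps maps a module name to the list of streams it is available in,
    e.g. {"platform": ["f31", "f32"], "perl": ["5.28", "5.30"]}.
    """
    names = sorted(deps)
    for combo in product(*(deps[n] for n in names)):
        yield dict(zip(names, combo))

def context_of(combo):
    """Stand-in for the 'context' discriminator: a short hash of the
    chosen name:stream pairs, so each combination gets a unique id."""
    key = ";".join(f"{n}:{s}" for n, s in sorted(combo.items()))
    return hashlib.sha1(key.encode()).hexdigest()[:8]

deps = {"platform": ["f31", "f32"], "perl": ["5.28", "5.30"]}
builds = [(combo, context_of(combo)) for combo in expand_streams(deps)]
# two dependencies with two streams each -> 2 * 2 = 4 scheduled builds
```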
So for one module with two versions, we will have 2 builds; for 2
modules with two versions each, we'll have 4 builds; and in general, for
N modules with M versions each on average, we will have M^N builds? This
is a textbook combinatorial explosion: 100 modules with an average of 3
versions each is 3^100, on the order of 10^47 builds and tests, with as
many resulting versions to be picked from.
Of course, in practice the combinatorial behavior only happens within
subsets of software that depend on each other, but nevertheless it seems
to me that this means we have to control and limit the number of
interdependent modules drastically, down to single digits.
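Checking the arithmetic (a sketch: one stream must be picked per module, so N interdependent modules with M streams each give M^N combinations, which is why capping the interdependent set at single digits keeps it manageable):

```python
# N interdependent modules with M streams each yield M**N combinations,
# since one stream must be chosen independently for each module.
def combinations(n_modules, streams_per_module):
    return streams_per_module ** n_modules

assert combinations(1, 2) == 2      # one module, two versions
assert combinations(2, 2) == 4      # two modules, two versions each
assert combinations(8, 3) == 6561   # capped at single digits: tractable
print(combinations(100, 3))         # 100 modules, 3 streams: ~5e47
```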
BTW, it has always bothered me that in some sense the prime case for
modules is the kernel, and yet the kernel has always been treated
specially and is not being subsumed into modules. I think that is
because we are thinking about the whole thing wrong; we haven't found
the right abstraction for dealing with software versioning yet.