I'm a contributor to the Wine project. To summarize the following mail,
Wine needs special versions of some of its normal dependencies, such as
libfreetype and libgnutls, built using the MinGW cross-compiler, and I'm
sending out a mail to major distributions in order to get some feedback
from our packagers on how these should be built and packaged.
For a long time Wine has built all of its Win32 libraries (DLLs and
EXEs) as ELF binaries. For various reasons related to application
compatibility, we have started building our binaries as PE instead,
using the MinGW cross-compiler. It is our intent to expand this to some
of our dependencies as well. The list of dependencies that we intend to
build using MinGW is not quite fixed yet, but we expect it to include
and be mostly limited to the following:
* zlib (currently included via manual source import)
and dependencies of the above packages (not including CRT dependencies,
which Wine provides).
There is currently some internal discussion about how these dependencies
should be built and linked. There are essentially three questions I see
that need to be resolved, and while these resolutions have a significant
impact on the Wine building and development process, they also have an
impact on distributions, and accordingly I'd like to get input from our
packagers to ensure that their considerations are accurately taken into
account.
(1) Should we build via source import, or link statically, or dynamically?
Static linking and source imports are dispreferred by Fedora, as by
many distributions, on the grounds that they cause duplication of
libraries on disk and in memory, and make it harder to update the
libraries in question (see also question 2). They also make building
and packaging more complicated.
Note however that if they are linked dynamically, we need to make sure
that we load our packages instead of the MinGW builds of open-source
libraries that applications ship with. Accordingly we need each library
to be renamed, and to link to renamed dependencies. For example, if
application X ships with its own copy of libfreetype-6.dll, we need to
make sure that our gdi32.dll links to libwinefreetype-6.dll instead, and
that libwinefreetype-6.dll links to libwineharfbuzz-0.dll and
winezlib.dll. I think, although I haven't completely verified yet, that
this can be done just with build scripts (i.e. no source patches), by
using e.g. --with-zlib=/path/to/winezlib.dll.
Accordingly, although static linking and source imports are generally
dispreferred, they may well be preferable in our case. We don't get
the benefits of on-disk deduplication, since Wine is essentially the
only piece of software which needs these libraries.
(2) If we use dynamic libraries, should dependencies be included in the
main wine package, or packaged separately?
This is mostly a question for packagers, although it also relates to (3).
I expect that Fedora (and most distributions) want to answer "packaged
separately" here, on the grounds that this lets them update (say) Wine's
libgnutls separately, and in sync with ELF libgnutls, if some security
fix is needed. There is a snag, though: we need libraries to be copied
into the prefix (there's some internal effort to allow using something
like symlinks instead, but this is hard and not done yet). Normally we
perform this copy every time Wine is updated, but if Wine and its
dependencies aren't updated on the same schedule, we may end up loading
an old version of a dependency in the prefix, thus missing the point of
packaging them separately.
(3) If dependencies are packaged separately, should Wine build them as
part of its build tree (e.g. using submodules), or find and link
(statically or dynamically) to existing binaries?
Linking to existing binaries is generally preferable: it avoids
duplication on disk; it reduces compile times when compiling a single
package from source (especially the first time). However, we aren't
going to benefit from on-disk deduplication. And, most importantly, unlike
with ELF dependencies, there is no standardized way to locate MinGW
libraries—especially when it comes to Wine-specific libraries. We would
need a way for Wine's configure script to find these packages—and
ideally find them automatically, or else fall back to a submodule-based
approach.
If we rely on distributions to provide our dependencies, the best idea I
have here would be something like an x86_64-w64-mingw32-pkg-config. And
if we use shared libraries rather than static, things get worse: we need
to know the exact path of each library and its dependencies so that we
can copy (or symlink) them into a user's WINEPREFIX.
For what it's worth, the current proposed solution (which has the
support of the Wine maintainer) involves source imports and submodules.
There's probably room for changing our approach even after things are
committed, but I'd still like to get early feedback from distributions,
and make sure that their interests are accurately represented, before we
commit. In short, it's not clear whether distributions want their
no-static-library policies to apply to us as well, or whether we're
enough of a special case and would be enough of a pain to package that
they'd rather we deal with the hard parts, and I don't want us to make
that decision without their input.
During the Fedora 34 development cycle a year ago, I reported the following
bugzillas about packages that don't install:
They were set to ASSIGNED by their maintainers but since then, they still don't
install on Fedora 34, Fedora 35 or Fedora 36.
I see no point in keeping such packages in the repositories, yet the policy
does not currently allow doing anything other than keeping them.
Should I take some steps, or do we keep building and shipping the broken
packages?
This is a continuation of the discussion from F36 Change: GNU Toolchain
Uninitialized variables are a big problem. They can be sources of information
exposure if parts of a buffer are not initialized. They can also cause
unexpected execution paths if the attacker can groom the memory to a value of
their choosing. If the variable is a pointer to heap, this can cause free to
corrupt memory under certain circumstances. If the uninitialized memory is
part of user input, this can lead to improper input validation. This is not
hypothetical. All of these come from a paper doing an empirical study of
android flaws.  The data used in the paper is here. 
Part of the problem is that compilers and static analysis tools can't always
find them. I created a test program that has 8 uses of uninitialized variables.
GCC 11 misses all of them. GCC 12 finds 2. Clang 13 finds 1. cppcheck finds 2 or
3 - but does so much complaining you'd think it found all. Valgrind finds 2.
Flexelint, a commercial linter, finds 1.
Since tools can't always find them, the only option we have right now is to
force initialization to something the attacker cannot control. Kees Cook
started a discussion on the LLVM developers mailing list a while back. He
makes a very clear argument. I would be repeating his points, so please
read the original
discussion here (also read the replies):
He talks about -ftrivial-auto-var-init=zero being used for production builds
and -ftrivial-auto-var-init=pattern being used for debug builds. The use
is not just the kernel. Consider a server that returns data across the
network to a client. It could possibly leak crypto keys or passwords if the
returned data structure has uninitialized memory.
For more background, the creator of this technology for LLVM presented a talk
about this feature at a past LLVM developer conference:
He said this would have prevented over 900 fixed CVEs in Chrome and 12% of
all Android CVEs.
From deep inside the LLVM thread above, comes this nugget:
To add in, we (Microsoft) currently use zero initialization technology in
Visual Studio in a large amount of production code we ship to customers (all
kernel components, a number of user-mode components). This code is both C and
C++.
We already have had multiple vulnerabilities killed because we shipped this
technology in production. We received bug reports with repros that worked on
older versions of Windows without the mitigation and new versions of Windows
that do have it. The new versions don't repro, the old ones do.
Microsoft is also digging into uninitialized variables. They have a lengthy
blog post that talks about extending this to heap memory. 
I think this would be an important step forward to turn this on across all
compilations. We could wipe out an entire class of bugs in one fell swoop.
But then, what about heap allocations? Calloc has existed for a long time. It
might be worthwhile to have a CFLAG that can tell glibc (or other allocators)
to substitute something like calloc for malloc.
 - https://picture.iczhiku.com/resource/paper/shkeTWJEaFUuWCMc.pdf
 - http://ml-papers.gitlab.io/android.vulnerabilities-2017/appendix/
 - https://msrc-blog.microsoft.com/2020/07/02/solving-uninitialized-kernel-p...
In one week (October 6), or slightly later, I will build grpc 1.41.0 for
Rawhide (F36). Fedora 35 will remain on 1.39.1.
As is traditional for minor releases of grpc, the C++ ABI was broken
(soversion bumped from 1.40 to 1.41). This time, the C (core) ABI was
also broken (soversion bumped from 18 to 19).
I will coordinate builds in a side tag of packages that use the C (core)
and/or C++ libraries. Maintainers of the following packages should have
received this email directly:
Packages that use the Python bindings should be unaffected, as there
should be no incompatible API changes:
• python-opencensus (orphaned)
Good day. darktable maintainer here.
I am experiencing a ppc64le build failure on F34, EPEL8, and EPEL8-next only.
On F34 the error is
/builddir/build/BUILD/darktable-3.8.0/src/iop/channelmixerrgb.c: In function '_convert_GUI_colors.part.0.constprop':
/builddir/build/BUILD/darktable-3.8.0/src/iop/channelmixerrgb.c:3085:1: internal compiler error: in patch_jump_insn, at cfgrtl.c:1299
3085 | }
Please submit a full bug report,
with preprocessed source if appropriate.
Other branches build successfully on aarch64, x86_64, and ppc64le.
Do you know what the problem might be?
So at this week's blocker review meeting, the fact that we don't have
explicit networking requirements in the release criteria really started
to bite us. In the past we have squeezed networking-related issues in
under other criteria, but for some issues that's really difficult,
notably VPN issues. So, we agreed we should draft some explicit
networking criteria.
This turns out to be a big area and quite hard to cover (who'd've
thought!), but here is at least a first draft for us to start from. My
proposal would be to add this to the Basic criteria. I have left out
some wikitext stuff from the proposal for clarity; I'd add it back in
on actually applying the proposed changes. It's just formatting stuff,
nothing that'd change the meaning. Anyone have thoughts, complaints,
alternative approaches, supplements? Thanks!
=== Network requirements ===
Each of these requirements applies to both installer and installed system
environments. For any given installer environment, the 'default network
configuration tools' are considered to be those the installer documents
as supported ways to configure networking (e.g. for anaconda-based
environments, configuration via kernel command line options, a
kickstart, or interactively in anaconda itself are included).
==== Basic networking ====
It must be possible to establish both IPv4 and IPv6 network connections
using DHCP and static addressing. The default network configuration
tools for the console and for release-blocking desktops must work well
enough to allow typical network connection configuration operations
without major workarounds. Standard network functions such as address
resolution and connections with common protocols such as ping, HTTP and
ssh must work as expected.
Footnote titled "Supported hardware": Supported network hardware is
hardware for which the Fedora kernel includes drivers and, where
necessary, for which a firmware package is available. If support for a
commonly-used piece or type of network hardware that would usually be
present is omitted, that may constitute a violation of this criterion,
after consideration of the [[Blocker_Bug_FAQ|hardware-dependent-
issues|normal factors for hardware-dependent issues]]. Similarly,
violations of this criterion that are hardware- or configuration-dependent
are, as usual, subject to consideration of those factors when
determining whether they are release-blocking.
==== VPN connections ====
Using the default network configuration tools for the console and for
release-blocking desktops, it must be possible to establish a working
connection to common OpenVPN, openconnect-supported and vpnc-supported
VPN servers with typical configurations.
Footnote titled "Supported servers and configurations": As there are
many different VPN server applications and configurations, blocker
reviewers must use their best judgment in determining whether
violations of this criterion are likely to be encountered commonly
enough to block a release, and if so, at which milestone. As a general
principle, the more people are likely to use affected servers and the
less complicated the configuration required to hit the bug, the more
likely it is to be a blocker.
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
I'm trying to build mold for epel8. A bunch of mold's unit tests produce
statically linked 32-bit binaries, so on x86_64 we need a build dependency on
a static multilib glibc.
I have learnt that multilib build dependencies are a tricky thing in Koji, and
the only solution I have found to work is to specify a filename-based
build dependency.
This works fine for rawhide, f35, f34 and epel9 but fails on epel8:
No matching package to install: '/usr/lib/libc.a'
Full build log:
I recently assumed the sphinx package maintenance for Fedora.
When I examined the package structure, I realized it is based on source
code written in C, with the latest stable version available at the
following link:
The latest binary versions from that link (for example, v3.4.1) point to
Python-based source code.
Having inspected the package structure and the upstream project, I
realized Sphinx was migrated to Python, and newer versions can be
downloaded via the "pip" tool. Apart from that, there is already a
"python-sphinx" package to handle sphinx tools.
So I guess the correct way to go here is to orphan the C-based package
(sphinx), unless there is some detail I am missing to keep it maintained.
Sergio Arroutbi Braojos
Software Engineer at Red Hat - Special Projects (SECENGSP)
Red Hat <http://redhat.com>