autoconf breakage on x86_64.
by Sam Varshavchik
I don't know the right way to fix this, but something is definitely broken
and needs to be fixed, one way or the other. The question is what exactly
needs to be fixed.
Consider something like this:
LIBS="-lresolv $LIBS"
AC_TRY_LINK_FUNC(res_query, AC_MSG_RESULT(yes), AC_MSG_RESULT(no))
Here's what happens on x86_64:
gcc -o conftest -g -O2 -Wall -I.. -I./.. conftest.c -lresolv >&5
/tmp/ccW7EeDX.o(.text+0x7): In function `main':
/home/mrsam/src/courier/authlib/configure:5160: undefined reference to
`res_query'
collect2: ld returned 1 exit status
configure:5147: $? = 1
configure: failed program was:
[ blah blah blah ]
| /* We use char because int might match the return type of a gcc2
| builtin and then its argument prototype would still apply. */
| char res_query ();
| int
| main ()
| {
| res_query ();
| ;
| return 0;
| }
The exact same test works on FC1 x86.
The reason appears to be that on x86_64 you have to #include <resolv.h> in
order to successfully pull res_query() out of libresolv.so. You don't need
to do this on x86, and the test program generated by AC_TRY_LINK_FUNC does
not include any headers; it uses a manual prototype instead.
So, what now?
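One possible way out (a sketch only, not a tested fix) is to bypass AC_TRY_LINK_FUNC's manual prototype and include the resolver headers explicitly, so that any macro glibc defines for res_query (such as mapping it to __res_query) is visible to the link test. The header list and the dummy arguments here are assumptions:

```m4
dnl Sketch of a header-aware replacement for the AC_TRY_LINK_FUNC test.
dnl Assumes glibc may #define res_query to __res_query, so the headers
dnl must be included for the link test to reference the real symbol.
LIBS="-lresolv $LIBS"
AC_TRY_LINK([
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/nameser.h>
#include <resolv.h>
], [res_query((char *)0, 0, 0, (unsigned char *)0, 0);],
AC_MSG_RESULT(yes), AC_MSG_RESULT(no))
```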
14 years, 4 months
gnupg newpg gpgme gpgme03 cryptplug issues
by Dennis Gilmore
Recently the version of libgcrypt was increased to 1.1.94. As a result,
newpg would not build against the newer libgcrypt. I sent an email to the
gcrypt-devel list, and this is what I got back:
<quote>
Don't build newpg at all. It has been superseded by gnupg-1.9.x .
You would need an old libgcrypt < 1.1.42 to build it. The configure
script was not able to detect newer versions with a changed API.
</quote>
So we have a problem here: it seems we no longer need newpg, but we need the
things it provides for gpgme, gpgme03, cryptplug, gpa and kgpg (which
doesn't complain so much, it just says it's not there). To get those things
we either need to go to the newer gnupg or revert to the older libgcrypt.
Being so late in the cycle, I don't know which would be best, so I thought
I'd ask before filing a bug against something. It does need to be fixed
before final is out, as people will complain if they can't decrypt email in
kmail or mutt, gpa doesn't work, etc. I think we should probably go back to
the old version of libgcrypt.
Dennis
16 years, 3 months
udev in initrd
by Thomas Woerner
There are test packages in http://people.redhat.com/twoerner/UDEV/ for using
udev in initrd with persistent devices.
Usage
-----
- If you want to enable udev in initrd, then install the test packages and
create an initrd with mkinitrd.
- If you want to turn off udev, set USE_UDEV="no" in /etc/sysconfig/udev.
- To use another udev root directory (not /dev), set udev_root="/some dir/"
in /etc/udev/udev.conf. Note that this does not disable udev in the initrd;
the resulting initrd will be unusable.
- To disable the persistent /dev filesystem, set UDEV_KEEP_DEV="no" in
/etc/sysconfig/udev. Your /dev filesystem will then not be the same in the
initrd and the running system.
- You have to recreate the initrd after changing any of these options.
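For reference, the settings above live in two files; a minimal sketch
(values are examples, not necessarily the shipped defaults):

```shell
# /etc/sysconfig/udev (sketch)
USE_UDEV="yes"        # "no" turns off udev in the initrd
UDEV_KEEP_DEV="yes"   # "no" disables the persistent /dev filesystem

# /etc/udev/udev.conf (sketch)
udev_root="/dev"      # changing this does not disable udev in the initrd
```

After editing either file, recreate the initrd with mkinitrd.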
Warnings
--------
- The new mkinitrd is not tested heavily (especially lvm support).
- It will not work with devfs.
- Make a backup copy of the original initrd (best is to make an additional
boot entry in grub with the new initrd)
Information about udev and the new mkinitrd
-------------------------------------------
The benefit of udev is that there are only those device nodes which are bound
to devices in your computer and that you can have additional device naming
schemes like 'by label' or 'by id'.
However, there is a small problem with dynamic device nodes: for any device
that is not recognized before a specific module is loaded, there will be no
device node until the driver is loaded, either by hand or by a program.
kudzu would be a good candidate for this, but it would have to be started
earlier, then.
udev uses helpers for the additional device naming schemes; these are C
programs or shell scripts. It is therefore necessary to put tools like sed,
awk, grep and so on into the initrd. These programs are not small, and the
initrd would become very big. The solution is to use a statically compiled
busybox, which combines tiny versions of many utilities into a single
executable.
Thus the new mkinitrd uses busybox for the initrd with udev support.
Disabling udev results in a normal initrd with nash. It is easy to modify
mkinitrd to build the normal initrd with busybox as well.
Here are the flowcharts for the standard initrd of fc2 (without lvm support)
and the udev version:
Standard initrd - using nash
----------------------------
1) mount /proc and /sys in initrd
2) load modules (eg: controller, filesystem)
3) umount /sys
4) locate root device
5) create block devices
6) mount system root on /sysroot
7) change root to /sysroot and initrd to /sysroot/initrd
8) umount /initrd/proc
udev initrd - using busybox and ramfs
-------------------------------------
1) mount /proc and /sys
2) mount /dev as ramfs
3) create initial devices (eg: console, null, zero, loopX) and links for std
files
4) start udev, use udevsend as hotplug
5) load modules (eg. controller, filesystem)
6) umount /sys
7) locate root device
8) mount system root on /sysroot
9) bind /dev to /sysroot [UDEV_KEEP_DEV="yes"]
10) change root to /sysroot and initrd to /sysroot/initrd
11) umount /initrd/proc
12) umount /initrd/dev [UDEV_KEEP_DEV="yes"]
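The flowchart above corresponds roughly to an init script of the following
shape. This is an illustrative, non-runnable sketch in shell syntax, not the
actual script mkinitrd generates; command and module names are assumptions:

```shell
#!/bin/sh
# Illustrative sketch of the udev initrd sequence (steps 1-12 above).
mount -t proc proc /proc                    # 1) kernel interfaces
mount -t sysfs sysfs /sys
mount -t ramfs ramfs /dev                   # 2) dynamic /dev on ramfs
mknod /dev/console c 5 1                    # 3) initial device nodes...
mknod /dev/null c 1 3
ln -s /proc/self/fd/0 /dev/stdin            #    ...and links for std files
echo /sbin/udevsend > /proc/sys/kernel/hotplug
udevstart                                   # 4) populate /dev via udev
modprobe some_controller; modprobe ext3     # 5) controller and fs modules
umount /sys                                 # 6)
mount /dev/root /sysroot                    # 7-8) locate and mount real root
mount --bind /dev /sysroot/dev              # 9)  [UDEV_KEEP_DEV="yes"]
cd /sysroot
pivot_root . initrd                         # 10) switch roots
umount /initrd/proc                         # 11)
umount /initrd/dev                          # 12) [UDEV_KEEP_DEV="yes"]
```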
Have fun,
Thomas
--
Thomas Woerner, Software Developer Phone: +49-711-96437-0
Red Hat GmbH Fax : +49-711-96437-111
Hauptstaetterstr. 58 Email: twoerner(a)redhat.com
D-70178 Stuttgart Web : http://www.redhat.de/
16 years, 6 months
rpmbuild-nonroot %{version} interpreted incorrectly
by Leonard den Ottolander
Hi,
When using Mike Harris' rpmbuild-nonroot setup (except for the no archs
hack) I have a problem building rpm. See
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=124364 .
rpmbuild -bp rpm.spec fails:
error: File /data/rpmbuild-fc1/rpm-1.8.1/rpm-4.2.1.tar.gz: No such
file or directory
It looks like the last "Version:" (popt's) is being used in the path
expansion for %setup.
$ rpm --showrc | grep sourcedir
RPM_SOURCE_DIR="%{u2p:%{_sourcedir}}"
RPM_SOURCE_DIR="%{_sourcedir}"
-14: _sourcedir %{_topdir}/%{name}-%{version}
-14: _specdir %{_sourcedir}
The last "Version:" tag in the spec file is the one for popt, and it looks
as if that value is what gets substituted for %{version}. How can this be
fixed?
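For reference on where the wrong value comes from: rpm.spec carries a second
"Version:" tag for its popt subpackage, and since _sourcedir is expanded
late, %{version} there picks up the last value, while the Source filename
was already expanded with the main version. A sketch of the shape (not the
real spec):

```
# Sketch: a spec file with two Version: tags, like rpm.spec
Name: rpm
Version: 4.2.1
Source0: rpm-%{version}.tar.gz      # expands early, to rpm-4.2.1.tar.gz

%package -n popt
Version: 1.8.1

# _sourcedir = %{_topdir}/%{name}-%{version} is expanded at %setup time,
# when %{version} is popt's 1.8.1 -> .../rpm-1.8.1/rpm-4.2.1.tar.gz
```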
Leonard.
--
mount -t life -o ro /dev/dna /genetic/research
16 years, 7 months
Fedora.us sync with devel
by Ivan Gyurdiev
Why is there no repository for the fedora.us packages that's in sync
with fedora development? I've noticed fedora-devel packages tend to
update in batches in order not to break dependencies. It becomes a pain
to make fedora devel work with fedora.us packages against the last
release. Isn't the idea to merge the two anyway? When will that happen?
16 years, 7 months
fc2, xorg, 2.6.x, scheduling latency peaks
by Fernando Lopez-Lezcano
Hi all. I'm trying to track the cause of high scheduling latency peaks
in FC2 that make the system unusable for low latency audio work.
Test systems: a PIV laptop with a radeon video chipset, and an AMD64 desktop
with a radeon video chipset. How I test: I run the Jack (Jack Audio
Connection Kit) sound server with 2 or 3 buffers of 128 frames for
low-latency operation, through the Qjackctl GUI front-end. I then start GUI
apps that use Jack (for example Freqtweak, Hydrogen and Jamin). I see plenty
of buffer xruns of varying durations during the app load process and
afterwards.
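For anyone trying to reproduce this, the setup boils down to something like
the following (Qjackctl is just a front-end for this command line; the
device name and sample rate are assumptions):

```shell
# Sketch: start the JACK daemon with 3 periods of 128 frames for low
# latency, as described above. hw:0 is an assumed ALSA device name.
jackd -R -d alsa -d hw:0 -r 44100 -p 128 -n 3
```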
The same (laptop) system booting FC1 + XFree + 2.4.26 + low latency
patches has rock solid performance and no xruns in the same conditions.
If I boot FC2 into 2.4.26 + low latency patches I _still_ see xruns, so
it looks like the kernel itself is not triggering them. At least not all
of them.
I have no hard data but the xruns appear to coincide with X activity. If
I turn off video acceleration I see _more_ xruns (significantly more).
If I switch to the vesa driver I still see xruns (so it would seem that
the radeon driver itself is not to blame).
Does anyone out there know if something has changed between XFree and
xorg that may account for this change in behavior?
Thanks for _any_ help.... (I also tried several other configuration
options for xorg.conf with no effect whatsoever[*]).
-- Fernando
[*] and even tried to downgrade to FC1's XFree86 just to see if I could
isolate the cause but the result was too unstable and messy to draw any
conclusions :-)
16 years, 7 months
Re: small improvements for fedora 3: sound mixing
by Christian Paratschek
> The problem is that dmix doesn't work for all apps.
>
> http://article.gmane.org/gmane.comp.video.xine.devel/7058/
>
> is an article of one complaint, for example.
>
> Bill
>
Hmmm... I wouldn't call that a problem. It only needs to work for the apps
that are included. Who cares if xine works? It is not included in Fedora
anyway.
My point is: proper sound mixing should be a goal for FC3. I don't know
which ways there are to achieve this goal, whether it's dmix or anything
else, but it needs to be done (IMHO).
christian
16 years, 7 months
Spamassassin [Fwd: 3.0.0 schedule]
by Warren Togami
FYI. Rawhide now contains a snapshot of spamassassin-3.0.0, which will
be updated next week when the official pre-release happens. I encourage
all spamassassin users to rebuild the rawhide spamassassin SRPM for use
on FC1 or FC2 in order to provide production testing, and report bugs in
upstream Bugzilla. I personally have been using 3.0.0 svn snapshots on
my personal server for a few weeks now, and it has been much better than
spamassassin-2.63 for me.
Warren Togami
wtogami(a)redhat.com
-------- Original Message --------
Subject: 3.0.0 schedule
Date: 28 May 2004 17:41:24 -0700
From: Daniel Quinlan <quinlan(a)pathname.com>
Reply-To: quinlan(a)pathname.com
To: spamassassin-dev(a)incubator.apache.org
So, here's the new schedule, based on Theo's last schedule:
(a) bug squashing is not part of schedule; this can and should be
scheduled independently
(b) there is no concept of "all bugs" any more, only "critical bugs"
(c) warning people about mass-check runs is also decoupled as that can
be done independently -- we can warn people now, even
(d) critical bugs had better darn well show up in red in my bugzilla
screen (that is, "Severity" field set to "critical") or the bug
doesn't count as critical.
feature freeze:
05/31: feature freeze, enter Review-then-Commit mode at 0900 UTC to
enforce feature freeze
pre-release cycle:
06/03: first pre-release
do {
3 to 7 days of testing of pre-release
issue new pre-release
} while (critical bugs found in testing)
mass-check cycle:
day 0: announce mass-check run 1 (sets 0 and 1), run 7 days
day +7: generate scores, etc.
day +9: new pre-release with new scores, announce mass-check run 2
(sets 2 and 3), run 7 days
day +11: generate scores, etc.
release-candidate cycle:
day 0: release 3.0.0-rc1
do {
3 to 7 days of testing of release candidate
issue new release candidate
} while (critical bugs found in testing)
day ?: issue 3.0.0-final
------------------------------------------------------------------------
RATIONALE:
First, we need R-T-C to have the feature freeze. Second, my experience
is that we actually move faster once we enter R-T-C mode because
everyone is following development closer.
We should not try to get everything done or close every bug before
moving forward with pre-releases (which must precede the mass-checks to
get the bugs out) because we'll always have bugs being added. 2.63 is
not as good as 3.0, so let's release and help out our users.
So, no more bug squashing events, no more delays, let's enter R-T-C mode
on Monday and do a pre-release next week. WE ARE READY.
Also, if someone wants to replace the scores, then you have a week.
Open a bug. ;-)
I believe tying everything together in the schedule is adding delays and
making it easier to slip more and more stuff into the release. We never
had to lock-step things before, we just reviewed the open bugs at each
stage of the simple schedule and decided whether or not we were ready to
proceed to the next step or if we had to cut a new pre/rc release. It's
how most open source projects work. Assigning dates far out in the
future is pretty pointless and just makes things frustrating.
As you can see, I've attempted to streamline the schedule, for example,
by allowing for as little as 3 days of testing (if a bug can be verified
as fixed quickly) so we can plow through rather than stay stuck in the
mud. I limited things to 3 days, though, so we don't issue new releases
every day or every other day.
Finally, since we're not in R-T-C mode yet, I'm calling this the 3.0.0
schedule. If you want to veto...
Daniel
--
Daniel Quinlan
http://www.pathname.com/~quinlan/
16 years, 7 months