Gnome-common and gnome-autogen.sh
by W. Michael Petullo
Many GNOME CVS trees require a script called gnome-autogen.sh to build
their configure scripts and such. Mandrake and Debian provide
gnome-autogen.sh within their gnome-common package, but it seems to be
missing from Red Hat/Fedora.
Am I missing something?
--
Mike
:wq
19 years, 11 months
Self-Introduction: Thomas J. Baker
by Thomas J. Baker
Name: Thomas J. Baker
Address: USA, Durham, NH
Profession: Systems Programmer
Company: University of New Hampshire, Research Computing Center
Goals:
-- help fedora contain up-to-date versions of some already-included
software and some missing software like galeon, seahorse, synergy,
gthumb, xmms-shn, glabels
-- I'm willing to do QA, but it will usually be limited to software that
I, or someone I support or work with, needs
-- I currently build quite a few rpms on my own which I'd like to
see in fedora. I maintain my own apt/yum repository for supported
Red Hat releases and my extra packages. I support over 40 linux
systems and that number is increasing all the time. I'll probably
add a fedora mirror to my mirror list soon.
Qualifications:
-- I've beta tested every Red Hat release since 6.2. I'm a regular
bugzilla contributor for Red Hat, Gnome, Ximian, and GPE, among
others. I beta test the GPE palmtop environment on iPAQs.
-- I know C, C++, Java, Perl, Python, csh, sh, and other 'dead'
languages. I do applications, systems, and web programming on
a daily basis.
GPG KEYID and fingerprint:
wintermute> gpg --fingerprint 1CEE63C4
pub 1024D/1CEE63C4 2001-10-08 Thomas J. Baker <tjb(a)unh.edu>
Key fingerprint = 7AB2 D9FD 1B5A 4CCC 95A9 7E2E A02B 638E 1CEE 63C4
sub 1024g/2DA4F4E2 2001-10-08
neuromancer> gpg --fingerprint 7AFDB8C4
pub 1024D/7AFDB8C4 2001-10-08 Thomas J. Baker <tjb(a)bakerconsulting.com>
Key fingerprint = 18DB A077 00BB 54EB 569B 517E FC87 8868 7AFD B8C4
sub 1024g/230B9388 2001-10-08
tjb
--
=======================================================================
| Thomas Baker email: tjb(a)unh.edu |
| Systems Programmer |
| Research Computing Center voice: (603) 862-4490 |
| University of New Hampshire fax: (603) 862-1761 |
| 332 Morse Hall |
| Durham, NH 03824 USA http://wintermute.sr.unh.edu/~tjb |
=======================================================================
rawhide report: 20031030 changes
by Build System
Updated Packages:
abiword-2.0.1-1
---------------
* Tue Oct 28 2003 Jeremy Katz <katzj(a)redhat.com> 1:2.0.1-1
- 2.0.1
- really remove duplicate desktop file
anaconda-9.2-1
--------------
* Tue Oct 28 2003 Anaconda team <bugzilla(a)redhat.com>
- built new version from CVS
* Tue Oct 08 2002 Jeremy Katz <katzj(a)redhat.com>
- back to mainstream rpm instead of rpm404
* Mon Sep 09 2002 Jeremy Katz <katzj(a)redhat.com>
- can't buildrequire dietlibc and kernel-pcmcia-cs since they don't always
exist
* Wed Aug 21 2002 Jeremy Katz <katzj(a)redhat.com>
- added URL
* Thu May 23 2002 Jeremy Katz <katzj(a)redhat.com>
- add require and buildrequire on rhpl
* Tue Apr 02 2002 Michael Fulbright <msf(a)redhat.com>
- added some more docs
* Fri Feb 22 2002 Jeremy Katz <katzj(a)redhat.com>
- buildrequire kernel-pcmcia-cs as we've sucked the libs the loader needs
to there now
* Thu Feb 07 2002 Michael Fulbright <msf(a)redhat.com>
- goodbye reconfig
* Thu Jan 31 2002 Jeremy Katz <katzj(a)redhat.com>
- update the BuildRequires a bit
* Fri Jan 04 2002 Jeremy Katz <katzj(a)redhat.com>
- ddcprobe is now done from kudzu
* Wed Jul 18 2001 Jeremy Katz <katzj(a)redhat.com>
- own /usr/lib/anaconda and /usr/share/anaconda
* Fri Jan 12 2001 Matt Wilson <msw(a)redhat.com>
- sync text with specspo
* Thu Aug 10 2000 Matt Wilson <msw(a)redhat.com>
- build on alpha again now that I've fixed the stubs
* Wed Aug 09 2000 Michael Fulbright <drmike(a)redhat.com>
- new build
* Fri Aug 04 2000 Florian La Roche <Florian.LaRoche(a)redhat.com>
- allow also subvendorid and subdeviceid in trimpcitable
* Fri Jul 14 2000 Matt Wilson <msw(a)redhat.com>
- moved init script for reconfig mode to /etc/init.d/reconfig
- move the initscript back to /etc/rc.d/init.d
- Prereq: /etc/init.d
* Thu Feb 03 2000 Michael Fulbright <drmike(a)redhat.com>
- strip files
- add lang-table to file list
* Wed Jan 05 2000 Michael Fulbright <drmike(a)redhat.com>
- added requirement for rpm-python
* Mon Dec 06 1999 Michael Fulbright <drmike(a)redhat.com>
- rename to 'anaconda' instead of 'anaconda-reconfig'
* Fri Dec 03 1999 Michael Fulbright <drmike(a)redhat.com>
- remove ddcprobe since we don't do X configuration in reconfig now
* Tue Nov 30 1999 Michael Fulbright <drmike(a)redhat.com>
- first try at packaging reconfiguration tool
anaconda-images-9.2-3
---------------------
control-center-2.4.0-3
----------------------
* Wed Oct 29 2003 Jonathan Blandford <jrb(a)redhat.com> 1:2.4.0-3
- require libgail-gnome
* Mon Sep 22 2003 Jonathan Blandford <jrb(a)redhat.com> 1:2.4.0-2
- get all the schemas
desktop-backgrounds-2.0-17
--------------------------
* Wed Oct 29 2003 Havoc Pennington <hp(a)redhat.com> 2.0-17
- redhat-backgrounds-5
fedora-release-1-2
------------------
indexhtml-1-1
-------------
* Wed Oct 29 2003 Elliot Lee <sopwith(a)redhat.com> 1-1
- FC1
kernel-2.4.22-1.2115.nptl
-------------------------
* Wed Oct 29 2003 Dave Jones <davej(a)redhat.com>
- Back out part of the 2.4.23pre ACPI changes as per upstream.
- Fix up typo in orlov patch.
- Remove orlov patch for the time being.
ntp-4.1.2-5
-----------
* Wed Oct 29 2003 Harald Hoyer <harald(a)redhat.de> 4.1.2-5
- reverted to 4.1.2 (4.2.0 is unstable) #108369
* Tue Oct 28 2003 Harald Hoyer <harald(a)redhat.de> 4.2.0-3
- removed libmd5 dependency
- removed perl dependency
openoffice.org-1.1.0-4
----------------------
* Tue Oct 28 2003 Dan Williams <dcbw(a)redhat.com> 1.1.0-4
- Make OpenOffice.org more prelink-friendly, ie. when prelinked
it ought to start up faster (Jakub Jelinek)
- Bypass soffice script, handle the necessary things already
in ooffice script (Jakub Jelinek)
- Make getstyle-gnome and msgbox-gnome binaries prelinkable (Jakub Jelinek)
- Speed up the build by not packing all binaries and libraries
18 times (and also require way less diskspace for the build) (Jakub Jelinek)
- Fix .desktop file stuff
- Fix slightly broken 1.0.2 upgrade process
- Enable parallel building for > 1 processor machines
redhat-config-httpd-1.1.0-5
---------------------------
* Wed Oct 29 2003 Phil Knirsch <pknirsch(a)redhat.com> 1.1.0-5
- Fixed problem with Allow from and Deny from (double from).
- Big change in 4Suite requires fix for the Xslt processing.
* Fri Oct 03 2003 Jeremy Katz <katzj(a)redhat.com> 1.1.0-4
- rebuild
rhgb-0.11.2-1
-------------
* Wed Oct 29 2003 Jonathan Blandford <jrb(a)redhat.com> 0.11.1-2
- rebuild for fixed german translation
rp-pppoe-3.5-8
--------------
* Wed Oct 29 2003 Than Ngo <than(a)redhat.com> 3.5-8
- fix a bug in connect script
rpmdb-fedora-1-0.20031030
-------------------------
up2date-4.1.14-2
----------------
* Wed Oct 29 2003 Adrian Likins <alikins(a)redhat.com> 4.1.14
- fix #108401
- fix for redirect bug from Sopwith
* Wed Oct 29 2003 Elliot Lee <sopwith(a)redhat.com> 4.1.12-2
- Add fedora patch to import extra Fedora keys, and to put yum update
URLs in.
* Mon Oct 27 2003 Adrian Likins <alikins(a)redhat.com>
- be more robust when changing on disk metadata formats
- fix #108085
- fix yum 0k/s bug (#107048)
- dirRepo is close to allowing file tree walks
yum-2.0.4-2
-----------
* Wed Oct 29 2003 Elliot Lee <sopwith(a)redhat.com> 2.0.4-2
- Stick in a new yum.conf for FC1.
Re: Vector-based Fedora Logo Available?
by Steven Garrity
> From: "Ricky Boone" <whiplash(a)planetfurry.com>
> Subject: Vector-based Fedora Logo Available?
> Probably a really stupid question..., but will
> there be a version of the (final) Fedora logo as
> a vector-based image, in a format like EPS, SVG, etc?
I apologize if this question has been answered elsewhere - but I was
wondering if there is a public repository for the artwork in general -
vector/bitmap-originals of the bluecurve graphics, etc.
Thanks,
Steven Garrity
yum.conf shipped with 1.0
by Michael A. Koziarski
Hi All,
will the yum.conf shipped with Fedora Core 1.0 include entries for
fedora.us? I think it would be a good first step toward introducing users
to the idea of using yum repositories to get software.
Plus it will enable software that has recently been added to fedora.us
(such as gtkmm2) to provide simple installation instructions.
I realise that the proper merger of fedora.us and fedora core is still a
way off, but I believe that this would be an excellent first step.
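For illustration only, such an entry might look something like the sketch below; the repository id and baseurl are assumptions, not the actual fedora.us layout:

```
[fedora-us-stable]
name=Fedora.us stable add-on packages
baseurl=http://download.fedora.us/fedora/redhat/$releasever/i386/yum/stable/
```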
Any comments?
Cheers
Koz
Date and Time Configuration Tool man page translation
by Hornain Frederic
Hi,
This is the man page of the Date and Time Configuration Tool translated
into French.
I will do the same with the HTML help file as soon as possible.
Keep me in touch if you need something else translated into French.
<<redhat-config-date.8>>
Best regards
Fred
kernel to be backported to RH7.3-RH9?
by Axel Thimm
The current rawhide kernels have some compatibility bits (like
%{sevenexcompat}).
Will the Fedora Core kernel be used for RHL errata (with
nptl/exec-shield deactivated where appropriate)?
--
Axel.Thimm(a)physik.fu-berlin.de
OSSNET - Proposal for Swarming Data Propagation
by Warren Togami
(Alan Cox mentioned a theoretical idea for bittorrent in data
propagation for yum... so this seemed like the most appropriate time to
post this again. Comments would be greatly welcomed.)
OSSNET Proposal
October 28, 2003
Warren Togami <warren(a)togami.com>
The following describes my proposal for the "OSSNET" swarming data
propagation network. This was originally posted to mirror-list-d
during April 2003. This proposal has been cleaned up a bit and
amended.
Unified Namespace
=================
This can be shared with all Open Source projects and distributions.
Imagine this type of unified namespace for theoretical protocol "ossnet".
ossnet://%{publisher}/path/to/data
Where %{publisher} is the vendor or project's master tracker.
The client finds it with standard DNS.
Examples:
ossnet://swarm.redhat.com/linux/fedora/1/en/iso/i386/
ossnet://ossnet.kernel.org/pub/linux/kernel/
ossnet://swarm.openoffice.org/stable/1.2beta/
ossnet://central.debian.org/dists/woody/
ossnet://swarm.k12ltsp.org/3.1.1/
ossnet://master.mozilla.org/mozilla1.7/
Each project's tracker has its own official data source, with the entire
repository GPG signed for automatic ossnet client verification.
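As a sketch of how a client might split such a URI into tracker host and repository path (the ossnet scheme is hypothetical, so this only shows that standard URL parsing would suffice):

```python
from urllib.parse import urlparse

# Hypothetical ossnet URI; the scheme does not actually exist.
uri = "ossnet://swarm.redhat.com/linux/fedora/1/en/iso/i386/"
parts = urlparse(uri)

tracker = parts.netloc  # the publisher's master tracker, found via standard DNS
path = parts.path       # path to the data within that publisher's repository

print(tracker, path)
```

The client would then contact `tracker` to join the swarm for `path`.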
Phase 1 - Swarming for Mirrors only
===================================
Initial implementation would be something like rsync, except swarming
like bittorrent, and used only for mirror propagation. It may need
encryption, some kind of access control, and tracking in order to
prevent intrusion, e.g. to hold a new release secret until release day.
(This paragraph below about access control and encryption was written
after the release of RH9, and the failure of "Easy ISO" early access due
to bandwidth overloading and bittorrent. In the new Fedora episteme
this access control stuff may actually not be needed anymore. We can
perhaps implement OSSNET without it at first.)
I believe access control can be done with the central tracker (i.e. Red
Hat) generating public/private keys, and giving the public key to the
mirror maintainers. Each mirror maintainer would choose which
directories they want to permanently mirror, and which to exclude. Each
mirror server that communicates with another mirror would first need to
verify identity with the master tracker somehow. If somebody leaks
before a release, they can be punished by revoking their key, then the
master tracker and other mirrors will reject them.
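As a very rough illustration of the issue/verify/revoke idea (simplified here to a shared-secret HMAC credential rather than the public/private key scheme described above; all names are hypothetical):

```python
import hmac
import hashlib

def issue_credential(master_secret: bytes, mirror_id: str) -> bytes:
    """The master tracker derives a per-mirror credential from its secret."""
    return hmac.new(master_secret, mirror_id.encode(), hashlib.sha256).digest()

def verify_mirror(master_secret: bytes, mirror_id: str,
                  credential: bytes, revoked: set) -> bool:
    """Accept a mirror only if its credential checks out and its id has
    not been revoked (e.g. after leaking a release early)."""
    if mirror_id in revoked:
        return False
    expected = issue_credential(master_secret, mirror_id)
    return hmac.compare_digest(expected, credential)
```

Revocation is then just adding the offending mirror's id to the revoked set; the master tracker and other mirrors reject it from then on.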
Even without the encryption/authorization part this would be powerful.
This would make mirror propagation far faster while dramatically
reducing load on the master mirror. Huge money savings for the data
publisher... but it gets better.
Phase 2 - Swarming clients for users
====================================
I was also thinking about end-user swarming clients. up2date, apt or yum
could integrate this functionality, and this would work well because
they already maintain local caches. The protocol described above would
need to behave differently for end-users in several ways.
Other than the package manager tools, a simple "wget"-like program would
be best for ISO downloads.
Unauthenticated clients could join the swarm with upload turned off by
default and encryption turned off (to reduce server CPU usage). Most users
don't want to upload, and that's okay because the Linux mirrors are
always swarming outgoing data. Clients can optionally turn on upload,
set an upload rate cap, and specify network subnets where uploading is
allowed. This would allow clients within an organization to act as
caches for each other, or a network administrator could set up a client
running as a swarm cache server uploading only to the LAN, saving tons
of ISP bandwidth. A DSL/cable modem ISP would be easy to convince to
set up their own cache server to efficiently serve their customers,
because setting up a server can be done quickly and unofficially.
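The upload policy just described (off by default, optionally enabled and restricted to explicit subnets) could be sketched like this; the function and parameter names are assumptions, not part of any real protocol:

```python
import ipaddress

def may_upload_to(peer_ip: str, upload_enabled=False, allowed_subnets=()):
    """Uploading is DISABLED by default; when enabled, it can be
    restricted to explicit subnets (e.g. only the local LAN)."""
    if not upload_enabled:
        return False
    if not allowed_subnets:  # enabled with no subnet restriction
        return True
    peer = ipaddress.ip_address(peer_ip)
    return any(peer in ipaddress.ip_network(net) for net in allowed_subnets)
```

A LAN cache server would pass its own subnets as `allowed_subnets`, so it uploads to local clients but never over the ISP link.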
Clients joining the swarm would greatly complicate things because the
protocol would need to know about "nearby" nodes, like your nearest
swarming mirror or your LAN cache server. This may need to be a
configuration option for end-user clients. These clients would need to
make more intelligent use of nearby caches rather than randomly swarm
packets from hosts over the ISP link. The (bittorrent) protocol would
need to be changed to allow "leeching" under certain conditions without
returning packets to the network. Much additional thought would be
needed in these design considerations.
Region Complication
===================
Due to the higher cost of intercontinental bandwidth (or of commodity
Internet versus I2 within America), we may need to implement a "cost
table" system that calculates the best near-nodes, taking bandwidth cost
into account.
Perhaps this may somehow use dedicated "alternate master trackers"
within each cost region, for example Australia, which are GPG identified
by the master tracker as being authoritative for the entire region. Then
end-user clients that connect to the master tracker are immediately told
about their nearer regional tracker.
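A cost-table lookup of this kind could be as simple as the following sketch (the regions, hosts, and costs are made up for illustration):

```python
def nearest_tracker(client_region: str, trackers: dict, cost: dict) -> str:
    """Pick the tracker in the cheapest region relative to the client.
    trackers maps region -> tracker host; cost maps
    (from_region, to_region) -> bandwidth cost."""
    best_region = min(
        trackers,
        key=lambda region: cost.get((client_region, region), float("inf")),
    )
    return trackers[best_region]
```

The master tracker would run something like this when redirecting a client to its regional tracker.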
Possible Multicasting
=====================
This isn't required, but multicasting could be utilized in addition to
unicast in order to more efficiently seed the larger and more connected
worldwide mirrors. Multicast would significantly increase router setup
and software complexity, so I am not advocating that this be worked on
until the rest of the system is implemented.
Possible Benefits?
==================
* STATISTICS!
As BitTorrent has demonstrated, end-user downloads could
possibly be tracked and counted. It would be fairly easy to
standardize data collection in this type of system. Today we have no
realistic way to collect download data from many mirrors due to the
setup hassles and many different types of servers. Imagine how useful
package download frequency data would be. We would have a real idea of
what software people are using, and could use that data to gauge
where QA should be focused to make users/customers happier.
* Unified namespace!
Users never have a need to find mirrors anymore, although optionally
setting cache addresses would help it be faster and more efficient.
* Public mirrors (even unofficial ones) can easily be set up and shut
down at any time. Immediately after going online they will join the swarm
and begin contributing packets to the world. THAT is an unprecedented and
amazing ability. The server maintainer can set an upload cap so it never
kills their network. For example, businesses or schools could increase
their upload cap during periods of low activity (like night?) and
contribute to the world. The only difference between an official and an
unofficial mirror is that the unofficial one cannot download or serve
access-controlled data, since it is not cryptographically identified by
the master tracker. Any client (client == mirror) can choose what data it
wants to serve, and what it does not want to serve.
* Automatic failover: If your nearest or preferred mirror is down, as
long as you can still reach the master tracker you can still download
from the swarm.
* Most of everything I described above is ALREADY WRITTEN AND PROVEN
in existing Open Source implementations like bittorrent
(closest in features), freenet (unified namespace) and swarmcast
(progenitor?). I don't think the access control and dynamic update
mechanisms have been implemented yet, though. bittorrent may be a good
starting point for development since it is written in python ... although
scalability may be a factor with python, so a C rewrite may be needed. (?)
FAQ
===
1. This idea sucks, I don't want to upload!
RTFM! This proposal says that clients have upload DISABLED by default.
2. This idea sucks, I don't want to upload to other people!
RTFM! In this proposal you can set your mirror to upload only to certain
subnets, at set upload rate caps.
3. Won't this plan fail for clients behind NAT?
Incoming TCP sessions are only needed if you upload to the swarm, since
other clients connect to you. Uploading is DISABLED by default.
Downloading only requires outgoing TCP connections.
4. What if outgoing connections on high ports are disallowed?
Then you are SOL, unless we implement a "proxy" mode. Your LAN can have
a single proxy mirror that serves only your local network and downloads
requests on your behalf.
Conclusion
==========
Just imagine how much of a benefit this would be to the entire Open
Source community! Never again would anyone need to find mirrors.
Simply point ossnet-compatible clients to the unified namespace URI, and
it *just works*. We could make a libossnet library, and easily extend
existing programs like wget, curl, Mozilla, galeon, or Konqueror to
browse this namespace.
This is an AWESOME example of legitimate use of P2P, and abuse is far
easier to police than with traditional illegal use of P2P clients. Data
publishers need to run a publicly accessible tracker and must be held
legally accountable. This is more like a web server with legal content
and millions of worldwide proxy caches; in any case the web server would
be held accountable for the legality of its content.
That is how this differs from Freenet, which uses encryption everywhere
and is decentralized. Freenet can be used for both good and evil, while
ossnet can only sustainably be used for good, because normal law
enforcement can easily locate and (rightly) prosecute offenders. This is
existing copyright law, used how it was meant to be used. If this idea
became reality, we could point to this glowing example of legitimate P2P
as a weapon to fight RIAA/MPAA interests.
I hope I can work on this project one day. This could be world
changing... and it sure would be fun to develop. Maybe Red Hat could
develop this, in cooperation with other community benefactors of such an
awesome distribution system.
Comments? =)
Warren Togami
warren(a)togami.com
p.s. Time to short Akamai stock. <evil grin>
Rawhide updates
by Philip Balister
Is there any way to get an idea of what has changed (changelog-wise) in
rawhide over the past couple of days?
Philip