nforce support?
by Kent Nyberg
I am about to buy a new computer after my move to another city to study.
Now I wonder if it is possible to buy one with an nForce chipset?
I recall someone saying that to get it working you have to install
stuff from NVIDIA. If I can't get it to work with RHL out of the box,
I don't really know if I want this.
Can someone enlighten me about this? Maybe there is some homepage with
the status of this work?
Have a nice day!
20 years, 10 months
RH Decisions (was Re: APT, Yum and Red Carpet)
by Jef Spaleta
Kyle Maxwell wrote:
>But some decisions are starting to leave me in the cold
Err..well...I'm going to give Red Hat a pretty long lead time on when I
expect them to get this whole external communication thing close to the
right ballpark. I'm not prepared to read anything sinister into why rhl
is down or whatever. Hell, I'm shocked it had as much information as it
did for the 3.2 days it was up (almost like they knew what they were
getting into...they know better now, I think :->). But if a significant
amount of discussion was generated by the "right" people during those
3.2 days, enough to make a significant portion of that written material
inconsistent with bleeding-edge ideas...then I say it's better to pull
the site down...better no info than misleading info. The keys to the
kingdom haven't changed hands yet...I'd rather see some significant
fits and starts early on than a big badass policy gotcha later.
Beginnings are hard...this shift towards rhl-"the project" is probably
going to need a significant honeymoon window where users and
developers...especially the more fervent ones...are going to have to
allow for some really bone-headed, obvious mistakes from people who live
on the corporate side of the fence. I have no expectation that this is
going to be a smooth transition during this beta phase. Consider this a
proto-project phase...or maybe it would be best to call the project
concept alpha-level...where the project framework isn't stable enough to
make professional documentation worthwhile. I fervently hope that the
machinery for this "project" will be in place by the end of this beta
phase.
I've beaten the trademark dead horse only because it's an existing
policy that Red Hat has to make sure is updated to make sense given the
lack of boxsets in the upcoming release. A lot of the other issues that
people seem to have with Red Hat right now, on how to effectively
communicate with the community, are new issues that need new policy...
and I get the feeling the hatters didn't shine the flashlight that far
ahead when taking that first step towards the project concept. This
isn't going to be a linear progression from closed to open...consider
the loss of the rhl site as the first coffee table Red Hat stumbled into
and had to back up and step around while groping for a plan. Luckily I
don't have to worry about production systems. But as it stands...is
anything all that different from a year ago when it comes to deciding
whether you were going to deploy the next Red Hat release (other than
the lack of boxsets and the OEM issues that causes)? I think it helps to
think of this beta as still a traditional beta...the community project
still needs a lot of fleshing out...as a concept it's alpha. I can just
imagine how much internal discussion is being generated...and I can only
hope the hatters show as good a sense of knowing how to schedule a
"release" of the project framework as they do about pushing out distro
releases.
-jef"i love it when a plan comes together...now...if we only had a
plan...though I'd settle for a plan for making a plan"spaleta
Digital Certificate due to expire ?
by Mohamed Eldesoky
Got this while looking at the new up2date release in Alikins'
_____________________________________________
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 0 (0x0)
Signature Algorithm: md5WithRSAEncryption
Issuer: C=US, ST=North Carolina, L=Research Triangle Park, O=Red Hat,
Inc., OU=Red Hat Network Services, CN=RHNS Certificate
Authority/Email=rhns(a)redhat.com
Validity
Not Before: Aug 23 22:45:55 2000 GMT
Not After : Aug 28 22:45:55 2003 GMT
______________________________________________
I wonder if you are already aware of that expiry date!
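For what it's worth, a quick way to check whether a timestamp like that has already passed is to parse the openssl-style "Not After" string; a minimal sketch (the helper name here is mine, not part of up2date):

```python
from datetime import datetime, timezone

def cert_expired(not_after, now=None):
    """Return True if an openssl-style 'Not After' timestamp is in the past."""
    # openssl prints e.g. "Aug 28 22:45:55 2003 GMT"
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    if now is None:
        now = datetime.now(timezone.utc)
    return now > expiry

print(cert_expired("Aug 28 22:45:55 2003 GMT"))  # → True
```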
Regards
Mohamed Eldesoky
Linux-Egypt
--
Once a wise man said "nothing"
Re: APT, Yum and Red Carpet
by Jef Spaleta
Hans Deragon wrote:
> However, apt has synaptic available as GUI. I am not aware of a GUI
> for yum. For a desktop machine, a GUI is a must.
It's deceptive to think just about end-user-oriented feature sets when
deciding on future development paths. One could argue that yum has a
technical advantage in terms of long-term development inside rhl,
because it's using the same Python bindings that the current Red Hat
tools use to interact with the rpmdb. There is a definite development
advantage in code reuse. So if you want the Red Hat tools to be
repository-aware, make use of the technology that fits best with the
Red Hat tools.
One could also argue that the Red Hat tools should be pitched, but
anyone arguing that would have to be pretty persuasive, or would have
to have really good timing, to change the momentum surrounding the
development of the Red Hat tools (like anaconda and r-c-p).
The long-term solution is of course bribing the repository-technology
developers into sitting down over some pizza, beer, and KK doughnuts
and hashing out a repository metadata standard so repos are as
tool-neutral as possible.
But there is a deeper issue in your comment. For a nontechnical user's
desktop machine a GUI is a must...that is surely a truism. But now you
have to ask yourself: what is Red Hat's timeline for seriously
targeting nontechnical home desktops? I personally don't think this
little hiccup about which repo technology gets bolted into rhl is going
to matter on the same timescale as the other relevant issues which
would make Linux a preferred solution in the mainstream nontechnical
desktop market.
--jef"Jef to Magic8Ball:
Is the next release going to target desktop users like Jef's mom
Magic8ball to Jef:
Outlook not so good"spaleta
P4s, Athlons and bandwidth
by Jean Francois Martinez
Given that most/all of the recent boxes (i.e. the ones doing the real
work) are P4s and Athlons, it is time Red Hat stopped compiling
with -mcpu=i686 and started optimizing for the P4: -mcpu=pentium4
Another point is that there are no low-level glibc functions tuned for
the P4 and the Athlon. The highest targeted processor is the PIII.
However, documents on AMD's web site show that moving data (i.e. memcpy
and friends) can be made several times faster by using 3DNow!
instructions and data prefetching. I gave only a cursory glance at the
assembler parts of glibc, but it didn't look like those parts
(targeting the PIII) would be even remotely ideal for the Athlon. Same
thing for the P4.
Would it be possible for Red Hat to contact those with an interest,
i.e. AMD/Intel, in order to get high-performance assembly versions of
those low-level routines? Or, failing that, to have them written by an
employee?
--
Jean Francois Martinez <jfm512(a)free.fr>
new up2date available (with apt/yum repo support)
by Adrian Likins
New up2date packages for testing available at:
http://people.redhat.com/~alikins/up2date/severn/
Most notable new feature is support for 3rd party
apt and yum repositories. See the included
/etc/sysconfig/rhn/sources file for info
on how to configure them.
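For reference, entries in that sources file are one line per channel; a hypothetical sketch (the repo labels and URLs below are invented for illustration, so check the shipped file for the exact syntax):

```
# keep the default Red Hat Network channel
up2date default

# a yum repository: type, label, base URL
yum extras-yum http://example.org/yum/severn/i386

# an apt repository: type, label, server, distribution, component(s)
apt extras-apt http://example.org/apt severn/i386 os extras
```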
It definitely still has some rough edges,
but hopefully it will at least work most of
the time ;->
Most of the rest of the changes are just
multilib related and should be mostly
transparent.
Adrian
RawHide and signature
by Féliciano Matias
Many packages in Rawhide are not signed (528 of 1461).
Is that normal?
--
Féliciano Matias <feliciano.matias(a)free.fr>
rpm hell: trying to install Bastille in Severn (RH 9.0.93)
by Elton Woo
I would like to test Bastille with the Severn beta, so from
http://www.bastille-linux.org/ I downloaded:
Bastille-2.1.1-1.0.i386.rpm, as well as
perl-Tk-800.024-5.rh9.at.i386.rpm.
However, when I try to install perl-Tk, I get:
]# rpm -Uvh perl-Tk-800*.rpm
warning: perl-Tk-800.024-5.rh9.at.i386.rpm: V3 DSA signature: NOKEY, key ID
66534c2b
error: Failed dependencies:
atrpms >= 13-16 is needed by perl-Tk-800.024-5.rh9.at
]#
I can't find any package called "atrpms".
The links I get are:
* freshrpms.net by Matthias Saou
* NewRPMS by Rudolf (Che) Kastl
* Dag's rpm collection by Dag Wieers
...and I still can't find "atrpms" >= 13-16.
Can someone explain to me, in plain English,
exactly WHAT package I need to get?
TIA,
Elton.
--
http://setiathome.ssl.berkeley.edu/stats/team/team_4504.html
"You only live once, so let's make life EASIER for each other."
LINUX Registered User #193975. AMD-K7 ATHLON CPU power on board.
Re: RH Decisions (was Re: APT, Yum and Red Carpet)
by Jef Spaleta
>Instead, we need to work on some software that can detect
>dependency conflicts between the external repository and
>the core distribution and rebuilds the RPMS in the repository.
Err...one external repository against the "core" is easy. But 4 or 5
independent external repositories that might interfere with each other
and with the core...is going to be a bloody nightmare, even if you are
trying to rebuild rpms in the repos to get around things. Something
like an advanced GNOME repo with bleeding-edge GNOME stuff could take a
hell of a lot of rebuilding...and of course, depending on the
conventions used in the specfiles, you still aren't going to solve all
the dependency problems by rebuilding. I think there are lessons to be
learned from how the other community-based distros try to do
things...how fractured is the Debian tree? How fractured is the Gentoo
tree?
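To make the nightmare concrete: even a naive pairwise check for repos shipping the same package at different versions is easy to write, but it only surfaces the collisions, it doesn't resolve them. A minimal sketch (the repo and package data are invented for illustration):

```python
def find_collisions(repos):
    """Map package name -> {repo: version} for packages shipped by two or
    more repos at differing versions."""
    seen = {}  # package name -> {repo_name: version}
    for repo_name, packages in repos.items():
        for name, version in packages.items():
            seen.setdefault(name, {})[repo_name] = version
    # keep only packages with multiple providers at differing versions
    return {name: providers for name, providers in seen.items()
            if len(providers) > 1 and len(set(providers.values())) > 1}

# Invented example: two external repos and the core all shipping libfoo
repos = {
    "core":   {"glibc": "2.3.2", "libfoo": "1.0"},
    "gnome":  {"libfoo": "1.2", "gnome-panel": "2.4"},
    "extras": {"libfoo": "1.1", "bar": "0.9"},
}
print(find_collisions(repos))  # → {'libfoo': {'core': '1.0', 'gnome': '1.2', 'extras': '1.1'}}
```

Note that agreeing versions don't count as a collision here; the hard part (which version wins, and who rebuilds) is exactly the policy question that no script can answer.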
>It really isn't hard to automatically bump the release
>number and rebuild the RPM, nor should it be very hard
>to figure out when exactly it is needed ...
Epoch wars!!!!!!!!!!! This sounds great as long as you don't have 14
different repos all providing the same version of the same library
compiled with different "options" turned on or off in the specfile, or
with craptastic explicit dependencies listed in the specfile...or
dependencies unique to one repo. How you sanely maintain a mixed
dependency tree among several 3rd-party repos is more than slightly
scary.
The idea Seth mentioned of "task"-based repos, to keep repo collisions
down to a minimum, has some merit for repos that add new functionality.
But there needs to be strong community agreement on what that set of
"tasks" is, and there needs to be strong community agreement on how to
promote a package to core if multiple repos find they need to provide
it and end up with a conflict...but even with that, I think there are
some libraries that end up being needed across task groups that can't
be put into core because of patents and whatnot.
But repos that try to provide advanced "core" functionality...like
bleeding-edge GNOME, for example...just can't be thought of in terms of
a single "task". You end up having to cut across task groups to provide
a workable GNOME...that's surely going to lead straight to a sort of
hell if there isn't some coordination between the repos and the core
community.
I'm still a HUGE proponent of the "one true meta-repository." And I
have no problem with there being competing implementations of the one
true repo. Competition is good, right? But I'm more than willing to be
convinced that repo maintainers can put their heads together and come
up with a workable "tasks"-based framework with a goal of keeping
collisions down across the repos. But I am pretty convinced that if
repos aren't working together at some level...yer just going to have
lots of dependency and packaging-error problems when mixing 3+ repos on
top of core. Whether they have to work so closely together that they
fuse (I love that word, fuse) into "the one true metarepo"...or whether
they only need a more theoretical policy framework on how to deal with
packaging conflicts...it's clear there is going to have to be some
level of communication to have it work better than it works now.
Automated software is NOT going to solve the problems we have now when
mixing XD2, freshrpms, fedora, and the COUNTLESS rpms from the projects
at sourceforge that don't even make it into a repo...it's going to take
some packaging policies and conflict-resolution policies...a lot of
beer...a lot of pizza...a lot of KK doughnuts.
53 developers each with their own little micro-repo isn't going to be
much better than rpms sitting in project trees on sourceforge.
-jef"and someone PLEASE think about the people stuck on dial-up on some
of their machines, and make it easy to grab something like repo iso
images, like maybe on a quarterly basis"spaleta