I tried that.
I am starting to get upset now. Mike says to update, but how is this possible? All I can do is run the Fedora rescue disc, and then what? Do I run up2date -u in this mode after setting up the network connection?
Seems that whoever released this distro should throw away the iso cds and create a BRAND NEW ONE. This distro is very very very hosed and buggy. There are NO RECOVERY PATHs that I can find and I have wasted too much time already trying to fix what is very broken.
Dan
-----Original Message-----
From: fedora-list-bounces@redhat.com [mailto:fedora-list-bounces@redhat.com] On Behalf Of Anil Kumar Sharma
Sent: Tuesday, October 25, 2005 10:58 PM
To: For users of Fedora Core releases
Subject: Re: FC4 does not work, "out of the box" for me; GUI/X11 fails
Try replacing this file with an older version: libvgahw.a. For details, search the list archives around 7th July 2005. Also see the bugzilla referred to there.
On 10/26/05, Mike Pepe lamune@doki-doki.net wrote:
Daniel B. Thurman wrote:
I tried that - a text-mode installation - and it does not work, or at least I was not able to figure out how to do it.
I believe that when the CD boots, type
linux text
that should do it.
-Mike
-- fedora-list mailing list fedora-list@redhat.com To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list
-- Anil Kumar Sharma
Daniel B. Thurman wrote:
Seems that whoever released this distro should throw away the iso cds and create a BRAND NEW ONE. This distro is very
Don't forget... although we are the monkeys that get experimented on, we get a chance to test and report first... some of the problems are coming from our lack of effort to find the problems first...
-Andy
On Wed, 2005-10-26 at 04:28, Andy Green wrote:
Daniel B. Thurman wrote:
Seems that whoever released this distro should throw away the iso cds and create a BRAND NEW ONE. This distro is very
Don't forget... although we are the monkeys that get experimented on, we get a chance to test and report first... some of the problems are coming from our lack of effort to find the problems first...
Note that the k12ltsp project also respins the isos after testing with their added software, and as a side effect includes the updates available at the time of their release.
Follow the 'how to obtain' link at http://www.k12ltsp.org/phpwiki/
On Wednesday 26 October 2005 14:25, Daniel B. Thurman wrote:
I tried that.
I am starting to get upset now. Mike says to update, but how is this possible? All I can do is run the Fedora rescue disc, and then what? Do I run up2date -u in this mode after setting up the network connection?
Seems that whoever released this distro should throw away the iso cds and create a BRAND NEW ONE. This distro is very very very hosed and buggy. There are NO RECOVERY PATHs that I can find and I have wasted too much time already trying to fix what is very broken.
Hi Daniel, don't get too upset :) You said you finished installing it in text mode: "2) Did Text-based installation and installed everything and hoped that I can somehow fix the X11/GUI later."
Then you can update FC4 using yum. As root in the console, type:

yum check-update

You'll see the upgradable packages. Then you can select the packages you want to update (in this case X11 related):

yum update packagename1 packagename2
Please let us know the result.
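As a sketch, the sequence described above could look like this. The package names are placeholders, not the actual FC4 X11 package set, and the commands are echoed rather than executed so the sketch is safe to run anywhere; on a real FC4 box you would run them directly as root.

```shell
# Sketch of the update sequence described above (package names are
# placeholders). Commands are echoed, not run; on FC4 run them as root.
steps=(
  "yum check-update"                      # list packages with newer versions
  "yum update xorg-x11 xorg-x11-libs"     # update only selected packages
)
for cmd in "${steps[@]}"; do
  printf '# %s\n' "$cmd"
done
```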
On Wed, 2005-10-26 at 17:44 +0700, Fajar Priyanto wrote:
On Wednesday 26 October 2005 14:25, Daniel B. Thurman wrote:
I tried that.
I am starting to get upset now. Mike says to update, but how is this possible? All I can do is run the Fedora rescue disc, and then what? Do I run up2date -u in this mode after setting up the network connection?
Seems that whoever released this distro should throw away the iso cds and create a BRAND NEW ONE. This distro is very very very hosed and buggy. There are NO RECOVERY PATHs that I can find and I have wasted too much time already trying to fix what is very broken.
Hi Daniel, don't get too upset :) You said you finished installing it in text mode: "2) Did Text-based installation and installed everything and hoped that I can somehow fix the X11/GUI later."
Then you can update FC4 using yum. As root in the console, type:

yum check-update

You'll see the upgradable packages. Then you can select the packages you want to update (in this case X11 related):

yum update packagename1 packagename2
Please let us know the result.
Shouldn't Daniel be booting to single user mode by adding a 1 or 3 or something to the grub boot line to enable him to boot from the hard drive in text mode to do the updates with yum?
David Niemi wrote:
(snip)
Shouldn't Daniel be booting to single user mode by adding a 1 or 3 or something to the grub boot line to enable him to boot from the hard drive in text mode to do the updates with yum?
Yes, I did mention installing in text mode so you can see what's going on, but once that's done you would need to boot into runlevel 3 and update via yum or up2date.
To do that, when GRUB starts, just hit any key except Enter and edit the boot command. Append the number 3 to the end, and it should boot up into text mode.
-Mike
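As an illustration, a typical FC4 grub.conf kernel line before and after the edit might look like this (the kernel version and root device here are hypothetical examples, not taken from the thread):

```
# Original kernel line as shown on the GRUB edit screen (hypothetical example):
kernel /vmlinuz-2.6.11-1.1369_FC4 ro root=/dev/VolGroup00/LogVol00 rhgb quiet

# Same line with "3" appended, booting to runlevel 3 (text mode, networking up):
kernel /vmlinuz-2.6.11-1.1369_FC4 ro root=/dev/VolGroup00/LogVol00 rhgb quiet 3
```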
On Wed, 2005-10-26 at 00:25 -0700, Daniel B. Thurman wrote:
Seems that whoever released this distro should throw away the iso cds and create a BRAND NEW ONE. This distro is very very very hosed and buggy.
Why isn't this done by Fedora? (Not outsiders that we don't know whether we can trust.)
In the past we had Red Hat Linux 7.0, 7.1, 7.2, 7.3 before we jumped to 8.0. You had a fighting chance of getting an installation working in the first go. Having to do another hour (or more) of updates, straight away, is a right pain. Just as bad as Windows.
The whole Fedora approach of rushing out major changes to releases on a certain date for the sake of a schedule has proved itself to be a stupid idea. I don't *need* to be rebuilding the entire OS on several PCs that often, and I certainly don't want to. I'm now looking at using something else, instead. One of the BSDs is looking favourable to me.
Tim wrote:
The whole Fedora approach of rushing out major changes to releases on a certain date for the sake of a schedule has proved itself to be a stupid idea. I don't *need* to be rebuilding the entire OS on several PCs that
We should entertain the idea that this was not in fact decided in the expectation of your personal needs... RHAT have said pretty clearly that Fedora is for them a way to gain experience and feedback with newer stuff so that integration of it into RHEL will go smoothly. That's the Fedora deal, and it explains how some really good guys at RHAT can spend all day every day on it, to us, for free.
Having some kind of deadline, despite issues it can cause, in my experience is beneficial in focusing minds and clarifying the mission as it were.
Having said that, yeah, broken install action is pretty major.
often, and I certainly don't want to. I'm now looking at using something else, instead. One of the BSDs is looking favourable to me.
Go for it, dude! Or consider using a RHEL-u-like, e.g. CentOS or Whitebox, which have the RHEL release pattern. Use what suits you best.
-Andy
On Thu, 2005-10-27 at 12:58 +0930, Tim wrote:
On Wed, 2005-10-26 at 00:25 -0700, Daniel B. Thurman wrote:
Seems that whoever released this distro should throw away the iso cds and create a BRAND NEW ONE. This distro is very very very hosed and buggy.
Why isn't this done by Fedora? (Not outsiders that we don't know whether we can trust.)
---- The answer has been pretty clear on this - the release cycle is so short, that it doesn't pay to spend the energy rebuilding the current release because by that time, they are busy doing the builds for the test releases for the next series. ----
In the past we had Red Hat Linux 7.0, 7.1, 7.2, 7.3 before we jumped to 8.0. You had a fighting chance of getting an installation working in the first go. Having to do another hour (or more) of updates, straight away, is a right pain. Just as bad as Windows.
---- No - has nothing to do with Windows. It has to do with pushing the development along. You should read Eric Raymond's 'The Cathedral and the Bazaar' for a developer's view of fast-paced, less-than-perfect releases as a method that brings rapid development to Linux / F/OSS. ----
The whole Fedora approach of rushing out major changes to releases on a certain date for the sake of a schedule has proved itself to be a stupid idea. I don't *need* to be rebuilding the entire OS on several PCs that often, and I certainly don't want to. I'm now looking at using something else, instead. One of the BSDs is looking favourable to me.
---- Actually, that's your point of view and certainly noted. You don't have to be rebuilding several PC's, you can keep them where they are...the choice is of course always yours.
BSD is worth a shot - you will probably get some valuable knowledge about how other systems do things. You should probably look at Ubuntu too. If you want stability and long term maintained, consistent release, RHEL or the rebuilds like CentOS give you that. Again, you have the choice - it's your software.
Craig
Craig White wrote:
On Thu, 2005-10-27 at 12:58 +0930, Tim wrote:
On Wed, 2005-10-26 at 00:25 -0700, Daniel B. Thurman wrote:
Seems that whoever released this distro should throw away the iso cds and create a BRAND NEW ONE. This distro is very very very hosed and buggy.
Why isn't this done by Fedora? (Not outsiders that we don't know whether we can trust.)
The answer has been pretty clear on this - the release cycle is so short, that it doesn't pay to spend the energy rebuilding the current release because by that time, they are busy doing the builds for the test releases for the next series.
Why not automate the packages to be the latest? Isn't this what computers are supposed to be good at? How hard would it be to make a bi-weekly package? Release the packages as FCx.yymmdd. The ISOs would just be created from the current packages. The date code lets users know the date of the image.
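The proposed FCx.yymmdd naming could be generated mechanically; a minimal sketch, assuming "FC4" as the release and the two-digit year/month/day format suggested above:

```shell
# Minimal sketch of the proposed date-coded respin name (FCx.yymmdd).
# "FC4" and the date format are assumptions based on the suggestion above.
release="FC4"
stamp=$(date +%y%m%d)        # e.g. 051027 for 27 October 2005
name="${release}.${stamp}"
echo "$name"
```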
Robin Laing wrote:
Craig White wrote:
On Thu, 2005-10-27 at 12:58 +0930, Tim wrote:
On Wed, 2005-10-26 at 00:25 -0700, Daniel B. Thurman wrote:
Seems that whoever released this distro should throw away the iso cds and create a BRAND NEW ONE. This distro is very very very hosed and buggy.
Why isn't this done by Fedora? (Not outsiders that we don't know whether we can trust.)
The answer has been pretty clear on this - the release cycle is so short, that it doesn't pay to spend the energy rebuilding the current release because by that time, they are busy doing the builds for the test releases for the next series.
Why not automate the packages to be the latest? Isn't this what computers are supposed to be good at? How hard would it be to make a bi-weekly package? Release the packages as FCx.yymmdd. The ISOs would just be created from the current packages. The date code lets users know the date of the image.
I assume you mean bi-weekly respins and a subsequent set of new isos? How long would each iso set stay in circulation? A month? Two months? A year?
While I agree that it should be (relatively) easy to design and implement such a process, managing the resultant "sub-release" sets could easily become a nightmare.
Just my opinion, David-Paul Niner
David-Paul Niner wrote:
Robin Laing wrote:
Craig White wrote:
On Thu, 2005-10-27 at 12:58 +0930, Tim wrote:
On Wed, 2005-10-26 at 00:25 -0700, Daniel B. Thurman wrote:
Seems that whoever released this distro should throw away the iso cds and create a BRAND NEW ONE. This distro is very very very hosed and buggy.
Why isn't this done by Fedora? (Not outsiders that we don't know whether we can trust.)
The answer has been pretty clear on this - the release cycle is so short, that it doesn't pay to spend the energy rebuilding the current release because by that time, they are busy doing the builds for the test releases for the next series.
Why not automate the packages to be the latest? Isn't this what computers are supposed to be good at? How hard would it be to make a bi-weekly package? Release the packages as FCx.yymmdd. The ISOs would just be created from the current packages. The date code lets users know the date of the image.
I assume you mean bi-weekly respins and a subsequent set of new isos? How long would each iso set stay in circulation? A month? Two months? A year?
While I agree that it should be (relatively) easy to design and implement such a process, managing the resultant "sub-release" sets could easily become a nightmare.
Just my opinion, David-Paul Niner
I don't see any problem with managing these sets. In two weeks, the set is defunct and dropped. If anyone has a copy from a month ago, all they need to do is run "yum update" to be the same as the current set. The only difference is the individual packages within the set.
Only the iso would need to be replaced. The way I look at it is that, using the ftp idea, you could download FC_current, which would be a link to the latest package. It is not a version change but a packaging of all the latest updates to the base package. The date just indicates the date of creation of the set.
As my daughter commented about me leaving the new laptop on overnight: I left it on to download all the updates on a new install.
Robin Laing wrote:
Why not automate the packages to be the latest? Isn't this what computers are supposed to be good at? How hard would it be to make a bi-weekly package? Release the packages as FCx.yymmdd. The ISOs would just be created from the current packages. The date code lets users know the date of the image.
And how many mirrors want to transfer an extra ~12 GB bi-weekly? That's just for the ISOs. Double that for the exploded trees.
And how many people want to test these ISO sets bi-weekly to make sure no bugs have crept in?
And how long does the non-technical side of creating new ISOs take? In the past ISOs have been ready a week early, but were waiting on non-technical issues.
William Hooper wrote:
Robin Laing wrote:
Why not automate the packages to be the latest? Isn't this what computers are supposed to be good at? How hard would it be to make a bi-weekly package? Release the packages as FCx.yymmdd. The ISOs would just be created from the current packages. The date code lets users know the date of the image.
And how many mirrors want to transfer an extra ~12 GB bi-weekly? That's just for the ISOs. Double that for the exploded trees.
And how many people want to test these ISO sets bi-weekly to make sure no bugs have crept in?
And how long does the non-technical side of creating new ISOs take? In the past ISOs have been ready a week early, but were waiting on non-technical issues.
There is no reason to upload or change everything. Just the ISOs, as the rest of the trees are covered by the updates.
Okay, the ISO's could be created once a month. All the files are already on the mirrors so they could create them locally from the updates already released. At least some of the mirrors could.
Don't confuse the issue of creating a "new" package. All I am saying is that the present package is just repackaged with the latest releases of the updates. All the packages have been tested before going into updates.
To give an example: let's say that last month the latest kernel was 2.6.12-1.1456, but this week it is 2.6.13-1.1532. The set uses the 2.6.13-1.1532 kernel (and all associated files). No added packages or extended features. If you cannot get to the set by doing "yum update", then it doesn't go into the set.
It doesn't take me long to create a DVD ISO using k3b.
This is just an idea off of the top of my head. The thing I hear from others is about downloading the ISO, burning a DVD and then needing to spend a few hours waiting for the updates to be downloaded and installed. The above approach would at least get a fairly recent set of files with current updates on their computers.
On Thu, 2005-10-27 at 12:03, Robin Laing wrote:
Okay, the ISO's could be created once a month.
Or when there are 500 megs or some threshold size of updates that makes it absurd to install then replace most of a new system with updates. Or when a bug in the old installer or kernel gets fixed so a large new set of machines can now have a trouble-free install.
At 12:43 PM -0500 10/27/05, Les Mikesell wrote:
On Thu, 2005-10-27 at 12:03, Robin Laing wrote:
Okay, the ISO's could be created once a month.
Or when there are 500 megs or some threshold size of updates that makes it absurd to install then replace most of a new system with updates. Or when a bug in the old installer or kernel gets fixed so a large new set of machines can now have a trouble-free install.
Hear, Hear!
____________________________________________________________________
TonyN.: mailto:tonynelson@georgeanelson.com  http://www.georgeanelson.com/
On Thu, 2005-10-27 at 12:03, Robin Laing wrote:
Or when a bug in the old installer or kernel gets fixed so a large new set of machines can now have a trouble-free install.
I think that's the crux of this issue.
Not every set of updates requires a rebuild of the installation media. Certainly for the most part I'd be against rebuilding the media more than, say, once a month- if at all.
However in this case, support for a large number of video chipsets, which worked in the past, is broken, leaving many people without a working GUI install and no X for months. In my opinion, this is "big enough" to warrant a rebuild of the release.
It's not a nuisance issue, this is an issue that can and probably has turned a lot of folks away from an otherwise great product.
-Mike
Les Mikesell wrote:
On Thu, 2005-10-27 at 12:03, Robin Laing wrote:
Okay, the ISO's could be created once a month.
Or when there are 500 megs or some threshold size of updates that makes it absurd to install then replace most of a new system with updates. Or when a bug in the old installer or kernel gets fixed so a large new set of machines can now have a trouble-free install.
The threshold size is a great idea.
Robin Laing wrote: [snip]
Okay, the ISO's could be created once a month. All the files are already on the mirrors so they could create them locally from the updates already released. At least some of the mirrors could.
That's not how mirroring works. You copy what the main server has; you don't go creating new things and saying they are the same as the main server. The creation date and the command line used to build the ISOs both change the ISO content, so you have no way of verifying the ISOs. The reason md5sums and sha1sums work now is that a bit-for-bit copy is transferred from the main server to the mirror.
Don't confuse the issue of creating a "new" package. All I am saying is that the present package is just repackaged with the latest releases of the updates. All the packages have been tested before going into updates.
None of the updated packages have been tested in the context of installing them via anaconda. This is not the same as doing a "yum update".
To give an example. Lets say that last month, the latest kernel was 2.6.12-1.1456 but this week it is 2.6.13-1.1532.
And this new kernel is completely untested as a boot environment for Anaconda.
It doesn't take me long to create a DVD ISO using k3b.
You are completely ignoring other, non-technical issues.
This is just an idea off of the top of my head. The thing I hear from others is about downloading the ISO, burning a DVD and then needing to spend a few hours waiting for the updates to be downloaded and installed. The above approach would at least get a fairly recent set of files with current updates on their computers.
At the expense of time and effort that would be better spent on making the next release better. Also at the expense of using more bandwidth and storage space on the mirrors.
On Thu, 2005-10-27 at 12:57, William Hooper wrote:
The above approach would at least get a fairly recent set of files with current updates on their computers.
At the expense of time and effort that would be better spent on making the next release better.
Which you trade off against fewer people using and testing because of the inconvenience of having to download 800 megs of updates after each install, or because long-fixed bugs in the release kernel prevent installation at all. If Ubuntu is easier and faster to install, you will lose all the testers.
Also at the expense of using more bandwidth and storage space on the mirrors.
There's no need to store all of the intermediate rev's of the iso images, and it would most likely result in less bandwidth usage since the users would no longer have to download the iso and then do another many hundred megs of update downloads for each machine installed.
Les Mikesell wrote:
There's no need to store all of the intermediate rev's of the iso images, and it would most likely result in less bandwidth
Here's a wild thought, maybe the whole idea of ISO images, sampling a copy of the rest of the repo state base-only or with updates, is actually the evil part here.
AIUI Anaconda is moving towards being based on yum... in that case just a small bootable ISO image with no RPMs in it, which then demands to see a local or remote yum repo so it can find the latest versions of all packages in a standard way, and so throwing away precooked ISOs of anything from the Fedora mirrors, might be a solution.
-Andy
On Thu, 2005-10-27 at 14:55, Andy Green wrote:
Les Mikesell wrote:
There's no need to store all of the intermediate rev's of the iso images, and it would most likely result in less bandwidth
Here's a wild thought, maybe the whole idea of ISO images, sampling a copy of the rest of the repo state base-only or with updates, is actually the evil part here.
Partly right. It's obviously horribly wasteful and not particularly useful to have a gazillion mirror copies of the giant ever-changing workspace of fedora archived away. However, isos make good bittorrent and rsync targets and as such make sense for bandwidth sharing and saving.
AIUI Anaconda is moving towards being based on yum... in that case just a small bootable ISO image with no RPMs in it, which then demands to see a local or remote yum repo so it can find the latest versions of all packages in a standard way, and so throwing away precooked ISOs of anything from the Fedora mirrors, might be a solution.
Maintaining local yum repos means you have to mirror a lot of gunk you'll never use and relying on remote ones means trouble when things get out of sync. What we need is a more intelligent way to share bandwidth without cluttering the world with useless snapshots.
Les Mikesell wrote:
Here's a wild thought, maybe the whole idea of ISO images, sampling a copy of the rest of the repo state base-only or with updates, is actually the evil part here.
Partly right.
I see :-)
It's obviously horribly wasteful and not particularly useful to have a gazillion mirror copies of the giant ever-changing workspace of fedora archived away. However, isos make good bittorrent and rsync targets and as such make sense for bandwidth sharing and saving.
Bittorrent and rsync work fine with directories of files.
Maintaining local yum repos means you have to mirror a lot of gunk you'll never use and relying on remote ones means trouble when things get out of sync. What we need is a more intelligent way to share bandwidth without cluttering the world with useless snapshots.
A caching proxy might be interesting to solve this objection, rather than an explicit mirror. Then nothing is pulled down that is not a selected package for at least one proxy user.
-Andy
On Thu, 2005-10-27 at 16:21, Andy Green wrote:
However, isos make good bittorrent and rsync targets and as such make sense for bandwidth sharing and saving.
Bittorrent and rsync work fine with directories of files.
There's a bit of intelligence missing about changing contents during the transfer. Isos at least force a frozen snapshot, but that might be a solvable problem.
Maintaining local yum repos means you have to mirror a lot of gunk you'll never use and relying on remote ones means trouble when things get out of sync. What we need is a more intelligent way to share bandwidth without cluttering the world with useless snapshots.
A caching proxy might be interesting to solve this objection, rather than an explicit mirror. Then nothing is pulled down that is not a selected package for at least one proxy user.
That helps on the local side when you use a repository that has a single URL but it doesn't work with yum's mirrorlist concept and it makes things worse if a bad file copy is ever stored in the cache. Those are solvable problems, but this approach doesn't help with the issues of source bandwidth or out-of-sync mirrors. Is there anything that looks like a proxy on the client side but can use bittorrent-style downloads on the back end to get the files and verify correctness?
Les Mikesell wrote:
On Thu, 2005-10-27 at 12:57, William Hooper wrote:
The above approach would at least get a fairly recent set of files with current updates on their computers.
At the expense of time and effort that would be better spent on making the next release better.
Which you trade off against fewer people using and testing because of the inconvenience of having to download 800 megs of updates after each install, or because long-fixed bugs in the release kernel prevent installation at all. If Ubuntu is easier and faster to install, you will lose all the testers.
When did Ubuntu start doing releases faster than every six months?
Also at the expense of using more bandwidth and storage space on the mirrors.
There's no need to store all of the intermediate rev's of the iso images,
If you want to provide the old images while the new ones are syncing, you need space to hold both. That doubles the size of the current ISO mirror requirements.
and it would most likely result in less bandwidth usage since the users would no longer have to download the iso and then do another many hundred megs of update downloads for each machine installed.
Bandwidth would increase because you will be syncing more bytes from the main server. Then you would get a group of people downloading every ISO set so they can have the newest set in case they need to install a new machine. Or people that have installed in the past downloading a newer set when doing a reinstall.
On Thu, 2005-10-27 at 15:10, William Hooper wrote:
Which you trade off against fewer people using and testing because of the inconvenience of having to download 800 megs of updates after each install, or because long-fixed bugs in the release kernel prevent installation at all. If Ubuntu is easier and faster to install, you will lose all the testers.
When did Ubuntu start doing releases faster than every six months?
They are out of sync with fedora. If you install today you'd get a fairly old FC4 iso vs a pretty new Ubuntu. And with Ubuntu you only install one CD and pull the rest from the network. I assume this gets the current up-to-date version the first time, as opposed to having to download/install an old version from a 4-iso set, then replace that with a downloaded update.
and it would most likely result in less bandwidth usage since the users would no longer have to download the iso and then do another many hundred megs of update downloads for each machine installed.
Bandwidth would increase because you will be syncing more bytes from the main server. Then you would get a group of people downloading every ISO set so they can have the newest set in case they need to install a new machine.
And every time they do install, it saves the bandwidth of doing those updates. Plus, the isos make good bittorrent targets or rsync can be used to cut bandwidth. There's not much you can do about yum. It is even pre-configured so caching proxies don't help with multiple machine updates.
Or people that have installed in the past downloading a newer set when doing a reinstall.
Likewise a win at some number of installs or some size of update set. Or when done with bittorrent or rsync.
Les Mikesell wrote:
On Thu, 2005-10-27 at 15:10, William Hooper wrote:
Which you trade off against fewer people using and testing because of the inconvenience of having to download 800 megs of updates after each install, or because long-fixed bugs in the release kernel prevent installation at all. If Ubuntu is easier and faster to install, you will lose all the testers.
When did Ubuntu start doing releases faster than every six months?
They are out of sync with fedora. If you install today you'd get a fairly old FC4 iso vs a pretty new Ubuntu.
So when FC5 comes out, the situation will be reversed. How is Ubuntu winning here?
And with Ubuntu you only install one CD and pull the rest from the network.
IIRC a standard "click next through the install" of FC4 only requires the first two CDs.
I assume this gets the current up to date version the first time as opposed to having to download/install an old version from a 4-iso set, then replace that with a downloaded update.
It's the same thing. If you install the package at install time, you have to download the update. If you don't install it at install time, you download the newest version. This really comes down to what packages you pick at install.
With FC you get more packages you can install from CD and just do an update operation. With Ubuntu you have to do installs of your additional apps, then do an update. You download the same amount of bytes either way.
and it would most likely result in less bandwidth usage since the users would no longer have to download the iso and then do another many hundred megs of update downloads for each machine installed.
Bandwidth would increase because you will be syncing more bytes from the main server. Then you would get a group of people downloading every ISO set so they can have the newest set in case they need to install a new machine.
And every time they do install, it saves the bandwidth of doing those updates. Plus, the isos make good bittorrent targets
Moving bittorrent targets mean old seeds out there cause confusion, because you have a bunch of links named "FC4-current". People providing seeds need to keep re-downloading new ISOs to seed to try to keep current.
or rsync can be used to cut bandwidth.
From what I've seen rsync doesn't really give any savings on ISOs from one test update to the next, so I don't believe it would help much in this situation.
There's not much you can do about yum. It is even pre-configured so caching proxies don't help with multiple machine updates.
Please, not this old saw again. Haven't you seen the amount of problems that come up on the CentOS list because of their round-robin DNS scheme? Especially during large updates, which you are proposing to have more of.
Or people that have installed in the past downloading a newer set when doing a reinstall.
Likewise a win at some number of installs or some size of update set. Or when done with bittorrent or rsync.
If I'm doing a number of installs, I'm using a local install point anyway.
-- William Hooper
On Thu, 2005-10-27 at 15:59, William Hooper wrote:
There's not much you can do about yum. It is even pre-configured so caching proxies don't help with multiple machine updates.
Please, not this old saw again. Haven't you seen the amount of problems that come up on the CentOS list because of their round-robin DNS scheme? Especially during large updates, which you are proposing to have more of.
That's really a yum problem because it is too dumb to understand the concept of multiple A records. By comparison, IE isn't bothered much at all by a few dead sites in the returned list of addresses so getting it right can't be all that difficult.
With Centos, I only pull one copy into my squid cache and other machines find it. With fedora, it never matches and clutters my cache with many copies of the same thing as well as bothering the mirrors with unneeded traffic.
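For what it's worth, the single-URL repository case cacheable by squid as described above depends on the cache accepting large objects; a squid.conf fragment along these lines would do it (the directives are real squid options, but the sizes, path, and refresh values here are illustrative examples, not tuned recommendations):

```
# Illustrative squid.conf fragment for caching large RPM downloads
# (values are examples only, not recommendations):
cache_dir ufs /var/spool/squid 10000 16 256   # ~10 GB of cache space
maximum_object_size 102400 KB                 # allow large RPMs to be cached
refresh_pattern -i \.rpm$ 10080 90% 43200     # keep .rpm files fresh for days
```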
On Thu, 2005-10-27 at 11:23, William Hooper wrote:
And how many people want to test these ISO sets bi-weekly to make sure no bugs have crept in?
Won't the same bugs be installed from the same RPMs whether yum pulls them in slowly over the network or anaconda installs them from an ISO? I've never seen new problems installing from the rebuilt k12ltsp ISOs that include updates, and I've avoided dealing with a lot of the 'early-fedora' bugs that way. It's an improvement all the way around, since it also avoids waiting to download the updates that are already there.
Les Mikesell wrote:
On Thu, 2005-10-27 at 11:23, William Hooper wrote:
And how many people want to test these ISO sets bi-weekly to make sure no bugs have crept in?
Won't the same bugs be installed from the same RPMs whether yum pulls them in slowly over the network or anaconda installs them from an ISO?
Bugs in the RPM packages yes, bugs in the install environment, no.
Why do you think there are 3 test ISO sets before each release? That way the installer environment gets tested.
William Hooper wrote:
Les Mikesell wrote:
On Thu, 2005-10-27 at 11:23, William Hooper wrote:
And how many people want to test these ISO sets bi-weekly to make sure no bugs have crept in?
Won't the same bugs be installed from the same RPMs whether yum pulls them in slowly over the network or anaconda installs them from an ISO?
Bugs in the RPM packages yes, bugs in the install environment, no.
Why do you think there are 3 test ISO sets before each release? That way the installer environment gets tested.
The install environment wouldn't change. It would still have the same packages as the original installer did. This would only change if for some reason a package was split.
Robin Laing wrote:
William Hooper wrote:
Les Mikesell wrote:
On Thu, 2005-10-27 at 11:23, William Hooper wrote:
And how many people want to test these ISO sets bi-weekly to make sure no bugs have crept in?
Won't the same bugs be installed from the same RPMs whether yum pulls them in slowly over the network or anaconda installs them from an ISO?
Bugs in the RPM packages yes, bugs in the install environment, no.
Why do you think there are 3 test ISO sets before each release? That way the installer environment gets tested.
The install environment wouldn't change. It would still have the same packages as the original installer did. This would only change if for some reason a package was split.
If the install environment won't change, then you can't use an updated kernel or X during the install, let alone an updated version of Anaconda. That means most (if not all) of the install bugs won't be fixed. What was the point of creating new ISOs again?
On Fri, 2005-10-28 at 10:24, William Hooper wrote:
The install environment wouldn't change. It would still have the same packages as the original installer did. This would only change if for some reason a package was split.
If the install environment won't change, then you can't use an updated kernel or X during the install, let alone an updated version of Anaconda. That means most (if not all) of the install bugs won't be fixed. What was the point of creating new ISOs again?
Those things could change if the bugs were bad enough or if anyone cared about the user's results (call it the 'fedora experience'). But the main point of the new ISOs would be to avoid most of the update downloads on every new install - recently reported at 899 megs per machine, I think. Is that a good first impression? And it would fix any runtime problems that affect the ability to get to a point where you can complete that update, like the recently mentioned X issue.
William Hooper wrote:
Robin Laing wrote:
William Hooper wrote:
Les Mikesell wrote:
On Thu, 2005-10-27 at 11:23, William Hooper wrote:
And how many people want to test these ISO sets bi-weekly to make sure no bugs have crept in?
Won't the same bugs be installed from the same RPMs whether yum pulls them in slowly over the network or anaconda installs them from an ISO?
Bugs in the RPM packages yes, bugs in the install environment, no.
Why do you think there are 3 test ISO sets before each release? That way the installer environment gets tested.
The install environment wouldn't change. It would still have the same packages as the original installer did. This would only change if for some reason a package was split.
If the install environment won't change, then you can't use an updated kernel or X during the install, let alone an updated version of Anaconda. That means most (if not all) of the install bugs won't be fixed. What was the point of creating new ISOs again?
My knowledge of anaconda is about zero. I assumed when I responded that anaconda would look at package x and install it if selected. Now if that isn't the case, then my answer is wrong. I assumed that the installer would look at a database of packages and then build from that; it wouldn't care what revision or epoch the package was, only that it was available.
But then the installer should be changed to work in a way that does not require such rigidly defined packages.
Is it really a big deal to convert the packages within an ISO from x.1 to x.2?
Why can't it just say "select package kernel" and install the latest kernel? kernel-2.6.12-1.1456_FC4 last month, kernel-2.6.13-1.1526_FC4 this month, kernel-2.6.13-1.1532_FC4 next month.
Robin Laing wrote: [snip]
Bugs in the RPM packages yes, bugs in the install environment, no.
[snip]
The install environment wouldn't change.
[snip]
If the install environment won't change, then you can't use an updated kernel or X during the install, let alone an updated version of Anaconda.
[snip]
My knowledge of anaconda is about zero. I assumed when I responded that anaconda would look at package x and install it if selected. Now if that isn't the case then my answer is wrong.
[snip]
Anaconda is just a program that needs an environment to run on. The version of the kernel used on the install disks determines what hardware is supported. Any bugs causing issues booting the install disk will most likely need an updated kernel _for anaconda to run on_ to fix it.
X is needed for a graphical install. If there is an issue with the graphics support during the install, then an updated X is needed _for anaconda to run on_ to fix it.
William Hooper wrote:
Robin Laing wrote: [snip]
Bugs in the RPM packages yes, bugs in the install environment, no.
[snip]
The install environment wouldn't change.
[snip]
If the install environment won't change, then you can't use an updated kernel or X during the install, let alone an updated version of Anaconda.
[snip]
My knowledge of anaconda is about zero. I assumed when I responded that anaconda would look at package x and install it if selected. Now if that isn't the case then my answer is wrong.
[snip]
Anaconda is just a program that needs an environment to run on. The version of the kernel used on the install disks determines what hardware is supported. Any bugs causing issues booting the install disk will most likely need an updated kernel _for anaconda to run on_ to fix it.
X is needed for a graphical install. If there is an issue with the graphics support during the install, then an updated X is needed _for anaconda to run on_ to fix it.
Now if Anaconda is just a program, then how hard is it to tell Anaconda to use the new packages instead of the old ones? What difference is there if I use the latest released packages for an install over packages that are almost a year old?
X is updated when you run "yum update" after an install. Why can't the DVD include the latest version of X?
I am responding in generalities and using specifics as examples. Sorry if that is lost in the communication.
In general, the idea is to have an ISO that has a pretty recent set of patches and updates on it. As others have posted, there is almost a gig of updates to perform after a new install at this time. I don't know about you but for me that is a pain.
Think of this as a way to make Linux easier for those newbs that don't understand.
A simple question is:
How hard is it to make an ISO that only changes the versions of the packages included? Keep it simple.
Robin Laing wrote: [snip]
A simple question is:
How hard is it to make an ISO that only changes the versions of the packages included? Keep it simple.
For an individual user, it's not that hard, there are guides on the net.
For the Fedora project, it's more difficult. I've mentioned many different issues in this thread that I'm not going to bother repeating.
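The guides referred to here generally boil down to a few steps. A rough command sketch of a do-it-yourself respin as described at the time (all paths are illustrative, and the exact tool invocations vary between the guides):

```shell
# Copy the original DVD tree somewhere writable.
cp -a /mnt/fc4-dvd /srv/respin

# Drop the updated RPMs in and remove the superseded older versions by hand.
cp /var/cache/updates/*.rpm /srv/respin/Fedora/RPMS/

# Rebuild the installer's package metadata using the tools shipped in the
# anaconda-runtime package (path as on FC4-era systems, if memory serves).
/usr/lib/anaconda-runtime/genhdlist /srv/respin

# Master a new bootable ISO with the standard isolinux boot options.
mkisofs -R -J -T -b isolinux/isolinux.bin -c isolinux/boot.cat \
    -no-emul-boot -boot-load-size 4 -boot-info-table \
    -o FC4-respin-i386-DVD.iso /srv/respin
```

That is manageable for one admin's machines; the project-level objections elsewhere in this thread (testing, mirrors, seeds) are about doing this officially every few weeks.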
William Hooper wrote:
Robin Laing wrote: [snip]
A simple question is:
How hard is it to make an ISO that only changes the versions of the packages included? Keep it simple.
For an individual user, it's not that hard, there are guides on the net.
For the Fedora project, it's more difficult. I've mentioned many different issues in this thread that I'm not going to bother repeating.
For most of the issues you have raised, I don't disagree. But from a user point of view, it is an issue. One weekend I installed FC4 on two machines; with the updates, that added a lot of extra download time for each machine. If the DVDs had been current, it would have been much easier. For FC5, I plan on updating one machine and then making it available for updating the other machine. At least this will save some bandwidth.
On Fri, Oct 28, 2005 at 02:28:55PM -0600, Robin Laing wrote:
Sorry if that is lost in the communication.
In general, the idea is to have an ISO that has a pretty recent set of patches and updates on it. As others have posted, there is almost a gig of updates to perform after a new install at this time. I don't know about you but for me that is a pain.
Think of this as a way to make Linux easier for those newbs that don't understand.
A simple question is:
How hard is it to make an ISO that only changes the versions of the packages included? Keep it simple.
-- Robin Laing
Fedora is a free distribution. Does anyone know of an operating system you pay for that releases updated media for each set of updates? Now I agree that if you are downloading updates over dial-up, that is a problem. But over a high-speed line you just start the update and go to sleep, or go have dinner, or go read a book. There is also a cron job that does updates automatically, so it can run while you sleep without your having to worry about it.
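The cron job mentioned here is, as I recall, the one shipped with the yum package itself on FC-era systems; a sketch of enabling it (details may differ slightly by release):

```shell
# The yum package ships a nightly job in /etc/cron.daily/yum.cron, gated
# by an init script; enabling the "service" just lets the cron job run.
chkconfig yum on
service yum start

# Or roll your own with a root crontab entry, e.g. update at 4am daily:
# 0 4 * * * /usr/bin/yum -y update
```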
I think people seem to want a lot for no money down.
On Fri, 2005-10-28 at 17:13 -0500, akonstam@trinity.edu wrote:
I think people seem to want a lot for no money down.
Sounds a bit like some are too defensive about the problems with Fedora. Okay, so it's a test bed, and we're testing it. But when we say that there are problems and offer some ideas about improvements, complaints are fired off that there's been a complaint.
Daniel B. Thurman:
Seems that whomever released this distro should throw away the iso cds and create a BRAND NEW ONE. This distro is very very very hosed and buggy.
Tim:
Why isn't this done by Fedora? (Not outsiders that we don't know whether we can trust.)
Craig White:
The answer has been pretty clear on this - the release cycle is so short, that it doesn't pay to spend the energy rebuilding the current release because by that time, they are busy doing the builds for the test releases for the next series.
What?! A computer can't make a new ISO during the *several* months between new releases?
Sure, it's silly to make a new one each week. But there have been a few show stoppers that ought to have just screamed for the base install to be fixed: e.g. a seriously screwed-up Xorg that just doesn't work, at all, on some cards, and install routines that don't install unless the user types "garbage" into the prompt.
In the past we had Red Hat Linux 7.0, 7.1, 7.2, 7.3 before we jumped to 8.0. You had a fighting chance of getting an installation working in the first go. Having to do another hour (or more) of updates, straight away, is a right pain. Just as bad as Windows.
No - it has nothing to do with Windows. It has to do with pushing the development along. You should read Eric Raymond's 'The Cathedral and the Bazaar' for a developer's view of fast-paced, less-than-perfect releases as a method that brings rapid development to Linux / F/OSS.
I'd say it has everything to do with the same mentality: We must have a product by X date, doesn't matter if it's broken, it must be out.
The whole Fedora approach of rushing out major changes to releases on a certain date for the sake of a schedule has proved itself to be a stupid idea. I don't *need* to be rebuilding the entire OS on several PCs that often, and I certainly don't want to. I'm now looking at using something else, instead. One of the BSDs is looking favourable to me.
Actually, that's your point of view and certainly noted. You don't have to be rebuilding several PCs; you can keep them where they are... the choice is of course always yours.
Red Hat has certainly lost the place it used to have. Before you had a system that you could use on a server (that's definitely not something you want to be rebuilding every few months), or a work station (again, something you'd prefer not to have to rebuild often, but less of a hassle than a server).
*ix is really more of a server OS (all those servers, a system that takes an age to boot, something that's meant to be running 24/7) than a workstation OS, but this VERY short lifespan doesn't fit that well.
The ditching of what was Red Hat Linux into short lived Fedora has seriously pissed off a number of former Red Hat fans, and I don't blame them.
It wouldn't be quite so bad, but the differences from one release to another are just too great. What you had doesn't transplant to the new release. The install routine doesn't make it easy for you to wipe out the system and applications, leave the home space alone, and install the new release. If you want to do that, you have to faff around with backing things up and restoring, or with using separate drives. The limited options (re-use all the space, or use the empty space), with no way to keep certain partitions as-is, are rather pathetic. I've used other systems where updating a system is just that: the rest is left alone, applications and data, *and* it still works with the new system. And the option of updating over the top is fraught with problems.
BSD is worth a shot - you will probably gain some valuable knowledge about how other systems do things. You should probably look at Ubuntu too. If you want stability and long-term maintained, consistent releases, RHEL or rebuilds like CentOS give you that. Again, you have the choice - it's your software.
Ubuntu seems just as bad (short lived distros that completely replace your prior installation - but only if you don't run into snags).
The fingers have been burnt by Red Hat, but Fedora remains the distro that you're most likely to find software packaged for.
I'd say that my server will be something else by the time the next Fedora release comes out, and perhaps the clients might stay with Fedora. But all these radical changes each release probably preclude mixing different distros together.
As I said, BSD looks most favourable. They don't seem to be pushing out a new release just to be cool; there's a new one when there are compelling reasons that it's an advantage.
I see Red Hat/Fedora and Ubuntu doing the same things: Rushing out releases on a set date, the release being full of faults, never releasing a fixed version (as a whole), and the next release being a radical change (not the bugs ironed out of the prior release, but a change in tack). It's like committees reorganising themselves, lots of action, little tangible benefits, and lots of new problems, continually.
And that is like Windows. Windows 95, crappy from the get go, never was a good version. (This being from a point of a view of a person who's used better than Windows systems.) Windows 98, crappy from the get go, and has never been fixed in 7 years. Next release, different, never fixed. And so on...
For heck's sake, whichever OS it is, get it right, get it running smoothly, *THEN* start designing the new, better one. Don't just keep making different ones.
On Thu, 2005-10-27 at 11:22, Tim wrote:
I'd say it has everything to do with the same mentality: We must have a product by X date, doesn't matter if it's broken, it must be out.
That's the point of fedora. Bugs don't get fixed until they are found. They aren't found until someone uses the code. If you want to run the newest code, you get the newest bugs along with it - and you get to help fix them.
Red Hat has certainly lost the place it used to have. Before you had a system that you could use on a server (that's definitely not something you want to be rebuilding every few months), or a work station (again, something you'd prefer not to have to rebuild often, but less of a hassle than a server).
Red Hat still offers this option through the Red Hat Enterprise versions, and it continues to provide security and bugfix updates for 5 years on those releases. If you don't want to pay, or can't pay, for the service contract, use the free CentOS version that is rebuilt from the RH source RPMs.
*ix is really more of a server OS (all those servers, a system that takes an age to boot, something that's meant to be running 24/7) than a workstation OS, but this VERY short lifespan doesn't fit that well.
*ix server apps were stable and feature-complete long ago. If that is all you are running, fedora probably is wrong for your purpose. On the other hand, fedora does make a good workstation OS and there you do want to take advantage of the most recent work on the desktop apps which are still evolving.
The ditching of what was Red Hat Linux into short lived Fedora has seriously pissed off a number of former Red Hat fans, and I don't blame them.
They were only pissed until they saw that the split into 2 different distributions helps each focus on a different purpose.
As I said, BSD looks most favourable. They don't seem to be pushing out a new distro just to be cool. There's a new one when there's compelling reasons that a new distro is an advantage.
You are dreaming if you think any *bsd has had the amount of real-world testing under the conditions that the fedora/RH codebase gets. You'll find a few gurus that can keep a few machines running forever without missing a beat, because they know the code inside and out, and their view of the *bsd world sounds great. But if you want a 'stick a CD in about anything and get a working server' distribution, I think it's the wrong place to look.
I see Red Hat/Fedora and Ubuntu doing the same things: Rushing out releases on a set date, the release being full of faults, never releasing a fixed version (as a whole), and the next release being a radical change (not the bugs ironed out of the prior release, but a change in tack).
You are describing fedora, not RHEL or Centos here.
It's like committees reorganising themselves, lots of action, little tangible benefits, and lots of new problems, continually.
The point is that at the *end* of a fedora version cycle most of the bugs are fixed - and they wouldn't have been without the participants.
For heck's sake, whichever OS it is, get it right, get it running smoothly, *THEN* start designing the new, better one. Don't just keep making different ones.
That's why there are two versions, one is for getting new things right, the other for people who don't mind old stuff as long as it is getting security updates. You won't ever have a next 'stable' version if you don't find and apply the fixes to the current 'new' version.