yum broken; headers directory missing

Michael Stenner mstenner at ece.arizona.edu
Sun Mar 28 19:42:59 UTC 2004


On Sun, Mar 28, 2004 at 06:43:03PM +0100, Cam wrote:
> There really is a bug in the system; I'm guessing that the technology in
> yum doesn't help the developers to provide the repositories as much as
> it helps the users to download from them.

I'm not sure I understand what you're saying here.  I definitely agree
that there's a problem when the mirror structure changes "out from
under you".  

The role of yum in this is pretty minimal.  You put your rpms where
you want them, and then you run yum-arch to create the headers (where
you want them).  If you change the place where the headers live, then
things will break unless you ALSO change the baseurl on the clients.
There's really no way around that.
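
For concreteness (hostname and paths made up):

    # on the server: put the rpms under the repo root, then build headers/
    yum-arch /var/www/html/fedora/1/i386

    # on each client, in yum.conf: baseurl must point at that same root
    [mystuff]
    name=my local repository
    baseurl=http://myserver.example.com/fedora/1/i386

yum-arch drops a headers/ directory under whatever path you hand it;
move that path, and every client's baseurl has to move with it.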

> I had assumed the repository 
> was a-changing and a new headers file would be available soon...

The joy of running something with "test" in its name, I guess :)

> For the record my yum.conf contains a line from the yum-2.0.6-1 
> yum.conf.fedora:
<snip>
> Which gives rise to:
<snip>
> [Errno 4] IOError: HTTP Error 404: Not Found
> 
> whereas it used to work.

Yep.  Something definitely got hosed.  Whether you prefer to think of
it as hosed config or a hosed repository is a matter of perspective.
If they match, things work.  If they don't match, things don't work.

> The only real problems I have seen in yum (as a system) have been:
> 
> * this (repository changes with no redirect. Fix: bitch about it and 
> find a new headers file),

Yep.

> * out-of-date headers fetched from a mirror but not validated against
> the main server (e.g. for a while my system would fetch outdated headers
> that required openssh which was no longer available). Fix: delete
> /var/cache/yum, narrow down the mirrors to the master server and try again.

Yep again.  There has been a little talk about trying to "harden" both
yum and the mirroring process against this.  For now, you're pretty
much doing the right thing.
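
Just to illustrate the idea (nothing like this exists in yum today,
and the URLs are made up): you could cross-check a mirror's index
against the master's and fall back when they disagree.

    import urllib.request

    def header_index(base):
        # header.info is the index that yum-arch writes into headers/
        return urllib.request.urlopen(base + "/headers/header.info").read()

    master = "http://master.example.com/fedora/1/i386"
    mirror = "http://mirror.example.com/fedora/1/i386"
    if header_index(mirror) != header_index(master):
        print("mirror is stale -- skip it and use the master this time")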

> * finally, generally inefficient behaviour (downloading headers one by 
> one instead of having a compressed archive of headers

Yes and no.  Headers are already compressed.  The only advantage there
would be doing a single download instead of many.  HTTP keepalive
helps a lot because you only use one HTTP connection.  Also, what you
propose would involve actually downloading MORE.  Yum doesn't keep
headers for rpms you have installed, so you'd need to download a whole
bunch of EXTRA headers.  Finally, what happens when only a few
packages in the repository change?  Surely you don't want to download
the whole tarball again.  [perhaps you were only talking about the
initial setup, though]
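
To make the keepalive point concrete, here is a rough sketch
(hostname and header names made up) of pulling several headers over a
single connection:

    import http.client

    # one TCP connection, reused for every header via HTTP/1.1 keepalive
    conn = http.client.HTTPConnection("mirror.example.com")
    for name in ("foo-1.0-1.i386.hdr", "bar-2.3-4.i386.hdr"):
        conn.request("GET", "/fedora/1/i386/headers/" + name)
        resp = conn.getresponse()
        data = resp.read()   # drain the body so the connection can be reused
        with open(name, "wb") as f:
            f.write(data)
    conn.close()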

However, all of this will change in the next major version of yum,
since it's moving away from the headers approach and toward the new
XML metadata system.
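
(For the curious: that system keeps a repodata/repomd.xml index which
points at compressed XML files describing the whole repository.  A toy
sketch of reading it, with the mirror URL made up:)

    import urllib.request
    import xml.etree.ElementTree as ET

    NS = "{http://linux.duke.edu/metadata/repo}"
    url = "http://mirror.example.com/fedora/1/i386/repodata/repomd.xml"
    root = ET.fromstring(urllib.request.urlopen(url).read())
    for data in root.findall(NS + "data"):
        if data.get("type") == "primary":
            # e.g. repodata/primary.xml.gz -- one file that describes
            # every package in the repository
            print(data.find(NS + "location").get("href"))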

> excessive timeouts on individual downloads making yum hang for too
> long when the server stops sending

The next major version will almost certainly have better timeout
control.
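
I can't promise specifics, but the basic client-side idea is a
socket-level timeout, so a server that stops sending raises an error
instead of hanging forever.  A sketch (URL made up):

    import socket
    import urllib.request, urllib.error

    socket.setdefaulttimeout(30)   # give up after 30s of silence
    try:
        data = urllib.request.urlopen(
            "http://mirror.example.com/fedora/1/i386/headers/header.info").read()
    except (socket.timeout, urllib.error.URLError):
        print("mirror stalled or unreachable -- move on to the next one")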

> downloading stuff even if the user downloaded it 
> 30 sec ago and had to ctrl-C yum for some reason.

Next version will also support REGET for that case.  You'll pick up
where you left off.
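
REGET is just an HTTP Range request under the hood.  Roughly (names
made up):

    import os
    import urllib.request

    url = "http://mirror.example.com/fedora/1/i386/foo-1.0-1.i386.rpm"
    partial = "foo-1.0-1.i386.rpm"
    have = os.path.getsize(partial) if os.path.exists(partial) else 0

    req = urllib.request.Request(url, headers={"Range": "bytes=%d-" % have})
    with urllib.request.urlopen(req) as resp:
        # 206 means the server honored the Range header and is sending
        # only the tail; a 200 means it sent the whole file, so only
        # append on 206
        mode = "ab" if resp.status == 206 else "wb"
        with open(partial, mode) as f:
            f.write(resp.read())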

> Mostly though, it just works really well :)

Glad you like it.

					-Michael

-- 
  Michael D. Stenner                            mstenner at ece.arizona.edu
  ECE Department, the University of Arizona                 520-626-1619
  1230 E. Speedway Blvd., Tucson, AZ 85721-0104                 ECE 524G
