Yum, Proxy Cache Safety, Storage Backend

James Antill james.antill at redhat.com
Thu Jan 24 17:20:41 UTC 2008


On Thu, 2008-01-24 at 10:50 -0600, Les Mikesell wrote:
> James Antill wrote:
> 
> >> I think you are missing my point, which is that it would be a huge win 
> >> if yum automatically used typical existing caching proxies with no extra 
> >> setup on anyone's part, so that any number of people behind them would 
> >> get the cached packages without knowing about each other or that they 
> >> need to do something special to defeat the random URLs.
> > 
> >  HTTP doesn't define a way to do this, much like the Pragma header
> > suggestion is a pretty bad abuse of HTTP ...
> 
> It's worked for years in browsers. If you have a stale copy in an 
> intermediate cache, a ctl-refresh pulls a fresh one.

 Sure, the _user_ has a single URL to work with and can make
semi-intelligent decisions with only themselves to blame. Not quite the
same as a program with multiple URLs to the same data.
 Now, if you wanted to add ETag support in various places, patches
would very likely be accepted, and then any program could make
intelligent decisions.
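The ETag mechanism James points to is HTTP's standard revalidation scheme: the client remembers the ETag a server last sent and includes it in an `If-None-Match` header on the next request; a `304 Not Modified` reply means the cached copy is still good. A minimal sketch of that handshake, with a stand-in `serve` function playing the mirror (the function names and URLs here are illustrative, not yum's actual code):

```python
# Sketch of ETag-based conditional GET, per the thread's suggestion.
# `serve` stands in for a mirror; `revalidate` is the client side.

def serve(request_headers, current_etag, body):
    """Mirror side: answer 304 if the client's ETag still matches."""
    if request_headers.get("If-None-Match") == current_etag:
        return 304, None
    return 200, body

def revalidate(cache, url, current_etag, body):
    """Client side: reuse the cached body when the ETag is unchanged."""
    cached = cache.get(url)
    headers = {"If-None-Match": cached[0]} if cached else {}
    status, fresh = serve(headers, current_etag, body)
    if status == 304:
        return cached[1]          # still valid: no re-download
    cache[url] = (current_etag, fresh)
    return fresh

cache = {}
first = revalidate(cache, "/repodata/repomd.xml", 'W/"v1"', b"metadata-v1")
again = revalidate(cache, "/repodata/repomd.xml", 'W/"v1"', b"metadata-v1")
changed = revalidate(cache, "/repodata/repomd.xml", 'W/"v2"', b"metadata-v2")
```

The point of the thread: unlike a browser's ctrl-refresh (which just blasts `Cache-Control: no-cache`), this lets a program decide per-resource whether its copy is stale, even when the same data lives behind multiple mirror URLs.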

> >  In neither case is "work around HTTP's design in yum" a good solution,
> > IMNSHO.
> 
> I'd rather call it "using existing infrastructure and protocols 
> intelligently" - instead of cluttering everyone's caches with randomized 
> URLs to get duplicate files.

 So you think the best thing to do is remove mirrorlist entirely and
just rely on proxies ... you are obviously entitled to your opinion, and
you can do that today.
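"You can do that today" is just repo configuration: pointing a repo at a fixed baseurl= instead of mirrorlist= gives every client the same stable URLs, which an ordinary caching proxy can then serve. A hedged sketch, with placeholder host names:

```ini
# /etc/yum.repos.d/example.repo -- illustrative only
[fedora]
name=Fedora
# A fixed baseurl (instead of mirrorlist=) means every client
# requests identical URLs, so a shared caching proxy deduplicates them.
baseurl=http://download.example.com/fedora/releases/8/Everything/i386/os/
enabled=1
```

yum can also be sent through the proxy explicitly via the `proxy=` option in the `[main]` section of /etc/yum.conf.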

-- 
James Antill <james.antill at redhat.com>
Red Hat