While we're talking about RPM dependencies ...

Pavel Alexeev forum at hubbitus.com.ru
Sun Apr 15 22:02:31 UTC 2012


16.04.2012 00:51, Toshio Kuratomi wrote:
> On Sun, Apr 15, 2012 at 11:16:58PM +0400, Pavel Alexeev wrote:
>> Here is how I see it at first glance.
>> Install or update scenario for a single package (yum install foo):
>> 1) The client asks for the latest version of package foo.
>> 2) The server computes all dependencies with its own algorithms and
>> returns the full dependency list for that package in the requested
>> form (several formats could be supported, from JSON to XML). No
>> other overhead is transferred, such as the dependencies of all other
>> packages, file lists, etc.
>> 3) The client receives the list, intersects it with the set of
>> currently installed packages, excludes whatever already satisfies
>> the requirements, and then requests each missing package starting
>> from step 1.
>>
>> Update scenario (yum update):
>> 1) The client asks the repo server for a list of the current
>> versions of the available packages.
>> 2) The server answers.
>> 3) The client determines which packages have been updated and
>> requests them as in the first scenario.
>>
> I don't think this would be a speedup.  Instead of the CPUs of tens of
> thousands of computers doing the depsolving, you'd be requiring the CPUs of
> a single site to do it.
Yes. And since many clients request the same work, caching on the 
server should give good results there, so subsequent requests would 
cost almost nothing.
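For illustration, a toy sketch in Python of such a depsolve service
with answer caching. The repodata, the resolve() algorithm and the
cache policy are all made up here, not the real yum or createrepo code:

import json
from functools import lru_cache

# Toy repodata standing in for the real dependency graph.
REPODATA = {
    "foo-1.2": ["libbar-0.9", "baz-2.0"],
    "libbar-0.9": [],
    "baz-2.0": ["libbar-0.9"],
}

@lru_cache(maxsize=4096)  # identical requests are answered from the cache
def resolve(package):
    """Return the transitive dependency closure of one package."""
    closure, stack = set(), [package]
    while stack:
        for dep in REPODATA.get(stack.pop(), []):
            if dep not in closure:
                closure.add(dep)
                stack.append(dep)
    return frozenset(closure)

def handle_request(packages):
    """Answer one client request: the full closure, nothing more."""
    full = set(packages)
    for pkg in packages:
        full |= resolve(pkg)
    return json.dumps({"install": sorted(full)})

print(handle_request(["foo-1.2"]))
# {"install": ["baz-2.0", "foo-1.2", "libbar-0.9"]}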
>    The clients would have to upload the provides of
> their installed packages, so bandwidth needs might increase.  If I was
> installing a few packages by trial and error/memory I'd likely do yum
> install tmux followed closely by yum install zsh, which would require
> separate requests to the server to download separate dependency information
> as opposed to having the information downloaded once.
If you run yum install tmux zsh, it should of course be sent and 
resolved on the server as a single request.
Also, caching the answers on the client side is not forbidden; a 
client-side sketch follows below.
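Again only a sketch with hypothetical names: one batched request per
package set, the raw answer kept in a local cache, and the result
intersected with the installed set as in step 3 of the first scenario:

import json

def fake_http_get(packages):
    # Canned stand-in for the real HTTP call to the depsolve server.
    return '{"install": ["baz-2.0", "foo-1.2", "libbar-0.9", "zsh-5.0"]}'

_answer_cache = {}  # request tuple -> raw server answer

def ask_server(packages):
    """One batched request per package set; repeats hit the local cache."""
    if packages not in _answer_cache:
        _answer_cache[packages] = fake_http_get(packages)
    return _answer_cache[packages]

def packages_to_fetch(packages, installed):
    """Exclude whatever already satisfies the requirements."""
    wanted = set(json.loads(ask_server(packages))["install"])
    return sorted(wanted - installed)

print(packages_to_fetch(("tmux", "zsh"), {"libbar-0.9"}))
# ['baz-2.0', 'foo-1.2', 'zsh-5.0']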
>    The server that
> constructs the subsets of repodata would become a single point of failure
> whereas currently the repodata can be hosted on any mirror.  This setup
> would be much more sensitive to mirrors and repodata going out of sync.
> There'd likely be times when a new push has gone out where the primary
> mirror was the only server which could push packages out as every other
> mirror would be out of sync wrt the repodata server.
Yes, as I wrote initially, it introduces additional requirements on 
the server, in particular some form of scripting support (PHP, Perl, 
Python, Ruby or another language).
But it does not exclude mirroring at all: as it would be free 
software, anyone may install it and sync the metadata information in 
the traditional way.
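The sync step itself could stay the traditional one; a minimal
sketch, with a made-up primary host and paths:

import subprocess

# Pull the repodata the depsolve service reads from the primary mirror.
subprocess.run(
    ["rsync", "-av", "--delete",
     "rsync://primary.example.org/repo/repodata/",
     "/srv/mirror/repo/repodata/"],
    check=True,
)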
> -Toshio


