Basically, I would like a system where yum looks first in a /common/yum/ directory NFS-mounted on several machines, and if it does not find what it is looking for then it goes to a mirror as before, and adds what it finds to /common/yum/ as well as installing it on the machine in question.
Does yum have such a facility? I googled for "yum several machines" but all the solutions suggested, eg setting up a local mirror, seemed to me excessive for my purposes.
On Sat, 2009-02-28 at 12:25 +0000, Timothy Murphy wrote:
Basically, I would like a system where yum looks first in a /common/yum/ directory NFS-mounted on several machines, and if it does not find what it is looking for then it goes to a mirror as before, and adds what it finds to /common/yum/ as well as installing it on the machine in question.
If you have one computer which uses yum as normal, it can then NFS-share out its /var/cache/yum/*/packages directories to the other computers. The other computers would mount those onto their own /var/cache/yum/*/packages directories; when you use yum on them, they'd use whatever packages are found in there (not caring whether a package was local or came over NFS), and store anything that they download into the common directories on your server.
I suggest only sharing the packages sub-directories, not the whole /var/cache/yum tree. This allows local computers to keep their own metadata, etc. And updating one wouldn't stomp on the data needed by another box.
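Roughly something like this, if it helps -- the repo ids, client subnet, hostnames and mount options here are only examples, adjust to suit:

    # on the box that owns the cache, /etc/exports:
    /var/cache/yum/fedora/packages    192.168.1.0/24(rw,sync)
    /var/cache/yum/updates/packages   192.168.1.0/24(rw,sync)

    # on each of the other boxes, /etc/fstab:
    server:/var/cache/yum/fedora/packages   /var/cache/yum/fedora/packages   nfs  defaults  0 0
    server:/var/cache/yum/updates/packages  /var/cache/yum/updates/packages  nfs  defaults  0 0

    # and in /etc/yum.conf on all of them, so downloaded packages get kept:
    [main]
    keepcache=1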
I wouldn't do a yum update simultaneously on two or more boxes, though. I don't know how it'd cope with two boxes both trying to download the same RPM file to the same place.
I've done something along these lines in the past.
On Sun, 01 Mar 2009 00:14:30 +1030, Tim wrote: [snipperoo]
I wouldn't do a yum update simultaneously on two or more boxes, though. I don't know how it'd cope with two boxes both trying to download the same RPM file to the same place.
Hmmm ... I do it all the time, every day or two, on five or six boxes behind one router & KVM switch. I run "yum clean all," "updatedb," "rpm --rebuilddb," and then "yum update" whenever the previous update got something; otherwise I just keep repeating "yum update." I get the sequence started on one machine, then KVM-switch to the next and the next, till all have either completed or reported nothing to do. It's quite common for "yum update" to be running simultaneously on two -- or several.
(I know I don't need the updatedb and rebuilddb that often; but it's a way of remembering not to leave them undone for months at a time.)
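(Not that I actually script it, but for anyone who wants to, the whole routine is only something like:

    #!/bin/sh
    # housekeeping first, then the update itself
    yum clean all
    updatedb
    rpm --rebuilddb
    yum update

run on each box in turn.)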
Maybe I've had troubles I should've recognized as stemming from that? Can I tell?? Or are you concerned only about them slowing one another down?
On Sat, 2009-02-28 at 17:47 +0000, Beartooth wrote:
On Sun, 01 Mar 2009 00:14:30 +1030, Tim wrote: [snipperoo]
I wouldn't do a yum update simultaneously on two or more boxes, though. I don't know how it'd cope with two boxes both trying to download the same RPM file to the same place.
Hmmm ... I do it all the time, every day or two, on five or six boxes behind one router & KVM switch. I run "yum clean all," "updatedb," "rpm --rebuilddb," and then "yum update" whenever the previous update got something; otherwise I just keep repeating "yum update." I get the sequence started on one machine, then KVM-switch to the next and the next, till all have either completed or reported nothing to do. It's quite common for "yum update" to be running simultaneously on two -- or several.
Just to be clear, are these machines sharing the same package directory via NFS?
Also, when you say "the previous update got something", do you mean the previous update completed, or that the previous update found something to do? The exact order of events is important.
poc
Tim:
I wouldn't do a yum update simultaneously on two or more boxes, though. I don't know how it'd cope with two boxes both trying to download the same RPM file to the same place.
Beartooth:
Hmmm ... I do it all the time, every day or two, on five or six boxes behind one router & KVM switch. I run "yum clean all," "updatedb," "rpm --rebuilddb," and then "yum update" whenever the previous update got something; otherwise I just keep repeating "yum update." I get the sequence started on one machine, then KVM-switch to the next and the next, till all have either completed or reported nothing to do. It's quite common for "yum update" to be running simultaneously on two -- or several.
I don't know how well the system will handle two or more computers trying to create the same file on the same disc at the same time.
In several years, I've only once messed around with fixing up the RPM database. I don't go around doing things to stuff it up, and I haven't seen it stuff itself up. I've probably jinxed it now, but my one and only time was thanks to a computer locking up hard in the middle of some updates.
I don't know what people do to shoot their systems in the foot, but I've never felt the need for doing "yum clean all" (a rather brute force thing to do). I've only ever had to do "yum clean metadata" to deal with repos that were out of kilter (something that's not my fault, nor my computer's).
If I had a large enough network, and computers all using the same release, I'd be tempted to pick a particular mirror and HTTP-proxy all traffic to it. It's a simple way to manage this situation.
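E.g. something like the following -- the proxy host, port and mirror URL are only placeholders for whatever caching proxy (squid or the like) and mirror you actually pick:

    # /etc/yum.conf on each box:
    [main]
    proxy=http://proxybox.example.lan:3128

    # and in each repo file under /etc/yum.repos.d/, comment out the
    # mirrorlist line and pin one mirror, so every box asks the proxy
    # for the same URLs (mirror URL made up):
    baseurl=http://some.mirror.example/pub/fedora/linux/updates/10/i386/

The proxy then caches each RPM the first time any one of the machines fetches it.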
On Sun, 2009-03-01 at 17:04 +1030, Tim wrote:
I don't know how well the system will handle two or more computers trying to create the same file on the same disc at the same time.
In fact this can't happen, as the two file-creation operations will be serialized inside the kernel of the machine the file is physically on. One of them will win and create the file, the second will try to create it and then either succeed or fail depending on how the first process did the creation (see open(2)).
If the second process succeeds, the first process will just continue merrily writing a file which no longer has a name, so when the process closes the file it will disappear and the space will be reclaimed. The second process, meanwhile, will be writing to a completely different file, which does have a name.
If the second process fails to create the file, it'll get a file-creation error and presumably report to the user (in fact to the NFS client in this case).
There is an exception to all this: if the file is created in APPEND mode, you can get corruption of the file (not the system). RTFM.
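If you want to see the exclusive-create case from a shell, bash's noclobber option uses exactly that O_CREAT|O_EXCL open:

    $ set -o noclobber
    $ echo first > /tmp/demo     # creates the file
    $ echo second > /tmp/demo    # second exclusive create fails,
                                 # with a message along the lines of:
    bash: /tmp/demo: cannot overwrite existing file

The kernel (or here the NFS server's kernel) guarantees only one of the two creates can win.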
poc
2009/2/28 Timothy Murphy gayleard@eircom.net:
Basically, I would like a system where yum looks first in a /common/yum/ directory NFS-mounted on several machines, and if it does not find what it is looking for then it goes to a mirror as before, and adds what it finds to /common/yum/ as well as installing it on the machine in question.
Does yum have such a facility? I googled for "yum several machines" but all the solutions suggested, eg setting up a local mirror, seemed to me excessive for my purposes.
There's recently been an article on LWN about just this:
http://lwn.net/Articles/318658/
Be sure to read the comments as well.
J.
Timothy Murphy wrote:
Basically, I would like a system where yum looks first in a /common/yum/ directory NFS-mounted on several machines, and if it does not find what it is looking for then it goes to a mirror as before, and adds what it finds to /common/yum/ as well as installing it on the machine in question.
Does yum have such a facility? I googled for "yum several machines" but all the solutions suggested, eg setting up a local mirror, seemed to me excessive for my purposes.
That's the way I do it here. I have a pair of scripts which handle it. The first mounts the master copy as /mnt/cache and creates symbolic links in /var/cache/yum to the rpm files. Then the upgrade is run, and the "backup" script first lists which files have changed, and then uses rsync to back up the changes. I found this was more reliable than NFS mounting, due to possible conflicts and also laptops being updated over less-than-perfect connections.
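Very roughly, the pair look something like this (the hostname and paths are placeholders, and the real scripts do a bit more checking):

    #!/bin/sh
    # "prepare": mount the master copy read-only and link its rpms into
    # the local cache, so yum sees them as already downloaded
    mount -o ro master:/srv/yum-master /mnt/cache
    for dir in /var/cache/yum/*/packages; do
        ln -s /mnt/cache/*.rpm "$dir"/ 2>/dev/null
    done

    # ... run the yum upgrade as usual, then ...

    #!/bin/sh
    # "backup": push anything newly downloaded back to the master copy;
    # the symlinks made above already exist there by name, so only the
    # real, freshly downloaded rpms get copied
    rsync -av --ignore-existing /var/cache/yum/*/packages/ master:/srv/yum-master/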
Note that doing it this way does not delete old copies of the packages, which may or may not be desirable. I have a script which finds the deleted packages and backs them up by moving them; that way I have older stuff should I need it for some reason.
If you have solid networking and don't need the old packages you can just NFS mount /var/cache/yum (don't run multiple upgrades at the same time).