I have a bunch of x86-64 machines on my LAN.
It seems to be a major waste of time to have each one of them plow through and download all F24 packages.
I suppose I can poke around and find where the first one ends up downloading all the packages before the install, and then rsync the whole thing over to the next box.
But I was wondering if there was a less hacky way to do this.
On 06/22/2016 02:39 PM, Sam Varshavchik wrote:
I have a bunch of x86-64 machines on my LAN.
It seems to be a major waste of time to have each one of them plow through and download all F24 packages.
I suppose I can poke around and find where the first one ends up downloading all the packages to, before the install, and then rsyncing the whole thing over to the next box.
But I was wondering if there was a less hacky way to do this.
Uh, create your own local repo server, have it fetch the updates once and have your machines use your local repo to get their copies?
That's what we do. And we can add our own RPMs for local stuff to it. Get in contact with your internal google-fu and research "local rpm repo" for help in doing it.
----------------------------------------------------------------------
- Rick Stevens, Systems Engineer, AllDigital   ricks@alldigital.com  -
- AIM/Skype: therps2          ICQ: 226437340      Yahoo: origrps2    -
-                                                                    -
-      We have enough youth, how about a fountain of SMART?          -
----------------------------------------------------------------------
Rick Stevens writes:
Uh, create your own local repo server, have it fetch the updates once and have your machines use your local repo to get their copies?
Last time I checked, I was told that the full repo weighed in somewhere north of 20 gigabytes.
That's what we do. And we can add our own RPMs for local stuff to it. Get in contact with your internal google-fu and research "local rpm repo" for help in doing it.
Not until I move to a Google fiber city, unfortunately.
On 06/22/2016 03:46 PM, Sam Varshavchik wrote:
Last time I checked, I was told that the full repo weighed in somewhere north of 20 gigabytes.
Not until I move to a Google fiber city, unfortunately.
Downloading once to a local machine and having the other machines on the LAN use it as their repo or setting up a caching proxy like squid and having your machines use that as a proxy somehow increases your WAN bandwidth use? Maybe I'm not understanding what you're trying to accomplish here or what your restrictions are.
Our local repo is a VM running under KVM on an old Dell R610 "utility" server we bought off eBay with two 500GB hard drives in a RAID1, 8GB RAM and 8 cores. The VM was given 300GB disk, 2GB RAM, two cores and runs a minimal Fedora server 23 (at the moment). It is a full repo for Fedora 21-23 (32- and 64-bit), CentOS 6 and 7 (both 32- and 64-bit) and serves over 300 client machines without even breaking a sweat. Hardware total: about $200USD. Took less than a day to set up. Polls the repos once a day to pick up updates. Simple.
Rick Stevens writes:
On 06/22/2016 03:46 PM, Sam Varshavchik wrote:
Last time I checked, I was told that the full repo weighed in somewhere north of 20 gigabytes.
You have to have the content SOMEWHERE local, don't you? You don't have to mirror the whole shooting match (all arches, the baseline OS, etc.), just the x86_64 updates repos you're interested in. And with 1TB drives
I am not talking about the update repos. For system-upgrade I need to go to the full repo.
costing $80USD (and you only need one on your local repo server), this is an issue?
Disk space is not an issue. The issue is the piss-poor bandwidth of typical US broadband.
It took just a bit less than half an hour to download the packages needed for a full upgrade to F24. But multiply that by the number of machines to upgrade to F24, and it adds up quickly.
The issue is not regular daily updates. I have that automated and covered. A daily rsync of the updates directory to a local repo, with all machines pointing to it, and the regular updates repo turned off, does the trick.
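For anyone wanting to replicate that arrangement, it amounts to roughly the following; the mirror hostname, local paths, and repo id are all illustrative placeholders, not necessarily what's actually in use here:

```shell
#!/bin/sh
# Nightly cron job on the local repo box: mirror the F24 x86_64 updates
# tree. rsync carries repodata/ along with the packages, so no separate
# createrepo step is needed. Hostname and paths are placeholders.
rsync -avH --delete \
    rsync://mirror.example.org/fedora/linux/updates/24/x86_64/ \
    /srv/repos/fedora/updates/24/x86_64/

# One-time setup on each client: point dnf at the local mirror and turn
# off the stock updates repo (config-manager needs dnf-plugins-core).
cat > /etc/yum.repos.d/local-updates.repo <<'EOF'
[local-updates]
name=Local Fedora updates mirror
baseurl=http://repohost.example.lan/fedora/updates/$releasever/$basearch/
enabled=1
gpgcheck=1
EOF
dnf config-manager --set-disabled updates
```

This is a config sketch rather than a drop-in script; adjust the release number and paths to taste.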
The issue is upgrading to a new release. There is no good way to optimize the downloads in the same manner. rsyncing the entire 20 gig full Fedora release (if it's still about 20 gigs) would take me about ten hours.
Downloading once to a local machine and having the other machines on the LAN use it as their repo or setting up a caching proxy like squid and
That's one option, sure. I don't normally need squid for my regular daily needs.
But I'll try the trick of rsyncing /var/lib/dnf/system-upgrade, first. This is apparently where dnf system-upgrade drops all of the downloaded packages.
If that's going to be sufficient, this will be fine for something that needs to be done twice a year. If not, I'll probably find the time to get squid up and running, in the next six months.
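For completeness, the trick being described here is roughly the following (the peer hostname is a placeholder):

```shell
# On the first machine, after "dnf system-upgrade download --releasever=24"
# finishes but before rebooting, push the downloaded package cache to the
# next box:
rsync -avH /var/lib/dnf/system-upgrade/ nextbox:/var/lib/dnf/system-upgrade/
# On nextbox, running the same download command should then find the RPMs
# already in place and verify them instead of re-fetching over the WAN.
```

This is a command sketch based on the behavior described in this thread, not something documented by dnf itself.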
runs a minimal Fedora server 23 (at the moment). It is a full repo for Fedora 21-23 (32- and 64-bit), CentOS 6 and 7 (both 32- and 64-bit) and serves over 300 client machines without even breaking a sweat. Hardware total: about $200USD. Took less than a day to set up. Polls the repos once a day to pick up updates. Simple.
Daily updates is not the issue. The "dnf system-upgrade" reference in the subject line does not refer to daily updates.
On 06/22/2016 04:39 PM, Sam Varshavchik wrote:
The issue is upgrading to a new release. There is no good way to optimize the downloads in the same manner. rsyncing the entire 20 gig full Fedora release (if it's still about 20 gigs) would take me about ten hours.
Daily updates is not the issue. The "dnf system-upgrade" reference in the subject line does not refer to daily updates.
Ah, OK, yes, I missed that bit. But, as you said, it's only twice a year and so setting up something to rsync the whole repo down in the background when you're deciding to upgrade a batch of machines may not be such an onerous thing after all. After all, 30 minutes/machine times 20 machines = 10 hours. If you have <=20 machines to upgrade, do it the way you're doing it. If it's >20 machines, then pulling the entire repo down would be easier.
I have multiple 10Gbps pipes available to me at our data center, so I don't think about bandwidth issues per se very often. Sorry if I seemed callous about it.
On 06/22/2016 04:39 PM, Sam Varshavchik wrote:
But I'll try the trick of rsyncing /var/lib/dnf/system-upgrade, first. This is apparently where dnf system-upgrade drops all of the downloaded packages.
Yes, this works great. I've done it many times. I also have a small proxy server I wrote myself specifically for the purpose of caching from yum/dnf. The benefit that has over something like squid is that it ignores the host name and path and only matches on the filename when sending a file from the cache. It's a bit of a hack, but it saves a huge amount of bandwidth and only downloads the set of files that are actually being used.
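The core of that filename-keyed caching idea can be sketched like this; to be clear, this is an illustration of the technique described above, not the poster's actual program, and the cache directory path is made up:

```python
# Sketch of a cache keyed on the bare RPM filename, ignoring mirror
# hostname and directory, so the same package requested from different
# mirrors is downloaded only once. Illustrative, not the poster's code.
import os
import posixpath
from urllib.parse import urlsplit

CACHE_DIR = "/var/cache/rpm-proxy"  # illustrative location


def cache_key(url: str) -> str:
    """Map a request URL to a cache key: just the final path component."""
    return posixpath.basename(urlsplit(url).path)


def cached_path(url: str) -> str:
    """Where a given URL's file would live in the cache."""
    return os.path.join(CACHE_DIR, cache_key(url))


def fetch(url: str, download) -> str:
    """Return a local path for url, calling download(url, dest) only if
    no identically named file is cached yet."""
    dest = cached_path(url)
    if not os.path.exists(dest):
        os.makedirs(CACHE_DIR, exist_ok=True)
        download(url, dest)  # e.g. urllib.request.urlretrieve
    return dest
```

One caveat with matching on filename alone: RPM filenames embed name-version-release-arch, so collisions are harmless for packages, but repository metadata such as repomd.xml shares its name across repos and would need to bypass a cache like this.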
On Wed, Jun 22, 2016 at 5:39 PM, Sam Varshavchik mrsam@courier-mta.com wrote:
I have a bunch of x86-64 machines on my LAN.
It seems to be a major waste of time to have each one of them plow through and download all F24 packages.
I suppose I can poke around and find where the first one ends up downloading all the packages to, before the install, and then rsyncing the whole thing over to the next box.
But I was wondering if there was a less hacky way to do this.
From "man dnf.plugin.system-upgrade":
--datadir=DIRECTORY
    Save downloaded packages to DIRECTORY. DIRECTORY must already exist. This directory must be mounted automatically by the system or the upgrade will not work. The default is /var/lib/dnf/system-update.
A related question: is there any way to tell "dnf system-upgrade" to download packages from a local repo (either http or file) rather than going out to the net? I already have the big local repo and I'd rather not download everything again.
--Greg
On 06/24/2016 08:09 AM, Greg Woods wrote:
A related question: is there any way to tell "dnf system-upgrade" to download packages from a local repo (either http or file) rather than going out to the net? I already have the big local repo and I'd rather not download everything again.
Yes, use the "--repofrompath myrepo,/path/to/repo" option. You can put whatever you want instead of "myrepo" and the path can also be an http url.
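Put together for Greg's case, the download step would look roughly like this; the repo id "myrepo" and the path are placeholders for your own repo's id and location:

```shell
# Fetch the F24 upgrade packages from a local mirror instead of going out
# to the public mirrors. Repo id and path below are illustrative.
dnf system-upgrade download --releasever=24 \
    --repofrompath=myrepo,file:///srv/repos/fedora/releases/24/x86_64/os
```

The path can also be an http:// URL, as noted above.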