On most occasions I have to clean files before I am allowed to update a Fedora 30 system:
garry@ifr$ sudo dnf upgrade
[sudo] password for garry:
Fedora 30 - x86_64 - Updates                          63 kB/s |  18 kB
Fedora 30 - x86_64 - Updates                         226 kB/s | 659 kB
Failed to synchronize cache for repo 'updates'
Error: Failed to synchronize cache for repo 'updates'
garry@ifr$ sudo dnf clean all
68 files removed
garry@ifr$ sudo dnf upgrade
Fedora 30 openh264 (From Cisco) - x86_64              11 kB/s | 5.1 kB
Fedora 30 - x86_64 - Updates                         2.3 MB/s |  12 MB
Fedora 30 - x86_64                                   1.0 MB/s |  70 MB
google-chrome                                         37 kB/s | 3.4 kB
Copr repo for qt5-qtbase-print-dialog-advanced o      91 kB/s | 100 kB
RPM Fusion for Fedora 30 - Free - Updates            128 kB/s | 130 kB
RPM Fusion for Fedora 30 - Free                      570 kB/s | 735 kB
RPM Fusion for Fedora 30 - Nonfree - Updates          54 kB/s |  34 kB
RPM Fusion for Fedora 30 - Nonfree                   449 kB/s | 227 kB
Visual Studio Code                                   363 kB/s | 2.1 MB
Dependencies resolved.
==================================================================
Does anyone know what is special about this system that it requires clean all before it will update? I have two other systems that never experience the same problem.
On Thu, May 23, 2019 at 16:32:22 -0400, Garry Williams gtwilliams@gmail.com wrote:
On most occasions I have to clean files before I am allowed to update a Fedora 30 system:
garry@ifr$ sudo dnf upgrade
[sudo] password for garry:
Fedora 30 - x86_64 - Updates                          63 kB/s |  18 kB
Fedora 30 - x86_64 - Updates                         226 kB/s | 659 kB
Failed to synchronize cache for repo 'updates'
Error: Failed to synchronize cache for repo 'updates'
I get this once in a while. Usually if I grab a compose repo while it is building and then later try to update from an rsync'd normal repo. I haven't been able to figure out the reason yet. As far as I can tell it isn't miscopied data (except possibly in the cache).
On Thu, 23 May 2019 16:32:22 -0400 Garry Williams wrote:
On most occasions I have to clean files before I am allowed to update a Fedora 30 system:
dnf seems to have convinced itself that the cache is perfectly up to date no matter how old it is.
I now always do the two command sequence:
dnf makecache
dnf update
Less overhead than starting from scratch with "clean all".
On 5/23/19 3:43 PM, Tom Horsley wrote:
dnf seems to have convinced itself that the cache is perfectly up to date no matter how old it is.
I now always do the two command sequence:
dnf makecache
dnf update
Less overhead than starting from scratch with "clean all".
Can't you just use "--refresh"?
On Fri, May 24, 2019 at 4:21 AM Samuel Sieb samuel@sieb.net wrote:
On 5/23/19 3:43 PM, Tom Horsley wrote:
dnf seems to have convinced itself that the cache is perfectly up to date no matter how old it is.
I now always do the two command sequence:
dnf makecache
dnf update
Less overhead than starting from scratch with "clean all".
Can't you just use "--refresh"?
Neither one of those suggestions gets around the problem I seem to have:
garry@ifr$ sudo dnf makecache;sudo dnf upgrade
Fedora 30 openh264 (From Cisco) - x86_64             1.3 kB/s | 542  B  00:00
Fedora 30 - x86_64 - Updates                          51 kB/s |  16 kB  00:00
Fedora 30 - x86_64 - Updates                          84 kB/s |  89 kB  00:01
Failed to synchronize cache for repo 'updates'
Error: Failed to synchronize cache for repo 'updates'
Fedora 30 - x86_64 - Updates                          50 kB/s |  16 kB  00:00
Fedora 30 - x86_64 - Updates                         2.5 kB/s | 108 kB  00:43
Failed to synchronize cache for repo 'updates'
Error: Failed to synchronize cache for repo 'updates'
garry@ifr$ sudo dnf --refresh upgrade
Fedora 30 openh264 (From Cisco) - x86_64             2.0 kB/s | 542  B  00:00
Fedora 30 - x86_64 - Updates                         130 kB/s |  17 kB  00:00
Fedora 30 - x86_64 - Updates                          89 kB/s | 101 kB  00:01
Failed to synchronize cache for repo 'updates'
Error: Failed to synchronize cache for repo 'updates'
garry@ifr$
The only way I found is to clean all. :-(
On Fri, 31 May 2019 10:40:45 -0400 Garry Williams gtwilliams@gmail.com wrote:
On Fri, May 24, 2019 at 4:21 AM Samuel Sieb samuel@sieb.net wrote:
Can't you just use "--refresh"?
Neither one of those suggestions gets around the problem I seem to have:
garry@ifr$ sudo dnf makecache;sudo dnf upgrade
Fedora 30 openh264 (From Cisco) - x86_64             1.3 kB/s | 542  B  00:00
Fedora 30 - x86_64 - Updates                          51 kB/s |  16 kB  00:00
Fedora 30 - x86_64 - Updates                          84 kB/s |  89 kB  00:01
Failed to synchronize cache for repo 'updates'
Error: Failed to synchronize cache for repo 'updates'
Fedora 30 - x86_64 - Updates                          50 kB/s |  16 kB  00:00
Fedora 30 - x86_64 - Updates                         2.5 kB/s | 108 kB  00:43
Failed to synchronize cache for repo 'updates'
Error: Failed to synchronize cache for repo 'updates'
garry@ifr$ sudo dnf --refresh upgrade
Fedora 30 openh264 (From Cisco) - x86_64             2.0 kB/s | 542  B  00:00
Fedora 30 - x86_64 - Updates                         130 kB/s |  17 kB  00:00
Fedora 30 - x86_64 - Updates                          89 kB/s | 101 kB  00:01
Failed to synchronize cache for repo 'updates'
Error: Failed to synchronize cache for repo 'updates'
garry@ifr$
The only way I found is to clean all. :-(
It sounds like it is using a stale repository. Have you any plugins that restrict repositories or is there some setting in the /etc/dnf/dnf.conf file?
What happens if you do dnf clean metadata instead of dnf clean all?
What happens without the makecache?
Are there any anomalies in the /etc/yum.repos.d directory? Things that are enabled that should be disabled, and vice versa.
On Fri, May 31, 2019 at 12:43 PM stan via users users@lists.fedoraproject.org wrote:
On Fri, 31 May 2019 10:40:45 -0400 Garry Williams gtwilliams@gmail.com wrote:
On Fri, May 24, 2019 at 4:21 AM Samuel Sieb samuel@sieb.net wrote:
Can't you just use "--refresh"?
Neither one of those suggestions gets around the problem I seem to have:
garry@ifr$ sudo dnf makecache;sudo dnf upgrade
[snip mangled quote]
Failed to synchronize cache for repo 'updates'
...
The only way I found is to clean all. :-(
It sounds like it is using a stale repository. Have you any plugins that restrict repositories or is there some setting in the /etc/dnf/dnf.conf file?
Don't know about plugins, but
garry@ifr$ cat /etc/dnf/dnf.conf
[main]
gpgcheck=1
installonly_limit=6
clean_requirements_on_remove=True
install_weak_deps=False
metadata_expire=43200
garry@ifr$
What happens if you do dnf clean metadata instead of dnf clean all?
I don't know. The next time I can try is Monday.
But, of course, the issue is why this happens in the first place. I suspect no one here knows, so I will open a bug against dnf next week. Perhaps a developer will request data that will shed light on the problem.
What happens without the makecache?
That command was suggested instead of "dnf clean all". I showed it because even with makecache (and without a clean all) I still received "Error: Failed to synchronize cache for repo 'updates'".
Are there any anomalies in the /etc/yum.repos.d directory?
Hmmm.
Things that are enabled that should be disabled, and vice versa.
Only modular is disabled. Normal otherwise.
On Friday, May 31, 2019 11:05:20 PM EDT Tim via users wrote:
On Fri, 2019-05-31 at 17:18 -0400, Garry Williams wrote:
But, of course, the issue is why this happens in the first place.
Does your ISP insert a transparent proxy between you and the internet? They're well known to cause caching problems.
Ah, ha! That is a difference between the problem system and the others I have that do not experience the problem.
My employer does eavesdrop on everything.
Thanks for the suggestion.
On 6/1/19 5:27 AM, Garry T. Williams wrote:
On Friday, May 31, 2019 11:05:20 PM EDT Tim via users wrote:
On Fri, 2019-05-31 at 17:18 -0400, Garry Williams wrote:
But, of course, the issue is why this happens in the first place.
Does your ISP insert a transparent proxy between you and the internet? They're well known to cause caching problems.
Ah, ha! That is a difference between the problem system and the others I have that do not experience the problem.
My employer does eavesdrop on everything.
I think I found the answer to this. I ran into the same problem with my simple custom proxy. Starting in F30, the repo metadata uses zchunk, which means that dnf requests lots of byte ranges. If the proxy doesn't support this, then librepo fails. According to the HTTP spec, a client MUST support getting more (or less) data than it asked for when requesting ranges. However, librepo does not. I'm about to file a bug for this.
The easiest solution for you would be to disable zchunk for dnf on that system. The reason it works after a clean all is that it doesn't have the metadata file to update, so it downloads the whole thing instead of parts of it.
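To illustrate the spec requirement mentioned above, here is a minimal sketch (a hypothetical helper, not librepo's actual API): a tolerant client asking for a byte range must cope both with a 206 reply carrying exactly the requested slice and with a 200 reply carrying the full body.

```python
def read_range(status, body, start, end):
    """Return bytes start..end of a resource, tolerating a server
    (or proxy) that ignores the Range header.

    Per RFC 7233, a range request may be answered with:
      * 206 Partial Content -- body is already the requested slice
      * 200 OK              -- body is the full representation
    A tolerant client handles both; the thread suggests librepo
    did not, which is what broke zchunk behind some proxies.
    """
    if status == 206:
        return body                  # server honored the range
    if status == 200:
        return body[start:end + 1]   # slice the full body ourselves
    raise ValueError("unexpected status %d" % status)

# A proxy that honors the range and one that returns everything
# should yield the same bytes:
assert read_range(206, b"cde", 2, 4) == read_range(200, b"abcdef", 2, 4)
```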
On Tue, Jun 4, 2019 at 9:55 PM Samuel Sieb samuel@sieb.net wrote:
I think I found the answer to this. I ran into the same problem with my simple custom proxy. Starting in F30, the repo uses zchunk. This means that dnf requests lots of byte ranges. If the proxy doesn't support this, then librepo fails. According to the http specs, a client MUST support getting more (or less) data than asked for when requesting ranges. However, librepo does not. I'm about to file a bug for this.
The easiest solution for you would be to disable zchunk for dnf on that system. The reason it works after a clean all is that it doesn't have the metadata file to update, so it downloads the whole thing instead of parts of it.
That appears to be effective. I added zchunk=False to /etc/dnf/dnf.conf and there was no failure after that today. It was failing pretty regularly before, so I bet it's fixed for me.
Thanks for that. Nice catch.
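For anyone landing here with the same symptom, the workaround amounts to one added line in /etc/dnf/dnf.conf (the other settings shown are the ones from the file posted earlier in the thread; only the last line is new):

```ini
[main]
gpgcheck=1
installonly_limit=6
clean_requirements_on_remove=True
install_weak_deps=False
metadata_expire=43200
zchunk=False
```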
On Tue, 2019-06-04 at 18:54 -0700, Samuel Sieb wrote:
On 6/1/19 5:27 AM, Garry T. Williams wrote:
On Friday, May 31, 2019 11:05:20 PM EDT Tim via users wrote:
On Fri, 2019-05-31 at 17:18 -0400, Garry Williams wrote:
But, of course, the issue is why this happens in the first place.
Does your ISP insert a transparent proxy between you and the internet? They're well known to cause caching problems.
Ah, ha! That is a difference between the problem system and the others I have that do not experience the problem.
My employer does eavesdrop on everything.
I think I found the answer to this. I ran into the same problem with my simple custom proxy. Starting in F30, the repo uses zchunk. This means that dnf requests lots of byte ranges. If the proxy doesn't support this, then librepo fails. According to the http specs, a client MUST support getting more (or less) data than asked for when requesting ranges. However, librepo does not. I'm about to file a bug for this.
Please do, and please file it against zchunk when you do, but please first make sure you've updated to the latest versions of libdnf, librepo, and zchunk-libs. librepo is supposed to automatically reduce the number of zchunk byte ranges it requests if there's a failure, so if it's not doing that, it's most likely a bug.
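The fallback behavior described above can be sketched roughly as follows (function names are hypothetical and this is not librepo's actual code): on failure, retry with progressively smaller batches of byte ranges.

```python
def fetch_with_range_fallback(fetch, ranges):
    """Sketch of the recovery librepo is said to perform: if a
    request for many byte ranges fails (e.g. a proxy rejects
    multi-range requests), halve the batch size and retry until
    the batches succeed or we are down to one range per request.

    `fetch` is a hypothetical callable that downloads a list of
    ranges in one HTTP request and raises IOError on failure.
    """
    batch = max(1, len(ranges))
    while True:
        try:
            out = []
            for i in range(0, len(ranges), batch):
                out.extend(fetch(ranges[i:i + batch]))
            return out
        except IOError:
            if batch == 1:
                raise              # even single-range requests fail
            batch = max(1, batch // 2)
```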
It would also be really helpful to see how your proxy responds to a request for too many byte ranges. And please make sure to attach dnf.librepo.log when you file the bug.
Jonathan