I'm running Fedora 30 on a somewhat older PC:
$ uname -a
Linux pc.localdomain 5.4.14-100.fc30.x86_64 #1 SMP Thu Jan 23 13:19:57 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
I am experiencing just one minor issue having to do with throughput on the PC's built-in Ethernet interface:
# lspci -v -s 04:00.0
04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
        Subsystem: Gigabyte Technology Co., Ltd Onboard Ethernet
        Flags: bus master, fast devsel, latency 0, IRQ 18, NUMA node 0
        I/O ports at ce00 [size=256]
        Memory at fd9ff000 (64-bit, prefetchable) [size=4K]
        Memory at fd9f8000 (64-bit, prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [70] Express Endpoint, MSI 01
        Capabilities: [b0] MSI-X: Enable+ Count=4 Masked-
        Capabilities: [d0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Virtual Channel
        Capabilities: [160] Device Serial Number 01-00-00-00-68-4c-e0-00
        Kernel driver in use: r8169
        Kernel modules: r8169
Until recently, the iperf3 network benchmark reported 940 Mbits/sec throughput in both directions, but now, for some unknown reason, outgoing traffic is being throttled to about 870 Mbits/sec:
$ iperf3 -c backup
Connecting to host backup, port 5201
[  5] local 192.168.4.6 port 43050 connected to 192.168.4.15 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   105 MBytes   884 Mbits/sec    0    300 KBytes
[  5]   1.00-2.00   sec   105 MBytes   877 Mbits/sec    0    328 KBytes
[  5]   2.00-3.00   sec   104 MBytes   874 Mbits/sec    0    328 KBytes
[  5]   3.00-4.00   sec   104 MBytes   874 Mbits/sec    0    345 KBytes
[  5]   4.00-5.00   sec   104 MBytes   873 Mbits/sec    0    345 KBytes
[  5]   5.00-6.00   sec   105 MBytes   879 Mbits/sec    0    345 KBytes
[  5]   6.00-7.00   sec   104 MBytes   873 Mbits/sec    0    345 KBytes
[  5]   7.00-8.00   sec   104 MBytes   873 Mbits/sec    0    345 KBytes
[  5]   8.00-9.00   sec   104 MBytes   873 Mbits/sec    0    345 KBytes
[  5]   9.00-10.00  sec   104 MBytes   873 Mbits/sec    0    345 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.02 GBytes   875 Mbits/sec    0             sender
[  5]   0.00-10.00  sec  1.02 GBytes   874 Mbits/sec                  receiver
but incoming bandwidth is 940 Mbits/sec:
$ iperf3 -c backup -R
Connecting to host backup, port 5201
Reverse mode, remote host backup is sending
[  5] local 192.168.4.6 port 43056 connected to 192.168.4.15 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   112 MBytes   941 Mbits/sec
[  5]   1.00-2.00   sec   112 MBytes   941 Mbits/sec
[  5]   2.00-3.00   sec   112 MBytes   941 Mbits/sec
[  5]   3.00-4.00   sec   112 MBytes   941 Mbits/sec
[  5]   4.00-5.00   sec   112 MBytes   941 Mbits/sec
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec
[  5]   6.00-7.00   sec   112 MBytes   941 Mbits/sec
[  5]   7.00-8.00   sec   112 MBytes   941 Mbits/sec
[  5]   8.00-9.00   sec   112 MBytes   941 Mbits/sec
[  5]   9.00-10.00  sec   112 MBytes   941 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec  180             sender
[  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  receiver
iperf Done.
The incoming throughput confirms that the Ethernet port is capable of 940 Mbits/sec.
My PC and 'backup' are connected to the same managed 1GigE switch (Ubiquiti EdgeSwitch 8). To rule out a switch configuration issue, I ran the same iperf3 test while the devices were connected to an unmanaged switch and got the same results.
The host named 'backup' tests at 940 Mbits/sec in both directions when tested against another Fedora 30 Linux PC of similar vintage so I don't think 'backup' (a Linux-based Synology NAS) is the issue.
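(For completeness: there's nothing exotic about the test setup; the peer just runs the stock iperf3 server.)

# On the test peer ('backup' in my case): run iperf3 in server mode
iperf3 -s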
'ethtool' shows that my PC's Ethernet auto-negotiates 1Gig mode:
# ethtool enp3s0
Settings for enp3s0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                             100baseT/Half 100baseT/Full
                                             1000baseT/Full
        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: Yes
        Link partner advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000033 (51)
                               drv probe ifdown ifup
        Link detected: yes
The Linux network interface configuration matches that of the other Linux PC mentioned above. Note that the MTU is 1500 (as it is on the 'backup' NAS).
# ip a show dev enp3s0
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 94:de:80:21:61:12 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.6/24 brd 192.168.4.255 scope global dynamic noprefixroute enp3s0
       valid_lft 86075sec preferred_lft 86075sec
    inet6 fe80::7c0a:4567:cd0f:13db/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
# ip r
default via 192.168.4.1 dev enp3s0 proto dhcp metric 100
192.168.4.0/24 dev enp3s0 proto kernel scope link src 192.168.4.6 metric 100
I did recently configure a smart queue policy on my LAN's Ubiquiti EdgeRouter 4's WAN interface, but I doubt this is contributing to the issue for three reasons: (1) the smart queue policy is bound to the router's WAN interface, which is a different interface than the one that trunks to my PC's switch; (2) my benchmark test is between two devices on the same switch, and the switch's dashboard GUI makes it obvious that the traffic during the test doesn't go out via the trunk uplink to the router; and (3) removing the router's smart queue policy and then running the PC -> NAS iperf3 benchmark produces the same throughput numbers. Also, I recently configured the PC to go into suspend after 90 minutes and to enable magic-packet wake-on-LAN before it suspends. I've tried backing out these changes followed by a cold reboot...no change in the iperf3 results.
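For reference, backing out the wake-on-LAN piece is just an ethtool setting; roughly this, with my interface name:

# Show the current wake-on-LAN mode ('g' = wake on magic packet, 'd' = disabled)
ethtool enp3s0 | grep Wake-on

# Disable wake-on-LAN entirely to take it out of the picture
ethtool -s enp3s0 wol d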
If this is a Linux kernel issue, it would have to have been introduced several kernel releases ago because if I boot into the PC's oldest kernel, 5.4.10-100.fc30.x86_64, I still suffer from the slower outgoing throughput. So, either this issue began with a Linux kernel < 5.4.10 or it's due to a configuration change I made but lost track of.
Have I run into a known, or unreported, issue with the newer Linux kernels or the r8169 module in particular? If not, what might I look into next to try to pinpoint what's causing the reduction in outgoing throughput? Any other suggestions?
Thanks, Dave
On 2020-02-04 11:11, Dave Ulrick wrote:
I am experiencing just one minor issue having to do with throughput on the PC's built-in Ethernet interface:
Sounds as if it may be this?
https://bugzilla.redhat.com/show_bug.cgi?id=1797232
The BZ looks somewhat similar to my issue--same chipset, etc.--but I notice a couple of differences:
1. I'm not seeing any errors on the interface:
# ifconfig enp3s0
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.6  netmask 255.255.255.0  broadcast 192.168.4.255
        inet6 fe80::7c0a:4567:cd0f:13db  prefixlen 64  scopeid 0x20<link>
        ether 94:de:80:21:61:12  txqueuelen 1000  (Ethernet)
        RX packets 2730169  bytes 2813667597 (2.6 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8497698  bytes 11492849340 (10.7 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
(the above was run after running speedtest-cli on the PC)
2. I can recreate my problem with kernel 5.4.10 whereas the BZ ticket says the issue didn't exist in 5.4.13 but does in 5.4.14. (My PC doesn't have 5.4.13 installed.)
Still, the similarities between my situation and the BZ make me wonder if there might be a common underlying issue.
Dave
On 2/3/20 9:26 PM, Ed Greshko wrote:
On 2020-02-04 11:11, Dave Ulrick wrote:
I am experiencing just one minor issue having to do with throughput on the PC's built-in Ethernet interface:
Sounds as if it may be this?
On 2/3/20 7:11 PM, Dave Ulrick wrote:
If this is a Linux kernel issue, it would have to have been introduced several kernel releases ago because if I boot into the PC's oldest kernel, 5.4.10-100.fc30.x86_64, I still suffer from the slower outgoing throughput. So, either this issue began with a Linux kernel < 5.4.10 or it's due to a configuration change I made but lost track of.
Have I run into a known, or unreported, issue with the newer Linux kernels or the r8169 module in particular? If not, what might I look into next to try to pinpoint what's causing the reduction in outgoing throughput? Any other suggestions?
To rule out a kernel issue, you could try running an older live image to test it.
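Or, instead of a live image, you could install an older kernel build from Koji and bisect that way; a sketch (the exact build NVR here is a guess):

# See which kernels are still installed locally
ls /boot/vmlinuz-*

# Fetch an older Fedora kernel build (NVR assumed) and install it alongside
koji download-build --arch=x86_64 kernel-5.4.13-100.fc30
sudo dnf install ./kernel-core-5.4.13-100.fc30.x86_64.rpm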
On Mon, 2020-02-03 at 22:15 -0600, Dave Ulrick wrote:
The BZ looks somewhat similar to my issue--same chipset, etc.--but I notice a couple of differences:
- I'm not seeing any errors on the interface:
# ifconfig enp3s0
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.6  netmask 255.255.255.0  broadcast 192.168.4.255
        inet6 fe80::7c0a:4567:cd0f:13db  prefixlen 64  scopeid 0x20<link>
        ether 94:de:80:21:61:12  txqueuelen 1000  (Ethernet)
        RX packets 2730169  bytes 2813667597 (2.6 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8497698  bytes 11492849340 (10.7 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
(the above was run after running speedtest-cli on the PC)
- I can recreate my problem with kernel 5.4.10 whereas the BZ ticket
says the issue didn't exist in 5.4.13 but does in 5.4.14. (My PC doesn't have 5.4.13 installed.)
Still, the similarities between my situation and the BZ make me wonder if there might be a common underlying issue.
The problem (TX errors in my case) persists with 5.4.15. My interface is:
$ sudo lspci -v -s 03:00.0
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
        Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7808
        Flags: bus master, fast devsel, latency 0, IRQ 17
        I/O ports at d000 [size=256]
        Memory at f7804000 (64-bit, prefetchable) [size=4K]
        Memory at f7800000 (64-bit, prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [70] Express Endpoint, MSI 01
        Capabilities: [b0] MSI-X: Enable+ Count=4 Masked-
        Capabilities: [d0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Virtual Channel
        Capabilities: [160] Device Serial Number 01-00-00-00-68-4c-e0-00
        Kernel driver in use: r8169
        Kernel modules: r8169

$ sudo ethtool enp3s0
Settings for enp3s0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: 100Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: pumbg
        Wake-on: d
        Current message level: 0x00000033 (51)
                               drv probe ifdown ifup
        Link detected: yes
Note that it's running at 100 Mb/s despite the interface and local switch being Gigabit capable. I've no idea why. I don't have another box with which to set up an iperf test.
However my problem is the high error rate, which in practice slows down Internet connections dramatically. My ISP connection is rated at 80Mbps/20Mbps and usually gets to within 90% of that, but a speed test directly to the ISP is only getting around 3Mbps/2Mbps with this kernel, while with 5.4.13 it runs as expected.
I don't positively know that this is a device driver issue because I don't have a different NIC to test it on. It could be elsewhere in the networking stack.
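The errors do show up in the interface counters, so I'm keeping an eye on those while testing:

# Watch the RX/TX error counters update live
watch -n 1 'ip -s link show enp3s0'

# Driver-level statistics too, where the NIC exposes them
ethtool -S enp3s0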
poc
On Tue, 2020-02-04 at 11:19 +0000, Patrick O'Callaghan wrote:
However my problem is the high error rate, which in practice slows down Internet connections dramatically. My ISP connection is rated at 80Mbps/20Mbps and usually gets to within 90% of that, but a speed test directly to the ISP is only getting around 3Mbps/2Mbps with this kernel, while with 5.4.13 it runs as expected.
Well, wouldn't you know it. I rebooted into 5.4.13 (which worked before) and now it also failed.
After swapping out the Ethernet cable I rebooted 5.4.13 and it's now working. I then rebooted with 5.4.15 and it too is currently working.
So it may be something as trivial as a bad cable, though I had tried different ones before now and it didn't seem to make a difference. If that's the case I apologise for the noise(!).
The interface is still only working at 100M rather than 1000M but I can live with that for now. It's possible that the NIC itself is flaky but I don't have another one to test.
I'll report back if anything else comes to light.
poc
Just a cable between you and the switch? The speed is typically controlled by the hardware itself, and if the speed is not gbit then in my experience it has always been some sort of physical issue (bad cable, not quite plugged in, damaged, or poor punchdown on the jacks).
What kind of cable are you using (i.e. a standard straight-through Cat-XX)?
On Tue, Feb 4, 2020 at 5:38 AM Patrick O'Callaghan pocallaghan@gmail.com wrote:
On Tue, 2020-02-04 at 11:19 +0000, Patrick O'Callaghan wrote:
However my problem is the high error rate, which in practice slows down Internet connections dramatically. My ISP connection is rated at 80Mbps/20Mbps and usually gets to within 90% of that, but a speed test directly to the ISP is only getting around 3Mbps/2Mbps with this kernel, while with 5.4.13 it runs as expected.
Well, wouldn't you know it. I rebooted into 5.4.13 (which worked before) and now it also failed.
After swapping out the Ethernet cable I rebooted 5.4.13 and it's now working. I then rebooted with 5.4.15 and it too is currently working.
So it may be something as trivial as a bad cable, though I had tried different ones before now and it didn't seem to make a difference. If that's the case I apologise for the noise(!).
The interface is still only working at 100M rather than 1000M but I can live with that for now. It's possible that the NIC itself is flaky but I don't have another one to test.
I'll report back if anything else comes to light.
poc
On Tue, 2020-02-04 at 08:14 -0600, Roger Heflin wrote:
Just a cable between you and the switch? The speed is typically controlled by the hardware itself, and if the speed is not gbit then in my experience it has always been some sort of physical issue (bad cable, not quite plugged in, damaged, or poor punchdown on the jacks).
What kind of cable are you using (i.e. a standard straight-through Cat-XX)?
The original cable is CAT-6. Distance is no more than around 10 feet. The substitute cable is some random white thing that came with the router. It's fairly thin and slightly flat.
I did do this cable swap before now, while testing all kinds of things, so I'm not convinced it's the culprit but I'm going to leave well alone for now.
poc
The only other way I know to change the speed is with ethtool and/or setting something in the BIOS for the specific network card (seen during POST). I don't think Gigabyte puts it in the normal BIOS, but I seem to remember there being 2-3 settings in the network card's BIOS, including a cable-length detect option and 2 other options which may let one limit speed.
You said it is a managed switch; you do have the switch set to auto, right? If the switch is hard set and the node is auto, the standard says the auto side will default to 100/half I think (maybe 100/full), but certainly not Gbit anything. Both ends *MUST* be set exactly the same for it to work; auto with anything other than auto will act badly.
auto/auto is what you want. It has been 15+ years since I have seen any combination that actually needed to be hard set to work right.
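If you need to poke it from the Linux side, it's just ethtool (substitute your interface name):

# Put the NIC back to full autonegotiation
ethtool -s eth0 autoneg on

# Then check what the two ends actually negotiated
ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'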
On Tue, Feb 4, 2020 at 10:03 AM Patrick O'Callaghan pocallaghan@gmail.com wrote:
On Tue, 2020-02-04 at 08:14 -0600, Roger Heflin wrote:
Just a cable between you and the switch? The speed is typically controlled by the hardware itself, and if the speed is not gbit then in my experience it has always been some sort of physical issue (bad cable, not quite plugged in, damaged, or poor punchdown on the jacks).
What kind of cable are you using (i.e. a standard straight-through Cat-XX)?
The original cable is CAT-6. Distance is no more than around 10 feet. The substitute cable is some random white thing that came with the router. It's fairly thin and slightly flat.
I did do this cable swap before now, while testing all kinds of things, so I'm not convinced it's the culprit but I'm going to leave well alone for now.
poc
On Tue, 2020-02-04 at 12:46 -0600, Roger Heflin wrote:
The only other way I know to change the speed is with ethtool and/or setting something in the BIOS for the specific network card (seen during POST). I don't think Gigabyte puts it in the normal BIOS, but I seem to remember there being 2-3 settings in the network card's BIOS, including a cable-length detect option and 2 other options which may let one limit speed.
I haven't looked at the BIOS but I'll check it out.
You said it is a managed switch; you do have the switch set to auto, right? If the switch is hard set and the node is auto, the standard says the auto side will default to 100/half I think (maybe 100/full), but certainly not Gbit anything. Both ends *MUST* be set exactly the same for it to work; auto with anything other than auto will act badly.
I didn't say it was a managed switch. It's a basic home router called a Fritz!Box 7530. Its management console says that the 4 LAN ports are configured to allow 1Gbps but my desktop is connected at 100Mbps. Ironically, my 10-year old NAS box is connected at 1Gbps.
auto/auto is what you want. It has been 15+ years since I have seen any combination that actually needed to be hard set to work right.
Same here. I just leave it alone. I don't think the router end even allows you to set the speed (i.e. the setting it has is a cap).
poc
A bent pin on one end, or in either port, would cause a 100 Mbit link if the damaged pin is not on one of the 2 pairs that 100 Mbit needs.
On Tue, Feb 4, 2020 at 3:18 PM Patrick O'Callaghan pocallaghan@gmail.com wrote:
On Tue, 2020-02-04 at 12:46 -0600, Roger Heflin wrote:
The only other way I know to change the speed is with ethtool and/or setting something in the BIOS for the specific network card (seen during POST). I don't think Gigabyte puts it in the normal BIOS, but I seem to remember there being 2-3 settings in the network card's BIOS, including a cable-length detect option and 2 other options which may let one limit speed.
I haven't looked at the BIOS but I'll check it out.
You said it is a managed switch; you do have the switch set to auto, right? If the switch is hard set and the node is auto, the standard says the auto side will default to 100/half I think (maybe 100/full), but certainly not Gbit anything. Both ends *MUST* be set exactly the same for it to work; auto with anything other than auto will act badly.
I didn't say it was a managed switch. It's a basic home router called a Fritz!Box 7530. Its management console says that the 4 LAN ports are configured to allow 1Gbps but my desktop is connected at 100Mbps. Ironically, my 10-year old NAS box is connected at 1Gbps.
auto/auto is what you want. It has been 15+ years since I have seen any combination that actually needed to be hard set to work right.
Same here. I just leave it alone. I don't think the router end even allows you to set the speed (i.e. the setting it has is a cap).
poc
On 2/3/20 9:26 PM, Ed Greshko wrote:
On 2020-02-04 11:11, Dave Ulrick wrote:
I am experiencing just one minor issue having to do with throughput on the PC's built-in Ethernet interface:
Sounds as if it may be this?
I think I've pinpointed a possible cause for my Ethernet slowdown. Evidently the Ethernet chipset on the affected PC has some bugs in its offload functionality, and some relatively recent patches to the r8169 kernel module have had a negative effect on performance. The crucial factor seems to be the tcp-segmentation-offload parameter that's viewable with 'ethtool -k'.
The PC with slow transmit speeds has tcp-segmentation-offload turned off:
tcp-segmentation-offload: off
and 'ethtool' won't let me turn it on:
# ethtool -K enp3s0 tso on
Cannot change tcp-segmentation-offload
Could not change any device features
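As far as I can tell, features the driver refuses to change are normally flagged [fixed] in the full feature list, which is a quick way to check:

# List the segmentation-offload features; immutable ones show [fixed]
ethtool -k enp3s0 | grep -i segmentation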
Another PC which doesn't suffer from the transmit slowness has tcp-segmentation-offload enabled:
tcp-segmentation-offload: on
The first PC gets an iperf3 bitrate of ~ 870 Mbits/sec versus the second PC's bitrate of ~ 940 Mbits/sec.
If I disable tcp-segmentation-offload on the second PC:
# ethtool -K enp4s0 tso off
and run iperf3, I get ~ 880 Mbits/sec, very similar to the throughput on the slower PC. If I turn tso back on, it goes back to ~ 940 Mbits/sec. This makes it look as though having tcp-segmentation-offload disabled might be the reason why the slower PC is getting slower transmit throughput.
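The A/B test is easy to script if anyone wants to reproduce it (the interface and server names here are from my setup):

# Toggle TSO and measure transmit throughput each way
for state in on off; do
    sudo ethtool -K enp4s0 tso $state
    echo "tso=$state:"
    iperf3 -c backup | tail -n 4
done
sudo ethtool -K enp4s0 tso on   # leave it enabled afterwards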
Maybe a recent Linux kernel distributed an r8169 module that disabled tcp-segmentation-offload for:
r8169 0000:03:00.0 eth0: RTL8168evl/8111evl, 94:de:80:21:61:12, XID 2c9, IRQ 30
but let it continue to work OK for:
r8169 0000:04:00.0 eth0: RTL8168d/8111d, 00:30:67:6a:e2:64, XID 281, IRQ 35
This looks very similar to my issue--RTL8168evl chipset & tcp-segmentation-offload--except that they're discussing kernel 4.19 whereas I'm using 5.4.17:
https://lore.kernel.org/netdev/217e3fa9-7782-08c7-1f2b-8dabacaa83f9@gmail.co...
I've ordered a PCIe Gigabit Ethernet card in hopes that it will have a chipset that isn't affected by this issue.
Dave
On 2/8/20 3:18 PM, Dave Ulrick wrote:
The first PC gets an iperf3 bitrate of ~ 870 Mbits/sec versus the second PC's bitrate of ~ 940 Mbits/sec.
I've ordered a PCIe Gigabit Ethernet card in hopes that it will have a chipset that isn't affected by this issue.
Now I'm curious what you are doing that is so affected by that little difference of maximum bandwidth that you have to buy a new network card.
On Sat, 2020-02-08 at 17:18 -0600, Dave Ulrick wrote:
I think I've pinpointed a possible cause for my Ethernet slowdown.
That's interesting, though as I said earlier I doubt it's the same problem I'm having.
I'm getting a high incidence (3%) of receive errors, which seems to vary from one reboot to the next with no apparent pattern. Changing cables, ports and routers seems to make no difference.
I've just bought a new NIC card and will install it before my next reboot. Hopefully that will fix it.
poc
On Sun, 9 Feb 2020 at 07:15, Patrick O'Callaghan pocallaghan@gmail.com wrote:
On Sat, 2020-02-08 at 17:18 -0600, Dave Ulrick wrote:
I think I've pinpointed a possible cause for my Ethernet slowdown.
That's interesting, though as I said earlier I doubt it's the same problem I'm having.
I'm getting a high incidence (3%) of receive errors, which seems to vary from one reboot to the next with no apparent pattern. Changing cables, ports and routers seems to make no difference.
Check components near the ethernet port for signs of damage.
I once worked on a Windows PC running PCNFS. NFS was failing. I ran the vendor's diagnostics on the network card and it came up "healthy", but visual inspection of the card revealed fried components. The PC was connected via very long cables to a satellite dish, so very likely subject to spikes of induced current from lightning. A well-built system like the SGI Octane is much more robust. I actually saw lightning come in the corner of the computer room and down across an Octane. The Octane survived except for the small bulbs in the light bar, which was outside the heavy metal chassis.
I've just bought a new NIC card and will install it before my next reboot. Hopefully that will fix it.
It could be worth trying diagnostics if you can find them. Usually they are DOS programs. HP used to have a program to make bootable DOS USB keys; now there is "Rufus".
On Sun, 2020-02-09 at 08:05 -0400, George N. White III wrote:
On Sun, 9 Feb 2020 at 07:15, Patrick O'Callaghan pocallaghan@gmail.com wrote:
On Sat, 2020-02-08 at 17:18 -0600, Dave Ulrick wrote:
I think I've pinpointed a possible cause for my Ethernet slowdown.
That's interesting, though as I said earlier I doubt it's the same problem I'm having.
I'm getting a high incidence (3%) of receive errors, which seems to vary from one reboot to the next with no apparent pattern. Changing cables, ports and routers seems to make no difference.
Check components near the ethernet port for signs of damage.
I'll have look when I insert the new card.
I once worked on a Windows PC running PCNFS. NFS was failing. I ran the vendor's diagnostics on the network card and
[...]
It could be worth trying diagnostics if you can find them. Usually they are DOS programs. HP used to have a program to make bootable DOS USB keys; now there is "Rufus".
Sure.
poc
On Sun, 2020-02-09 at 08:05 -0400, George N. White III wrote:
Check components near the ethernet port for signs of damage.
Though there's every chance that there won't be any visible signs. Fried electronic parts don't have to be charred.
I have to periodically replace ethernet switches, and/or network cards on computers that are connected between buildings. There can be a significant voltage difference on the mains wiring between buildings, and even between circuits within a building.
It seems that few ethernet interfaces bother to use galvanic isolating transformers, or opto-coupling, so they're vulnerable to voltages on earthing.
Static shock is also a possibility (the inevitable walking across the carpet and zapping things, or people wearing static-electricity-generating clothing).
Our recent computers have motherboard Ethernet ports; I don't fancy the chances that damage from a zapped Ethernet port will be limited to just the Ethernet port components. The previous dead network cards didn't just stop networking; they would hang the PC, prevent booting, and cause random crashes.
On Sun, 2020-02-09 at 23:39 +1030, Tim via users wrote:
On Sun, 2020-02-09 at 08:05 -0400, George N. White III wrote:
Check components near the ethernet port for signs of damage.
Though there's every chance that there won't be any visible signs. Fried electronic parts don't have to be charred.
I have to periodically replace ethernet switches, and/or network cards on computers that are connected between buildings. There can be a significant voltage difference on the mains wiring between buildings, and even between circuits within a building.
It seems that few ethernet interfaces bother to use galvanic isolating transformers, or opto-coupling, so they're vulnerable to voltages on earthing.
Static shock is also a possibility (the inevitable walking across the carpet and zapping things, or people wearing static-electricity-generating clothing).
Our recent computers have motherboard Ethernet ports; I don't fancy the chances that damage from a zapped Ethernet port will be limited to just the Ethernet port components. The previous dead network cards didn't just stop networking; they would hang the PC, prevent booting, and cause random crashes.
Yes, I'm fairly sceptical as to this being the explanation. This is a home desktop with onboard Ethernet and the router is on the same mains circuit in the same room. The mobo is showing no other issues though it's about 6 years old so I'm planning on getting a new one this year anyway, mostly because it has no NVMe slots and can only support 16GB of RAM.
If the problem persists with the new NIC I'll know to look elsewhere. Phase of the moon, maybe.
poc
On Sun, 2020-02-09 at 14:12 +0000, Patrick O'Callaghan wrote:
On Sun, 2020-02-09 at 23:39 +1030, Tim via users wrote:
On Sun, 2020-02-09 at 08:05 -0400, George N. White III wrote:
Check components near the ethernet port for signs of damage.
Though there's every chance that there won't be any visible signs. Fried electronic parts don't have to be charred.
I have to periodically replace ethernet switches, and/or network cards on computers that are connected between buildings. There can be a significant voltage difference on the mains wiring between buildings, and even between circuits within a building.
It seems that few ethernet interfaces bother to use galvanic isolating transformers, or opto-coupling, so they're vulnerable to voltages on earthing.
Static shock is also a possibility (the inevitable walking across the carpet and zapping things, or people wearing static-electricity-generating clothing).
Our recent computers have motherboard Ethernet ports; I don't fancy the chances that damage from a zapped Ethernet port will be limited to just the Ethernet port components. The previous dead network cards didn't just stop networking; they would hang the PC, prevent booting, and cause random crashes.
Yes, I'm fairly sceptical as to this being the explanation. This is a home desktop with onboard Ethernet and the router is on the same mains circuit in the same room. The mobo is showing no other issues though it's about 6 years old so I'm planning on getting a new one this year anyway, mostly because it has no NVMe slots and can only support 16GB of RAM.
If the problem persists with the new NIC I'll know to look elsewhere. Phase of the moon, maybe.
OK, installed the new NIC and no errors so far, touch wood. I also didn't notice any obvious damage to the mobo.
However, it's still running at 100Mb/s:

$ sudo ethtool enp4s0
Settings for enp4s0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes          <--------------------*
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: No         <--------------------*
        Advertised FEC modes: Not reported
        Speed: 100Mb/s                          <--------------------*
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off                   <--------------------*
        Supports Wake-on: pumbg
        Wake-on: d
        Current message level: 0x00000033 (51)
                               drv probe ifdown ifup
        Link detected: yes
Note that Auto-negotiation is Off (unlike the old NIC, which always had it On).
I see that /etc/sysconfig/network-scripts/ifcfg-Wired_connection_1 has:

ETHTOOL_OPTS="autoneg off speed 100 duplex full"
So I changed that to turn autoneg on and speed to 1000, and rebooted. The system came up with no network, so I reverted the change. Clearly that isn't the right way to do it.
Recommendations are welcome.
poc
On 2020-02-10 20:14, Patrick O'Callaghan wrote:
So I changed that to turn autoneg on and speed to 1000, and rebooted. The system came up with no network, so I reverted the change. Clearly that isn't the right way to do it.
Recommendations are welcome.
Well, I use network manager and my network-script for enp2s0 only has
ETHTOOL_OPTS="autoneg on"
Could be that if you have autoneg on and then specify other settings that should be taken care of by the autoneg, you cause issues.
ethtool shows
Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                     100baseT/Half 100baseT/Full
                                     1000baseT/Full
Link partner advertised pause frame use: Symmetric Receive-only
Link partner advertised auto-negotiation: Yes
Link partner advertised FEC modes: Not reported
Speed: 1000Mb/s
Duplex: Full
for the link.
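If the connection is under NetworkManager's control anyway, nmcli can set the same thing; a sketch (your connection name will differ):

# Let autonegotiation decide; clear any pinned speed/duplex
nmcli connection modify "Wired connection 1" \
    802-3-ethernet.auto-negotiate yes \
    802-3-ethernet.speed 0 \
    802-3-ethernet.duplex ""
nmcli connection up "Wired connection 1"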
On Mon, 2020-02-10 at 20:40 +0800, Ed Greshko wrote:
On 2020-02-10 20:14, Patrick O'Callaghan wrote:
So I changed that to turn autoneg on and speed to 1000, and rebooted. The system came up with no network, so I reverted the change. Clearly that isn't the right way to do it.
Recommendations are welcome.
Well, I use network manager and my network-script for enp2s0 only has
ETHTOOL_OPTS="autoneg on"
Could be that if you have autoneg on and then specify other settings that should be taken care of by the autoneg, you cause issues.
I tried that (note that I've never touched that file before now so I don't know where the settings originally came from) and now it does come up, but is still running at 100Mbps. I rebooted both my box and the router, but no difference. Both ends definitely support 1000Mbps but my end now shows:
$ sudo ethtool enp4s0
Settings for enp4s0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes        <---------------------*
        Advertised FEC modes: Not reported
        Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                             100baseT/Half 100baseT/Full
        Link partner advertised pause frame use: Symmetric Receive-only
        Link partner advertised auto-negotiation: Yes
        Link partner advertised FEC modes: Not reported
        Speed: 100Mb/s                          <---------------------*
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on                    <---------------------*
        Supports Wake-on: pumbg
        Wake-on: d
        Current message level: 0x00000033 (51)
                               drv probe ifdown ifup
        Link detected: yes
poc
On 2020-02-10 21:36, Patrick O'Callaghan wrote:
On Mon, 2020-02-10 at 20:40 +0800, Ed Greshko wrote:
On 2020-02-10 20:14, Patrick O'Callaghan wrote:
So I changed that to turn autoneg on and speed to 1000, and rebooted. The system came up with no network, so I reverted the change. Clearly that isn't the right way to do it.
Recommendations are welcome.
Well, I use network manager and my network-script for enp2s0 only has
ETHTOOL_OPTS="autoneg on"
Could be that if you have autoneg on and then specify other settings that should be taken care of by the autoneg, you cause issues.
I tried that (note that I've never touched that file before now so I don't know where the settings originally came from) and now it does come up, but is still running at 100Mbps. I rebooted both my box and the router, but no difference. Both ends definitely support 1000Mbps but my end now shows:
$ sudo ethtool enp4s0
Settings for enp4s0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes        <---------------------*
        Advertised FEC modes: Not reported
        Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                             100baseT/Half 100baseT/Full
        Link partner advertised pause frame use: Symmetric Receive-only
        Link partner advertised auto-negotiation: Yes
        Link partner advertised FEC modes: Not reported
        Speed: 100Mb/s                          <---------------------*
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on                    <---------------------*
        Supports Wake-on: pumbg
        Wake-on: d
        Current message level: 0x00000033 (51)
                               drv probe ifdown ifup
        Link detected: yes
The key is....
Link partner advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full
The other side is saying that it only supports 100Mb/s.
On Mon, 2020-02-10 at 22:04 +0800, Ed Greshko wrote:
The key is....
Link partner advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full
The other side is saying that it only supports 100Mb/s.
Interesting. I have another machine (my NAS) connecting at 1Gbps to another port on the same router. Unfortunately it runs an ancient version of Debian so doesn't have ethtool.
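Though I suppose sysfs would tell me without ethtool; something like this (interface name on the NAS assumed):

# The kernel reports the negotiated link speed in Mb/s here
cat /sys/class/net/eth0/speed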
poc
On Mon, 10 Feb 2020 at 12:12, Patrick O'Callaghan pocallaghan@gmail.com wrote:
On Mon, 2020-02-10 at 22:04 +0800, Ed Greshko wrote:
The key is....
Link partner advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full
The other side is saying that it only supports 100Mb/s.
Interesting. I have another machine (my NAS) connecting at 1Gbps to another port on the same router. Unfortunately it runs an ancient version of Debian so doesn't have ethtool.
1000baseT uses all 4 pairs in the cable, while 100baseT uses 2 pairs, so this sounds like a bad cable. Improperly terminated cables caused problems for me when 1000baseT came in. Some patch cables had uneven pins in the modular plugs when you look at the plug end-on. I suspect the installer used a worn crimping tool and just checked for 100baseT connectivity. There are commercial cable testers, but I found diagnostic software (DOS?) specific to the ethernet device in a laptop that did a good job of detecting the bad cables, and I was able to borrow a high-quality crimping tool to tweak the badly crimped plugs.
A proper tester can find issues you won't find using PC software: https://serverfault.com/questions/426817/how-to-test-cat5-cat6-cable-runs-us...
On 2020-02-11 00:10, Patrick O'Callaghan wrote:
On Mon, 2020-02-10 at 22:04 +0800, Ed Greshko wrote:
The key is....
Link partner advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full
The other side is saying that it only supports 100Mb/s.
Interesting. I have another machine (my NAS) connecting at 1Gbps to another port on the same router. Unfortunately it runs an ancient version of Debian so doesn't have ethtool.
A question and a test.
If your NAS doesn't have ethtool, how are you certain what speed it is connected at?
Well, move the cable from the back of the NAS to the back of the machine with the issue. This way the machine gets the cable and the port which you believe to be working.
Meaning, it could be the port on the router which has an issue.
And, BTW, what is the make/model of the router?
On Tue, 2020-02-11 at 04:12 +0800, Ed Greshko wrote:
On 2020-02-11 00:10, Patrick O'Callaghan wrote:
On Mon, 2020-02-10 at 22:04 +0800, Ed Greshko wrote:
The key is....
Link partner advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full
The other side is saying that it only supports 100Mb/s.
Interesting. I have another machine (my NAS) connecting at 1Gbps to another port on the same router. Unfortunately it runs an ancient version of Debian so doesn't have ethtool.
A question and a test.
If your NAS doesn't have ethtool how are you certain what speed it is connected at?
The router shows the port running at 1000Mbps.
Well, move the cable from the back of the NAS to the back of the machine with the issue. This way the machine gets the cable and the port which you believe to be working.
I switched the cable that came with the router for the Cat-6 I was using when this problem arose originally, but without changing the port.
It's now running at 1000Mbps. Clearly the (new) cable was at fault.
Thanks.
poc
And I am going to guess that the extra ethtool options you found configured were inhibiting your previous cable tests.
On Mon, Feb 10, 2020 at 4:32 PM Patrick O'Callaghan pocallaghan@gmail.com wrote:
On Mon, 2020-02-10 at 13:50 -0400, George N. White III wrote:
1000baseT uses all 4 pairs in the cable, while 100baseT uses 2 pairs, so this sounds like bad cable.
You're right, it was the cable. See my reply to Ed.
Thanks for your help.
poc
Patrick, glad to see you got your issue sorted out! I've experienced a similar issue with network cables once or twice.
And now a word from the original poster... :-)
On 2/8/20 5:18 PM, Dave Ulrick wrote:
I've ordered a PCIe Gigabit Ethernet card in hopes that it will have a chipset that isn't affected by this issue.
My issue was very different from Patrick's...mine was that transmit performance was 7% worse than receive performance when using a network interface with the RTL8168evl chipset: 870 Mbps transmit, 940 Mbps receive.
The PCIe card arrived today and is now installed. It has the RTL8168e chipset:
[ 3.007475] r8169 0000:03:00.0 eth0: RTL8168e/8111e, 34:e8:94:db:6c:bc, XID 2c2, IRQ 31
It has fixed my Ethernet performance issue!
I'm back to 940 Mbits/sec both transmit and receive. iperf3 transmit results:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   115 MBytes   963 Mbits/sec   48    413 KBytes
[  5]   1.00-2.00   sec   112 MBytes   938 Mbits/sec  144    327 KBytes
[  5]   2.00-3.00   sec   112 MBytes   939 Mbits/sec  144    269 KBytes
[  5]   3.00-4.00   sec   111 MBytes   929 Mbits/sec   96    320 KBytes
[  5]   4.00-5.00   sec   113 MBytes   948 Mbits/sec   96    287 KBytes
[  5]   5.00-6.00   sec   112 MBytes   938 Mbits/sec   96    327 KBytes
[  5]   6.00-7.00   sec   112 MBytes   938 Mbits/sec  144    294 KBytes
[  5]   7.00-8.00   sec   112 MBytes   940 Mbits/sec   96    362 KBytes
[  5]   8.00-9.00   sec   112 MBytes   938 Mbits/sec  144    242 KBytes
[  5]   9.00-10.00  sec   112 MBytes   939 Mbits/sec   96    318 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec  1104            sender
[  5]   0.00-10.00  sec  1.09 GBytes   938 Mbits/sec                  receiver
The retries are most likely due to the destination host being on a different switch so the packets have to pass through two switches. When the destination host is on the same switch, there are no retries:
[  5] local 192.168.4.6 port 60268 connected to 192.168.4.15 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   114 MBytes   959 Mbits/sec    0    373 KBytes
[  5]   1.00-2.00   sec   112 MBytes   939 Mbits/sec    0    392 KBytes
[  5]   2.00-3.00   sec   112 MBytes   941 Mbits/sec    0    392 KBytes
[  5]   3.00-4.00   sec   112 MBytes   943 Mbits/sec    0    392 KBytes
[  5]   4.00-5.00   sec   112 MBytes   941 Mbits/sec    0    443 KBytes
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec    0    443 KBytes
[  5]   6.00-7.00   sec   112 MBytes   941 Mbits/sec    0    443 KBytes
[  5]   7.00-8.00   sec   112 MBytes   941 Mbits/sec    0    443 KBytes
[  5]   8.00-9.00   sec   112 MBytes   941 Mbits/sec    0    443 KBytes
[  5]   9.00-10.00  sec   112 MBytes   941 Mbits/sec    0    443 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec    0             sender
[  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  receiver
'ethtool -k' shows that the offload options of interest are turned on:
tcp-segmentation-offload: on
generic-segmentation-offload: on
Compare with the onboard Ethernet interface with the RTL8168evl chipset:
[ 3.015108] r8169 0000:04:00.0 eth1: RTL8168evl/8111evl, 94:de:80:21:61:12, XID 2c9, IRQ 32
tcp-segmentation-offload: off
generic-segmentation-offload: off [requested on]
iperf3 results:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   105 MBytes   882 Mbits/sec    0    310 KBytes
[  5]   1.00-2.00   sec   106 MBytes   886 Mbits/sec    0   1.03 MBytes
[  5]   2.00-3.00   sec   102 MBytes   860 Mbits/sec    0   1.03 MBytes
[  5]   3.00-4.00   sec   104 MBytes   870 Mbits/sec    0   1.03 MBytes
[  5]   4.00-5.00   sec   104 MBytes   870 Mbits/sec    0   1.08 MBytes
[  5]   5.00-6.00   sec   104 MBytes   870 Mbits/sec    0   1.08 MBytes
[  5]   6.00-7.00   sec   104 MBytes   870 Mbits/sec    0   1.08 MBytes
[  5]   7.00-8.00   sec   102 MBytes   860 Mbits/sec    0   1.08 MBytes
[  5]   8.00-9.00   sec   104 MBytes   870 Mbits/sec    0   1.08 MBytes
[  5]   9.00-10.00  sec   104 MBytes   870 Mbits/sec    0   1.08 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.01 GBytes   871 Mbits/sec    0             sender
[  5]   0.00-10.00  sec  1.01 GBytes   868 Mbits/sec                  receiver
Likewise the traffic passed through two switches, but the speed was throttled (due I assume to the disabled offload options) so there was adequate bandwidth for all packets to arrive in a timely way.
BTW, I've discovered that two mini-PCs I have on my LAN also have the RTL8168evl chipset, have the same two offload parameters turned off, and suffer from the same network transmit performance issue. I use them as media players (MythTV front ends) and they perform acceptably with high-def video content, so I'm not concerned about them.
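If anyone wants to check their own r8169 hardware, comparing the offload state of two interfaces is a one-liner (interface names assumed):

# Show only the feature lines that differ between the two NICs
diff <(ethtool -k enp3s0) <(ethtool -k enp4s0)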
Conclusion: a change to the r8169 module in recent Linux kernels regressed performance with the RTL8168evl chipset.
Dave
On Mon, 2020-02-10 at 22:31 +0000, Patrick O'Callaghan wrote:
I switched the cable that came with the router for the Cat-6 I was using when this problem arose originally, but without changing the port.
It's now running at 1000Mbps. Clearly the (new) cable was at fault.
Good to hear. It's a wonder that network cables aren't the cause of more things, or perhaps it's not even realised the network isn't running properly (chances are some people won't notice that their network is running at 100 Mb/s instead of 1000). Cables get dragged about, badly kinked, crushed against the wall, and equipment-supplied ones can be crappy. I've got one that came with equipment that has 8 thin parallel wires, none of them twisted together.
And I've got a weird ISP-supplied router with one 1 Gb/s port and three 100 Mb/s ports. Quite why anyone would build something like that, I don't know. And it's hard to tell which is the gigabit port; there's no printing, just tiny indented writing on white plastic.
On Tue, 2020-02-11 at 12:33 +1030, Tim via users wrote:
On Mon, 2020-02-10 at 22:31 +0000, Patrick O'Callaghan wrote:
I switched the cable that came with the router for the Cat-6 I was using when this problem arose originally, but without changing the port.
It's now running at 1000Mbps. Clearly the (new) cable was at fault.
Good to hear. It's a wonder that network cables aren't the cause of more things, or perhaps it's not even realised the network isn't running properly (chances are some people won't notice that their network is running at 100 Mb/s instead of 1000). Cables get dragged about, badly kinked, crushed against the wall, and equipment-supplied ones can be crappy. I've got one that came with equipment that has 8 thin parallel wires, none of them twisted together.
And I've got a weird ISP-supplied router with one 1 Gb/s port and three 100 Mb/s ports. Quite why anyone would build something like that, I don't know. And it's hard to tell which is the gigabit port; there's no printing, just tiny indented writing on white plastic.
Who knows why ISPs do anything? In fact my ISP (Zen Internet) is pretty responsive and their support crew sound like human beings rather than droids. That said, it's clear that the thin white cable supplied with the router is simply not adequate for Gigabit Ethernet. Once I sussed the problem, it was obvious that this was the case and I should have realised it earlier. My old NAS is connected with a proper CAT-6 cable and it was going full speed.
I'll be mailing Zen to let them know they should inform their customers of this.
poc