This is quite embarrassing, but I'm banging my head against a wall and hoping other eyes will spot some obvious mistake.
I have an F31 guest (fedora30) running in QEMU/KVM on an F31 host (Bree). I want to mount a host directory via NFS in the guest. I set this up a long time ago and it has worked through several Fedora releases without issue, but in a fit of spring cleaning I did a fresh install of F31 rather than my usual update, so of course now it doesn't work. Clearly I did something right back in the day and have now forgotten what it was.
The guest can ping the host and ping the wider Internet, so basic connectivity works (this is via a NAT-style connection). The host can ssh into the guest.
Firewall setup on the host:

[poc@Bree ~]$ firewall-cmd --list-all
home (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp3s0
  sources:
  services: dhcp dhcpv6-client dns libvirt mdns mountd nfs nfs3 plex rpc-bind rsyncd samba samba-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
And on the guest:

[poc@fedora30 ~]$ sudo firewall-cmd --list-all
home (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources:
  services: dhcpv6-client mdns samba-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
Guest ip:

[poc@fedora30 ~]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ca:07:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.156/24 brd 192.168.122.255 scope global dynamic noprefixroute enp1s0
       valid_lft 2893sec preferred_lft 2893sec
    inet6 fe80::2e77:5bc1:d19a:6045/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
and routing:

[poc@fedora30 ~]$ ip route
default via 192.168.122.1 dev enp1s0 proto dhcp metric 100
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.156 metric 100
[poc@Bree ~]$ ping fedora30
PING fedora30 (192.168.122.156) 56(84) bytes of data.
64 bytes from fedora30 (192.168.122.156): icmp_seq=1 ttl=64 time=20.1 ms
...
Exports on the host:

[poc@Bree ~]$ sudo exportfs
/home/Media       192.168.0.0/16
/home/poc/Shared  vm-*
/home/poc/Shared  fedora*
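For reference, exportfs output like that would come from /etc/exports entries roughly along these lines (the option lists here are invented for illustration; only the paths and client patterns are from the output above):

```
/home/Media       192.168.0.0/16(ro)
/home/poc/Shared  vm-*(rw,sync)
/home/poc/Shared  fedora*(rw,sync)
```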
But from the guest:

[poc@fedora30 ~]$ showmount -e bree
clnt_create: RPC: Unable to receive
What am I missing?
poc
On 11/17/19 2:48 AM, Patrick O'Callaghan wrote:
Well, I don't currently have a host-to-guest NFS setup, but I do have a guest-to-guest one: the server is an F30 VM and the client is an F31 VM.
On the server:

[root@f30k etc]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources:
  services: dhcpv6-client mdns mountd nfs rpc-bind ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
On the client:

[root@f31gq ~]# firewall-cmd --list-all
FedoraWorkstation (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources:
  services: dhcpv6-client mdns samba-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
On the server:

[root@f30k etc]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:22:fa:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.64/24 brd 192.168.122.255 scope global dynamic noprefixroute enp1s0
       valid_lft 2707sec preferred_lft 2707sec
    inet6 fe80::7745:e38f:3a8f:5343/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
On the client:

[root@f31gq ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:26:a7:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.75/24 brd 192.168.122.255 scope global dynamic noprefixroute enp1s0
       valid_lft 2649sec preferred_lft 2649sec
    inet6 fe80::781e:ec29:2d21:7e7d/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
On the server:

[root@f30k etc]# exportfs
/mnt    192.168.0.0/16
On the client:

[root@f31gq ~]# showmount -e 192.168.122.64
Export list for 192.168.122.64:
/mnt 192.168.0.0/16
And, of course:

[root@f31gq ~]# mount -t nfs4 192.168.122.64:/mnt /mnt
[root@f31gq ~]# ls /mnt
[root@f31gq ~]# touch /mnt/x
[root@f31gq ~]# ls /mnt
x
[root@f31gq ~]# umount /mnt
[root@f31gq ~]# ls /mnt
[root@f31gq ~]#
On Sat, 16 Nov 2019 at 14:50, Patrick O'Callaghan pocallaghan@gmail.com wrote:
There have been changes to NFS, particularly NFSv4, and also some measures to make it less insecure. Meanwhile, 9p is widely used to share files between VM guests and hosts (mostly because it needs fewer host resources). See:

https://www.linux-kvm.org/page/9p_virtio which gives an example for Fedora 15.

https://wiki.qemu.org/Documentation/9psetup looks more current. It begins with kernel config, which should not be needed with Fedora.

http://blog.allenx.org/2015/07/03/virtio-9p-note may also be helpful.

https://unix.stackexchange.com/questions/240281/virtfs-plan-9-vs-nfs-as-tool... has some pros and cons for NFS versus 9p in a production environment, but is several years old now.
Unless you have a specific need for NFS, it may be a better use of your time to configure 9p passthrough.
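For what it's worth, a minimal 9p sketch (the path, mount_tag, and guest mount point below are made-up examples, and the QEMU options are elided to the 9p-specific part):

```shell
# Host side: export a host directory into the guest over virtio-9p.
# (libvirt users would add an equivalent <filesystem> device instead.)
qemu-system-x86_64 ... \
  -virtfs local,path=/home/poc/Shared,mount_tag=hostshare,security_model=mapped-xattr

# Guest side: mount the share by its tag
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/host
```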
On 11/17/19 2:48 AM, Patrick O'Callaghan wrote:
But from the guest:

[poc@fedora30 ~]$ showmount -e bree
clnt_create: RPC: Unable to receive
What am I missing?
OK, I put up an nfs server on the host and get the same error.
If I disable the firewall on the host, it succeeds.
Strangely, looking at Wireshark output it seems port 111 is unreachable. Even if I explicitly enable that port, the problem persists.
On 11/17/19 8:35 AM, Ed Greshko wrote:
OK, I fixed it....
I put the interface virbr0 in the FW zone libvirt.
On the host...
[root@meimei ~]# firewall-cmd --list-all --zone=libvirt
libvirt (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: virbr0
  sources:
  services: dhcp dhcpv6 dns mountd nfs nfs3 rpc-bind ssh tftp
  ports:
  protocols: icmp ipv6-icmp
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
        rule priority="32767" reject
Then on the guest/client
[root@f31gq ~]# showmount -e 192.168.122.1
Export list for 192.168.122.1:
/home/egreshko/Dome 192.168.0.0/16
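For the record, a sketch of that fix as plain firewall-cmd invocations (service names per the stock firewalld definitions; run as root on the host):

```shell
# Open the NFS-related services in the libvirt zone (runtime only)
firewall-cmd --zone=libvirt --add-service=nfs --add-service=nfs3 \
             --add-service=mountd --add-service=rpc-bind

# Repeat with --permanent so the change survives a reload or reboot
firewall-cmd --permanent --zone=libvirt --add-service=nfs --add-service=nfs3 \
             --add-service=mountd --add-service=rpc-bind
```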
On 11/17/19 2:48 AM, Patrick O'Callaghan wrote:
What am I missing?
A follow-up of sorts....
In your original post you indicated
Firewall setup on the host:

[poc@Bree ~]$ firewall-cmd --list-all
home (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp3s0
As far as missing, I think we both missed that "virbr0" didn't appear in the interfaces line.
I was thrown off in my setup because the firewall-config GUI shows the default zone of the virbr0 interface as "public", and that is where I was making changes. But, I think, it was always in the libvirt zone.

I think I need to file a BZ against firewall-config.
On Sat, Nov 16, 2019 at 7:50 PM Patrick O'Callaghan pocallaghan@gmail.com wrote:
But from the guest:

[poc@fedora30 ~]$ showmount -e bree
clnt_create: RPC: Unable to receive
What am I missing?
Does "showmount ..." list anything on "bree" itself?
What's the output of the following (on "bree")?

cat /proc/fs/nfsd/versions
ss -ntul | grep -E "111|2049|20048" | column -t

"showmount ..." won't work if the first doesn't include "+3", or if the second doesn't show rpcbind, nfsd, and mountd lines.

FTR, for firewalld:

  "mountd" opens 20048, tcp & udp
  "nfs" opens 2049, tcp
  "nfs3" opens 2049, tcp & udp
  "rpc-bind" opens 111, tcp & udp
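To illustrate the check being described, here is a self-contained sketch that scans ss-style output for the three ports (the listing is fabricated for the example; on a real server you would feed it `ss -ntul` directly):

```shell
# Fabricated 'ss -ntul'-style listing from a working NFS server
ss_output='udp  UNCONN  0  0    0.0.0.0:111    0.0.0.0:*
udp  UNCONN  0  0    0.0.0.0:20048  0.0.0.0:*
tcp  LISTEN  0  128  0.0.0.0:111    0.0.0.0:*
tcp  LISTEN  0  64   0.0.0.0:2049   0.0.0.0:*
tcp  LISTEN  0  128  0.0.0.0:20048  0.0.0.0:*'

# rpcbind (111), nfsd (2049) and mountd (20048) should all be listening
for port in 111 2049 20048; do
  if printf '%s\n' "$ss_output" | grep -q ":$port "; then
    echo "port $port: open"
  else
    echo "port $port: MISSING"
  fi
done
```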
On Sun, Nov 17, 2019 at 1:36 AM Ed Greshko ed.greshko@greshko.com wrote:
OK, I put up an nfs server on the host and get the same error.
If I disable the firewall on the host, it succeeds.
Strangely, looking at wireshark output it seems port 111 is unreachable. Even if I explicitly enable that port the problem persists.
On which port is mountd running?!
On 11/17/19 4:59 PM, Tom H wrote:
On which port is mountd running?!
Keep reading the thread. :-)
On Sun, Nov 17, 2019 at 1:49 AM Ed Greshko ed.greshko@greshko.com wrote:
On 11/17/19 8:35 AM, Ed Greshko wrote:
On 11/17/19 2:48 AM, Patrick O'Callaghan wrote:
But from the guest: [poc@fedora30 ~]$ showmount -e bree clnt_create: RPC: Unable to receive
What am I missing?
OK, I put up an nfs server on the host and get the same error.
If I disable the firewall on the host, it succeeds.
Strangely, looking at wireshark output it seems port 111 is unreachable. Even if I explicitly enable that port the problem persists.
OK, I fixed it....
I put the interface virbr0 in the FW zone libvirt.
Wow. Weird that virbr0 is firewalled, but good to know. Thanks.
On Sun, Nov 17, 2019 at 10:01 AM Ed Greshko ed.greshko@greshko.com wrote:
On 11/17/19 4:59 PM, Tom H wrote:
On which port is mountd running?!
Keep reading the thread. :-)
Got there. Thanks, LOL.
On 11/17/19 5:07 PM, Tom H wrote:
Wow. Weird that virbr0 is firewalled, but good to know. Thanks.
Yep, and as my other post states, I think it was always there. The description in /usr/lib/firewalld/zones/libvirt.xml explains it:
<description>
  The default policy of "ACCEPT" allows all packets to/from interfaces in the
  zone to be forwarded, while the (*low priority*) reject rule blocks any
  traffic destined for the host, except those services explicitly listed (that
  list can be modified as required by the local admin). This zone is intended
  to be used only by libvirt virtual networks - libvirt will add the bridge
  devices for all new virtual networks to this zone by default.
</description>
I wrote https://bugzilla.redhat.com/show_bug.cgi?id=1773273 as the GUI sent me down the wrong path.
On Sun, 2019-11-17 at 08:48 +0800, Ed Greshko wrote:
OK, I fixed it....
I put the interface virbr0 in the FW zone libvirt.
That did it. In fact virbr0 was already in the libvirt zone, but the various NFS services were not enabled there.

This stuff is definitely not obvious. Note that you have to repeat the service additions with the --permanent flag, or they will all be lost on the next reboot.
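For anyone following along, both routes to making runtime changes permanent (real firewall-cmd options; the service name is just an example):

```shell
# Route 1: once the runtime configuration is working, copy it wholesale
firewall-cmd --runtime-to-permanent

# Route 2: make each addition twice, once runtime and once permanent
firewall-cmd --zone=libvirt --add-service=nfs
firewall-cmd --permanent --zone=libvirt --add-service=nfs
```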
Thanks Ed.
poc
On Sun, 2019-11-17 at 12:40 +0800, Ed Greshko wrote:
I was thrown in my setup since when using the firewall-config GUI it shows the default zone of the virbr0 interface to be "public" and that is where I was making changes. But, I think, it was always in the libvirt zone.
I think I need file a BZ against firewall-config.
The same thing had me confused. I added a comment to your BZ report. Having a GUI that lies about the real situation is worse than not having one at all. Makes me wonder what else it's hiding.
poc
On Sat, 2019-11-16 at 20:35 -0400, George N. White III wrote:
Unless you have a specific need for NFS it may be better use of your time to configure 9p passthru.
Yes, I've looked casually at 9P in the past but didn't find the documentation very helpful. NFS is enough for me at the moment, as performance is not really an issue.
Thanks all the same.
poc
On 11/17/19 9:42 PM, Patrick O'Callaghan wrote:
That did it. In fact virbr0 was already in the libvirt zone, but the various NFS services were not installed there.
This stuff is definitely not obvious. Note that you have to repeat the service additions with the --permanent flag or it will all be lost on the next reboot.
Thanks Ed.
Welcome. In the process I learned that "firewall-cmd --get-active-zones" would have shown the missing information sooner and I would have edited the correct zone. :-)
On Sun, 2019-11-17 at 09:55 +0100, Tom H wrote:
On Sat, Nov 16, 2019 at 7:50 PM Patrick O'Callaghan pocallaghan@gmail.com wrote:
This is quite embarrassing, but I'm banging my head against a wall and hoping other eyes will spot some obvious mistake.
I have an F31 guest (fedora30) running in QEMU/KVM on an F31 host (Bree). I want to mount a host directory via NFS in the guest. I set this up a long time ago and it has worked through several Fedora releases without issue, but in a fit of spring cleaning I did a fresh install of F31 rather than my usual update, so of course now it doesn't work. Clearly I did something right back in the day and have now forgotten what it was.
The guest can ping the host and ping the wider Internet, so basic connectivity works (this is via a NAT-style connection). The host can ssh into the guest.
Firewall setup on the host: [poc@Bree ~]$ firewall-cmd --list-all home (active) target: default icmp-block-inversion: no interfaces: enp3s0 sources: services: dhcp dhcpv6-client dns libvirt mdns mountd nfs nfs3 plex rpc-bind rsyncd samba samba-client ssh ports: protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules:
And on the guest: [poc@fedora30 ~]$ sudo firewall-cmd --list-all home (active) target: default icmp-block-inversion: no interfaces: enp1s0 sources: services: dhcpv6-client mdns samba-client ssh ports: protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules:
Guest ip: [poc@fedora30 ~]$ ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:ca:07:30 brd ff:ff:ff:ff:ff:ff inet 192.168.122.156/24 brd 192.168.122.255 scope global dynamic noprefixroute enp1s0 valid_lft 2893sec preferred_lft 2893sec inet6 fe80::2e77:5bc1:d19a:6045/64 scope link noprefixroute valid_lft forever preferred_lft forever
and routing: [poc@fedora30 ~]$ ip route default via 192.168.122.1 dev enp1s0 proto dhcp metric 100 192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.156 metric 100
[poc@Bree ~]$ ping fedora30 PING fedora30 (192.168.122.156) 56(84) bytes of data. 64 bytes from fedora30 (192.168.122.156): icmp_seq=1 ttl=64 time=20.1 ms ...
Exports on the host: [poc@Bree ~]$ sudo exportfs /home/Media 192.168.0.0/16 /home/poc/Shared vm-* /home/poc/Shared fedora*
But from the guest: [poc@fedora30 ~]$ showmount -e bree clnt_create: RPC: Unable to receive
What am I missing?
Does "showmount ..." list anything on "bree" itself?
What's the output of "cat /proc/fs/nfsd/versions" and "ss -ntul | grep -E "111|2049|20048" | column -t" (on "bree")?
"showmount ..." won't work if the first doesn't have "+3" or if the second doesn't have rpcbind, nfsd, and mountd lines.
FTR, for firewalld:

  "mountd" opens 20048, tcp & udp
  "nfs" opens 2049, tcp
  "nfs3" opens 2049, tcp & udp
  "rpc-bind" opens 111, tcp & udp
Thanks. I solved it by adding the services to the libvirt zone as Ed recommended.
poc
On Sun, 2019-11-17 at 08:35 +0800, Ed Greshko wrote:
OK, I put up an nfs server on the host and get the same error.
If I disable the firewall on the host, it succeeds.
Just as a matter of interest, how do you disable the firewall? The man page for firewall-cmd has several degrees of disabling for various things, but apparently nothing that just turns it off. As with SELinux, it would be useful to be able to do this for diagnostic purposes.
poc
On Sun, 2019-11-17 at 21:57 +0800, Ed Greshko wrote:
Welcome. In the process I learned that "firewall-cmd --get-active-zones" would have shown the missing information sooner and I would have edited the correct zone. :-)
Yes. Somehow one thinks that "--list-all" actually lists everything, but of course it doesn't; it shows just what's in the default zone.
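For the record, the progression of queries (all real firewall-cmd options):

```shell
# Shows only the default zone's configuration
firewall-cmd --list-all

# Lists every zone that currently has an interface or source bound to it
# (this is what would have revealed virbr0 sitting in the libvirt zone)
firewall-cmd --get-active-zones

# Dumps the configuration of all zones, active or not
firewall-cmd --list-all-zones
```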
poc
On 11/17/19 10:31 PM, Patrick O'Callaghan wrote:
Just as a matter of interest, how do you disable the firewall? The man page for firewall-cmd has several degrees of disabling for various things but apparently nothing that just turns it off. As with SElinux, it would useful to be able to do this for diagnostic purposes.
The trusty "systemctl stop firewalld" command. :-)
On Sun, 2019-11-17 at 23:59 +0800, Ed Greshko wrote:
The trusty "systemctl stop firewalld" command. :-)
I assumed that would block everything rather than open everything, given that the actual filtering is done in the kernel, not the daemon.
poc
On Sun, Nov 17, 2019 at 10:14 AM Ed Greshko ed.greshko@greshko.com wrote:
Thanks. I assume that you didn't just add virbr0 to the libvirt zone, but that you also added the three nfs-related services to this zone.
Comment from the libvirt source
/* if firewalld is active, try to set the "libvirt" zone. This is
 * desirable (for consistency) if firewalld is using the iptables
 * backend, but is necessary (for basic network connectivity) if
 * firewalld is using the nftables backend */
So it's an nftables requirement.
On Sun, Nov 17, 2019 at 3:29 PM Patrick O'Callaghan pocallaghan@gmail.com wrote:
On Sun, 2019-11-17 at 09:55 +0100, Tom H wrote:
FTR, for firewalld:
"mountd" opens 20048, tcp & udp
"nfs" opens 2049, tcp
"nfs3" opens 2049, tcp & udp
"rpc-bind" opens 111, tcp & udp
Thanks. I solved it by adding the services to the libvirt zone as Ed recommended.
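Concretely, that fix looks something like the following (a sketch, not the exact commands from the thread; service names are from Tom's list, and the zone name is the stock libvirt one; run as root, and the guard makes it a no-op where firewalld isn't installed):

```shell
# Add the NFS-related services to the libvirt zone permanently, then
# reload so the running firewall picks the changes up.
if command -v firewall-cmd >/dev/null 2>&1; then
    for svc in nfs nfs3 rpc-bind mountd; do
        firewall-cmd --permanent --zone=libvirt --add-service="$svc"
    done
    firewall-cmd --reload
    firewall-cmd --zone=libvirt --list-services   # verify
    fw_status="applied"
else
    echo "firewall-cmd not found; nothing to do"
    fw_status="skipped"
fi
```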
You're welcome.
And that answers my earlier question. Thanks.
On Sun, Nov 17, 2019 at 3:32 PM Patrick O'Callaghan pocallaghan@gmail.com wrote:
On Sun, 2019-11-17 at 08:35 +0800, Ed Greshko wrote:
If I disable the firewall on the host, it succeeds.
Just as a matter of interest, how do you disable the firewall?
systemctl stop firewalld.service
On Sun, 2019-11-17 at 21:03 +0100, Tom H wrote:
Thanks. I assume that you didn't just add virbr0 to the libvirt zone, but that you also added the three nfs-related services to this zone.
Of course. Note that virbr0 was already in the libvirt zone; I didn't add it explicitly, as far as I can recall. In fact, I noticed that whenever a VM started up (via virt-manager) I got a popup from the firewall applet saying that virbr0 was in the libvirt zone, but the applet itself always incorrectly showed virbr0 in my default zone (home), as Ed has already mentioned.
Comment from the libvirt source
/* if firewalld is active, try to set the "libvirt" zone. This is
 * desirable (for consistency) if firewalld is using the iptables
 * backend, but is necessary (for basic network connectivity) if
 * firewalld is using the nftables backend */

So it's an nftables requirement.
I've never heard of nftables. I assumed that iptables was the backend.
poc
On 11/18/19 4:03 AM, Tom H wrote:
Thanks. I assume that you didn't just add virbr0 to the libvirt zone, but that you also added the three nfs-related services to this zone.
But, of course. I thought I mentioned that I was manipulating the wrong zone due to being misled by a GUI. Oh, and my ignorance. :-)
And, that on further review, virbr0 was always in the libvirt zone.
Comment from the libvirt source
/* if firewalld is active, try to set the "libvirt" zone. This is
 * desirable (for consistency) if firewalld is using the iptables
 * backend, but is necessary (for basic network connectivity) if
 * firewalld is using the nftables backend */

So it's an nftables requirement.
Good to know...
On 11/18/19 6:27 AM, Patrick O'Callaghan wrote:
I've never heard of nftables. I assumed that iptables was the backend.
Yes, firewalld uses iptables.
nftables is a different animal. nftables.service is disabled by default. See /etc/sysconfig/nftables.conf for "hints".
On Sun, Nov 17, 2019 at 11:28 PM Patrick O'Callaghan pocallaghan@gmail.com wrote:
On Sun, 2019-11-17 at 21:03 +0100, Tom H wrote:
I've never heard of nftables. I assumed that iptables was the backend.
iptables is still the firewalld backend in Fedora.
On Mon, Nov 18, 2019 at 1:09 AM Ed Greshko ed.greshko@greshko.com wrote:
On 11/18/19 6:27 AM, Patrick O'Callaghan wrote:
I've never heard of nftables. I assumed that iptables was the backend.
Yes, firewalld uses iptables.
nftables is a different animal. nftables.service is disabled by default. See /etc/sysconfig/nftables.conf for "hints"
nftables is a successor to iptables, whose binaries now mostly symlink to "xtables-legacy-multi". There's a hint in the name... So those of us who use iptables directly are going to have to learn the nftables syntax within the next 1/2/5/10/? years.
https://firewalld.org/2018/07/nftables-backend
https://fedoraproject.org/wiki/Changes/firewalld_default_to_nftables
And there's also bpfilter.
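A quick way to see which flavour is in play on a given box (a sketch; the paths and flags are the standard ones, but verify on your system):

```shell
# The iptables binary reports which variant it is in its version string:
# "(legacy)" for classic iptables, "(nf_tables)" for the shim that
# translates rules to nftables under the hood.
if command -v iptables >/dev/null 2>&1; then
    iptables -V
fi

# firewalld's own backend choice lives in its config file; fall back to
# "unknown" where the file or the setting is absent.
backend=$(grep '^FirewallBackend' /etc/firewalld/firewalld.conf 2>/dev/null || echo "FirewallBackend=unknown")
echo "$backend"
```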
On Mon, 2019-11-18 at 20:01 +0100, Tom H wrote:
nftables is a successor to iptables, whose binaries now mostly symlink to "xtables-legacy-multi". There's a hint in the name... So those of us who use iptables directly are going to have to learn the nftables syntax within the next 1/2/5/10/? years.
https://firewalld.org/2018/07/nftables-backend
https://fedoraproject.org/wiki/Changes/firewalld_default_to_nftables
And there's also bpfilter.
Interesting, thanks.
poc
On 11/19/19 3:01 AM, Tom H wrote:
nftables is a successor to iptables, whose binaries now mostly symlink to "xtables-legacy-multi". There's a hint in the name... So those of us who use iptables directly are going to have to learn the nftables syntax within the next 1/2/5/10/? years.
https://firewalld.org/2018/07/nftables-backend
https://fedoraproject.org/wiki/Changes/firewalld_default_to_nftables
And there's also bpfilter.
Humm... Learn something new every day.
Thanks!