I've previously posted this in the libvirt mailing list because that's the package in charge of the libvirt zone, but it's probably better suited here.
firewalld version 1.3.0-1
libvirt version 9.0.0-4
network-manager version 1.42.4-1
# firewall-cmd --get-active-zones
libvirt
  interfaces: br28
public
  interfaces: dac0 dac0.100 dac0.28 ftth
# firewall-cmd --list-all --zone=public
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: dac0 dac0.100 dac0.28 ftth
  sources:
  services: dhcpv6-client ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
dac0 is an SFP+ Direct Attach cable carrying several VLANs. dac0.100 is the VLAN where I create the PPPoE connection to my FTTH provider; ftth is the ppp interface name. dac0.28 is the VLAN for the public /28 IPv4 subnet, and br28 is the bridge dac0.28 is attached to.
# brctl show
bridge name     bridge id               STP enabled     interfaces
br28            8000.d2605c025b1d       no              dac0.28
                                                        vnet1
# firewall-cmd --list-all --zone=libvirt
libvirt (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: br28
  sources:
  services: dhcp dhcpv6 dns ssh tftp
  ports:
  protocols: icmp ipv6-icmp
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
        rule priority="32767" reject
# nft list tables
table inet firewalld
table ip mangle
# nft list table ip mangle
# Warning: table ip mangle is managed by iptables-nft, do not touch!
table ip mangle {
        chain FORWARD {
                type filter hook forward priority mangle; policy accept;
                oifname "ftth" tcp flags syn / syn,rst tcp option maxseg size 1400-65495 counter packets 0 bytes 0 tcp option maxseg size set rt mtu
        }
}
The rule above is created by NetworkManager to clamp the TCP MSS to the path MTU (clamp-mss-to-pmtu) for the ftth PPPoE connection.
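For readers unfamiliar with MSS clamping, the arithmetic is simple. A minimal sketch, assuming the typical PPPoE MTU of 1492 (the thread does not state the actual MTU):

```shell
# TCP MSS = link MTU - IPv4 header (20 bytes) - TCP header (20 bytes).
# PPPoE adds 8 bytes of overhead, so the usual MTU is 1500 - 8 = 1492.
mtu=1492
mss=$((mtu - 20 - 20))
echo "$mss"   # prints: 1452
```

The nft rule does the same thing dynamically: "tcp option maxseg size set rt mtu" rewrites the MSS option on SYN packets to match the route's path MTU instead of hardcoding a value.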
This is how I create the bridge and the dac0.28 vlan with NetworkManager:
# nmcli con add ifname br28 type bridge con-name br28 ipv4.method manual ipv4.addresses MY_IP/28 connection.zone libvirt
# nmcli connection add type vlan con-name dac0.28 ifname dac0.28 vlan.parent dac0 vlan.id 28 ipv4.method disabled ipv6.method disabled master br28 slave-type bridge
I also have isc-dhcp-server, wide-dhcpv6-client and radvd running on the host.
# nmcli con
NAME      UUID                                  TYPE      DEVICE
ftth      f370639c-2712-49c2-9749-e39f17102346  pppoe     ftth
br28      e4d2aad3-ef2d-4ac0-bda5-58471f21655c  bridge    br28
lo        f0327b03-bbc3-4078-8bd1-5225df0ce153  loopback  lo
vnet1     25ae75cd-1606-4fd7-8213-09f4ef1280c4  tun       vnet1
dac0      040e747e-fd7e-41e9-b6a6-ccec9e73c022  ethernet  dac0
dac0.100  147c1632-2c60-42f3-a97a-a6733ef69f4c  vlan      dac0.100
dac0.28   cefb4bf3-dda9-465a-95d0-512ac1294a5b  vlan      dac0.28
enp1s0    81a44a95-efdc-47e2-9c12-76a0a140ca5a  ethernet  --
The previous are all dark green except lo and vnet1, which are light green (externally managed), and enp1s0, which is white (disconnected).
The br_netfilter module is not loaded, and thus net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables and net.bridge.bridge-nf-call-arptables are not even exposed in /proc/sys/net/bridge.
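This state can be verified quickly. A hedged sketch that reads /proc/modules directly (so it doesn't depend on lsmod being installed); the bridge-nf-call-* sysctls only exist under /proc/sys/net/bridge while the module is loaded:

```shell
# Check whether br_netfilter is loaded; if so, show the bridge-nf sysctls.
if grep -q '^br_netfilter ' /proc/modules 2>/dev/null; then
    echo "br_netfilter: loaded"
    for f in /proc/sys/net/bridge/bridge-nf-call-*; do
        if [ -e "$f" ]; then
            printf '%s = %s\n' "${f##*/}" "$(cat "$f")"
        fi
    done
else
    echo "br_netfilter: not loaded; bridged traffic skips the iptables hooks"
fi
```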
I don't have any nwfilter enabled yet, I'd like to get the basics working first.
The guest gets an IP (both v4 and v6) and can reach the internet. The guest can be reached from the host (on ALL ports and services, not just dhcp/dhcpv6/dns/ssh/tftp/icmp/ipv6-icmp), but unfortunately not from the outside:
$ ping GUEST_IP
PING GUEST_IP (GUEST_IP) 56(84) bytes of data.
From FTTH_IP icmp_seq=1 Packet filtered
$ ssh GUEST_IP
ssh: connect to host GUEST_IP port 22: No route to host
I also tried routed networking, which works fine, but in that case libvirt is in charge of creating everything (the bridge, assigning the libvirt-routed zone, enabling the libvirt-routed policies, etc.), while bridged networking must be configured manually (at least on non-RedHat distros).
What's wrong? It looks suspiciously similar to https://bbs.archlinux.org/viewtopic.php?id=274670
Thanks, Niccolo' Belli
On Thu, Jul 13, 2023 at 11:53 AM Niccolò Belli darkbasic@linuxsystems.it wrote:
$ ping GUEST_IP
PING GUEST_IP (GUEST_IP) 56(84) bytes of data.
From FTTH_IP icmp_seq=1 Packet filtered
This is ICMP Administratively Prohibited; to the best of my knowledge firewalld does not use it by default.
$ ssh GUEST_IP
ssh: connect to host GUEST_IP port 22: No route to host
That implies a routing configuration problem somewhere; again, not a firewalld issue at first glance.
Does it work if you stop firewalld?
Thanks, Niccolo' Belli
On 2023-07-13 11:22, Andrei Borzenkov wrote:
$ ping GUEST_IP
PING GUEST_IP (GUEST_IP) 56(84) bytes of data.
From FTTH_IP icmp_seq=1 Packet filtered
This is ICMP Administratively Prohibited; to the best of my knowledge firewalld does not use it by default.
Yet with firewalld disabled it works as expected.
$ ssh GUEST_IP
ssh: connect to host GUEST_IP port 22: No route to host
That implies a routing configuration problem somewhere; again, not a firewalld issue at first glance.
Does it work if you stop firewalld?
Yes it does work.
On Thu, Jul 13, 2023 at 12:50 PM Niccolò Belli darkbasic@linuxsystems.it wrote:
On 2023-07-13 11:22, Andrei Borzenkov wrote:
$ ping GUEST_IP
PING GUEST_IP (GUEST_IP) 56(84) bytes of data.
From FTTH_IP icmp_seq=1 Packet filtered
This is ICMP Administratively Prohibited; to the best of my knowledge firewalld does not use it by default.
Scratch that. firewalld does default to administratively prohibited, sorry.
Yet with firewalld disabled it works as expected.
By default firewalld blocks forwarding between interfaces in different zones. You need to create a policy to allow it. See https://firewalld.org/documentation/concepts.html for an overview.
On 2023-07-13 12:02, Andrei Borzenkov wrote:
By default firewalld blocks forwarding between interfaces in different zones. You need to create a policy to allow it.
That makes no sense: what's the purpose of the ACCEPT target in the libvirt zone then?
https://github.com/libvirt/libvirt/blob/master/src/network/libvirt.zone
"The default policy of "ACCEPT" allows all packets to/from interfaces in the zone to be forwarded, while the (*low priority*) reject rule blocks any traffic destined for the host, except those services explicitly listed"
On Thu, Jul 13, 2023 at 1:42 PM Niccolò Belli darkbasic@linuxsystems.it wrote:
On 2023-07-13 12:02, Andrei Borzenkov wrote:
By default firewalld blocks forwarding between interfaces in different zones. You need to create a policy to allow it.
That makes no sense: what's the purpose of the ACCEPT target in the libvirt zone then?
Any zone applies to incoming traffic (from the host's point of view) only. The libvirt zone is intended for guest -> host traffic. Your problem is with forwarded traffic coming from outside and directed to your guest; the host running firewalld is just a router here. You cannot use a firewalld zone to control forwarded traffic.
On 2023-07-13 13:03, Andrei Borzenkov wrote:
Any zone applies to incoming traffic (from the host point of view) only.
I agree with that.
libvirt zone is intended for guest -> host traffic.
It surely is, because there are services and protocols in that zone which can only refer to INCOMING traffic. But it must ALSO affect forwarded traffic, otherwise they wouldn't have put an ACCEPT target into it. I've always found the "ACCEPT target means forwarding is enabled" convention counterintuitive, but if I understand correctly we're not simply talking about intra-zone forwarding, otherwise they would have used --add-forward instead (which they didn't). The description is also pretty clear: "The default policy of ACCEPT allows all packets to/from interfaces in the zone to be forwarded". If that were not enough, libvirt has its own set of policies for the libvirt-routed zone but none for the libvirt zone: https://github.com/libvirt/libvirt/blob/master/src/network/libvirt-routed-in... https://github.com/libvirt/libvirt/blob/master/src/network/libvirt-routed-ou... https://github.com/libvirt/libvirt/blob/master/src/network/libvirt-to-host.p...
It makes sense, because the libvirt-routed zone lacks the ACCEPT target and thus you need policies instead: https://github.com/libvirt/libvirt/blob/master/src/network/libvirt-routed.zo...
If I'm missing something, please explain what the ACCEPT target really means.
On Wed, Jul 12, 2023 at 12:37:20PM +0200, Niccolò Belli wrote:
On 2023-07-13 13:03, Andrei Borzenkov wrote:
Any zone applies to incoming traffic (from the host point of view) only.
I agree with that.
libvirt zone is intended for guest -> host traffic.
It surely is, because there are services and protocols in that zone which can only refer to INCOMING traffic. But it must ALSO affect forwarded traffic, otherwise they wouldn't have put an ACCEPT target into it. I've always found the "ACCEPT target means forwarding is enabled" convention counterintuitive, but if I understand correctly we're not simply talking about intra-zone forwarding, otherwise they would have used --add-forward instead (which they didn't). The description is also pretty clear: "The default policy of ACCEPT allows all packets to/from interfaces in the zone to be forwarded". If that were not enough, libvirt has its own set of policies for the libvirt-routed zone but none for the libvirt zone: https://github.com/libvirt/libvirt/blob/master/src/network/libvirt-routed-in... https://github.com/libvirt/libvirt/blob/master/src/network/libvirt-routed-ou... https://github.com/libvirt/libvirt/blob/master/src/network/libvirt-to-host.p...
Right. These policies are necessary because the "routed" network also allows _incoming_ connections to the VM.
My goal was to fully convert libvirt to a native firewalld backend. In fact I sent patches to do so, but the patches were not applied because the maintainers wanted to wait on other changes in the same area.
https://listman.redhat.com/archives/libvir-list/2022-November/235725.html
It makes sense, because the libvirt-routed zone lacks the ACCEPT target and thus you need policies instead: https://github.com/libvirt/libvirt/blob/master/src/network/libvirt-routed.zo...
If I'm missing something, please explain what the ACCEPT target really means.
--set-target=ACCEPT will allow all INPUT and FORWARD traffic. The reason it accepts FORWARD traffic is historical and maintains compatibility for things like libvirt, podman, etc.
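As a quick way to see which target a zone carries, the "target:" field can be read from the --list-all output. A small sketch; the sample text is pasted from earlier in this thread so the parsing can be demonstrated without a running firewalld:

```shell
# Pull the "target:" value out of `firewall-cmd --list-all --zone=...` output.
sample='libvirt (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: br28'

target=$(printf '%s\n' "$sample" | awk '$1 == "target:" { print $2 }')
echo "$target"   # prints: ACCEPT
```

(On a live system, `firewall-cmd --permanent --zone=libvirt --get-target` reports the same value directly.)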
Currently the NAT network relies on this behavior. But really the NAT network should be converted to policies as well. See my listman link above. I encourage you to chime in to that old thread if you want to see these patches applied; it always helps to have a user ask for things. I would happily send a v2. ;)
I hope that adds some clarity.
Eric.
On 13.07.2023 16:54, Eric Garver wrote: ...
The description is also pretty clear: "The default policy of ACCEPT allows all packets to/from interfaces in the zone to be forwarded".
...
--set-target=ACCEPT will allow all INPUT and FORWARD traffic. The reason it accepts FORWARD traffic is historical and maintains compatibility for things like libvirt, podman, etc.
Oh, joy of undocumented open source ...
Anyway, it is not what I see here with firewalld 2.0.0.
chain filter_FORWARD_POLICIES {
        iifname "docker0" oifname "docker0" jump filter_FWD_docker
        iifname "docker0" oifname "docker0" accept
        iifname "docker0" oifname "enp0s3" jump filter_FWD_docker
        iifname "docker0" oifname "enp0s3" accept
It is "docker" -> "public"
        iifname "docker0" jump filter_FWD_docker
        iifname "docker0" accept
It is "docker" -> "ANY" and it allows forwarding by default *from* zone "docker" to any zone.
        iifname "enp0s3" oifname "docker0" jump filter_FWD_public
        iifname "enp0s3" oifname "docker0" reject with icmpx admin-prohibited
It is "public" -> "docker" which is blocked.
        iifname "enp0s3" oifname "enp0s3" jump filter_FWD_public
        iifname "enp0s3" oifname "enp0s3" reject with icmpx admin-prohibited
        iifname "enp0s3" jump filter_FWD_public
        iifname "enp0s3" reject with icmpx admin-prohibited
        oifname "docker0" jump filter_FWD_public
        oifname "docker0" reject with icmpx admin-prohibited
And this is "ANY" -> "docker", which is blocked as well.
It does not match the description above - "allows all packets *TO* interfaces in the zone to be forwarded" (or "from", depending on how we interpret to/from). Anyway, the description implies symmetrical bidirectional forwarding, while firewalld sets up forwarding in one direction only.
        oifname "enp0s3" jump filter_FWD_public
        oifname "enp0s3" reject with icmpx admin-prohibited
        jump filter_FWD_public
        reject with icmpx admin-prohibited
}
openSUSE Tumbleweed firewalld-2.0.0-1.1.noarch
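The directionality described above can be made explicit by tabulating iifname/oifname per rule. A hedged sketch that only parses rule text in the format printed by `nft list chain` (a reading aid for the listing above, not an nft interface):

```shell
# Classify firewalld FORWARD_POLICIES rules by direction and verdict.
classify() {
    awk '{
        iif = "ANY"; oif = "ANY"
        for (i = 1; i < NF; i++) {
            if ($i == "iifname") { iif = $(i + 1); gsub(/"/, "", iif) }
            if ($i == "oifname") { oif = $(i + 1); gsub(/"/, "", oif) }
        }
        verdict = ($0 ~ /accept/) ? "accept" : "reject"
        printf "%s -> %s : %s\n", iif, oif, verdict
    }'
}

classify <<'EOF'
iifname "docker0" oifname "enp0s3" accept
iifname "enp0s3" oifname "docker0" reject with icmpx admin-prohibited
EOF
# prints:
# docker0 -> enp0s3 : accept
# enp0s3 -> docker0 : reject
```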
On Thu, Jul 13, 2023 at 10:09:39PM +0300, Andrei Borzenkov wrote:
On 13.07.2023 16:54, Eric Garver wrote: ...
The description is also pretty clear: "The default policy of ACCEPT allows all packets to/from interfaces in the zone to be forwarded".
...
--set-target=ACCEPT will allow all INPUT and FORWARD traffic. The reason it accepts FORWARD traffic is historical and maintains compatibility for things like libvirt, podman, etc.
Oh, joy of undocumented open source ...
It's not documented in any of the man pages, but it should be.
Filed bug: https://github.com/firewalld/firewalld/issues/1171
FWIW, it's in section "The Exceptions" in the v1.0.0 blog: https://firewalld.org/2021/06/the-upcoming-1-0-0
Anyway, it is not what I see here with firewalld 2.0.0.
chain filter_FORWARD_POLICIES {
        iifname "docker0" oifname "docker0" jump filter_FWD_docker
        iifname "docker0" oifname "docker0" accept
        iifname "docker0" oifname "enp0s3" jump filter_FWD_docker
        iifname "docker0" oifname "enp0s3" accept
It is "docker" -> "public"
        iifname "docker0" jump filter_FWD_docker
        iifname "docker0" accept
It is "docker" -> "ANY" and it allows forwarding by default *from* zone "docker" to any zone.
        iifname "enp0s3" oifname "docker0" jump filter_FWD_public
        iifname "enp0s3" oifname "docker0" reject with icmpx admin-prohibited
It is "public" -> "docker" which is blocked.
        iifname "enp0s3" oifname "enp0s3" jump filter_FWD_public
        iifname "enp0s3" oifname "enp0s3" reject with icmpx admin-prohibited
        iifname "enp0s3" jump filter_FWD_public
        iifname "enp0s3" reject with icmpx admin-prohibited
        oifname "docker0" jump filter_FWD_public
        oifname "docker0" reject with icmpx admin-prohibited
And this is "ANY" -> "docker", which is blocked as well.
It does not match the description above - "allows all packets *TO* interfaces in the zone to be forwarded". (or "from" depending on how we interpret to/from). Anyway, description implies symmetrical bidirectional forwarding while firewalld sets up forwarding in one direction only.
This is for the libvirt NAT network which specifies that the return (from) traffic is accepted for established connections. I agree the wording in the libvirt zone is poor. This zone definition is part of the libvirt project so you'll have to file a bug with them to fix the wording.
The routed network accepts unsolicited connections _from_ other networks (zones).
[..]
On 14.07.2023 15:56, Eric Garver wrote:
It's not documented in any of the man pages, but it should be.
Filed bug: https://github.com/firewalld/firewalld/issues/1171
Thanks, but that is really playing whack-a-mole.
"man firewalld.zone"
forward ... When enabled, packets will be forwarded between interfaces or sources within a zone, *even if the zone's target is not set to ACCEPT*.
Oops. Does it mean target ACCEPT will implicitly enable forwarding? Where is it documented? Will it enable forwarding globally, also for zones which do not have ACCEPT target?
The general problem of firewalld documentation is listing configuration options without actually describing what those options do. It starts with the distinction between matching packets and acting on matched packets.
--><-- port Is an optional empty-element tag and can be used several times to have more than one port entry. --><--
Excellent. Can you understand what it does?
--><-- protocol Is an optional empty-element tag and can be used several times to have more than one protocol entry. --><--
Ditto.
And on and on and on.
Can someone understand whether these options are used to match packets or to do something with these packets? If they are used to match, what is the semantic? If they are used to do something, what do they do?
On 2023-07-13 15:54, Eric Garver wrote:
Right. These policies are necessary because the "routed" network also allows _incoming_ connections to the VM.
I'm not sure I understand this correctly, but the only difference between a bridged and a routed network is that in the former your ethernet device is part of the bridge, so why would the latter be any different with regard to firewall rules? Why would you want to allow incoming connections on routed networks but not on bridged networks?
The libvirt handbook says that "libvirt's built-in routed network automatically inserts iptables rules": why are they necessary for routed networks but not for bridged networks, considering they are basically the same thing?
Now that I think more about it, my setup is kind of a hybrid between the two: my ethernet device is part of the bridge because I have some physical servers that need public IPs, but on the other hand the /28 IPv4 subnet gets ROUTED from the ppp device (ftth), which has its own /32 public IP.
These are the iptables rules that get created whenever I set up a routed network in libvirt:
# iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N LIBVIRT_FWI
-N LIBVIRT_FWO
-N LIBVIRT_FWX
-N LIBVIRT_INP
-N LIBVIRT_OUT
-A INPUT -j LIBVIRT_INP
-A FORWARD -j LIBVIRT_FWX
-A FORWARD -j LIBVIRT_FWI
-A FORWARD -j LIBVIRT_FWO
-A OUTPUT -j LIBVIRT_OUT
-A LIBVIRT_FWI -d PUBLIC_SUBNET/28 -i ftth -o virbr1 -j ACCEPT
-A LIBVIRT_FWI -o virbr1 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWO -s PUBLIC_SUBNET/28 -i virbr1 -o ftth -j ACCEPT
-A LIBVIRT_FWO -i virbr1 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWX -i virbr1 -o virbr1 -j ACCEPT
-A LIBVIRT_INP -i virbr1 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr1 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr1 -p udp -m udp --dport 67 -j ACCEPT
-A LIBVIRT_INP -i virbr1 -p tcp -m tcp --dport 67 -j ACCEPT
-A LIBVIRT_OUT -o virbr1 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr1 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr1 -p udp -m udp --dport 68 -j ACCEPT
-A LIBVIRT_OUT -o virbr1 -p tcp -m tcp --dport 68 -j ACCEPT
None gets created for the bridged network: first of all because libvirt/virt-manager isn't in charge of creating bridged networks (you have to set them up manually, including assigning the firewalld zone to the bridge interface), and also because according to the libvirt handbook iptables rules apparently aren't needed for bridged networks. I really don't understand why, considering they're basically the same thing as routed networks.
My goal was to fully convert libvirt to a native firewalld backend. In fact I sent patches to do so, but the patches were not applied because the maintainers wanted to wait on other changes in the same area.
https://listman.redhat.com/archives/libvir-list/2022-November/235725.html
That would make the whole firewall situation MUCH easier to understand.
--set-target=ACCEPT will allow all INPUT and FORWARD traffic. The reason it accepts FORWARD traffic is historical and maintains compatibility for things like libvirt, podman, etc.
Yeah, I guessed it was for some kind of historical reason, but I still think that either it doesn't really work that way or something else is broken in my setup, because VMs don't get incoming traffic from the outside.
On 2023-07-13 21:09, Andrei Borzenkov wrote:
Anyway, it is not what I see here with firewalld 2.0.0. It does not match the description above - "allows all packets *TO* interfaces in the zone to be forwarded" (or "from", depending on how we interpret to/from). Anyway, the description implies symmetrical bidirectional forwarding, while firewalld sets up forwarding in one direction only.
It looks like Andrei is experiencing some kind of unidirectional forwarding as well.
I encourage you to chime in to that old thread if you want to see these patches applied; it always helps to have a user ask for things. I would happily send a v2. ;)
I will for sure, but first I would like to understand what's going on and I'm still groping in the dark right now :(
Thanks, Niccolo' Belli
I think I've found the issue at play here: https://github.com/firewalld/firewalld/issues/177
Apparently the ACCEPT target **USED** to allow forward traffic in both directions (from libvirt and to libvirt), but things changed in version 1.0.0: https://firewalld.org/2021/06/the-upcoming-1-0-0 ("Default target is now similar to reject") https://github.com/firewalld/firewalld/commit/f2896e43c3a548a299f87675a01e1a...
Because of concerns that traffic from public interfaces could be forwarded to trusted interfaces (another zone with an ACCEPT target), the behavior changed, and now the ACCEPT target only allows forward traffic FROM the zone.
So the description in the libvirt zone is no longer valid if you're using firewalld >=1.0.0 and you have to manually create a policy like this:
# firewall-cmd --permanent --new-policy libvirt-in
success
# firewall-cmd --permanent --policy libvirt-in --add-ingress-zone ANY
success
# firewall-cmd --permanent --policy libvirt-in --add-egress-zone libvirt
success
# firewall-cmd --permanent --policy libvirt-in --set-target ACCEPT
success
libvirt-in (active)
  priority: -1
  target: ACCEPT
  ingress-zones: ANY
  egress-zones: libvirt
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
(replying via web interface because my email provider dropped your message :()
So the description in the libvirt zone is no longer valid if you're using firewalld >=1.0.0
AFAIK, the current behavior is the intended behavior of libvirt.
First paragraph from the libvirt docs [1]:
By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.
[1]: https://wiki.libvirt.org/Networking.html#forwarding-incoming-connections
Il 2023-07-14 19:57 Eric Garver ha scritto:
AFAIK, the current behavior is the intended behavior of libvirt.
First paragraph from the libvirt docs [1]:
By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.
Thanks, it's starting to make sense now.
I'm trying to use your "firewalld: native support for NAT/routed" zones/policies but I'm encountering some major IPv6 issues.
The following are the zones/policies for a bridge (br28) where I have an ethernet attached with a public IPv4:
<zone>
  <short>br28</short>
  <description>
    This zone is intended to be used only by bridged libvirt virtual networks.
  </description>
  <protocol value='icmp'/>
  <protocol value='ipv6-icmp'/>
  <service name='dhcp'/>
  <service name='dhcpv6'/>
  <service name='dns'/>
  <service name='ssh'/>
  <service name='tftp'/>
</zone>
<policy target="ACCEPT">
  <short>br28-in</short>
  <description>
    This policy is used to allow routed traffic to the virtual machines.
  </description>
  <ingress-zone name="ANY" />
  <egress-zone name="br28" />
</policy>
<policy target="ACCEPT">
  <short>br28-to-ftth</short>
  <description>
    This policy is used to allow routed virtual machine traffic to the internet.
  </description>
  <ingress-zone name="br28" />
  <egress-zone name="ftth" />
</policy>
<policy target="REJECT">
  <short>br28-to-host</short>
  <description>
    This policy is used to filter traffic from virtual machines to the host.
  </description>
  <ingress-zone name="br28" />
  <egress-zone name="HOST" />
  <protocol value='icmp'/>
  <protocol value='ipv6-icmp'/>
  <service name='dhcp'/>
  <service name='dhcpv6'/>
  <service name='dns'/>
  <service name='ssh'/>
  <service name='tftp'/>
</policy>
br28 has been attached to the br28 zone, and the host provides public IPv4s via dhcp. These public IPs are routed to the internet via the ftth pppoe interface, which is attached to the ftth zone:
<zone>
  <short>ftth</short>
  <description>
    The ftth interface. You do not trust the internet to not harm your computer. Only selected incoming connections are accepted. The same as public.
  </description>
  <service name="ssh"/>
  <service name="dhcpv6-client"/>
  <forward/>
</zone>
I have a second bridge (brCasa) with another ethernet attached which is a private network with NAT. Private IPs are offered via dhcp by the host and NAT is being done via ftth using policies:
<zone>
  <short>casa</short>
  <description>
    This zone is intended to be used only by bridged libvirt virtual networks.
  </description>
  <protocol value='icmp'/>
  <protocol value='ipv6-icmp'/>
  <service name='dhcp'/>
  <service name='dhcpv6'/>
  <service name='dns'/>
  <service name='ssh'/>
  <service name='tftp'/>
</zone>
<policy target="REJECT">
  <short>casa-to-host</short>
  <description>
    This policy is used to filter traffic from virtual machines to the host.
  </description>
  <ingress-zone name="casa" />
  <egress-zone name="HOST" />
  <protocol value='icmp'/>
  <protocol value='ipv6-icmp'/>
  <service name='dhcp'/>
  <service name='dhcpv6'/>
  <service name='dns'/>
  <service name='ssh'/>
  <service name='tftp'/>
</policy>
<policy target="ACCEPT">
  <short>casa-to-ftth</short>
  <description>
    This policy is used to allow NAT traffic to the internet.
  </description>
  <ingress-zone name="casa" />
  <egress-zone name="ftth" />
  <masquerade />
</policy>
As you can see, these are basically your zones/policies with some minor changes. One of these changes is that I don't use sources for the NAT zone; instead I attach the bridge directly to it, since I'm using Linux 6.1.
Libvirt networks:
<network>
  <name>br28</name>
  <forward mode="bridge"/>
  <bridge name="br28" />
</network>
<network>
  <name>brCasa</name>
  <forward mode="bridge"/>
  <bridge name="brCasa" />
</network>
Both br28 and brCasa have IPv6 connectivity via router advertisement offered by radvd.
At this point I created two VMs, one attached to br28 and one to brCasa. As expected, I can ping Google's DNS (8.8.8.8) from both, and I can also ping the public IPv4 of the first VM from the second. I can also ping Google's IPv6 DNS from both. But when I try to ping the first VM (the one attached to br28) from the second one it gets weird: sometimes it works, sometimes it doesn't, and even when it works it stops after a couple of pings. What's even weirder is that if I run "ip -6 route" after pinging the other VM's IPv6 address, I notice that the IPv6 default route is gone! What's happening?
Thanks, Niccolo' Belli
Il 2023-07-14 19:57 Eric Garver ha scritto:
AFAIK, the current behavior is the intended behavior of libvirt.
First paragraph form libvirt docs [1]:
By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.
Thanks, it's starting to make sense now.
I'm trying to use your "firewalld: native support for NAT/routed" zones/policies but I'm encountering some major IPv6 issues.
The following are the zones/policies for a bridge (br28) to which an ethernet interface with a public IPv4 is attached:
<zone>
  <short>br28</short>
  <description>
    This zone is intended to be used only by bridged libvirt virtual networks.
  </description>
  <protocol value='icmp'/>
  <protocol value='ipv6-icmp'/>
  <service name='dhcp'/>
  <service name='dhcpv6'/>
  <service name='dns'/>
  <service name='ssh'/>
  <service name='tftp'/>
</zone>
<policy target="ACCEPT">
  <short>br28-in</short>
  <description>
    This policy is used to allow routed traffic to the virtual machines.
  </description>
  <ingress-zone name="ANY" />
  <egress-zone name="br28" />
</policy>

<policy target="ACCEPT">
  <short>br28-to-ftth</short>
  <description>
    This policy is used to allow routed virtual machine traffic to the internet.
  </description>
  <ingress-zone name="br28" />
  <egress-zone name="ftth" />
</policy>

<policy target="REJECT">
  <short>br28-to-host</short>
  <description>
    This policy is used to filter traffic from virtual machines to the host.
  </description>
  <ingress-zone name="br28" />
  <egress-zone name="HOST" />
  <protocol value='icmp'/>
  <protocol value='ipv6-icmp'/>
  <service name='dhcp'/>
  <service name='dhcpv6'/>
  <service name='dns'/>
  <service name='ssh'/>
  <service name='tftp'/>
</policy>
br28 has been attached to the br28 zone and the host provides public IPv4s via dhcp. These public IPs are routed to the internet via the ftth pppoe interface, which is attached to the ftth zone:
<zone>
  <short>ftth</short>
  <description>
    The ftth interface. You do not trust the internet to not harm your computer. Only selected incoming connections are accepted. The same as public.
  </description>
  <service name="ssh"/>
  <service name="dhcpv6-client"/>
  <forward/>
</zone>
I have a second bridge (brCasa) with another ethernet interface attached; it carries a private network behind NAT. Private IPs are offered via dhcp by the host and NAT is done via ftth using policies:
<zone>
  <short>casa</short>
  <description>
    This zone is intended to be used only by bridged libvirt virtual networks.
  </description>
  <protocol value='icmp'/>
  <protocol value='ipv6-icmp'/>
  <service name='dhcp'/>
  <service name='dhcpv6'/>
  <service name='dns'/>
  <service name='ssh'/>
  <service name='tftp'/>
</zone>

<policy target="REJECT">
  <short>casa-to-host</short>
  <description>
    This policy is used to filter traffic from virtual machines to the host.
  </description>
  <ingress-zone name="casa" />
  <egress-zone name="HOST" />
  <protocol value='icmp'/>
  <protocol value='ipv6-icmp'/>
  <service name='dhcp'/>
  <service name='dhcpv6'/>
  <service name='dns'/>
  <service name='ssh'/>
  <service name='tftp'/>
</policy>

<policy target="ACCEPT">
  <short>casa-to-ftth</short>
  <description>
    This policy is used to allow NAT traffic to the internet.
  </description>
  <ingress-zone name="casa" />
  <egress-zone name="ftth" />
  <masquerade />
</policy>
As you can see these are basically your zones/policies with some minor changes. One of these changes is that I don't use sources for the NAT zone, but instead I attach the bridge directly to it since I'm using Linux 6.1.
Libvirt networks:
<network>
  <name>br28</name>
  <forward mode="bridge"/>
  <bridge name="br28" />
</network>

<network>
  <name>brCasa</name>
  <forward mode="bridge"/>
  <bridge name="brCasa" />
</network>
Both br28 and brCasa have IPv6 connectivity via router advertisement offered by radvd.
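For reference, the radvd side of that can be as small as one interface stanza per bridge; a sketch with a placeholder documentation prefix (substitute the real delegated prefix, and repeat for brCasa):

```
interface br28
{
    AdvSendAdvert on;
    prefix 2001:db8:28::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```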
At this point I created two VMs, one attached to br28 and one to brCasa. As expected I can ping Google's DNS (8.8.8.8) from both, and I can also ping the public IPv4 of the first VM from the second. I can also ping Google's IPv6 DNS from both. But when I try to ping the first VM (the one attached to br28) from the second one over IPv6 it gets weird: sometimes it works, sometimes it doesn't, and even when it works it stops after a couple of pings. What's even weirder is that if I run "ip -6 route" after pinging the other VM's IPv6 address I notice that the IPv6 default route is gone! What's happening?
Thanks, Niccolo' Belli
On 2023-07-16 12:17, Niccolò Belli wrote:
I'm trying to use your "firewalld: native support for NAT/routed" zones/policies but I'm encountering some major IPv6 issues.
I have some good news: this is not a firewalld issue but rather a NetworkManager-messes-with-things-and-Linux-behaves-weirdly issue.
I set net.ipv6.conf.all.forwarding=1 in sysctl.conf, but NetworkManager forces it back to 0 on a per-device basis (net.ipv6.conf.<device>.forwarding=0) every time it activates a connection. You would imagine that forwarding simply doesn't work in that case, but it partially does: you can get a couple of pings through, then the OS suddenly realizes it shouldn't have forwarded those packets and stops doing so. While it's at it, it eats your IPv6 default route as a bonus.
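To see which devices NetworkManager has reset, the per-device flags can be read straight from /proc (a sketch; the interface names are whatever exists on the host):

```shell
#!/usr/bin/env bash
# Dump the per-device IPv6 forwarding flags, to see what NetworkManager
# actually left behind after activating connections.
for f in /proc/sys/net/ipv6/conf/*/forwarding; do
    [ -e "$f" ] || continue   # glob stays literal if IPv6 is disabled
    printf '%s = %s\n' "$f" "$(cat "$f")"
done
```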
I would have happily let NetworkManager share my connection, but unfortunately its feature set is so limited that I can't do so:
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/942
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/1344
Fortunately the fix is simple:
cat << 'EOF' > /etc/NetworkManager/dispatcher.d/50-forwarding
#!/usr/bin/bash -e
case "$2" in
    up)
        if [[ "$1" = "br28" ]] || [[ "$1" = "brMetano" ]] || [[ "$1" = "brCasa" ]]; then
            sysctl -w "net.ipv6.conf.$1.forwarding=1"
        fi
        ;;
esac
EOF
chmod ug+x /etc/NetworkManager/dispatcher.d/50-forwarding

(Note the quoted 'EOF': without it the outer shell would expand $1 and $2 while writing the file, leaving a broken script behind.)
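NetworkManager runs dispatcher scripts with the device name as $1 and the event as $2, which is what the script keys on. A dry-run sketch of the same matching logic, with sysctl replaced by echo so it can be exercised without root:

```shell
#!/usr/bin/env bash
# Same matching logic as the dispatcher script, sysctl swapped for echo.
dispatcher() {
    case "$2" in
        up)
            case "$1" in
                br28|brMetano|brCasa)
                    echo "sysctl -w net.ipv6.conf.$1.forwarding=1"
                    ;;
            esac
            ;;
    esac
}

dispatcher br28 up   # prints: sysctl -w net.ipv6.conf.br28.forwarding=1
dispatcher eth0 up   # not one of our bridges: prints nothing
```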
Niccolo'
firewalld-users@lists.fedorahosted.org