Hi All,
Fedora 32 Tip: DO NOT remove network-scripts, as qemu-kvm needs network-scripts for bridge networking
Found out the hard way
:'(
-T
On 2020-04-27 09:21, ToddAndMargo via users wrote:
Fedora 32 Tip: DO NOT remove network-scripts, as qemu-kvm needs network-scripts for bridge networking
Could you explain a bit more what you mean?
[egreshko@meimei ~]$ rpm -q network-scripts
package network-scripts is not installed
And all my qemu guests are running just fine.
On 2020-04-26 18:31, Ed Greshko wrote:
On 2020-04-27 09:21, ToddAndMargo via users wrote:
Fedora 32 Tip: DO NOT remove network-scripts, as qemu-kvm needs network-scripts for bridge networking
Could you explain a bit more what you mean?
Are you doing bridge networking?
[egreshko@meimei ~]$ rpm -q network-scripts
package network-scripts is not installed
And all my qemu guests are running just fine.
Are they file sharing with the server?
Would you attach an ifconfig?
On 2020-04-27 09:34, ToddAndMargo via users wrote:
On 2020-04-26 18:31, Ed Greshko wrote:
On 2020-04-27 09:21, ToddAndMargo via users wrote:
Fedora 32 Tip: DO NOT remove network-scripts, as qemu-kvm needs network-scripts for bridge networking
Could you explain a bit more what you mean?
Are you doing bridge networking?
[egreshko@meimei ~]$ rpm -q network-scripts
package network-scripts is not installed
And all my qemu guests are running just fine.
Are they file sharing with the server?
Using what protocol?
I use both NFSv4 and sshfs
Would you attach an ifconfig?
Relevant output of "ip addr show"
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:9a:e8:49 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
    inet6 2001:b030:112f:2::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe9a:e849/64 scope link
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:9a:e8:49 brd ff:ff:ff:ff:ff:ff
6: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:9b:21:c1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe9b:21c1/64 scope link
       valid_lft forever preferred_lft forever
On 2020-04-26 18:39, Ed Greshko wrote:
On 2020-04-27 09:34, ToddAndMargo via users wrote:
On 2020-04-26 18:31, Ed Greshko wrote:
On 2020-04-27 09:21, ToddAndMargo via users wrote:
Fedora 32 Tip: DO NOT remove network-scripts, as qemu-kvm needs network-scripts for bridge networking
Could you explain a bit more what you mean?
Are you doing bridge networking?
[egreshko@meimei ~]$ rpm -q network-scripts
package network-scripts is not installed
And all my qemu guests are running just fine.
Are they file sharing with the server?
Using what protocol?
I use both NFSv4 and sshfs
I use samba to simulate Windows clients
Would you attach an ifconfig?
Relevant output of "ip addr show"
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:9a:e8:49 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
    inet6 2001:b030:112f:2::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe9a:e849/64 scope link
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:9a:e8:49 brd ff:ff:ff:ff:ff:ff
6: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:9b:21:c1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe9b:21c1/64 scope link
       valid_lft forever preferred_lft forever
Mine. I have two network cards: eno1 is internal with all my VMs, and eno2 is external to the Internet and iptables
$ ifconfig
br0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.255.10  netmask 255.255.255.0  broadcast 192.168.255.255
        inet6 fe80::fc62:7fff:fefe:1fdf  prefixlen 64  scopeid 0x20<link>
        ether fe:62:7f:fe:1f:df  txqueuelen 1000  (Ethernet)
        RX packets 6341  bytes 614097 (599.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2947  bytes 13172152 (12.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eno1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether ac:1f:6b:62:10:06  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xdc300000-dc320000

eno2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.250.135  netmask 255.255.255.0  broadcast 192.168.250.255
        inet6 fe80::ae1f:6bff:fe62:1007  prefixlen 64  scopeid 0x20<link>
        ether ac:1f:6b:62:10:07  txqueuelen 1000  (Ethernet)
        RX packets 212833  bytes 233983717 (223.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 143676  bytes 14016214 (13.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xdc200000-dc27ffff

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 20262  bytes 565587199 (539.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20262  bytes 565587199 (539.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:85:fa:b2  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
On 2020-04-27 10:56, ToddAndMargo via users wrote:
Mine. I have two network cards: eno1 is internal with all my VMs, and eno2 is external to the Internet and iptables
Why do you use a network card for your VMs? Did you have issues with virtual HW?
On 2020-04-26 20:20, Ed Greshko wrote:
On 2020-04-27 10:56, ToddAndMargo via users wrote:
Mine. I have two network cards: eno1 is internal with all my VMs, and eno2 is external to the Internet and iptables
Why do you use a network card for your VMs? Did you have issues with virtual HW?
Both physical network cards are on the host machine. The VMs connect through the qemu-kvm "Network bridge: br0" to the host machine and then get routed to the Internet through eno2, via iptables
$ cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
USERCTL=yes
DELAY=0
NM_CONTROLLED=no
BOOTPROTO=none
PREFIX=24
# IPV4_FAILURE_FATAL=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=no
IPV6_DEFROUTE=no
IPV6_FAILURE_FATAL=no
IPV6_PRIVACY=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="System br0"
IPADDR=192.168.255.10
# NETMASK=255.255.255.0
NETWORK=192.168.255.0
DNS1=127.0.0.1
PROXY_METHOD=none
BROWSER_ONLY=no
AUTOCONNECT_PRIORITY=-999
# DEFROUTE=yes
DEFROUTE=no

$ cat ifcfg-eno1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
# NAME=enp6s0
NAME=eno1
UUID=be0f8dfa-9939-4f9e-a20a-cadf593452c2
DEVICE=eno1
ONBOOT=yes
# IPADDR=192.168.255.10
# Note: NETMASK is now called "PREFIX"
# PREFIX=24
# GATEWAY=192.168.255.10
DNS1=127.0.0.1
# IPV6_PEERDNS=yes
# IPV6_PEERROUTES=yes
BRIDGE=br0
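[Editor's note: for comparison, roughly the same br0 setup could be defined under NetworkManager with nmcli instead of legacy ifcfg files. The interface names and address below are taken from the ifcfg files above; the connection names and exact commands are an untested sketch, not part of the thread.]

```shell
# Hypothetical nmcli equivalent of the ifcfg-br0 / ifcfg-eno1 pair above:
# create the bridge with a static address, then enslave eno1 to it.
nmcli connection add type bridge ifname br0 con-name br0 \
    ipv4.method manual ipv4.addresses 192.168.255.10/24 \
    ipv4.dns 127.0.0.1 ipv6.method disabled
nmcli connection add type ethernet ifname eno1 con-name br0-port-eno1 \
    master br0 slave-type bridge
nmcli connection up br0
```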
On 2020-04-27 11:49, ToddAndMargo via users wrote:
Both physical network cards are on the host machine. The VMs connect through the qemu-kvm "Network bridge: br0" to the host machine and then get routed to the Internet through eno2, via iptables
Yes, I know what you've done. I just don't know why.
I have full connectivity using the virtual devices. So, same question. Why use physical HW?
On 2020-04-26 20:53, Ed Greshko wrote:
On 2020-04-27 11:49, ToddAndMargo via users wrote:
Both physical network cards are on the host machine. The VMs connect through the qemu-kvm "Network bridge: br0" to the host machine and then get routed to the Internet through eno2, via iptables
Yes, I know what you've done. I just don't know why.
I have full connectivity using the virtual devices. So, same question. Why use physical HW?
Because it simulates actual servers I have installed. eno2 is hooked to the Internet and eno1 is hooked up to a [switching] hub that fans out to multiple client workstations. The server is also the firewall
I have used virtual LANs before to do certain things to VMs, such as isolating them from the local network
On 2020-04-27 12:59, ToddAndMargo via users wrote:
On 2020-04-26 20:53, Ed Greshko wrote:
On 2020-04-27 11:49, ToddAndMargo via users wrote:
Both physical network cards are on the host machine. The VMs connect through the qemu-kvm "Network bridge: br0" to the host machine and then get routed to the Internet through eno2, via iptables
Yes, I know what you've done. I just don't know why.
I have full connectivity using the virtual devices. So, same question. Why use physical HW?
Because it simulates actual servers I have installed. eno2 is hooked to the Internet and eno1 is hooked up to a [switching] hub that fans out to multiple client workstations. The server is also the firewall
What would not work if you used Virtual HW instead of actual HW?
All of my VMs can access all of the other servers on 3 different LAN segments.
I have used virtual LANs before to do certain things to VMs, such as isolating them from the local network
I'm not talking VLANs here. I'm talking virtio.
On 4/26/20 10:22 PM, Ed Greshko wrote:
On 2020-04-27 12:59, ToddAndMargo via users wrote:
On 2020-04-26 20:53, Ed Greshko wrote:
On 2020-04-27 11:49, ToddAndMargo via users wrote:
Both physical network cards are on the host machine. The VMs connect through the qemu-kvm "Network bridge: br0" to the host machine and then get routed to the Internet through eno2, via iptables
Yes, I know what you've done. I just don't know why.
I have full connectivity using the virtual devices. So, same question. Why use physical HW?
Because it simulates actual servers I have installed. eno2 is hooked to the Internet and eno1 is hooked up to a [switching] hub that fans out to multiple client workstations. The server is also the firewall
What would not work if you used Virtual HW instead of actual HW?
All of my VMs can access all of the other servers on 3 different LAN segments.
My understanding of his explanation is that the second ethernet is a private network connecting his VMs to other physical computers.
On 2020-04-27 16:34, Samuel Sieb wrote:
On 4/26/20 10:22 PM, Ed Greshko wrote:
On 2020-04-27 12:59, ToddAndMargo via users wrote:
On 2020-04-26 20:53, Ed Greshko wrote:
On 2020-04-27 11:49, ToddAndMargo via users wrote:
Both physical network cards are on the host machine. The VMs connect through the qemu-kvm "Network bridge: br0" to the host machine and then get routed to the Internet through eno2, via iptables
Yes, I know what you've done. I just don't know why.
I have full connectivity using the virtual devices. So, same question. Why use physical HW?
Because it simulates actual servers I have installed. eno2 is hooked to the Internet and eno1 is hooked up to a [switching] hub that fans out to multiple client workstations. The server is also the firewall
What would not work if you used Virtual HW instead of actual HW?
All of my VMs can access all of the other servers on 3 different LAN segments.
My understanding of his explanation is that the second ethernet is a private network connecting his VMs to other physical computers.
Right. I suppose there may be situations one would want that. I've just not had the need.
I noted that the ifcfg-br0 script contained NM_CONTROLLED=no. I have never had, and I don't know if it is possible to have, a mixture of connections with some controlled by NM and others not.
The first issue that I would see is that /usr/sbin/ifdown points to a /etc/alternatives entry. So, you'd either be calling the NM version, which is a script that uses nmcli, or the network-scripts version, which doesn't. So, I believe you'd have compatibility issues.
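[Editor's note: on a Fedora box the active alternative can be inspected with `readlink -f /usr/sbin/ifdown`. As a self-contained illustration of how such a symlink chain resolves, here is a generic demo built in a temp directory; the file names are stand-ins, not the real system paths.]

```shell
# Demo of an /etc/alternatives-style indirection:
# ifdown -> alternatives link -> one concrete implementation.
tmp=$(mktemp -d)
printf 'nmcli wrapper\n' > "$tmp/ifdown.nm"          # stand-in for the NM script
ln -s "$tmp/ifdown.nm" "$tmp/alternatives-ifdown"    # the alternatives link
ln -s "$tmp/alternatives-ifdown" "$tmp/ifdown"       # the public command name
readlink -f "$tmp/ifdown"                            # resolves the whole chain
```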
On 2020-04-27 01:51, Ed Greshko wrote:
On 2020-04-27 16:34, Samuel Sieb wrote:
On 4/26/20 10:22 PM, Ed Greshko wrote:
On 2020-04-27 12:59, ToddAndMargo via users wrote:
On 2020-04-26 20:53, Ed Greshko wrote:
On 2020-04-27 11:49, ToddAndMargo via users wrote:
Both physical network cards are on the host machine. The VMs connect through the qemu-kvm "Network bridge: br0" to the host machine and then get routed to the Internet through eno2, via iptables
Yes, I know what you've done. I just don't know why.
I have full connectivity using the virtual devices. So, same question. Why use physical HW?
Because it simulates actual servers I have installed. eno2 is hooked to the Internet and eno1 is hooked up to a [switching] hub that fans out to multiple client workstations. The server is also the firewall
What would not work if you used Virtual HW instead of actual HW?
All of my VMs can access all of the other servers on 3 different LAN segments.
My understanding of his explanation is that the second ethernet is a private network connecting his VMs to other physical computers.
Right. I suppose there may be situations one would want that. I've just not had the need.
I noted that the ifcfg-br0 script contained NM_CONTROLLED=no. I have never had, and I don't know if it is possible to have, a mixture of connections with some controlled by NM and others not.
The first issue that I would see is that /usr/sbin/ifdown points to a /etc/alternatives entry. So, you'd either be calling the NM version, which is a script that uses nmcli, or the network-scripts version, which doesn't. So, I believe you'd have compatibility issues.
In my iptables scripts, I now add the full path to the ifup and ifdown commands directly
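[Editor's note: what "adding the path directly" might look like in such a script. This is a sketch; the paths, bridge name, and helper function are assumptions, not Todd's actual script.]

```shell
# Hypothetical fragment of an iptables helper script that calls ifup/ifdown
# by absolute path instead of relying on $PATH or the alternatives symlinks.
IFUP=/usr/sbin/ifup
IFDOWN=/usr/sbin/ifdown
BRIDGE=br0

# Bounce the bridge interface using the hard-coded paths.
restart_bridge() {
    "$IFDOWN" "$BRIDGE"
    "$IFUP" "$BRIDGE"
}
```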
On 2020-04-27 18:59, ToddAndMargo via users wrote:
On 2020-04-27 01:51, Ed Greshko wrote:
On 2020-04-27 16:34, Samuel Sieb wrote:
On 4/26/20 10:22 PM, Ed Greshko wrote:
On 2020-04-27 12:59, ToddAndMargo via users wrote:
On 2020-04-26 20:53, Ed Greshko wrote:
On 2020-04-27 11:49, ToddAndMargo via users wrote:
Both physical network cards are on the host machine. The VMs connect through the qemu-kvm "Network bridge: br0" to the host machine and then get routed to the Internet through eno2, via iptables
Yes, I know what you've done. I just don't know why.
I have full connectivity using the virtual devices. So, same question. Why use physical HW?
Because it simulates actual servers I have installed. eno2 is hooked to the Internet and eno1 is hooked up to a [switching] hub that fans out to multiple client workstations. The server is also the firewall
What would not work if you used Virtual HW instead of actual HW?
All of my VMs can access all of the other servers on 3 different LAN segments.
My understanding of his explanation is that the second ethernet is a private network connecting his VMs to other physical computers.
Right. I suppose there may be situations one would want that. I've just not had the need.
I noted that the ifcfg-br0 script contained NM_CONTROLLED=no. I have never had, and I don't know if it is possible to have, a mixture of connections with some controlled by NM and others not.
The first issue that I would see is that /usr/sbin/ifdown points to a /etc/alternatives entry. So, you'd either be calling the NM version, which is a script that uses nmcli, or the network-scripts version, which doesn't. So, I believe you'd have compatibility issues.
In my iptables scripts, I now add the full path to the ifup and ifdown commands directly
I see.
Well, it seems to me you've "customized" your system such that it would be hard for me, at least, to offer much advice. For example, I really don't know if you're using the previous method of controlling the network, or NetworkManager, or a mixture. And, FWIW, I fail to appreciate the value of the customization.