A very long time ago I stumbled my way into setting up a virtual bridge so that a qemu-hosted Windows 10 VM could have a static IP address assigned from my LAN's DHCP server, and be reachable from my LAN, directly.
This was a while ago; to the best of my recollection, at that time NetworkManager did not support bridges, or there was some other reason the bridge had to be set up that way.
Well, for a reason that I'll describe separately, after updating to F31 it was necessary to manually ifdown/ifup the bridge in order to fix something. And I'm told that ifdown/ifup is being retired and I should use nmcli. But nmcli doesn't see it, of course.
So, what are my options? Reconfigure my monkey-patched configuration to an NM-managed one?
What I currently have is:
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82573E Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BRIDGE=vnet0
HWADDR=00:30:48:FC:83:FA
ONBOOT=yes
OPTIONS=layer2=1
TYPE=Ethernet
NM_CONTROLLED=no
USERCTL=no
IPV6INIT=yes
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME="System eth0"
#UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
BOOTPROTO=dhcp
ETHTOOL_OPTS="advertise 030"
and
# cat ifcfg-vnet0
DEVICE=vnet0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
IPV6INIT=no
USERCTL=no
ZONE=FedoraWorkstation
NM_CONTROLLED=no
Currently, NetworkManager sees /something/:
# nmcli c show
NAME    UUID                                  TYPE    DEVICE
virbr0  65a32109-d944-40b1-abf8-15458f81585c  bridge  virbr0
And when everything's up, "ip addr" shows:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master vnet0 state UP group default qlen 1000
    link/ether 00:30:48:fc:83:fa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::230:48ff:fefc:83fa/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:30:48:fc:83:fb brd ff:ff:ff:ff:ff:ff
4: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 96:03:13:50:00:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.2/24 brd 192.168.0.255 scope global dynamic vnet0
       valid_lft 604185sec preferred_lft 604185sec
    inet6 fe80::9403:13ff:fe50:eb/64 scope link
       valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 02:23:42:87:19:5d brd ff:ff:ff:ff:ff:ff
Is this as simple as manually changing NM_CONTROLLED=yes in both initscripts? That's easy enough to do, test, and roll back if it blows up, but can anyone think of anything else that needs to be tweaked?
On Sat, 16 Nov 2019 12:57:11 -0500 Sam Varshavchik wrote:
> And I'm told that ifdown/ifup is being retired and I should use nmcli.
That just means it doesn't ship, not that it isn't in the repos.
dnf install network-scripts
and you get back good old network again :-).
Tom Horsley writes:
> On Sat, 16 Nov 2019 12:57:11 -0500 Sam Varshavchik wrote:
>> And I'm told that ifdown/ifup is being retired and I should use nmcli.
> That just means it doesn't ship, not that it isn't in the repos.
> dnf install network-scripts
> and you get back good old network again :-).
I already have network-scripts installed. And I get yelled at when I run ifup or ifdown.
[root@monster network-scripts]# ifup vnet0
WARN      : [ifup] You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
WARN      : [ifup] 'network-scripts' will be removed from distribution in near future.
WARN      : [ifup] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
So, this is on its way out. Resistance is futile. You will be networkmanaged.
Rather than getting a rude surprise after some future upgrade, I think I'll want to take the path of least resistance, and figure out how to get this house of cards under NM's control.
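For reference, the from-scratch alternative would be to skip the ifcfg files entirely and let nmcli build an equivalent bridge. This is just a sketch of what I believe that would look like, not something I've run yet; the connection names "br-lan" and "br-lan-eth0" are ones I made up, and the settings mirror the ifcfg files above (DHCP on the bridge, eth0 enslaved to it):

```shell
# Create a bridge that gets its address from the LAN's DHCP server.
nmcli connection add type bridge con-name br-lan ifname vnet0 \
    ipv4.method auto ipv6.method disabled

# Enslave the physical NIC to the bridge.
nmcli connection add type ethernet con-name br-lan-eth0 ifname eth0 \
    master br-lan slave-type bridge

# Bring the bridge up; NM activates the slave along with it.
nmcli connection up br-lan
```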
On Sat, Nov 16, 2019 at 6:57 PM Sam Varshavchik mrsam@courier-mta.com wrote:
> Is this as simple as manually changing NM_CONTROLLED=yes in both initscripts?
Yes. But check "man nm-settings-ifcfg-rh" to make sure the ifcfg options you're using are understood by NM.
> That's easy enough to do, test, and roll back if it blows up, but can anyone think of anything else that needs to be tweaked?
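Roughly, after flipping NM_CONTROLLED=yes in both files, the sequence would look something like this (a sketch; "System eth0" is the NAME= from your ifcfg-eth0, and vnet0 the bridge device):

```shell
# Have NM re-read the ifcfg files it now owns, then verify
# that it created connection profiles for both of them.
nmcli connection reload
nmcli connection show

# The NM equivalents of ifdown/ifup:
nmcli connection down "System eth0"
nmcli connection up vnet0
```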