Xeon 8 core machine with 16G. Fedora 35 server installed. Have CentOS8 installed in qemu/kvm virtual machine. What would be the appropriate configuration of processor access and memory to get reasonable response from the vm? With 4 processors and 2G response is atrocious.
On 2/27/22 21:21, Robert McBroom via users wrote:
Xeon 8 core machine with 16G. Fedora 35 server installed. Have CentOS8 installed in qemu/kvm virtual machine. What would be the appropriate configuration of processor access and memory to get reasonable response from the vm? With 4 processors and 2G response is atrocious.
It depends very much on what you're running in the VM. Is it multi-process? How much RAM does it need? Check the usage. Also, make sure that you actually have KVM enabled for your CPU.
On Mon, 2022-02-28 at 00:21 -0500, Robert McBroom via users wrote:
Xeon 8 core machine with 16G. Fedora 35 server installed. Have CentOS8 installed in qemu/kvm virtual machine. What would be the appropriate configuration of processor access and memory to get reasonable response from the vm? With 4 processors and 2G response is atrocious.
I would routinely assign 4GB to the VM, but obviously it depends on your needs. Luckily it's easy to twiddle the setting. 4 cores might be overkill, but again it's easy to play with.
You can also consider CPU pinning.
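For reference, pinning can be done with `virsh vcpupin` or a `<cputune>` stanza in the domain XML. A minimal sketch, pinning 4 guest vCPUs to host cores 4-7 (the core numbers are illustrative, not from this thread):

```xml
<!-- Illustrative <cputune> stanza: pin each of the 4 guest vCPUs to one
     host core (4-7 here), leaving cores 0-3 free for the host. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='6'/>
  <vcpupin vcpu='3' cpuset='7'/>
</cputune>
```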
poc
On Mon, 28 Feb 2022 00:21:30 -0500 Robert McBroom via users wrote:
With 4 processors and 2G response is atrocious.
Make sure you are using proper virtual disk and network drivers. If you installed it with some sort of emulated IDE disk, that alone will make performance horrible. The disk cache mode also has an extreme effect on performance. I forget which cache mode it was, but one of them slowed disk I/O by about 98% when I was testing them.
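You can confirm what the VM is actually using with `virsh dumpxml <name>`. As a sketch of what to look for, a disk stanza on the virtio bus with host caching disabled might look like this (the image path and target name are illustrative, not from this thread):

```xml
<!-- Illustrative libvirt disk definition: virtio bus, cache='none'.
     cache='none' bypasses the host page cache, which usually gives the
     most predictable I/O for local qcow2/raw images. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/libvirt/images/centos8.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The network interface should similarly show `<model type='virtio'/>` rather than an emulated e1000 or rtl8139 device.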
On 2/28/22 08:56, Tom Horsley wrote:
On Mon, 28 Feb 2022 00:21:30 -0500 Robert McBroom via users wrote:
With 4 processors and 2G response is atrocious.
Make sure you are using proper virtual disk and network drivers. If you installed it with some sort of emulated IDE disk, that alone will make performance horrible. The disk cache mode also has an extreme effect on performance. I forget which cache mode it was, but one of them slowed disk I/O by about 98% when I was testing them.
Exploring the use of VMs. Fairly plain install of CentOS 8. I understood that Fedora Server installed the VM structure. Where would one see the alternate drivers? Networking seems to be all set up, but the VM doesn't see the router or the outside internet.
With the given hardware info, a VM with 2 GB of memory should be at least satisfactory and workable, provided the hardware is solid and properly configured and the VM is not running memory- and processor-intensive software. An Apache server with a few static sites should run perfectly.
With the information available so far, it's hard to say anything about performance.
(1) What is QEMU/KVM running on, Fedora Server Edition, and which version?
(2) If on Fedora Server Edition, how was the server installed? Was it done according to the Fedora Server Edition documentation?
(3) Did the installation of QEMU/KVM/libvirt follow the Fedora Server Edition documentation?
(4) How was the CentOS VM installed, according to the Fedora Server Edition documentation?
(5) What software/services are active in the VM?
(6) What is the measure of "reasonable response" or "atrocious" response?
Just to name a few items we need to know.
On Mon, 2022-02-28 at 18:26 -0500, Robert McBroom via users wrote:
On 2/28/22 08:56, Tom Horsley wrote:
On Mon, 28 Feb 2022 00:21:30 -0500 Robert McBroom via users wrote:
With 4 processors and 2G response is atrocious.
Make sure you are using proper virtual disk and network drivers. If you installed it with some sort of emulated IDE disk, that alone will make performance horrible. The disk cache mode also has an extreme effect on performance. I forget which cache mode it was, but one of them slowed disk I/O by about 98% when I was testing them.
Exploring the use of VMs. Fairly plain install of CentOS 8. I understood that Fedora Server installed the VM structure. Where would one see the alternate drivers? Networking seems to be all set up, but the VM doesn't see the router or the outside internet.
If you install the VM using virt-manager, you can check the device controller options directly. For disks and network interfaces, the "virtio" option will generally give the best performance.
As regards the network, the virt-manager defaults should just work, but for other stuff my personal notes say this:
Libvirt creates VMs in the libvirt firewall zone, so services must be added there:
For NFS:
# firewall-cmd --add-service mountd --zone=libvirt
# firewall-cmd --permanent --add-service mountd --zone=libvirt
# firewall-cmd --add-service nfs --zone=libvirt
# firewall-cmd --permanent --add-service nfs --zone=libvirt
# firewall-cmd --add-service nfs3 --zone=libvirt
# firewall-cmd --permanent --add-service nfs3 --zone=libvirt
# firewall-cmd --add-service rpc-bind --zone=libvirt
# firewall-cmd --permanent --add-service rpc-bind --zone=libvirt
For Samba:
# firewall-cmd --add-service samba --zone=libvirt
# firewall-cmd --permanent --add-service samba --zone=libvirt
# firewall-cmd --add-service samba-client --zone=libvirt
# firewall-cmd --permanent --add-service samba-client --zone=libvirt
poc
On 3/1/22 06:31, Patrick O'Callaghan wrote:
On Mon, 2022-02-28 at 18:26 -0500, Robert McBroom via users wrote:
On 2/28/22 08:56, Tom Horsley wrote:
On Mon, 28 Feb 2022 00:21:30 -0500 Robert McBroom via users wrote:
With 4 processors and 2G response is atrocious.
Make sure you are using proper virtual disk and network drivers. If you installed it with some sort of emulated IDE disk, that alone will make performance horrible. The disk cache mode also has an extreme effect on performance. I forget which cache mode it was, but one of them slowed disk I/O by about 98% when I was testing them.
Exploring the use of VMs. Fairly plain install of CentOS 8. I understood that Fedora Server installed the VM structure. Where would one see the alternate drivers? Networking seems to be all set up, but the VM doesn't see the router or the outside internet.
If you install the VM using virt-manager, you can check the device controller options directly. For disks and network interfaces, the "virtio" option will generally give the best performance.
As regards the network, the virt-manager defaults should just work, but for other stuff my personal notes say this:
Libvirt creates VMs in the libvirt firewall zone, so services must be added there:

For NFS:

# firewall-cmd --add-service mountd --zone=libvirt
# firewall-cmd --permanent --add-service mountd --zone=libvirt
# firewall-cmd --add-service nfs --zone=libvirt
# firewall-cmd --permanent --add-service nfs --zone=libvirt
# firewall-cmd --add-service nfs3 --zone=libvirt
# firewall-cmd --permanent --add-service nfs3 --zone=libvirt
# firewall-cmd --add-service rpc-bind --zone=libvirt
# firewall-cmd --permanent --add-service rpc-bind --zone=libvirt

For Samba:

# firewall-cmd --add-service samba --zone=libvirt
# firewall-cmd --permanent --add-service samba --zone=libvirt
# firewall-cmd --add-service samba-client --zone=libvirt
# firewall-cmd --permanent --add-service samba-client --zone=libvirt

poc
Installed from iso with virt-manager. The install used virtio. Not using samba or nfs but added firewall commands anyway.
I can ping devices on the local network but can't get to the internet. 50 to 60% drops on the pings. Fully updated F35 server edition on host.
On Wed, 2022-03-02 at 01:17 -0500, Robert McBroom via users wrote:
Installed from iso with virt-manager. The install used virtio. Not using samba or nfs but added firewall commands anyway.
I can ping devices on the local network but can't get to the internet. 50 to 60% drops on the pings. Fully updated F35 server edition on host.
You mean *local* pings are dropping at that rate? That's definitely not right. You shouldn't be seeing any drops on the local network, but the fact that some pings are getting through indicates the issue isn't with the firewall settings. Do you see any unusual load on the VM side, e.g. looking at "ip -s link" from within the VM when there should be no network activity?
poc
On 3/2/22 07:10, Patrick O'Callaghan wrote:
On Wed, 2022-03-02 at 01:17 -0500, Robert McBroom via users wrote:
Installed from iso with virt-manager. The install used virtio. Not using samba or nfs but added firewall commands anyway.
I can ping devices on the local network but can't get to the internet. 50 to 60% drops on the pings. Fully updated F35 server edition on host.
You mean *local* pings are dropping at that rate? That's definitely not right. You shouldn't be seeing any drops on the local network, but the fact that some pings are getting through indicates the issue isn't with the firewall settings. Do you see any unusual load on the VM side, e.g. looking at "ip -s link" from within the VM when there should be no network activity?
Missed a step somewhere
~]# ip -s link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    RX: bytes  packets  errors  dropped  overrun  mcast
    259207     3337     0       0        0        0
    TX: bytes  packets  errors  dropped  carrier  collsns
    259207     3337     0       0        0        0
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:27:f3:78 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  overrun  mcast
    168696     2897     0       0        0        0
    TX: bytes  packets  errors  dropped  carrier  collsns
    765942     8395     0       0        0        0
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:27:7e:e2 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  overrun  mcast
    0          0        0       0        0        0
    TX: bytes  packets  errors  dropped  carrier  collsns
    0          0        0       0        0        0
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:27:7e:e2 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  overrun  mcast
    0          0        0       0        0        0
    TX: bytes  packets  errors  dropped  carrier  collsns
    0          0        0       0        0        0
~]# ip route
default via 192.168.122.1 dev enp1s0 proto dhcp metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.91 metric 100
On 3/2/22 20:29, Robert McBroom via users wrote:
~]# ip route
default via 192.168.122.1 dev enp1s0 proto dhcp metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.91 metric 100
Assuming this is the host, that looks like your problem. The default subnet for the VM internal network is 122 and it looks like you have the same subnet on your physical network as well. You need to change one of them. I expect the VM network would be easier. You can go to the qemu/kvm details and then edit the network config in there. Or a possibly easier method is to edit /etc/libvirt/qemu/networks/default.xml and restart libvirtd.
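The kind of clash described here can be checked mechanically. As a small sketch using Python's standard `ipaddress` module (the 192.168.122.0/24 value is libvirt's shipped default; the second network is whatever the physical LAN uses):

```python
# Check whether the libvirt default network overlaps another subnet.
from ipaddress import ip_network

libvirt_net = ip_network("192.168.122.0/24")

# Same range as the libvirt default: a clash.
print(libvirt_net.overlaps(ip_network("192.168.122.0/24")))  # True

# A LAN on 192.168.1.0/24 is distinct: no clash.
print(libvirt_net.overlaps(ip_network("192.168.1.0/24")))    # False
```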
On Wed, 2022-03-02 at 21:03 -0800, Samuel Sieb wrote:
On 3/2/22 20:29, Robert McBroom via users wrote:
~]# ip route
default via 192.168.122.1 dev enp1s0 proto dhcp metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.91 metric 100
Assuming this is the host, that looks like your problem. The default subnet for the VM internal network is 122 and it looks like you have the same subnet on your physical network as well. You need to change one of them. I expect the VM network would be easier. You can go to the qemu/kvm details and then edit the network config in there. Or a possibly easier method is to edit /etc/libvirt/qemu/networks/default.xml and restart libvirtd.
Good catch. I wonder if the libvirt install script checks if its default subnet clashes with an existing one.
I think the proper way to change this is:
sudo virsh net-edit default
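That opens the "default" network definition in an editor. As an illustrative sketch of the edited result (the .123 subnet is an example value, not from this thread), moving the libvirt range off 192.168.122.0/24 would look something like:

```xml
<!-- Illustrative "default" network after moving the libvirt subnet to
     192.168.123.0/24 to avoid clashing with a LAN on the same range. -->
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.123.2' end='192.168.123.254'/>
    </dhcp>
  </ip>
</network>
```

The change takes effect after restarting the network (`virsh net-destroy default; virsh net-start default`), and guests on DHCP pick up new addresses.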
poc
On 3/3/22 00:03, Samuel Sieb wrote:
On 3/2/22 20:29, Robert McBroom via users wrote:
~]# ip route
default via 192.168.122.1 dev enp1s0 proto dhcp metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.91 metric 100
Assuming this is the host, that looks like your problem. The default subnet for the VM internal network is 122 and it looks like you have the same subnet on your physical network as well. You need to change one of them. I expect the VM network would be easier. You can go to the qemu/kvm details and then edit the network config in there. Or a possibly easier method is to edit /etc/libvirt/qemu/networks/default.xml and restart libvirtd.
This was from the VM. The host is 192.168.1.233 with the router gateway 192.168.1.254. Don't know what is going on with route.
~]$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    100    0        0 eno1
192.168.1.0     0.0.0.0         255.255.255.0   U     100    0        0 eno1
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
~]$ ip -s link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    RX: bytes   packets  errors  dropped  missed  mcast
    2962536395  2064986  0       0        0       0
    TX: bytes   packets  errors  dropped  carrier  collsns
    2962536395  2064986  0       0        0        0
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 18:60:24:b1:dc:82 brd ff:ff:ff:ff:ff:ff
    RX: bytes    packets   errors  dropped  missed  mcast
    17689069676  14803583  0       1080766  0       95557
    TX: bytes   packets  errors  dropped  carrier  collsns
    552188371   6611449  0       0        0        0
    altname enp0s25
6: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:b3:f2:85 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  missed  mcast
    1027961    13277    0       0        0       155
    TX: bytes  packets  errors  dropped  carrier  collsns
    240241     4152     0       0        0        0
8: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:27:f3:78 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  missed  mcast
    1213839    13277    0       0        0       0
    TX: bytes  packets  errors  dropped  carrier  collsns
    3903387    74556    0       0        0        0
On 3/3/22 04:24, Robert McBroom via users wrote:
On 3/3/22 00:03, Samuel Sieb wrote:
On 3/2/22 20:29, Robert McBroom via users wrote:
~]# ip route
default via 192.168.122.1 dev enp1s0 proto dhcp metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.91 metric 100
Assuming this is the host, that looks like your problem. The default subnet for the VM internal network is 122 and it looks like you have the same subnet on your physical network as well. You need to change one of them. I expect the VM network would be easier. You can go to the qemu/kvm details and then edit the network config in there. Or a possibly easier method is to edit /etc/libvirt/qemu/networks/default.xml and restart libvirtd.
This was from the VM. The host is 192.168.1.233 with the router gateway 192.168.1.254. Don't know what is going on with route.
Ok, then it seems that libvirt is running on the virtualized OS. "systemctl disable --now libvirtd". That might not stop the network, so either reboot it or do "ifdown virbr0".
On 3/3/22 14:44, Samuel Sieb wrote:
On 3/3/22 04:24, Robert McBroom via users wrote:
On 3/3/22 00:03, Samuel Sieb wrote:
On 3/2/22 20:29, Robert McBroom via users wrote:
~]# ip route
default via 192.168.122.1 dev enp1s0 proto dhcp metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.91 metric 100
Assuming this is the host, that looks like your problem. The default subnet for the VM internal network is 122 and it looks like you have the same subnet on your physical network as well. You need to change one of them. I expect the VM network would be easier. You can go to the qemu/kvm details and then edit the network config in there. Or a possibly easier method is to edit /etc/libvirt/qemu/networks/default.xml and restart libvirtd.
This was from the VM. The host is 192.168.1.233 with the router gateway 192.168.1.254. Don't know what is going on with route.
Ok, then it seems that libvirt is running on the virtualized OS. "systemctl disable --now libvirtd". That might not stop the network, so either reboot it or do "ifdown virbr0".
That got things connected to the internet.
~]$ ip route
default via 192.168.122.1 dev enp1s0 proto dhcp metric 100
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.91 metric 100
~]$ ip -s link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    RX: bytes  packets  errors  dropped  overrun  mcast
    0          0        0       0        0        0
    TX: bytes  packets  errors  dropped  carrier  collsns
    0          0        0       0        0        0
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:27:f3:78 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  overrun  mcast
    12718255   9277     0       47       0        0
    TX: bytes  packets  errors  dropped  carrier  collsns
    377082     5281     0       0        0        0
Response is still not good; cursor movement is jerky and hard to position. I started the VM with the "Create a New Virtual Machine" icon in virt-manager, then followed the dialogs to install from a CentOS 8 iso on the system.
Then I used the command:
virt-install --name Centos9 \
  --description 'CentOS9-Stream' \
  --ram 4096 \
  --vcpus 4 \
  --disk path=/var/lib/libvirt/images/vol.qcow2 \
  --check path_in_use=off \
  --os-type linux \
  --os-variant centos-stream9 \
  --network bridge=virbr0 \
  --graphics vnc,listen=127.0.0.1,port=5901 \
  --cdrom /var/lib/libvirt/images/CentOS9/CentOS-Stream-9-20220224.0-x86_64-dvd1.iso \
  --noautoconsole
That worked like a charm and everything is connected to the internet with a reasonable response.