not sure what list would be appropriate for this but i'll start here. currently running RHEL 6.5 on a 64-bit laptop, and installed a new fedora 20 VM using the Virtual Machine Manager -- seemed to work fine, f20 came up, looks good, but now i want to shut it down, so from the f20 console, i selected "Virtual Machine" -> "Shut Down" -> "Shut Down", whereupon it *looks* like the VM shuts down, but the VMM window shows
"f20 Running"
should i expect a different result? i was expecting the VMM window to show that that VM was Stopped or something. i realize i could also select "Force Off" but i'd prefer not to be that brutal.
am i doing something wrong?
rday
On Dec 17, 2013, at 8:43 AM, Robert P. J. Day rpjday@crashcourse.ca wrote:
from the f20 console, i selected "Virtual Machine" -> "Shut Down" -> "Shut Down", whereupon it *looks* like the VM shuts down, but the VMM window shows "f20 Running"
That command should be the same as 'virsh shutdown <vmname>'. I'm not sure exactly how the request reaches the guest; I'm guessing it arrives as an ACPI power-button event (or a guest-agent call) that systemd inside the VM then acts on.
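For instance, from the RHEL host you should be able to watch the domain state outside of virt-manager with something like this (I'm assuming the domain really is named f20, which is what the VMM window suggests):

  # ask libvirt to request a clean shutdown of the guest
  virsh shutdown f20
  # check what libvirt thinks the domain state is; a clean shutdown should
  # eventually end in "shut off"
  virsh domstate f20
  # or list every domain, running or not, along with its state
  virsh list --all

If domstate stays at "running" after the console goes black, the guest never actually completed the shutdown.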
How long have you waited for it to shut down? You might be running into this bug:
"slow shutdown unit user@0.service entered failed state":
https://bugzilla.redhat.com/show_bug.cgi?id=1023820
I'm running into that delayed reboot/shutdown bug a lot. Not every time, but maybe 1/3 of the time? It happens on bare metal and in VMs, on new F20 installs and updated ones.
Chris Murphy
ok, here's what i'm testing right now, and reporting on in real time.
first, i noticed earlier that a fast way to shut down the f20 VM is to simply type:
# init 0
where the VMM had been showing "f20 Running", within seconds of typing that command the console disconnected and the VMM showed "f20 Shutoff" -- that's the kind of response time i'm looking for.
so ... start the VM again, let it boot, log in, then return to VMM and "Shut Down" -> "Shut Down." As before, console goes black, but VMM continues to show "f20 Running" (even though CPU monitor in VMM seems to be totally quiet for that VM).
ok, it's been over a minute and still "f20 Running." i won't worry about it too much more since i know that "init 0" works, but it's still kind of weird.
rday
On Dec 17, 2013, at 11:42 AM, Robert P. J. Day rpjday@crashcourse.ca wrote:
first, i noticed earlier that a fast way to shut down the f20 VM is to simply type:
# init 0
If you do it within the VM, what result do you get with 'poweroff'? systemd maps init 0 to poweroff, as of course there is no init with systemd, but it maintains compatibility with init scripts so I expect init 0 to work the same as poweroff.
I wonder if the GUI Shutdown option is mapped to halt rather than halt -p? That could be a bug. I'd expect Shutdown in the GUI to be the equivalent of power off within the VM, or of 'virsh shutdown' from the host.
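If you want to poke at how that mapping is set up inside the guest, something like this should show it; this is just my understanding of how Fedora 20 wires it up:

  # /sbin/init isn't a real SysV init any more; on Fedora 20 it should be a
  # symlink that ultimately resolves to the systemd binary
  ls -l /sbin/init
  # halt and poweroff are likewise just systemctl in disguise
  ls -l /sbin/halt /sbin/poweroff
  # so all of these ought to end the same way, with the machine powered off:
  #   init 0
  #   poweroff
  #   systemctl poweroff
  # while plain "halt" (no -p) stops the system without cutting power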
Chris Murphy
Quoting Chris Murphy lists@colorremedies.com:
If you do it within the VM, what result do you get with 'poweroff'? systemd maps init 0 to poweroff, as of course there is no init with systemd, but it maintains compatibility with init scripts so I expect init 0 to work the same as poweroff.
unsurprisingly, "poweroff" was equivalent to "init 0" in terms of how quickly the VMM moved to displaying "f20 Shutoff".
I wonder if the GUI Shutdown option is mapped to halt rather than halt -p? That could be a bug. I'd expect Shutdown in the GUI to be the equivalent of power off within the VM, or of 'virsh shutdown' from the host.
ah, and doing "virsh shutdown f20" blanks the console of the VM, but leaves the VMM displaying "f20 Running". so "virsh shutdown" isn't even shutting down the VM properly.
rday
On Dec 17, 2013, at 12:29 PM, Robert P. J. Day rpjday@crashcourse.ca wrote:
ah, and doing "virsh shutdown f20" blanks the console of the VM, but leaves the VMM displaying "f20 Running". so "virsh shutdown" isn't even shutting down the VM properly.
Urgh. I think this is a bug in getting the external shutdown request passed into the VM. It might be worth setting up the VM with a serial device and using virsh console, so that even if networking in the VM has all shut down you can still control it and see what's happening, or in this case what's not happening.
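Roughly what I have in mind, assuming the VM already has a serial device in its libvirt config (virt-manager normally adds one) and the guest uses the stock grub2 setup:

  # inside the guest: send boot and console output to the serial port as well,
  # by adding console=ttyS0 to GRUB_CMDLINE_LINUX in /etc/default/grub, e.g.
  #   GRUB_CMDLINE_LINUX="rhgb quiet console=tty0 console=ttyS0,115200"
  # then rebuild the grub config and reboot the guest
  grub2-mkconfig -o /boot/grub2/grub.cfg
  reboot

  # from the host: attach to the guest's serial console (Ctrl-] detaches)
  virsh console f20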
Chris Murphy
On Dec 17, 2013, at 12:59 PM, Chris Murphy lists@colorremedies.com wrote:
Urgh. I think this is a bug in getting the external shutdown request passed into the VM. It might be worth setting up the VM with a serial device and using virsh console, so that even if networking in the VM has all shut down you can still control it and see what's happening, or in this case what's not happening.
OK, I have a completely new F20 bare-metal host installed (from the DVD ISO, default desktop package set without libreoffice and no other additions), with updates-testing enabled, all updates applied, and the Virtualization group installed. One non-stock thing I'm doing is running kernel 3.13.0-0.rc4.git0.1.fc21. This is not a debug kernel.
Booting a Fedora 20 Live Desktop guest with virt-manager, I get to the desktop, immediately go to virt-manager's power-button icon pulldown menu, and choose Shut Down. It takes a while, but it does eventually shut down the VM. If I retry this with virsh shutdown, that also works, eventually.
However, I just tried yet again, using virsh console to see if it's the same bug as before, and I get the result you got: a black screen. But virsh list reports the VM as "pmsuspended" even though I clearly chose Shut Down. The serial console reports this:
Trying to enqueue job suspend.target/start/replace-irreversibly
Installed new job suspend.target/start as 1106
Installed new job systemd-suspend.service/start as 1107
Installed new job sleep.target/start as 1108
Enqueued job suspend.target/start as 1106
sleep.target changed dead -> active
Job sleep.target/start finished, result=done
About to execute: /usr/lib/systemd/systemd-sleep suspend
Forked /usr/lib/systemd/systemd-sleep as 1813
systemd-suspend.service changed dead -> start
Set up jobs progress timerfd.
Set up idle_pipe watch.
[ 98.381270] PM: Syncing filesystems ... done.
[ 98.772061] Freezing user space processes ... (elapsed 0.001 seconds) done.
[ 98.833275] Freezing remaining freezable tasks ... (elapsed 0.007 seconds) done.
[ 98.861637] Suspending console(s) (use no_console_suspend to debug)
So that's consistent with what you're seeing I think. I got sick of the user@0.service bug causing delays and formed the habit of using virsh destroy. Looks like a totally separate bug here.
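(To be clear, by virsh destroy I just mean the hard off, roughly the virtual equivalent of pulling the power cord:)

  # immediately kill the qemu process for the guest; the guest gets no chance
  # to sync disks or shut down cleanly, so it's a last resort
  virsh destroy fedora20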
Chris Murphy
On Dec 17, 2013, at 1:46 PM, Chris Murphy lists@colorremedies.com wrote:
[ 118.522477] PM: Syncing filesystems ... done.
[ 118.986063] Freezing user space processes ... (elapsed 0.005 seconds) done.
[ 119.045554] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.
[ 119.689884] PM: suspend of devices complete after 636.182 msecs
[ 119.693836] PM: late suspend of devices complete after 0.193 msecs
[ 119.702168] PM: noirq suspend of devices complete after 5.163 msecs
[ 119.704754] ACPI: Preparing to enter system sleep state S3
[ 119.706818] PM: Saving platform NVS memory
[ 119.708309] Disabling non-boot CPUs ...
[ 119.711815] Unregister pv shared memory for cpu 1
[ 119.717559] Broke affinity for irq 1
[ 119.718393] Broke affinity for irq 9
[ 119.718393] Broke affinity for irq 14
[ 119.836051] smpboot: CPU 1 is now offline
Yet another pmsuspend on a GUI Shut Down request. That's supposed to be the behavior if the host is shut down, but that's not what I'm doing. And virsh dompmwakeup doesn't work.
# virsh dompmwakeup fedora20
Domain fedora20 successfully woken up
Yet the serial console remains unresponsive until I disconnect and reconnect, and then I get:
[ 253.170908] ACPI Error: No installed handler for fixed event - PM_Timer (0), disabling (20130517/evevent-286)
[ 253.171840] ACPI Error: No installed handler for fixed event - SleepButton (3), disabling (20130517/evevent-286)
[ 253.171840] ACPI Error: Could not disable RealTimeClock events (20130517/evxfevnt-266)
[ 253.199507] ACPI Error: No installed handler for fixed event - PM_Timer (0), disabling (20130517/evevent-286)
[ 253.200441] ACPI Error: No installed handler for fixed event - SleepButton (3), disabling (20130517/evevent-286)
[ 253.200441] ACPI Error: Could not disable RealTimeClock events (20130517/evxfevnt-266)
[ 253.223361] ACPI Error: No installed handler for fixed event - PM_Timer (0), disabling (20130517/evevent-286)
[ 253.224296] ACPI Error: No installed handler for fixed event - SleepButton (3), disabling (20130517/evevent-286)
[ 253.224296] ACPI Error: Could not disable RealTimeClock events (20130517/evxfevnt-266)
So I'd say the VM is confused. I'm not sure where the problem is.
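For anyone who wants to compare notes, these are roughly the commands I'm checking with; fedora20 is just what I named my test VM:

  # state of the domain plus the reason libvirt recorded for that state
  virsh domstate fedora20 --reason
  # list all domains, including suspended and shut-off ones
  virsh list --all
  # try to wake a pmsuspended guest back up (the step that isn't working here)
  virsh dompmwakeup fedora20
  # attach to the guest's serial console to see what it's actually doing
  virsh console fedora20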
Chris Murphy
just catching up here and have to admit i'm out of my comfort zone ... is it safe to say that all of this represents an actual bug/issue and i wasn't doing anything imbecilic earlier? as in, when i was using the VMM to allegedly "Shut Down" a fedora 20 VM that didn't actually, you know, *shut down*?
should this be covered further on one of the virt lists? is someone going to file something on bugzilla that i can follow along with? thanks.
rday
On Dec 18, 2013, at 4:25 AM, Robert P. J. Day rpjday@crashcourse.ca wrote:
just catching up here and have to admit i'm out of my comfort zone ... is it safe to say that all of this represents an actual bug/issue and i wasn't doing anything imbecilic earlier?
Correct.
as in, when i was using the VMM to allegedly "Shut Down" a fedora 20 VM that didn't actually, you know, *shut down*?
Correct.
should this be covered further on one of the virt lists?
I have no preference.
is someone going to file something on bugzilla that i can follow along with? thanks.
https://bugzilla.redhat.com/show_bug.cgi?id=1044145
There's a workaround suggested there; I tried it, and it still doesn't work. So it might take some time to get sorted out. In the meantime, I'm using 'poweroff' within the VM itself, either from the command line or via the DE's GUI option, rather than trying to shut it down from outside the VM.
GNOME Boxes is another VM option you might look at. It appears to shut down VMs correctly when the shutdown is triggered externally.
Chris Murphy
On Dec 17, 2013, at 1:46 PM, Chris Murphy lists@colorremedies.com wrote:
So that's consistent with what you're seeing I think. I got sick of the user@0.service bug causing delays and formed the habit of using virsh destroy. Looks like a totally separate bug here.
https://bugzilla.redhat.com/show_bug.cgi?id=1044145
I think I figured it out. This is some GUI confusion. The option is Shut Down, but due to how the VM is launched, the -no-shutdown option is used by default, which makes the shut down command a pmsuspend command instead.
This may end up getting closed as NOTABUG. But I think it's sufficiently confusing that at least the term used should be re-evaluated.
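If anyone wants to confirm that on their own setup, libvirt logs the full qemu command line per domain, so something like this should show whether -no-shutdown is in there (fedora20 is just my test VM's name):

  # the full qemu invocation is recorded in the per-domain libvirt log
  grep -o -e '-no-shutdown' /var/log/libvirt/qemu/fedora20.log
  # or just look at the running qemu process and eyeball its arguments
  ps -ef | grep qemu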
Chris Murphy
On 12/17/2013 11:16 AM, Chris Murphy wrote:
If you do it within the VM, what result do you get with 'poweroff'? systemd maps init 0 to poweroff, as of course there is no init with systemd, but it maintains compatibility with init scripts so I expect init 0 to work the same as poweroff.
I don't know how the mapping is done, but if it were me, init would be a simple bash script with a case statement that made the requested system call depending on the argument. Just a thought.
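Purely hypothetical, but something along these lines is what I'm picturing; as I understand it, in reality /sbin/init is just a symlink that ends up at the systemd binary, and systemd itself looks at the arguments:

  #!/bin/bash
  # hypothetical sketch of an init compatibility wrapper; this is NOT how
  # Fedora actually implements it
  case "$1" in
      0) exec systemctl poweroff ;;
      6) exec systemctl reboot ;;
      1) exec systemctl rescue ;;
      [2-4]) exec systemctl isolate multi-user.target ;;
      5) exec systemctl isolate graphical.target ;;
      *) echo "usage: $0 0-6" >&2; exit 1 ;;
  esac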
Quoting Joe Zeff joe@zeff.us:
I don't know how the mapping is done, but if it were me, init would be a simple bash script with a case statement that made the requested system call depending on the argument. Just a thought.
at this point, i'm not sure how much more helpful i can be. is anyone else seeing this behaviour? be annoying to find out it's just me. :-(
rday
On Dec 17, 2013, at 12:47 PM, Robert P. J. Day rpjday@crashcourse.ca wrote:
at this point, i'm not sure how much more helpful i can be. is anyone else seeing this behaviour? be annoying to find out it's just me. :-(
I've seen it before; in the F18/F19 time frame, my recollection is that it was always like this.
But for the last ~6 months I've been connecting to the host computer via 'ssh blah@f20s.local -L 5900:127.0.0.1:5900', using virsh to manipulate the VM from the outside, and using TigerVNC pointed at 127.0.0.1 (as well as ssh) to control the VM from the inside. So I'm not regularly using virt-manager.
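Spelled out, the routine is roughly this, where f20s.local is my host box and the VM's VNC display is bound to 127.0.0.1:5900 on that host:

  # forward the host's local VNC port to my workstation and get a shell on the
  # host at the same time
  ssh blah@f20s.local -L 5900:127.0.0.1:5900
  # in that shell on the host, drive the VM from the outside with virsh
  virsh start fedora20
  virsh shutdown fedora20
  # back on the workstation, point TigerVNC at the forwarded port to see the
  # VM's display
  vncviewer 127.0.0.1:5900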
Chris Murphy