Hardware acceleration in my Win7 virtual machine
by David C. Mores
Some applications (e.g. Solitaire) in my Win7 [qemu] virtual machine
start with a warning that "Hardware acceleration is either disabled or
not supported by your video card driver ...". I understand
that the qemu virtual VGA driver is simple. However, is there any
activity going on to make a VGA driver that has virtualized access to
video hardware acceleration in the host system? Is it available?
I'm running Fedora 21 with qemu 2.1.3-3
Dave
8 years, 2 months
I/O errors in guest when cache=none, qcow2 on Btrfs
by Chris Murphy
Could someone check out this bug and see if it needs upstream
attention? It's currently filed against the kernel, but I have no idea
whether it's the libvirt or qemu upstream that should be made aware of it.
The gist is that on Fedora 21 and 22, with virtio-blk + cache=none +
qcow2 on Btrfs, the guest OS (regardless of the file system it uses)
starts to experience many I/O errors. If the qcow2 is on XFS, or if
cache=writeback or writethrough, the problem doesn't happen.
Regression testing shows the problem does not happen with Fedora 20's
versions of libvirt and qemu, even with newer kernels. So maybe it's
not a kernel problem, or maybe it's a collision between the kernel and
libvirt or qemu, hence the inquiry. The upstream Btrfs developers are
aware of this bug and are looking into it.
The fallout of the bug is that gnome-boxes is affected, since it
currently uses cache=none by default (and there's no way to change
this in the GUI) when the qcow2 is on Btrfs.
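For virsh-managed guests, a minimal sketch of switching the cache mode
away from cache=none (the guest name here is an example, not from the
bug report):

$ virsh edit f21-guest
# in the affected disk's <driver> element, change cache='none' to
# e.g. cache='writeback':
#   <driver name='qemu' type='qcow2' cache='writeback'/>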
blk_update_request: I/O error, dev vda, sector XXXXXXXX when qcow2 is on Btrfs
https://bugzilla.redhat.com/show_bug.cgi?id=1204569
--
Chris Murphy
8 years, 2 months
Understanding the fence_virtd serial listener
by Ian Pilcher
I'm trying to understand the fence_virt.conf options for the serial
listener, specifically the "uri" and "path" options.
* uri - Is this option only used when using the vmchannel mode? I
can't figure out what it could possibly be doing in serial mode.
* path - I *really* can't figure this one out. In fact, I'm completely
mystified by how fence_virtd figures out which sockets it should listen
on for VM requests. (The stanza I'm staring at is sketched below.)
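A minimal sketch of the serial listener stanza, using only the option
names discussed above; the values are illustrative guesses on my part,
not documented defaults:

listeners {
        serial {
                uri = "qemu:///system";
                path = "/var/run/fence_virt";
                mode = "serial";
        }
}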
Anyone?
--
========================================================================
Ian Pilcher arequipeno(a)gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================
8 years, 2 months
CfP 10th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '15)
by VHPC 15
=================================================================
CALL FOR PAPERS
10th Workshop on Virtualization in High-Performance Cloud Computing (VHPC
'15)
held in conjunction with Euro-Par 2015, August 24-28, Vienna, Austria
(Springer LNCS)
=================================================================
Date: August 25, 2015
Workshop URL: http://vhpc.org
Paper Submission Deadline: May 22, 2015
CALL FOR PAPERS
Virtualization technologies constitute a key enabling factor for
flexible resource management in modern data centers, cloud environments,
and increasingly in HPC as well. Providers need to dynamically manage
complex infrastructures in a seamless fashion for varying workloads and
hosted applications, independently of the customers deploying software
or users submitting highly dynamic and heterogeneous workloads. Thanks
to virtualization, we have the ability to manage vast computing and
networking resources dynamically and close to the marginal cost of
providing the services, which is unprecedented in the history of
scientific and commercial computing.
Various virtualization technologies contribute to the overall picture
in different ways: machine virtualization, with its capability to enable
consolidation of multiple under-utilized servers with heterogeneous
software and operating systems (OSes), and its capability to
live-migrate a fully operating virtual machine (VM) with a very short
downtime, enables novel and dynamic ways to manage physical servers;
OS-level virtualization, with its capability to isolate multiple
user-space environments and to allow for their co-existence within the
same OS kernel, promises to provide many of the advantages of machine
virtualization with high levels of responsiveness and performance; I/O
virtualization allows physical network adapters to take traffic from
multiple VMs; network virtualization, with its capability to create
logical network overlays that are independent of the underlying physical
topology and IP addressing, provides the fundamental ground on top of
which evolved network services can be realized with an unprecedented
level of dynamicity and flexibility. These technologies have to be
inter-mixed and integrated in an intelligent way to support workloads
that are increasingly demanding in terms of absolute performance,
responsiveness and interactivity, and that have to respect
well-specified Service-Level Agreements (SLAs), as needed for
industrial-grade provided services.
Indeed, among emerging and increasingly interesting application domains
for virtualization are big-data application workloads in cloud
infrastructures and interactive and real-time multimedia services in
the cloud, including real-time big-data streaming platforms such as
those used in real-time analytics, which nowadays support a plethora of
application domains. Distributed cloud infrastructures promise to offer
unprecedented responsiveness levels for hosted applications, but that
is only possible if the underlying virtualization technologies can
overcome most of the latency impairments typical of current virtualized
infrastructures (e.g., far worse tail latency).
The Workshop on Virtualization in High-Performance Cloud Computing
(VHPC) aims to bring together researchers and industrial practitioners
facing the challenges posed by virtualization, in order to foster
discussion, collaboration, and mutual exchange of knowledge and
experience, enabling research to ultimately provide novel solutions for
the virtualized computing systems of tomorrow.
The workshop will be one day in length, composed of 20-minute paper
presentations, each followed by a 10-minute discussion section, plus
lightning talks limited to 5 minutes. Presentations may be accompanied
by interactive demonstrations.
TOPICS
Topics of interest include, but are not limited to:
- Virtualization in supercomputing environments, HPC clusters, cloud HPC
and grids
- Optimizations of virtual machine monitor platforms, hypervisors and
OS-level virtualization
- Hypervisor and network virtualization QoS and SLAs
- Cloud based network and system management for SDN and NFV
- Management, deployment and monitoring of virtualized environments
- Performance measurement, modelling and monitoring of virtualized/cloud
workloads
- Programming models for virtualized environments
- Cloud reliability, fault-tolerance, high-availability and security
- Heterogeneous virtualized environments, virtualized accelerators, GPUs
and co-processors
- Optimized communication libraries/protocols in the cloud and for HPC in
the cloud
- Topology management and optimization for distributed virtualized
applications
- Cluster provisioning in the cloud and cloud bursting
- Adaptation of emerging HPC technologies (high performance networks,
RDMA, etc.)
- I/O and storage virtualization, virtualization aware file systems
- Job scheduling/control/policy in virtualized environments
- Checkpointing and migration of VM-based large compute jobs
- Cloud frameworks and APIs
- Energy-efficient / power-aware virtualization
IMPORTANT DATES
April 29, 2015 - Abstract registration
May 22, 2015 - Full paper submission
June 19, 2015 - Acceptance notification
October 2, 2015 - Camera-ready version due
August 25, 2015 - Workshop Date
TPC
CHAIR
Michael Alexander (chair), TU Wien, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Balazs Gerofi (co-chair), RIKEN Advanced Institute for Computational
Science, Japan
PROGRAM COMMITTEE
Stergios Anastasiadis, University of Ioannina, Greece
Costas Bekas, IBM Zurich Research Laboratory, Switzerland
Jakob Blomer, CERN
Ron Brightwell, Sandia National Laboratories, USA
Roberto Canonico, University of Napoli Federico II, Italy
Julian Chesterfield, OnApp, UK
Patrick Dreher, MIT, USA
William Gardner, University of Guelph, Canada
Kyle Hale, Northwestern University, USA
Marcus Hardt, Karlsruhe Institute of Technology, Germany
Iftekhar Hussain, Infinera, USA
Krishna Kant, Temple University, USA
Eiji Kawai, National Institute of Information and Communications
Technology, Japan
Romeo Kinzler, IBM, Switzerland
Kornilios Kourtis, ETH, Switzerland
Nectarios Koziris, National Technical University of Athens, Greece
Massimo Lamanna, CERN
Che-Rung Roger Lee, National Tsing Hua University, Taiwan
Helge Meinhard, CERN
Jean-Marc Menaud, Ecole des Mines de Nantes, France
Christine Morin, INRIA, France
Amer Qouneh, University of Florida, USA
Seetharami Seelam, IBM Watson Research Center, USA
Josh Simons, VMWare, USA
Borja Sotomayor, University of Chicago, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Yasuhiro Watashiba, Osaka University, Japan
Chao-Tung Yang, Tunghai University, Taiwan
PAPER SUBMISSION-PUBLICATION
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include abstract, key words, the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables
and figures at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work.
Accepted papers will be published in the Springer LNCS series - the
format must be according to the Springer LNCS Style. Initial
submissions are in PDF; authors of accepted papers will be requested
to provide source files.
Format Guidelines:
http://www.springer.de/comp/lncs/authors.html
Submission Link:
https://easychair.org/conferences/?conf=europar2015ws
GENERAL INFORMATION
The workshop is one day in length and will be held in conjunction with
Euro-Par 2015, 24-28 August, Vienna, Austria
8 years, 2 months
How to revert an external snapshot
by Dario Lesca
I have generated an external snapshot with this command:
> vm='win7-64'
>
> virsh snapshot-create-as "$vm" "$vm-snap1" "snap1 description" \
> --diskspec vda,file="/virt/$vm-snap1.qcow2" \
> --disk-only --atomic
My virsh / libvirt is the latest from Fedora 21:
> [root@dodo:/virt]# virsh --version
> 1.2.9.2
Now I have this situation:
> [root@dodo:/virt]# ll win7-64*
> -rw-r--r-- 1 qemu qemu 17182753280 4 mar 14.14 win7-64.qcow2
> -rw-r--r-- 1 root root 558628864 4 mar 14.35 win7-64-snap1.qcow2
>
> [root@dodo:/virt]# virsh snapshot-list win7-64
> Nome Creation Time Stato
> ------------------------------------------------------------
> win7-64-snap1 2015-03-04 14:21:15 +0100 shutoff
But if I run the revert command this is the result:
> [root@dodo:/virt]# virsh snapshot-revert win7-64 win7-64-snap1
> errore: unsupported configuration: revert to external snapshot not supported yet
Is there some way to revert my VM to its previous state?
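From what I've read, the usual manual workaround is roughly the sketch
below (assuming I want to discard everything written since the
snapshot), but I'd like confirmation before I try it:

> virsh shutdown win7-64
> virsh edit win7-64
> # point the vda <source file=.../> back at /virt/win7-64.qcow2
> # (it currently points at the /virt/win7-64-snap1.qcow2 overlay)
> virsh snapshot-delete win7-64 win7-64-snap1 --metadata
> rm /virt/win7-64-snap1.qcow2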
Many thanks for the help.
--
Dario Lesca
(sent from my Linux Fedora 21 with Gnome 3.14)
8 years, 2 months
Re: [fedora-virt] Problem assigning an NVIDIA Quadro K2000 to a guest OS
by storri
Can I use an installed version of Windows 7 on a real partition with virt? I wanted to get something fully working, so I made a Windows 7 installation on a UEFI drive last night. That way I can boot into Windows directly or via virt-manager.
Stephen
Sent from my Verizon Wireless 4G LTE smartphone
-------- Original message --------
From: Alex Williamson <alex.williamson(a)redhat.com>
Date: 03/08/2015 10:13 PM (GMT-05:00)
To: Stephen Torri <storri(a)torri.org>
Cc: virt <virt(a)lists.fedoraproject.org>
Subject: Re: [fedora-virt] Problem assigning an NVIDIA Quadro K2000 to a guest OS
On Sun, 2015-03-08 at 17:30 -0400, Stephen Torri wrote:
> I followed a presentation
> (www.linux-kvm.org/wiki/images/b/b4/2012-forum-VFIO.pdf) to try to
> enable VGA passthrough for the guest OS. This is my first attempt at
> using QEMU+KVM to install a guest OS. My intention is to have the guest
> OS have direct access to the GPU for 3d gaming.
>
> Problem: Not sure if it is set up right. Launching the VM appears to be
> ok. I can open windows and such but if I try to change the resolution
> from 800x600 to something higher I get a lot of garbage on the screen.
> Moving the mouse pointer causes the background image to draw over the
> task bar.
>
> My current setup:
>
> GPU1: NVIDIA GeForce 780 Ti (Two monitors currently connected)
> GPU2: NVIDIA Quadro K2000 (No monitors connected)
> - NVIDIA 346.47 proprietary drivers installed.
> - Xinerama is disabled due to possible GDM bug.
>
> Goal:
>
> (Not using VM): Linux desktop stretched across both monitors
> (Using VM): Guest OS has full screen rendering on one monitor while
> Linux is on the other.
This is not the way GPU assignment works. GPU assignment of Quadro
cards is very much like assignment of a NIC. When the NIC is assigned
to the VM, the network cable attached to the NIC is used by the VM.
When a GPU is assigned, the accelerated graphics channel is through the
video outputs of the GPU card. Somehow the idea of rendering into a
window on the host is a common misconception. There is some opportunity
to do this, but not with Windows 7 as a guest, and at a huge cost to
performance. You can of course use the input selection button on your
monitor to select an input channel connected to the assigned GPU.
> Steps done so far:
> 1. Used virt-manager to create a storage device on a SSD (50GB size)
> 2. Installed Windows 7 in VM
> 3. Updated Windows 7
> 4. Enabled virtualization support and added PCI 0000:03:00.0 as a
> physical PCI device to the VM.
> 5. Thinking I was not done (due to ignorance) I followed the
> presentation I linked above to find the device to assign:
>
> $ sudo lspci -nn | grep NVIDIA
> 02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK110B
> [GeForce GTX 780 Ti] [10de:100a] (rev a1)
> 02:00.1 Audio device [0403]: NVIDIA Corporation GK110 HDMI Audio
> [10de:0e1a] (rev a1)
> 03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107GL
> [Quadro K2000] [10de:0ffe] (rev a1)
> 03:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio
> Controller [10de:0e1b] (rev a1)
>
> 6. Worked on finding the group:
>
> $ sudo readlink /sys/bus/pci/devices/0000\:03\:00.0/iommu_group
> ../../../../kernel/iommu_groups/17
>
> 7. Now that I have the group I worked to find the devices in the group:
>
> $ sudo ls /sys/bus/pci/devices/0000\:03\:00.0/iommu_group/devices
> 0000:03:00.0 0000:03:00.1
>
> *** Ok. So two devices need to be dealt with here.
>
> 8. I unbound each from its device driver
>
> $ echo 0000:03:00.0 | sudo tee \
> /sys/bus/pci/devices/0000:03:00.0/driver/unbind
>
> $ echo 0000:03:00.1 | sudo tee \
> /sys/bus/pci/devices/0000:03:00.1/driver/unbind
>
> 9. Found the vendor and device ID for each:
>
> $ sudo lspci -n -s 03:00.0
> 03:00.0 0300: 10de:0ffe (rev a1)
> $ sudo lspci -n -s 03:00.1
> 03:00.1 0403: 10de:0e1b (rev a1)
>
> 10. Now I bind them to vfio-pci
> $ echo 10de 0ffe | sudo tee \
> /sys/bus/pci/drivers/vfio-pci/new_id
> $ echo 10de 01eb | sudo tee \
> /sys/bus/pci/drivers/vfio-pci/new_id
>
> ** Not sure about this. I am not sure I have bound them correctly.
>
> 11. Now checking them I see:
> $ ls /dev/vfio
> 17 vfio
It's good that you verified the IOMMU group only contains the GPU
itself and its audio function, but beyond that, these steps aren't
necessary. So long as the hostdev device in your domain XML is set to
managed, which is the default when using virt-manager, the binding of
the assigned device to vfio is handled by libvirt.
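For reference, a minimal sketch of what that managed hostdev entry
looks like in the domain XML (the PCI address matches your Quadro;
virt-manager generates the equivalent for you):

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </source>
  </hostdev>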
However, it's generally recommended to prevent host drivers from
binding to the GPU (they're not as well practiced at releasing devices
as NIC drivers), so we'll want to configure pci-stub to claim it rather
than nouveau. We also need to deal with the audio function since it's
part of the IOMMU group, so we'll handle it the same way.
To do this, take the PCI vendor and device IDs that you show above and
add the following to your kernel commandline:
pci-stub.ids=10de:0ffe,10de:0e1b
You can do this by editing /etc/sysconfig/grub and re-running
grub2-mkconfig and rebooting. After reboot, if you run "lspci -ks 3:",
pci-stub should show as the driver in use for both GPU and audio
functions.
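A rough sketch of those steps (the grub.cfg path below assumes a BIOS
install; UEFI installs write to /boot/efi/EFI/fedora/grub.cfg instead):

$ sudo vi /etc/sysconfig/grub     # append to GRUB_CMDLINE_LINUX:
                                  #   pci-stub.ids=10de:0ffe,10de:0e1b
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
$ sudo reboot
$ lspci -ks 3:                    # pci-stub should now own 03:00.0/.1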
Now, attach a monitor to the Quadro card and you should find that when
the Nvidia driver is initialized in the guest, the console window in
virt-manager freezes (Windows 7 disables that graphics controller) and
the monitor initializes. If you were to use Windows 8, the guest would
be able to use both the virt-manager window and the physical monitor
simultaneously as if it were a dual-monitor setup. In this mode you can
even get 3D graphics in the virt-manager window, but *at greatly reduced
performance*. Hope that helps,
Alex
8 years, 2 months
Problem assigning an NVIDIA Quadro K2000 to a guest OS
by storri
I followed a presentation
(www.linux-kvm.org/wiki/images/b/b4/2012-forum-VFIO.pdf) to try to
enable VGA passthrough for the guest OS. This is my first attempt at
using QEMU+KVM to install a guest OS. My intention is to have the guest
OS have direct access to the GPU for 3d gaming.
Problem: Not sure if it is set up right. Launching the VM appears to be
ok. I can open windows and such but if I try to change the resolution
from 800x600 to something higher I get a lot of garbage on the screen.
Moving the mouse pointer causes the background image to draw over the
task bar.
My current setup:
GPU1: NVIDIA GeForce 780 Ti (Two monitors currently connected)
GPU2: NVIDIA Quadro K2000 (No monitors connected)
- NVIDIA 346.47 proprietary drivers installed.
- Xinerama is disabled due to possible GDM bug.
Goal:
(Not using VM): Linux desktop stretched across both monitors
(Using VM): Guest OS has full screen rendering on one monitor while
Linux is on the other.
Steps done so far:
1. Used virt-manager to create a storage device on a SSD (50GB size)
2. Installed Windows 7 in VM
3. Updated Windows 7
4. Enabled virtualization support and added PCI 0000:03:00.0 as a
physical PCI device to the VM.
5. Thinking I was not done (due to ignorance) I followed the
presentation I linked above to find the device to assign:
$ sudo lspci -nn | grep NVIDIA
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK110B
[GeForce GTX 780 Ti] [10de:100a] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation GK110 HDMI Audio
[10de:0e1a] (rev a1)
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107GL
[Quadro K2000] [10de:0ffe] (rev a1)
03:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio
Controller [10de:0e1b] (rev a1)
6. Worked on finding the group:
$ sudo readlink /sys/bus/pci/devices/0000\:03\:00.0/iommu_group
../../../../kernel/iommu_groups/17
7. Now that I have the group I worked to find the devices in the group:
$ sudo ls /sys/bus/pci/devices/0000\:03\:00.0/iommu_group/devices
0000:03:00.0 0000:03:00.1
*** Ok. So two devices need to be dealt with here.
8. I unbound each from its device driver
$ echo 0000:03:00.0 | sudo tee \
/sys/bus/pci/devices/0000:03:00.0/driver/unbind
$ echo 0000:03:00.1 | sudo tee \
/sys/bus/pci/devices/0000:03:00.1/driver/unbind
9. Found the vendor and device ID for each:
$ sudo lspci -n -s 03:00.0
03:00.0 0300: 10de:0ffe (rev a1)
$ sudo lspci -n -s 03:00.1
03:00.1 0403: 10de:0e1b (rev a1)
10. Now I bind them to vfio-pci
$ echo 10de 0ffe | sudo tee \
/sys/bus/pci/drivers/vfio-pci/new_id
$ echo 10de 01eb | sudo tee \
/sys/bus/pci/drivers/vfio-pci/new_id
** Not sure about this. I am not sure I have bound them correctly.
11. Now checking them I see:
$ ls /dev/vfio
17 vfio
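12. From the presentation, the next step would be launching qemu
directly with something like the sketch below (I haven't tried this;
the memory size and disk path are placeholders):

$ qemu-system-x86_64 -enable-kvm -m 4096 \
    -drive file=/path/to/win7.img,format=raw \
    -device vfio-pci,host=03:00.0 \
    -device vfio-pci,host=03:00.1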
8 years, 2 months