Re: [fedora-virt] Routing to guests
by Robert Thiem
> From: Philip Rhoades
> I can ssh from/to the host/guest OK but how do I set up a route (or
> whatever is necessary) so that another machine:
> eth0: 192.168.0.12
> can ssh to the guest? "ssh 192.168.122.68" gives "no route to host". I have
> looked at http://docs.fedoraproject.org/virtualization-guide/f12/en-US/html/ but
> the problem does not seem to be covered there.
Alexander is correct in saying that bridging would allow you to do that.
There are two networking setups discussed in the guide.
The first is NAT (network address translation), in which the guests are
given "private" IP addresses and any outbound traffic appears to come
from the host machine's IP address. This is the same as the setup on your
ADSL router, where the internal network machines get addresses of
192.168.x.x but the internet sees your requests as coming from the IP
address of your router.
There should be lots of documentation in Linux firewalling guides under
the sections on NAT (possibly called IP masquerading in some). Have a look
at these for information on port forwarding to expose services
inside the virtual machine (such as ssh).
The other option is bridging. This shares the physical network interface
of the host with the guest. In this case the VM acts as though it's a
machine plugged into the same subnet as the host; its services are
accessible like those of the host, and it's as vulnerable to attack as the
host.
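For reference, once a host bridge exists (here assumed to be called br0), the
guest's interface element in the libvirt XML just points at it; a minimal sketch:
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>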
Robert
virt-manager with spice support
by Gianluca Cecchi
On Wed Feb 16 08:04:49 UTC 2011 Avi Alkalay wrote:
> After migrating to QEMU 0.14, my Windows 7 VM booted recognizing and
> installing new HW. But none of it was a QXL device, even though virt-manager
> defines a qxl device and I can confirm qemu-kvm is being called with '-vga qxl'.
> Inside the VM there is only a plain 'Standard VGA Graphics Adapter'. I also
> tried to manually install the QXL driver via qxl.inf on this W7 VM with
> no success. Windows shows no manufacturer (and thus offers no further action)
> when the qxl.inf file is selected.
>
> On the other hand, I defined a spice display via virt-manager and I can
> successfully connect to the VM console using spicy.
I got this too, and win7 worked with the spice display and access via spicec.
To force-apply qxl.inf I had to go through:
Device Manager -> Standard VGA Graphics Adapter -> Driver -> Update Driver
-> Browse my computer for driver software.
You then have two options:
a) "Search for driver software in this location" -> you can browse to where
you have qxl.inf, but Windows says the driver is already up to date and you
cannot force it; you still see vgapnp.sys as the driver in use.
b) "Let me pick from a list" -> you get the "Have Disk..." option, you select
"Display adapters" as the hardware type, and when you specify qxl.inf you get
a warning about the driver not being signed, but it installs apparently OK
and win7 asks you to restart.
From this point you are unable to reach the login prompt: you get a
shutoff after the black "Starting Windows" coloured screen.
On the next start there is the option to repair or to start normally... and
the latter always ends in a shutoff of the VM.
If I reset the video model to vga and then restart, win7 is able to boot
normally again as a standard VGA graphics adapter using vgapnp.sys.
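For reference, the video model being switched here is the <video> element in
the guest XML; a minimal sketch of the two variants:
<!-- qxl, used with the spice display -->
<video>
  <model type='qxl'/>
</video>
<!-- plain vga, which lets win7 boot with vgapnp.sys -->
<video>
  <model type='vga'/>
</video>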
Gianluca
qcow2 and virtio in virt-preview and rawhide
by Gianluca Cecchi
Hello,
Testing on F14 + the virt-preview repo with qemu 0.13.91:
Installing a CentOS 5.5 guest with a virtio disk in qcow2 format on a
directory-based storage pool seems quite slow (a default text install
of 2.5 GB of data takes about 33 minutes).
Changing the cache mode to writeback improves this very much.
What is the risk of writeback, and what is the default?
What would be advisable as a good compromise between speed and data
safety in this version of qemu?
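For reference, the cache mode is set on the disk's <driver> element in the
guest XML; a minimal sketch, where the image path is only an example:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/var/lib/libvirt/images/centos55.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>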
Thanks,
Gianluca
Re: [fedora-virt] [Bug 579348] libvirt: kvm disk error after first stage install of Win2K or WinXP
by KC8LDO
I see it's listed as fixed in the qemu-0.13.0-1.fc14 packages. That's nice.
Where is the F13 version? This bug is opened against F13 and has been for a
long time.
----- Original Message -----
From: <bugzilla(a)redhat.com>
To: <kc8ldo(a)arrl.net>
Sent: Saturday, February 19, 2011 11:55 PM
Subject: [Bug 579348] libvirt: kvm disk error after first stage install of
Win2K or WinXP
> Please do not reply directly to this email. All additional
> comments should be made in the comments box of this bug.
>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=579348
>
> pmdyermms(a)gmail.com changed:
>
> What |Removed |Added
> ----------------------------------------------------------------------------
> CC| |pmdyermms(a)gmail.com
>
virt-manager with spice support
by Gianluca Cecchi
On Fri Feb 11 22:08:13 UTC 2011 Frédéric Grelot wrote:
> I'm just trying this new version, but my VMs crash as soon as I enable qxl devices...
> The VM stops when the driver gets loaded by the OS (approximately: on Fedora 13, 14 and WinXP guests, right after the progress bars).
> As soon as I disable the qxl device, I have no more problems...
>
> The logs show:
> qemu-kvm: /builddir/build/BUILD/qemu-kvm-0.14.0/qemu-kvm.c:1724: kvm_mutex_unlock: Assertion `!cpu_single_env' failed.
I have the same error ...
I thought it was related to spice with win7 in particular, and so I
posted to the spice-devel list:
http://lists.freedesktop.org/archives/spice-devel/2011-February/002697.html
But perhaps it is a more general problem...
My system is F14 with virt-preview and:
spice-client-0.7.2-1.fc14.x86_64
python-virtinst-0.500.5-1.fc14.noarch
qemu-common-0.14.0-0.1.201102107aa8c46.fc14.x86_64
gpxe-roms-qemu-1.0.1-1.fc14.noarch
libvirt-client-0.8.7-1.fc14.x86_64
spice-server-0.7.2-1.fc14.x86_64
spice-gtk-0.5-1.fc14.x86_64
qemu-kvm-0.14.0-0.1.201102107aa8c46.fc14.x86_64
spice-gtk-tools-0.5-1.fc14.x86_64
virt-manager-0.8.6-1.fc14.noarch
qemu-system-x86-0.14.0-0.1.201102107aa8c46.fc14.x86_64
libvirt-python-0.8.7-1.fc14.x86_64
spice-gtk-python-0.5-1.fc14.x86_64
spice-glib-0.5-1.fc14.x86_64
qemu-img-0.14.0-0.1.201102107aa8c46.fc14.x86_64
libvirt-0.8.7-1.fc14.x86_64
Thanks for your time.
Gianluca
Fwd: LUN over FC and VM
by John Brier
oops, meant to send this to the list
---------- Forwarded message ----------
From: John Brier <johnbrier(a)gmail.com>
Date: Fri, Feb 11, 2011 at 4:29 PM
Subject: Re: [fedora-virt] LUN over FC and VM
To: Olivier Renault <orenault(a)redhat.com>
On Thu, Feb 10, 2011 at 12:55 AM, Olivier Renault <orenault(a)redhat.com> wrote:
> On 02/09/2011 06:41 PM, "Jóhann B. Guðmundsson" wrote:
>> I'm wondering if I can hook up a LUN directly to a KVM VM through FC so
>> the VM can work directly on the storage LUN, as opposed to configuring the
>> LUN on the server hosting all the VMs, partitioning it, mounting it and
>> then creating an "image" on top of that which is then exposed to the VM as
>> additional storage?
There is also this:
https://fedoraproject.org/wiki/Features/VirtStorageManagement
As I understand it, though, NPIV support currently only allows tools on
the host (say, virt-manager) to control the HBA; you can't send SCSI
commands from the VM to the HBA, so you can't rescan the bus for new
LUNs, for example.
But PCI passthrough should be able to do that.
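For what it's worth, without NPIV the host's view of the LUN can still be
handed to the guest whole as a block device; a minimal sketch of the disk
element, where the by-id path is only a placeholder:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/scsi-EXAMPLE-LUN'/>
  <target dev='vdb' bus='virtio'/>
</disk>
The guest then sees the whole LUN as a disk, but SCSI commands still terminate
on the host, so bus rescans and the like remain a host-side operation.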
Re: [fedora-virt] virt-manager with spice support
by Frédéric Grelot
I'm just trying this new version, but my VMs crash as soon as I enable qxl devices...
The VM stops when the driver gets loaded by the OS (approximately: on Fedora 13, 14 and WinXP guests, right after the progress bars).
As soon as I disable the qxl device, I have no more problems...
The logs show:
qemu-kvm: /builddir/build/BUILD/qemu-kvm-0.14.0/qemu-kvm.c:1724: kvm_mutex_unlock: Assertion `!cpu_single_env' failed.
2011-02-11 20:38:01.012: shutting down
And the command line is:
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -S -M pc-0.13 -enable-kvm -m 2048 -smp 1,sockets=1,cores=1,threads=1 -name spice-f14-i386 -uuid 21b91c42-9c60-1b95-e0bc-4f938a1e23c6 -nodefconfig -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/spice-f14-i386.monitor,server,nowait -mon chardev=monitor,mode=control -rtc base=utc -boot c -drive file=/dev/vg_raid10/vm_spice_f14_root,if=none,id=drive-virtio-disk0,boot=on,format=raw -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -drive file=/raid/iso_images/Fedora-14-i386-DVD.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=49,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:03:54:16,bus=pci.0,addr=0x3 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -device usb-tablet,id=input0 -spice port=5932,addr=0.0.0.0,disable-ticketing -vga qxl -device AC97,id=sound0,bus=pci.0,addr=0x6 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Any idea about this?
On the host :
# uname -a
Linux server.domain.net 2.6.35.11-83.fc14.x86_64 #1 SMP Mon Feb 7 07:06:44 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
All the non-spice VMs are OK.
Frederic.
----- Original Message -----
> On Thu, 2011-02-10 at 14:47 -0200, Avi Alkalay wrote:
> > Meanwhile I can't test or play with any of these features because
> > the newest packaged qemu I can find is 0.13, which doesn't support
> > parameters that libvirt generates.
> >
> >
> > :-(
> >
> >
> > I'm not willing to compile qemu 0.14 just to test spice.
> >
> >
> > Unless you guys have another tip or hidden repo to point me...
> >
> Updated qemu is now in rawhide, and virt-preview. Coming soon to
> F15
> updates-testing.
>
> Justin
>
>
Re: [fedora-virt] virt-manager with spice support
by Avi Alkalay
Thanks Cole. On Fedora, it worked with '--enablerepo=updates-testing', but
apparently there is a bug in libvirt where it fails to transform a spice
configuration entry from the XML file into a correct '-spice' qemu
command-line argument.
On the XML, libvirt/virt-manager generates:
<graphics type='spice' autoport='yes' listen='127.0.0.1'/>
But the VM fails to boot with this error:
qemu-kvm: -spice port=5901,addr=127.0.0.1,disable-ticketing: Invalid
parameter 'addr'
parse error: port=5901,addr=127.0.0.1,disable-ticketing
So 'addr' is invalid for qemu -spice and the man page doesn't say much about
it.
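As an aside, one way to double-check what libvirt turns the XML into (assuming
the domain definition has been dumped to a file, here hypothetically guest.xml)
is:
virsh domxml-to-native qemu-argv guest.xml
which prints the full qemu command line without trying to start the guest.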
Any hint? My software levels:
bash# rpm -qa | egrep -e "virt|qemu|spice" | grep -v virtuoso | sort
gpxe-roms-qemu-1.0.1-1.fc14.noarch
kmod-kqemu-1.4.0-0.2.pre1.fc14.23.x86_64
kmod-kqemu-2.6.35.10-74.fc14.x86_64-1.4.0-0.2.pre1.fc14.23.x86_64
kqemu-1.4.0-0.5.pre1.fc14.noarch
libvirt-0.8.7-1.fc14.x86_64
libvirt-client-0.8.7-1.fc14.x86_64
libvirt-python-0.8.7-1.fc14.x86_64
python-virtinst-0.500.5-1.fc14.noarch
qemu-common-0.13.0-1.fc14.x86_64
qemu-img-0.13.0-1.fc14.x86_64
qemu-kvm-0.13.0-1.fc14.x86_64
qemu-system-x86-0.13.0-1.fc14.x86_64
qemu-user-0.13.0-1.fc14.x86_64
spice-client-0.7.2-1.fc14.x86_64
spice-glib-0.5-1.fc14.x86_64
spice-gtk-0.5-1.fc14.x86_64
spice-gtk-python-0.5-1.fc14.x86_64
spice-gtk-tools-0.5-1.fc14.x86_64
spice-protocol-0.7.0-2.fc14.noarch
spice-server-0.7.2-1.fc14.x86_64
virt-manager-0.8.6-1.fc14.noarch
virt-top-1.0.4-3.fc13.x86_64
virt-viewer-0.2.1-1.fc13.x86_64
Thanks in advance,
Avi
On Tue, Feb 8, 2011 at 12:18, Cole Robinson <crobinso(a)redhat.com> wrote:
> On 02/05/2011 02:50 PM, Avi Alkalay wrote:
> > Hello
> >
> > I'm trying to install your virt-manager 0.8.6 RPM for F14 with spice
> support
> > but yum can't find spice-gtk-python package.
> > Which repo is it available so I can get it running ?
> >
>
> You can get it from rawhide. So
>
> yum --enablerepo=rawhide install spice-gtk-python
>
> In the future, please direct questions to virt(a)lists.fedoraproject.org
>
> Thanks,
> Cole
>
LUN over FC and VM
by Jóhann B. Guðmundsson
I'm wondering if I can hook up a LUN directly to a KVM VM through FC so
the VM can work directly on the storage LUN, as opposed to configuring the
LUN on the server hosting all the VMs, partitioning it, mounting it and
then creating an "image" on top of that which is then exposed to the VM as
additional storage?
JBG