Re: [Fedora-xen] HVM Performance and PV drivers
by Jim Klein
On Thursday 08 February 2007 10:08am, Jim Klein wrote:
> Has anyone been able to get the PV drivers from XenSource to work on an
> FC6-created HVM? Performance is rather sluggish without them, even on
> 64-bit, yet they don't seem to want to work here. Or, alternatively, is
> Red Hat working on their own? The HVM I/O is really a bit too slow for
> most production use without them.
On Thu, 8 Feb 2007 11:26am, Lamont Peterson wrote:
> I don't believe that you can do this.
>
> But, I'm wondering what OS you're trying to run on the HVM DomU. If it's
> Linux, use a paravirtualized DomU. You'll get much better performance
> (especially on things like I/O).
>
> HVM adds some emulation overhead and, thus, is slower than PV. HVM is only
> necessary to run an OS for which no paravirtualized kernel is available.
> --
> Lamont Peterson <lamont(a)gurulabs.com>
> Senior Instructor
> Guru Labs, L.C. [ http://www.GuruLabs.com/ ]
All my Linux guests are PV (I've been running Xen in production for 7 months, and far longer in testing). I'm talking about Windows guests, which is why I need the PV drivers to work. I'm running a 64-bit Dom0 to alleviate some of the pain, but I/O is still too slow for anything but lightweight applications. We're a RHEL shop, so we'd prefer to use RHEL as a base over XenSource Commercial, but it looks like I may have to go that route if no one else is working on PV drivers for Windows guests.
--
Jim Klein
Director Information Services & Technology
LPIC1, CNA/CNE 4-6, RHCT/RHCE
Saugus Union School District
http://www.saugus.k12.ca.us
"Finis Origine Pendet"
17 years, 2 months
Xen, Security and Selinux: An analysis
by K T Ligesh
There was discussion recently about the need for security on the xen dom0, and quite frankly I am a bit confused. For me, the entire idea of virtualization is to enhance security. On our main website, for example, we have a single virtual machine (on openvz) running inside the host, and it holds all the services; the host itself runs absolutely nothing other than ssh. Even ssh could be turned off, so that the only access is through the serial console, but I didn't find that worth it, especially considering how cumbersome and unreliable the provider's serial console access was.
The idea is that dom0 should contain nothing other than the xen virtual machines, and every other service runs inside the domUs, which is the right way to do it, considering the really low overhead of virtualization. So whatever service you are planning to run on dom0, create a new domU specifically for it and run it there.
Xen has the problem that it needs a xend service running, which frankly is a very bad design. Even for migration, the better way would be to use a more reliable channel like ssh. But other than that, do we actually need SELinux on dom0? The only exception I can see is if you keep backups of the domUs on the dom0 and you want them protected in case xend gets compromised.
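By the way, if you do not need xend's network-facing bits at all, you can at
least reduce the exposure in /etc/xen/xend-config.sxp. A rough sketch, using
the option names from the stock config file (check your own copy, since I am
going from memory here):
# keep only the local unix socket that xm needs;
# turn off the HTTP and relocation listeners
(xend-http-server no)
(xend-unix-server yes)
(xend-relocation-server no)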
Thanks.
17 years, 2 months
cannot move Fedora Core 6 OS from one guest into another
by Adam Monsen
I'm having trouble moving an entire Fedora Core 6 OS from one guest
(let's call it "OLD") into a new guest ("NEW"). I want to do this
because the old guest (which appears to be working just fine, thank
you) uses LVM, and I'd like to discontinue using LVM.
In case it saves you an hour or so of reading the rest of this email,
here's the error message:
-------------------8<-------------------
host# xm create -c /xen/new/conf
Using config file "./conf".
Going to boot Fedora Core (2.6.19-1.2895.fc6xen)
kernel: /boot/vmlinuz-2.6.19-1.2895.fc6xen
initrd: /boot/initrd-2.6.19-1.2895.fc6xen.img
Started domain manual3
Linux version 2.6.19-1.2895.fc6xen
(brewbuilder(a)hs20-bc2-2.build.redhat.com) (gcc version 4.1.1 20070105
(Red Hat 4.1.1-51)) #1 SMP Wed Jan 10 19:47:12 EST 2007
...
XENBUS: Device with no driver: device/vbd/51712
Freeing unused kernel memory: 184k freed
Write protecting the kernel read-only data: 387k
Red Hat nash version 5.1.19.0.2 starting
Mounting proc filesystem
Mounting sysfs filesystem
Creating /dev
Creating initial device nodes
Setting up hotplug.
Creating block device nodes.
Loading uhci-hcd.ko module
USB Universal Host Controller Interface driver v3.0
Loading ohci-hcd.ko module
Loading ehci-hcd.ko module
Loading jbd.ko module
Loading ext3.ko module
Loading xenblk.ko module
Registering block device major 202
xvda: xvda1
Loading dm-mod.ko module
device-mapper: ioctl: 4.10.0-ioctl (2006-09-14) initialised: dm-devel(a)redhat.com
Loading dm-mirror.ko module
Loading dm-zero.ko module
Loading dm-snapshot.ko module
Making device-mapper control node
Scanning logical volumes
Reading all physical volumes. This may take a while...
No volume groups found
Activating logical volumes
Volume group "VolGroup00" not found
Creating root device.
Mounting root filesystem.
mount: could not find filesystem '/dev/root'
Setting up other filesystems.
Setting up new root fs
setuproot: moving /dev failed: No such file or directory
no fstab.sys, mounting internal defaults
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
Switching to new root and running init.
unmounting old /dev
unmounting old /proc
unmounting old /sys
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!
------------------->8-------------------
I see nothing useful in /var/log/xend.log or /var/log/xend-debug.log,
just that the domain crashed and was destroyed.
Ok, now here's the whole story from the beginning...
OLD is a paravirtualized guest, NEW will also be. The host for both
OLD and NEW is an up-to-date FC6 system. The goal is to migrate OLD
into NEW, a created-from-scratch guest that doesn't use LVM to slice
up its file-backed VBD.
To try to make it easy on myself, I used qemu to boot a Fedora Core 6
rescue CD .iso image (hmm, might've been an FC5 rescue CD, but this
hopefully shouldn't matter based on what I did in rescue mode), as
follows:
-------------------8<-------------------
host# qemu -hda old.img -hdb new.img -cdrom FC-6-i386-rescuecd.iso -boot d
------------------->8-------------------
old.img is the file-backed VBD for OLD, and was created using a
GUI-guided network/anaconda/whatever install via the "Virtual Machine
Manager". The first partition is /boot, the second is an LVM volume
group with two logical volumes, LogVol00 being a 1.9G partition that
is mounted as the root partition, and LogVol01 is used for swap space.
-------------------8<-------------------
host# ls -lh old.img
-rw-r--r-- 1 root root 3.0G Feb 8 17:07 old.img
host# du -hs old.img
934M old.img
host# fdisk -l old.img
last_lba(): I don't know how to handle files with mode 81a4
You must set cylinders.
You can do this from the extra functions menu.
Disk old.img: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
old.img1 * 1 13 104391 83 Linux
old.img2 14 382 2963992+ 8e Linux LVM
------------------->8-------------------
new.img was just a large sparse file, waiting to become a file-backed VBD.
-------------------8<-------------------
host# dd if=/dev/zero bs=1 count=1 seek=3G of=new.img
------------------->8-------------------
And qemu worked fine. In rescue mode, I was able to mount both
file-backed VBDs, partition/format hdb, copy over OLD to create NEW.
Seemed peachy.
-------------------8<-------------------
qemu# fdisk -l /dev/hdb
Disk /dev/hdb: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hdb1 1 391 3140676 83 Linux
qemu# mkfs.ext3 /dev/hdb1
...worked fine...
qemu# mkdir /mnt/old
qemu# lvm vgchange -ay
2 logical volume(s) in volume group "VolGroup00" now active
qemu# mount /dev/VolGroup00/LogVol00 /mnt/old
qemu# mount /dev/hda1 /mnt/old/boot
qemu# mkdir /mnt/new
qemu# mount /dev/hdb1 /mnt/new
qemu# cp -a /mnt/old/* /mnt/new
------------------->8-------------------
grub install onto hdb seemed to work fine, too:
-------------------8<-------------------
qemu# grub
grub> device (hd0) /dev/hdb
grub> root (hd0,0)
grub> setup (hd0)
...worked fine...
grub> quit
------------------->8-------------------
I then got out of rescue mode and quit qemu. I proceeded to modify NEW
slightly. First I mounted it:
-------------------8<-------------------
host# mkdir /mnt/new
host# bytes_sec=512 sect_track=63
host# mount -o loop,offset=$((bytes_sec * sect_track)) hd.img /mnt/new
------------------->8-------------------
I got that calculation for the offset from the Xen user manual. I was
somehow using this python script to derive the number of cylinders:
-------------------8<-------------------
host# cat disk_geom.py
size_GiB = 2.0
size_bytes = size_GiB * 1024 * 1024 * 1024
bytes_sec = 512
sec_track = 63
heads = 255
print "Cylinders: %d" % (size_bytes / (heads * sec_track * bytes_sec))
------------------->8-------------------
And yeah, I'm saying 2.0 GB instead of 3.0 GB in the script, and I'm
probably mixing up GiB and GB, and ...ugh. ANYWAY, the mount command
actually worked, so I guess 32256 (512 * 63) was correct. Or
something.
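(In hindsight, the offset is just the first partition's start sector times
the sector size, which fdisk can print directly rather than me deriving
cylinders; something like this, using the 63-sector/512-byte numbers above:)
-------------------8<-------------------
host# fdisk -l -u hd.img          # -u shows partition start/end in sectors
host# echo $((63 * 512))          # start sector * 512-byte sectors = offset
32256
host# mount -o loop,offset=32256 hd.img /mnt/new
------------------->8-------------------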
At any rate, I was able to modify files in NEW's file-backed VBD.
/etc/grub.conf (a symlink to ../boot/grub/grub.conf) became:
-------------------8<-------------------
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora Core (2.6.19-1.2895.fc6xen)
root (hd0,0)
kernel /boot/vmlinuz-2.6.19-1.2895.fc6xen ro root=/
initrd /boot/initrd-2.6.19-1.2895.fc6xen.img
------------------->8-------------------
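(Side note: I wasn't at all sure what to put after root= on the kernel line;
root=/ looks bogus, and maybe it should match the fstab entry below, i.e.
something like this untested variant:)
-------------------8<-------------------
kernel /boot/vmlinuz-2.6.19-1.2895.fc6xen ro root=/dev/xvda1
------------------->8-------------------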
The Xen guest config for NEW looks like:
-------------------8<-------------------
name = 'new'
memory = '256'
disk = [ 'tap:aio:/var/xen/vm/new/hd.img,xvda,w' ]
vif = [ '' ]
vnc = 1
bootloader = '/usr/bin/pygrub'
on_reboot = 'restart'
on_crash = 'destroy'
------------------->8-------------------
NEW's /etc/fstab looks like:
-------------------8<-------------------
/dev/xvda1 / ext3 defaults 1 1
devpts /dev/pts devpts gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
------------------->8-------------------
(nope, no swap, hope I don't need that).
I also changed NEW's /boot/grub/device.map, just in case:
-------------------8<-------------------
(hd0) /dev/xvda
------------------->8-------------------
I then unmounted the file-backed VBD, started up NEW, and saw the
error I pasted at the top of this email.
Hints on how I could possibly make this any harder on myself would
also be appreciated. Thanks!
--
Adam Monsen
17 years, 2 months
Nvidia drivers for xen kernel
by Robert Thiem
> I am a newbie to xen and I am wondering if anyone has got the latest
> Nvidia graphics driver (9746) working with the xen version of FC6 (2798).
> ...
> Colin Ager Norfolk UK
I'm using an (evil) closed-source nVidia driver with FC6 xen 2.6.19-1.2895
(32-bit).
It does *seem* to work, but I haven't used the Xen kernel much since, so I
can't comment on long-term stability.
I post as Penguinfan on nvnews.net
http://www.nvnews.net/vbulletin/showthread.php?t=77597
17 years, 2 months
HVM Performance and PV drivers
by Jim Klein
Has anyone been able to get the PV drivers from XenSource to work on an
FC6-created HVM? Performance is rather sluggish without them, even on
64-bit, yet they don't seem to want to work here. Or, alternatively, is
Red Hat working on their own? The HVM I/O is really a bit too slow for
most production use without them.
--
Jim Klein
Director Information Services & Technology
LPIC1, CNA/CNE 4-6, RHCT/RHCE
Saugus Union School District
http://www.saugus.k12.ca.us
"Finis Origine Pendet"
17 years, 2 months
Nvidia drivers for xen kernel
by Colin Ager
Hi
I am a newbie to xen and I am wondering if anyone has got the latest
Nvidia graphics driver (9746) working with the xen version of FC6 (2798).
Nvidia reports that xen is unsupported but I have seen reports of people
applying patches to make some Nvidia drivers work with xen.
Is this possible?
Colin Ager Norfolk UK
17 years, 2 months
All the ways go to Error: (22, 'Invalid argument')
by Davidson Paulo
Good afternoon,
I've installed Xen on a Fedora Core 6 system. The xend process is running
and all looks OK, but no matter what or how I try to create a new domain,
I get the unhelpful error message "Error: (22, 'Invalid argument')".
On my last try, I populated the directory /vm/fc5.base with a base
installation of Fedora Core 5. Then I exported that directory over NFS by
adding the following line to /etc/exports
/vm/fc5.base *(rw,no_root_squash)
and running exportfs -afr to update the NFS server.
After that, I created a config file at /etc/xen/fc5base with the following content:
# General
kernel = "/vm/fc5.base/boot/vmlinuz-2.6.18-1.2257.fc5xenU"
ramdisk = "/vm/fc5.base/boot/initrd-2.6.18-1.2257.fc5xenU.img"
memory = 128
root = "/dev/nfs"
nfs_server = "127.0.0.1"
nfs_root = "/vm/fc5.base"
# Network
netmask = "255.255.255.0"
gateway = "10.1.0.1"
hostname = "doutorx.niteroi.unimed"
Finally, running the following command
# xm create fc5base vmid=100
I got the same error message:
Error: (22, 'Invalid argument')
I have tried other ways to create a new domain from scratch, following
the instructions in the xm man page, but, I repeat, every attempt results
in that same error message.
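The only other debugging step I can think of is a dry run, to see what
configuration xm actually hands to xend (I am assuming the -n/--dryrun
option of xm create works on this version):
# xm create -n fc5base vmid=100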
Any ideas?
Thanks,
Davidson Paulo
17 years, 2 months
FC6 Dom0 Xen Kernel Sources
by Guy Zana
Hi,
1. Where can I find the *release* version of the dom0 xen kernel source
for FC6?
Or is what I've found below just the development tree?
http://hg.et.redhat.com/kernel/
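(My guess is that the released dom0 kernel source is simply whatever the
kernel-xen source RPM is built from, so something like the lines below should
fetch it; the package name is just my assumption, and I would still like to
know where the real tree lives.)
$ yumdownloader --source kernel-xen
$ rpm -ivh kernel-xen-*.src.rpm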
2. What "development" phases does Red Hat go through in order to port Xen
to Fedora?
Thanks,
Guy.
17 years, 2 months
Having issues with Windows 2003 as a guest
by Rick Stevens
System configuration: Opteron 1210, 2GB RAM, Abit motherboard with
nVidia chipset, but running "vesa" video driver.
I'm trying to get Windows 2003 Server SP2 up and running on Xen as
a guest. The installation gets stuck at the blue "Starting Windows"
screen. I read somewhere that early in the boot process, at the "Press
F6 to install SCSI drivers" prompt, I should press F5 instead, then select
the HAL version for "Standard PC". All well and good.
The problem is that when I use VNC as the console, the system ignores
keyboard input, and when I try to switch to SDL, I get no console at
all. Needless to say, this is bloody frustrating.
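(For what it's worth, my working assumption is that with sdl=1 the qemu-dm
process that xend spawns, running as root, has to be able to open my X
display. If that's right, something like the following from the local X
session would be a prerequisite, though I haven't verified that it helps, and
the config path is just a placeholder for wherever your file lives:)
xhost +si:localuser:root     # let root's qemu-dm use the local display
xm create /etc/xen/win2k3    # placeholder path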
Here's my config file:
-------------------------------- CUT HERE ----------------------------
# -*- mode: python; -*-
#============================================================================
# Python configuration setup for 'xm create'.
# This script sets the parameters used when a domain is created using
# 'xm create'.
# You use a separate script for each domain you want to create, or
# you can set the parameters for the domain on the xm command line.
#============================================================================
import os, re
arch = os.uname()[4]
if re.search('64', arch):
    arch_libdir = 'lib64'
else:
    arch_libdir = 'lib'
#----------------------------------------------------------------------------
# Kernel image file.
kernel = "/usr/lib/xen/boot/hvmloader"
# The domain build function. HVM domain uses 'hvm'.
builder='hvm'
# Initial memory allocation (in megabytes) for the new domain.
#
# WARNING: Creating a domain with insufficient memory may cause out of
# memory errors. The domain needs enough memory to boot kernel
# and modules. Allocating less than 32MBs is not recommended.
memory = 512
# Shadow pagetable memory for the domain, in MB.
# Should be at least 2KB per MB of domain memory, plus a few MB per vcpu.
shadow_memory = 8
# A name for your domain. All domains must have different names.
name = "win2k3"
# 128-bit UUID for the domain.  The default behavior is to generate a new
# UUID on each call to 'xm create'.
#uuid = "06ed00fe-1162-4fc4-b5d8-11993ee4a8b9"
#-----------------------------------------------------------------------------
# the number of cpus guest platform has, default=1
#vcpus=1
# enable/disable HVM guest PAE, default=0 (disabled)
#pae=0
pae=1
# enable/disable HVM guest ACPI, default=0 (disabled)
#acpi=0
# enable/disable HVM guest APIC, default=0 (disabled)
#apic=0
# List of which CPUS this domain is allowed to use, default Xen picks
#cpus = "" # leave to Xen to pick
#cpus = "0" # all vcpus run on CPU0
#cpus = "0-3,5,^1" # run on cpus 0,2,3,5
# Optionally define mac and/or bridge for the network interfaces.
# Random MACs are assigned if not given.
#vif = [ 'type=ioemu, mac=00:16:3e:00:00:11, bridge=xenbr0, model=ne2k_pci' ]
# type=ioemu specifies the NIC is an ioemu device, not netfront
vif = [ 'type=ioemu, bridge=xenbr0' ]
#----------------------------------------------------------------------------
# Define the disk devices you want the domain to have access to, and
# what you want them accessible as.
# Each disk entry is of the form phy:UNAME,DEV,MODE
# where UNAME is the device, DEV is the device name the domain will see,
# and MODE is r for read-only, w for read-write.
disk = [ 'file:/var/lib/xen/images/win2k3.img,hda,w',
         'phy:/dev/cdrom,hdc:cdrom,r' ]
#----------------------------------------------------------------------------
# Configure the behaviour when a domain exits.  There are three 'reasons'
# for a domain to stop: poweroff, reboot, and crash.  For each of these you
# may specify:
#
#   "destroy",        meaning that the domain is cleaned up as normal;
#   "restart",        meaning that a new domain is started in place of the
#                     old one;
#   "preserve",       meaning that no clean-up is done until the domain is
#                     manually destroyed (using xm destroy, for example); or
#   "rename-restart", meaning that the old domain is not cleaned up, but is
#                     renamed and a new domain started in its place.
#
# The default is
#
#   on_poweroff = 'destroy'
#   on_reboot   = 'restart'
#   on_crash    = 'restart'
#
# For backwards compatibility we also support the deprecated option restart
#
# restart = 'onreboot' means on_poweroff = 'destroy'
#                            on_reboot   = 'restart'
#                            on_crash    = 'destroy'
#
# restart = 'always'   means on_poweroff = 'restart'
#                            on_reboot   = 'restart'
#                            on_crash    = 'restart'
#
# restart = 'never'    means on_poweroff = 'destroy'
#                            on_reboot   = 'destroy'
#                            on_crash    = 'destroy'
#on_poweroff = 'destroy'
#on_reboot = 'restart'
#on_crash = 'restart'
#============================================================================
# New stuff
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
#-----------------------------------------------------------------------------
# boot on floppy (a), hard disk (c) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
#boot="cda"
boot="dc"
#-----------------------------------------------------------------------------
# write to temporary files instead of disk image files
#snapshot=1
#----------------------------------------------------------------------------
# enable SDL library for graphics, default = 0
# Crikey, VNC's not working, let's try SDL.
#sdl=0
sdl=1
#----------------------------------------------------------------------------
# enable VNC library for graphics, default = 1
# For now, we're trying SDL
#vnc=1
vnc=0
#----------------------------------------------------------------------------
# address that should be listened on for the VNC server if vnc is set.
# default is to use 'vnc-listen' setting from /etc/xen/xend-config.sxp
#vnclisten="127.0.0.1"
#----------------------------------------------------------------------------
# set VNC display number, default = domid
#vncdisplay=1
#----------------------------------------------------------------------------
# try to find an unused port for the VNC server, default = 1
vncunused=1
#----------------------------------------------------------------------------
# enable spawning vncviewer for domain's console
# (only valid when vnc=1), default = 0
vncconsole=0
#----------------------------------------------------------------------------
# set password for domain's VNC console
# default depends on vncpasswd in xend-config.sxp
vncpasswd=''
#----------------------------------------------------------------------------
# no graphics, use serial port
#nographic=0
#----------------------------------------------------------------------------
# enable stdvga, default = 0 (use cirrus logic device model)
stdvga=0
#-----------------------------------------------------------------------------
# serial port re-direct to pty device, /dev/pts/n
# then xm console or minicom can connect
#serial='pty'
#-----------------------------------------------------------------------------
# enable sound card support, [sb16|es1370|all|..,..], default none
#soundhw='sb16'
#-----------------------------------------------------------------------------
# set the real time clock to local time [default=0 i.e. set to utc]
#localtime=1
#-----------------------------------------------------------------------------
# start in full screen
#full-screen=1
#-----------------------------------------------------------------------------
# Enable USB support (specific devices specified at runtime through the
# monitor window)
#usb=1
# Enable USB mouse support (only enable one of the following, `mouse' for
# PS/2 protocol relative mouse, `tablet' for absolute mouse)
#usbdevice='mouse'
#usbdevice='tablet'
-------------------------------- CUT HERE ----------------------------
Any help will be GREATLY appreciated and you'll win major good karma
points!
----------------------------------------------------------------------
- Rick Stevens, Senior Systems Engineer rstevens(a)vitalstream.com -
- VitalStream, Inc. http://www.vitalstream.com -
- -
- BASIC is the Computer Science version of `Scientific Creationism' -
----------------------------------------------------------------------
17 years, 2 months
Raid-1 over iscsi vs DRBD?
by Chris Hirsch
Hey all... I'm looking to set up a high-availability xen "cluster" so that
if my iSCSI storage goes down, my domUs will remain up. I've been
looking around on the net and it appears this is a pretty common thing
to do, but I have one question that I haven't seen an answer to.
It looks like I can accomplish the high availability in one of two ways. I
can use DRBD on my two boxes as shown here
http://www.gridvm.org/drbd-lvm-gnbd-and-xen-for-free-and-reliable-san.html
and I should be able to take down a storage box and my iSCSI domU's will
never know any difference.
An alternate approach (as I see it) is that instead of DRBD I should be
able to carve out two iSCSI targets (one on each storage machine), then
in my domU make those two targets (say sda1 and sdb1) into a RAID-1, and
accomplish the same thing as DRBD using the Linux md stuff instead. Is
there a reason this isn't done? From my lack of findings I'd say
yes but I haven't actually tried this in a real setting yet. I was just
hoping somebody could steer me in the right direction (and away from
this approach if it's just silly). My *guess* is that DRBD was written
to do a network raid-1 and thus has optimizations to do just that vs a
Linux "disk" raid-1 using md.
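Something like the following is what I have in mind for the md side (the
device names are just placeholders for however the two iSCSI LUNs end up
appearing inside the domU):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0
mkdir -p /data && mount /dev/md0 /data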
Thanks for any pointers or insights into this. I'd really appreciate any
websites or writeups comparing the two, or just some cookbook examples.
Thanks,
Chris
17 years, 2 months