Xen: virsh error
by unixfoo
Greetings,
When I try to use "virsh", I get the error below ("xm list" returns details,
though). I am using Fedora 8 and Xen.
[root@un1xf00 ~]# virsh
virsh: error: failed to connect to the hypervisor
[root@un1xf00 ~]# xm list
Name          ID   Mem  VCPUs  State   Time(s)
Domain-0       0  4885      4  r-----  399430.4
webhttp10000  11  2048      2  -b----  65141.6
webhttp10001  18  2560      2  -b----  358.0
webhttp10002  14  2048      2  -b----  81720.5
[root@un1xf00 ~]#
Is there any configuration I need to do for virsh? Please help.
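For what it's worth, virsh failing while xm works usually means libvirt cannot reach xend's control interface. A sketch of the relevant settings, assuming the usual Fedora paths (defaults vary by release):

```
# /etc/xen/xend-config.sxp -- make sure the unix-domain server that
# libvirt/virsh talks to is enabled, then "service xend restart":
(xend-unix-server yes)
(xend-unix-path /var/lib/xend/xend-socket)
```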
-unixfoo
http://unixfoo.blogspot.com
16 years, 4 months
Best practices questions
by Lopez, Denise
Hi all,
I am in the process of building a new Xen server from scratch and wanted
to ask a couple of questions about best practices.
First, should the guest domains be backed by image files, LVM logical
volumes, or regular ext3 partitions? What are the pros and cons of each?
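For concreteness, the two styles I'm weighing look roughly like this in a guest config's disk line (volume and image paths here are hypothetical):

```python
# LVM-backed guest disk: a raw block device, typically faster and easy
# to snapshot/resize with LVM tools:
disk = [ 'phy:/dev/vg0/guest1,xvda,w' ]

# File-backed image: easier to copy and back up, but adds filesystem
# overhead on the host:
# disk = [ 'tap:aio:/var/lib/xen/images/guest1.img,xvda,w' ]
```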
Second, since Dom0 is supposed to be kept secure, and I don't normally
install an X11 server on my servers, is there any security risk in
installing an X11 server on Dom0 in order to take advantage of the
virt-manager GUI?
Thank you in advance for any thoughts and/or opinions.
Denise Lopez
UCLA - Center for Digital Humanities
Network Services
Linux Systems Engineer
337 Charles E. Young Drive East
PPB 1020
Los Angeles, CA 90095-1499
310/206-8216
16 years, 4 months
lv as "partition"?
by Maximilian Freisinger
Hello,
I'm trying to use a logical volume as a partition (for example, sdc1) in my domU.
Since there seems to be no way to define this in virt-manager, I tried virsh with attach-disk and attach-device.
But I always get:
libvir: Xen Daemon error: POST operation failed. (Xend.err 'Device 2081 not connected')
My XML for the device:
My definition for the disk:
virsh attach-disk /dev/master/part01 sdc1
My question is: where is my error, or how can I achieve this?
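A guess at what the error means, for the record: Xen addresses virtual block devices by Linux device number (major * 256 + minor), and 2081 is exactly sdc1 (major 8, minor 33), so xend appears to be complaining that no such device is attached yet. Note also that attach-disk takes the domain before the source and target (the domain name "guest1" below is hypothetical):

```shell
# Xen device IDs are major*256 + minor; sdc1 is (major 8, minor 33):
major=8; minor=33
echo $(( major * 256 + minor ))    # prints 2081, the ID in the error
# attach-disk syntax is: virsh attach-disk <domain> <source> <target>
# e.g. (domain "guest1" hypothetical):
#   virsh attach-disk guest1 /dev/master/part01 sdc1
```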
thx
16 years, 4 months
Re: [Fedora-xen] Fedora Core 8 + Xenbr0 + network bridging?
by Christian Lahti
Hi Dale:
I work with David, who posted the original question to the mailing list. I think we need to give a bit more background on what we are trying to do. We run a mixed environment of mostly CentOS 3, 4, and 5, plus a few Windows servers and XP systems, and we are looking to virtualize all of these platforms.
Normally we have a bonded pair of NICs on the physical hosts. We were able to get this running on CentOS 5 x86_64 with no problems: after a bit of tweaking, the guest machines use the bonded pair in bridged mode as expected. The biggest issue we found with EL5 is that Windows guest performance is dismal at best, hence our decision to have a look at Fedora Core 8 x86_64. I am happy to report that performance for all of our guest platforms is *very* good with FC8, but it seems that libvirt changed the way networking is set up for Xen, and the default NAT configuration is pretty useless for a production server environment.
Thanks to the mailing list we can now bridge a single NIC on FC8 (eth0, for example), but we cannot figure out how to get a bridge for bond0 (comprised of eth0 and eth1) defined and available to Xen. All the tweaks that worked fine on EL5 have not worked so far on FC8. I am going to review your document tomorrow and give it a try, but do you have any idea whether your methodology will work on FC8 and libvirt? I am willing to blow a Sunday to get this worked out once and for all :)
Basically we are after good performance on both para- and fully-virtualized guests, using a bonded pair of gigabit NICs for speed and redundancy. If this can be achieved with enterprise Linux then that would be preferable, but we will go with FC8 if the bonding issue can be sorted out. By the way, XenSource 4.x looks to be a respin of RHEL5 and has pretty good performance, but their free version is limited to 32-bit (and hence 4 GB of RAM). Adding clustering failover is the next step, of course :)
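For reference, what we had working on EL5 looked roughly like this: the bridge is the IP-bearing device and bond0 is enslaved to it. Whether FC8's libvirt leaves such a setup alone is exactly the open question. Device names are examples, and the BONDING_OPTS/modprobe configuration for the bond itself is omitted:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
# (ifcfg-eth1 is identical apart from DEVICE):
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

# ifcfg-bond0 -- no IP here; it just joins the bridge:
DEVICE=bond0
BRIDGE=xenbr0
ONBOOT=yes
BOOTPROTO=none

# ifcfg-xenbr0 -- the bridge carries the host's address:
DEVICE=xenbr0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
```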
Thanks again for the help so far.
/Christian
>>>>>>>>>>>
just FYI for the list, I have a how-to for a bonded and VLAN tagged network.
http://www.certifried.com
ODT and PDF formats available.
It might not be the best way, but I've sent it out to my colleagues several times and have never received any negative feedback.
Mark
Dale Bewley wrote:
I haven't done bonding, but I would think you should be able to bond the NICs and then compose a bridge on top of the bonded device.
--
Dale Bewley - Unix Administrator - Shields Library - UC Davis
GPG: 0xB098A0F3 0D5A 9AEB 43F4 F84C 7EFD 1753 064D 2583 B098 A0F3
--
Fedora-xen mailing list
Fedora-xen redhat com
https://www.redhat.com/mailman/listinfo/fedora-xen
16 years, 4 months
FYI: The plan for Xen kernels in Fedora 9
by Daniel P. Berrange
This is a friendly alert of the major plans we have for Xen kernels in
Fedora 9 timeframe...
Since we first added Xen in Fedora Core 5, our kernels have been based on
a forward-port of XenSource's upstream Xen kernels to newer LKML releases.
For a long time we ported their 2.6.16 tree to 2.6.18; now we port their
2.6.18 tree to 2.6.21/22/23, etc. At the same time, upstream Linux gained
Xen support for i386 DomU, will shortly gain x86_64 DomU, and is generally
getting ever more virtualization capabilities.
As everyone knows, we have tended to lag behind Fedora's state-of-the-art
bare metal kernels by several releases due to the effort required to port
Xen to newer LKML releases. Despite our best efforts, this lag has been
getting worse, not better.
We have decided that this situation is unacceptable for Fedora 9. We simply
cannot spend more time forward-porting Xen kernels. Either Xen has to be
dropped entirely, or we need a different strategy for dealing with the
kernels. Since people seem to use Xen, we have decided not to drop it :-)
So the plan is to re-focus 100% of all Xen kernel efforts onto paravirt_ops.
LKML already has i386 pv_ops + Xen DomU. We intend to build on this to
add:
- x86_64 pv_ops
- x86_64 Xen DomU on pv_ops
- i386 & x86_64 Xen Dom0 on pv_ops
- memory balloon
- paravirt framebuffer
- save/restore
All of this will be based on the same LKML release as the Fedora bare-metal
kernel. If all goes to plan it may even land in the base kernel RPM instead
of kernel-xen, but that's a minor concern compared to the actual coding.
Getting all this done for Fedora 9 is seriously ambitious, but it is the only
long term sustainable option, other than dropping Xen entirely.
What this means, though, is that Fedora 9 Xen will certainly go through
periods of instability and be even buggier than normal. F9 may well end up
lacking features compared to Xen in Fedora 8 and earlier (e.g. no PCI
device passthrough or CPU hotplug). On the plus side, we will be 100% back
in sync with bare metal kernel versions, and hopefully even have a lot of
this stuff merged in LKML to make ongoing maintenance sustainable.
Short term pain; Long term gain!
I don't have an ETA for when any of these kernel changes will appear in
rawhide; some time before the F9 feature freeze date is the best guesstimate.
We will alert people when the time comes. There is a F9 feature page
with some amount of info about the plan...
http://fedoraproject.org/wiki/Features/XenPvops
In terms of Fedora 6/7/8 maintenance: the kernel-xen in these existing
releases already lags behind the bare metal kernel version by 2-3 releases.
We do not intend to continue trying to rebase kernel-xen in existing
Fedora releases; it will essentially be in important-bug-fix-only mode.
This is necessary so that maximum resources can be focused on the critical
Fedora 9 Xen work.
Regards,
Dan ...on behalf of some very busy Fedora Xen kernel developers :-)
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
16 years, 4 months
Fedora Core 8 + Xenbr0 + network bridging?
by David Levinger
Hello all,
I was working with Xen on CentOS, and by default a virtual networking
device called xenbr0 was created that acted as a "pass-through" for the
virtual machine; i.e., the guest contacted our real DHCP server and
requested an IP address, and all was well. However, on Fedora Core 8 the
default networking setup seems to use virbr0, with a totally different
subnet and the host machine assigning IP addresses to the guests...
How can I get back to a pure network bridge that has the guests contact
our DHCP server for leases?
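From what I've pieced together so far, something like the following should do it, but this is an untested sketch (the network name "default" and bridge name "xenbr0" are the usual ones):

```
# Turn off libvirt's NAT network and keep it from coming back at boot:
#   virsh net-destroy default
#   virsh net-autostart default --disable
# Then bridge eth0 in /etc/sysconfig/network-scripts:
#   ifcfg-eth0:   DEVICE=eth0   BRIDGE=xenbr0  ONBOOT=yes
#   ifcfg-xenbr0: DEVICE=xenbr0 TYPE=Bridge  BOOTPROTO=dhcp  ONBOOT=yes
# and point the guests' vif at bridge=xenbr0.
```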
Thanks!
David
16 years, 4 months