virt-install disk size problems
by James Harrison
Hi All,
If I use sparse=true or sparse=false I end up with a 100 GB qcow2 (raw) file. However, if I look at the disk size in virt-manager the file is 20 GB.
I am having problems mounting the / partition during the install.
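A quick way to compare the two numbers is qemu-img info, which reports the guest-visible size and the space actually allocated on the host separately (a minimal sketch, using the image path from the command below):
qemu-img info /var/lib/libvirt/images/stage.qcow2
# "virtual size" is what the guest sees (100G); "disk size" is what is
# actually allocated on the host
du -sh /var/lib/libvirt/images/stage.qcow2    # allocated (on-disk) size
ls -lh /var/lib/libvirt/images/stage.qcow2    # apparent size of the sparse file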
I am using this virt-install command to create and install a RHEL 6 VM on F20 (updated OS as of today):
virt-install \
--connect qemu:///system \
--name Stage \
--memory 2048 \
--vcpus=1,maxvcpus=4 \
--accelerate \
--os-variant rhel6 \
--os-type linux \
--disk path=/var/lib/libvirt/images/stage.qcow2,cache=none,bus=virtio,size=100,sparse=true,format=qcow2 \
--network bridge=bridge0 \
--virt-type qemu \
--graphics vnc \
--location=/ISOs/rhel-server-6.4-x86_64-dvd.iso \
--initrd-inject=/var/lib/libvirt/images/rhel6-ks.ks \
--extra-args "ks=file:/rhel6-ks.ks" \
&
The /var/lib/libvirt/images directory is a GlusterFS file system shared between two machines (kvm1 and kvm2), and the images are supposed to reside on this mount.
and this kickstart file (rhel6-ks.ks):
#RHEL 6 KS file
interactive
install
cdrom
lang en_US.UTF-8
keyboard uk
network --onboot=yes --device=eth0 --bootproto=dhcp --hostname=v-dst-stgjah-01.nexus.xxxxxxx.com
rootpw xxxxxxxxx
firewall --disabled
authconfig --enableshadow --passalgo=sha512
selinux --permissive
timezone --utc Europe/London
bootloader --location=mbr --driveorder=vda --append="rhgb quiet"
zerombr
clearpart --all --drives=vda
part /boot --fstype=ext4 --size=1024
part pv.008002 --size=94371840
volgroup vg_rhel6 pv.008002
logvol / --fstype=ext4 --name=lv_root --vgname=vg_rhel6 --size=15000
logvol swap --name=lv_swap --vgname=vg_rhel6 --size=4096
logvol /repos --fstype=ext4 --name=lv_repos --vgname=vg_rhel6 --grow --size=100
%packages
@base
@client-mgmt-tools
@console-internet
@core
@directory-client
@hardware-monitoring
@java-platform
@large-systems
@network-file-system-client
@performance
@server-platform
@server-policy
wget
%end
The error message I get from anaconda is: "An error occurred mounting device /dev/mapper/vg_rhel6-lv_root at / mount failed: (9, None)."
Manually creating the disk with the truncate command gives the same problem.
I have created other RHEL 6 VMs on 10G disks and the OS installs without any problems.
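For what it's worth, instead of truncate the empty qcow2 can also be pre-created with qemu-img and then handed to virt-install without a size= option (a sketch only, not tested on the gluster mount):
qemu-img create -f qcow2 /var/lib/libvirt/images/stage.qcow2 100G
# then reference the existing image:
#   --disk path=/var/lib/libvirt/images/stage.qcow2,cache=none,bus=virtio,format=qcow2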
James
Revert Snapshots - VM Definition won't revert
by Jorge Fábregas
Hi,
I just realized that when one reverts to a previous snapshot ("start
a snapshot"), the properties of the actual virtual machine don't revert
back to how they looked when the snapshot was taken. For example (on a
powered-down VM):
- take snapshot X
- Add some hardware (a new disk)
- revert to snapshot X
At this point I would expect the recently added disk to be removed from
the VM definition (and for the qcow2 file to be deleted from disk as
well) since those weren't there when the snapshot was taken.
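For concreteness, the same sequence from the command line would be roughly (a sketch using stock virsh commands; the VM name and the new-disk XML file are placeholders):
virsh snapshot-create-as myvm snapX              # take snapshot X
virsh attach-device myvm new-disk.xml --config   # add some hardware (a new disk)
virsh snapshot-revert myvm snapX                 # revert to snapshot X
virsh dumpxml myvm                               # inspect whether the new disk was rolled back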
I think that's the correct way it should behave and therefore, when you
revert to a snapshot, instead of saying:
"All disk changes since the last snapshot was created will be discarded."
... it should simply say:
"All changes since the last snapshot was created will be discarded."
What do you guys think?
Thanks,
Jorge
Re: [fedora-virt] [Libguestfs] ANNOUNCE: libguestfs 1.26 released
by Richard W.M. Jones
On Thu, Mar 27, 2014 at 10:26:42PM +0000, Richard W.M. Jones wrote:
> I'm pleased to announce libguestfs 1.26, a library and set of tools
> for accessing and modifying virtual machine disk images. This release
> took more than 6 months of work by a considerable number of people,
> and has many new features (see release notes below).
>
> You can get libguestfs 1.26 here:
>
> Main website: http://libguestfs.org/
>
> Source: http://libguestfs.org/download/1.26-stable/
> You will also need latest supermin from here:
> http://libguestfs.org/download/supermin/
>
> Fedora 20/21: http://koji.fedoraproject.org/koji/packageinfo?packageID=8391
> It will appear as an update for F20 in about a week.
Fedora 20 users can test and give feedback here:
https://admin.fedoraproject.org/updates/libguestfs-1.26.0-1.fc20,supermin...
> Debian/experimental coming soon, see:
> https://packages.debian.org/experimental/libguestfs0
>
> The Fedora and Debian packages have split dependencies so you can
> download just the features you need.
>
> From http://libguestfs.org/guestfs-release-notes.1.html :
>
> RELEASE NOTES FOR LIBGUESTFS 1.26
>
> New features
>
> Tools
>
> virt-customize(1) is a new tool for customizing virtual machine disk
> images. It lets you install packages, edit configuration files, run
> scripts, set passwords and so on. virt-builder(1) and virt-sysprep(1)
> use virt-customize, and command line options across all these tools are
> now identical.
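As a minimal sketch of the now-shared option syntax (disk path and package name are hypothetical):
virt-customize -a fedora20.qcow2 --install httpd --root-password password:changeme
virt-builder fedora-20 --install httpd --root-password password:changeme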
>
> virt-diff(1) is a new tool for showing the differences between the
> filesystems of two virtual machines. It is mainly useful when showing
> what files have been changed between snapshots.
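A typical invocation (image names hypothetical) compares two disk images directly:
virt-diff -a base-snapshot.qcow2 -A current.qcow2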
>
> virt-builder(1) has been greatly enhanced. There are many more ways to
> customize the virtual machine. It can pull templates from multiple
> repositories. A parallelized internal xzcat implementation speeds up
> template decompression. Virt-builder uses an optimizing planner to
> choose the fastest way to build the VM. It is now easier to use
> virt-builder from other programs. Internationalization support has been
> added to metadata. More efficient SELinux relabelling of files. Can
> build guests for multiple architectures. Error messages have been
> improved. (Pino Toscano)
>
> virt-sparsify(1) has a new --in-place option. This sparsifies an image
> in place (without copying it) and is also much faster. (Lots of help
> provided by Paolo Bonzini)
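Usage is simply (image name hypothetical; the guest must not be running, and --in-place modifies the image directly):
virt-sparsify --in-place guest.qcow2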
>
> virt-sysprep(1) can delete and scrub files under user control. You can
> lock user accounts or set random passwords on accounts. Can remove more
> log files. Can unsubscribe a guest from Red Hat Subscription Manager.
> New flexible way to enable and disable operations. (Wanlong Gao, Pino
> Toscano)
>
> virt-win-reg(1) allows you to use URIs to specify remote disk images.
>
> virt-format(1) can now pass the extra space that it recovers back to
> the host.
>
> guestfish(1) has additional environment variables to give fine control
> over the ><fs> prompt. Guestfish reads its (rarely used) configuration
> file in a different order now so that local settings override global
> settings. (Pino Toscano)
>
> virt-make-fs(1) was rewritten in C, but is unchanged in terms of
> functionality and command line usage.
>
> Language bindings
>
> The OCaml bindings have a new Guestfs.Errno module, used to check the
> error number returned by Guestfs.last_errno.
>
> PHP tests now work. (Pino Toscano)
>
> Inspection
>
> Inspection can recognize Debian live images.
>
> Architectures
>
> ARMv7 (32 bit) now supports KVM acceleration.
>
> Aarch64 (ARM 64 bit) is supported, but the appliance part does not work
> yet.
>
> PPC64 support has been fixed and enhanced.
>
> Security
>
> Denial of service when inspecting disk images with corrupt btrfs
> volumes
>
> It was possible to crash libguestfs (and programs that use libguestfs
> as a library) by presenting a disk image containing a corrupt btrfs
> volume.
>
> This was caused by a NULL pointer dereference causing a denial of
> service, and is not thought to be exploitable any further.
>
> See commit d70ceb4cbea165c960710576efac5a5716055486 for the fix. This
> fix is included in libguestfs stable branches ≥ 1.26.0, ≥ 1.24.6 and
> ≥ 1.22.8, and also in RHEL ≥ 7.0. Earlier versions of libguestfs are
> not vulnerable.
>
> Better generation of random root passwords and random seeds
>
> When generating random root passwords and random seeds, two bugs were
> fixed which are possibly security related. Firstly we no longer read
> excessive bytes from /dev/urandom (most of which were just thrown
> away). Secondly we changed the code to avoid modulo bias. These
> issues were not thought to be exploitable. (Both changes suggested by
> Edwin Török)
>
> API
>
> GUID parameters are now validated when they are passed to API calls,
> whereas previously you could have passed any string. (Pino Toscano)
>
> New APIs
>
> guestfs_add_drive_opts: new discard parameter
>
> The new discard parameter allows fine-grained control over
> discard/trim support for a particular disk. This allows the host file
> to become more sparse (or thin-provisioned) when you delete files or
> issue the guestfs_fstrim API call.
>
> guestfs_add_domain: new parameters: cachemode, discard
>
> These parameters are passed through when adding the domain's disks.
>
> guestfs_blkdiscard
>
> Discard all blocks on a guestfs device. Combined with the discard
> parameter above, this makes the host file sparse.
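A guestfish sketch tying the two together (disk name hypothetical; the host file system must itself support hole-punching for the file to actually shrink):
guestfish <<'EOF'
add-drive /var/tmp/guest.qcow2 format:qcow2 discard:besteffort
run
blkdiscard /dev/sda
EOF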
>
> guestfs_blkdiscardzeroes
>
> Test if discarded blocks read back as zeroes.
>
> guestfs_compare_*
>
> guestfs_copy_*
>
> For each struct returned through the API, libguestfs now generates
> guestfs_compare_* and guestfs_copy_* functions to allow you to
> compare and copy structs.
>
> guestfs_copy_attributes
>
> Copy attributes (like permissions, xattrs, ownership) from one file
> to another. (Pino Toscano)
>
> guestfs_disk_create
>
> A flexible API for creating empty disk images from scratch. This
> avoids the need to call out to external programs like qemu-img(1).
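For example, from guestfish (path hypothetical; the size argument is in bytes):
guestfish disk-create /var/tmp/blank.qcow2 qcow2 1073741824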
>
> guestfs_get_backend_settings
>
> guestfs_set_backend_settings
>
> Per-backend settings (can also be set via the environment variable
> LIBGUESTFS_BACKEND_SETTINGS). The main use for this is forcing TCG
> mode in the qemu-based backends, for example:
>
> export LIBGUESTFS_BACKEND=direct
> export LIBGUESTFS_BACKEND_SETTINGS=force_tcg
>
> guestfs_part_get_name
>
> Get the label or name of a partition (for GPT disk images).
>
> Build changes
>
> The following extra packages are required to build libguestfs 1.26:
>
> supermin ≥ 5
>
> Supermin version 5 is required to build this version of libguestfs.
>
> flex, bison
>
> Virt-builder now uses a real parser to parse its metadata file, so
> these tools are required.
>
> xz
>
> This is now a required build dependency, where previously it was (in
> theory) optional.
>
> Internals
>
> PO message extraction rewritten to be more robust. (Pino Toscano)
>
> podwrapper gives an error if the --insert or --verbatim argument
> pattern is not found.
>
> Libguestfs now passes the qemu -enable-fips option to enable FIPS, if
> qemu supports it.
>
> ./configure --without-qemu can be used if you don't want to specify a
> default hypervisor.
>
> Copy-on-write [COW] overlays, used for example for read-only drives,
> are now created through an internal backend API (.create_cow_overlay).
>
> Libvirt backend uses some funky C macros to generate XML. These are
> simpler and safer.
>
> The ChangeLog file format has changed. It is now just the same as git
> log, instead of using a custom format.
>
> Appliance start-up has changed:
>
> * The libguestfs appliance now initializes LVM the same way as it is
> done on physical machines.
>
> * The libguestfs appliance does not write an empty string to
> /proc/sys/kernel/hotplug when starting up.
>
> Note that you must configure your kernel to have
> CONFIG_UEVENT_HELPER_PATH="" otherwise you will get strange LVM
> errors (this applies as much to any Linux machine, not just
> libguestfs). (Peter Rajnoha)
>
> Libguestfs can now be built on arches that have ocamlc(1) but not
> ocamlopt(1). (Hilko Bengen, Olaf Hering)
>
> You cannot use ./configure --disable-daemon --enable-appliance. It made
> no sense anyway. Now it is expressly forbidden by the configure script.
>
> The packagelist file uses m4 for macro expansion instead of cpp.
>
> Bugs fixed
>
> https://bugzilla.redhat.com/1073906
>
> java bindings inspect_list_applications2 throws
> java.lang.ArrayIndexOutOfBoundsException:
>
> https://bugzilla.redhat.com/1063374
>
> [RFE] enable subscription manager clean or unregister operation to
> sysprep
>
> https://bugzilla.redhat.com/1060404
>
> virt-resize does not preserve GPT partition names
>
> https://bugzilla.redhat.com/1057504
>
> mount-local should give a clearer error if root is not mounted
>
> https://bugzilla.redhat.com/1056290
>
> virt-sparsify overwrites block devices if used as output files
>
> https://bugzilla.redhat.com/1055452
>
> libguestfs: error: invalid backend: appliance
>
> https://bugzilla.redhat.com/1054761
>
> guestfs_pvs prints "unknown device" if a physical volume is missing
>
> https://bugzilla.redhat.com/1053847
>
> Recommended default clock/timer settings
>
> https://bugzilla.redhat.com/1046509
>
> ruby-libguestfs throws "expecting 0 or 1 arguments" on
> Guestfs::Guestfs.new
>
> https://bugzilla.redhat.com/1045450
>
> Cannot inspect cirros 0.3.1 disk image fully
>
> https://bugzilla.redhat.com/1045033
>
> LIBVIRT_DEFAULT_URI=qemu:///system breaks libguestfs
>
> https://bugzilla.redhat.com/1044585
>
> virt-builder network (eg. --install) doesn't work if resolv.conf sets
> nameserver 127.0.0.1
>
> https://bugzilla.redhat.com/1044014
>
> When SSSD is installed, libvirt configuration requires
> authentication, but not clear to user
>
> https://bugzilla.redhat.com/1039995
>
> virt-make-fs fails making fat/vfat whole disk: Device partition
> expected, not making filesystem on entire device '/dev/sda' (use -I
> to override)
>
> https://bugzilla.redhat.com/1039540
>
> virt-sysprep to delete more logfiles
>
> https://bugzilla.redhat.com/1033207
>
> RFE: libguestfs inspection does not recognize Free4NAS live CD
>
> https://bugzilla.redhat.com/1028660
>
> RFE: virt-sysprep/virt-builder should have an option to lock a user
> account
>
> https://bugzilla.redhat.com/1026688
>
> libguestfs fails examining libvirt guest with ceph drives: rbd: image
> name must begin with a '/'
>
> https://bugzilla.redhat.com/1022431
>
> virt-builder fails if $HOME/.cache doesn't exist
>
> https://bugzilla.redhat.com/1022184
>
> libguestfs: do not use versioned jar file
>
> https://bugzilla.redhat.com/1020806
>
> All libguestfs LVM operations fail on Debian/Ubuntu
>
> https://bugzilla.redhat.com/1008417
>
> Need update helpout of part-set-gpt-type
>
> https://bugzilla.redhat.com/953907
>
> virt-sysprep does not correctly set the hostname on Debian/Ubuntu
>
> https://bugzilla.redhat.com/923355
>
> guestfish prints literal "\n" in error messages
>
> https://bugzilla.redhat.com/660687
>
> guestmount: "touch" command fails: touch: setting times of
> `timestamp': Invalid argument
>
> https://bugzilla.redhat.com/593511
>
> [RFE] function to get partition name
>
> https://bugzilla.redhat.com/563450
>
> list-devices returns devices of different types out of order
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
Xen 4.4 heading for Rawhide
by M A Young
I am planning to update the xen package in Rawhide to version 4.4.0 in the
next few days. The xend functionality including the xm command is now an
optional extra which I was intending to leave out. Is that likely to cause
problems with libvirt? Also, this version makes it easier to use upstream
qemu, but Fedora qemu currently builds without xen support. How easy would
it be to change this (after xen has been updated)?
Michael Young
Problem installing Fedora 20 cloud image
by Eric V. Smith
I want to use the stock Fedora cloud image,
Fedora-x86_64-20-20131211.1-sda.raw. Dom0 is also Fedora 20, running the
Xen kernel. I've yum update'd everything.
I have downloaded a local copy of the .raw file.
I've created a user-data file containing:
==================
#cloud-config
password: fedora
chpasswd: {expire: False}
ssh_pwauth: True
==================
And a meta-data file containing:
==================
instance-id: build-f20; local-hostname: build-f20
==================
I create an iso image of these with:
# genisoimage -output build-f20-cidata.iso -volid cidata -joliet
-rational-rock user-data meta-data
Then I try to install the image with:
# virt-install --import --name build-f20 --ram 512 --vcpus 1 --disk
path=./Fedora-x86_64-20-20131211.1-sda.raw,bus=scsi --disk
build-f20-cidata.iso,device=cdrom,bus=scsi --network
bridge=virbr0,model=virtio --graphics=none
After a number of output lines, I get the error:
[ 1.268756] VFS: Cannot open root device
"UUID=e78f2b16-8836-4e6a-9e5e-fdc6c9d3cfc3" or unknown-block(0,0): error -6
(Full --debug output at the end of the email).
I've verified that this error comes from the UUID that's in
/boot/extlinux/extlinux.conf, but everything looks correct there. I've
verified that this is the volume's UUID with
# guestfish -a Fedora-x86_64-20-20131211.1-sda.raw << EOF
> run
> vfs-uuid /dev/sda1
> EOF
e78f2b16-8836-4e6a-9e5e-fdc6c9d3cfc3
If I don't include the iso image, then after the VFS error line I get:
Please append a correct "root=" boot option; here are the available
partitions:
with no partitions listed.
I'm at a loss as to what to try next. I suspect it has something to do
with the /dev/sda1 vs. sda in the XML (which follows in the debug
output), but I'm not sure what to do about it. Any suggestions?
I've been unable to find any documentation on how to use the stock F20
image while running Dom0 as F20. Any pointers appreciated.
Thanks.
Eric.
# virt-install --import --name build-f20 --ram 512 --vcpus 1 --disk
path=./Fedora-x86_64-20-20131211.1-sda.raw,bus=scsi --disk
build-f20-cidata.iso,device=cdrom,bus=scsi --network
bridge=virbr0,model=virtio --graphics=none --debug
[Tue, 18 Mar 2014 15:55:34 virt-install 25981] DEBUG (cli:187) Launched
with command line: /usr/share/virt-manager/virt-install --import --name
build-f20 --ram 512 --vcpus 1 --disk
path=./Fedora-x86_64-20-20131211.1-sda.raw,bus=scsi --disk
build-f20-cidata.iso,device=cdrom,bus=scsi --network
bridge=virbr0,model=virtio --graphics=none --debug
[Tue, 18 Mar 2014 15:55:34 virt-install 25981] DEBUG (cli:195)
Requesting libvirt URI default
[Tue, 18 Mar 2014 15:55:34 virt-install 25981] DEBUG (cli:199) Received
libvirt URI xen:///
[Tue, 18 Mar 2014 15:55:34 virt-install 25981] DEBUG (virt-install:193)
Requesting virt method 'default', hv type 'default'.
[Tue, 18 Mar 2014 15:55:34 virt-install 25981] DEBUG (virt-install:432)
Received virt method 'xen'
[Tue, 18 Mar 2014 15:55:34 virt-install 25981] DEBUG (virt-install:433)
Hypervisor name is 'xen'
[Tue, 18 Mar 2014 15:55:34 virt-install 25981] WARNING
(virt-install:343) CDROM media does not print to the text console by
default, so you likely will not see text install output. You might want
to use --location.
[Tue, 18 Mar 2014 15:55:34 virt-install 25981] DEBUG (virt-install:551)
Guest.has_install_phase: False
Starting install...
[Tue, 18 Mar 2014 15:55:34 virt-install 25981] DEBUG (guest:446)
Generated install XML: None required
[Tue, 18 Mar 2014 15:55:34 virt-install 25981] DEBUG (guest:447)
Generated boot XML:
<domain type="xen">
<name>build-f20</name>
<uuid>c8abc5af-d6e7-4710-bb69-869cbc59872a</uuid>
<memory>524288</memory>
<currentMemory>524288</currentMemory>
<vcpu>1</vcpu>
<bootloader>/usr/bin/pygrub</bootloader>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<disk type="file" device="disk">
<source file="/root/Fedora-x86_64-20-20131211.1-sda.raw"/>
<target dev="sda" bus="scsi"/>
</disk>
<disk type="file" device="cdrom">
<source file="/root/build-f20-cidata.iso"/>
<target dev="sdb" bus="scsi"/>
<readonly/>
</disk>
<interface type="bridge">
<source bridge="virbr0"/>
<mac address="00:16:3e:df:98:ef"/>
<model type="virtio"/>
</interface>
<input type="mouse" bus="xen"/>
</devices>
</domain>
Creating domain...
| 0 B 00:00:01
[Tue, 18 Mar 2014 15:55:36 virt-install 25981] DEBUG (guest:477) XML
fetched from libvirt object:
<domain type='xen' id='55'>
<name>build-f20</name>
<uuid>c8abc5af-d6e7-4710-bb69-869cbc59872a</uuid>
<memory unit='KiB'>524288</memory>
<currentMemory unit='KiB'>524288</currentMemory>
<vcpu placement='static'>1</vcpu>
<bootloader>/usr/bin/pygrub</bootloader>
<os>
<type>linux</type>
</os>
<clock offset='utc' adjustment='reset'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<disk type='file' device='disk'>
<driver name='file'/>
<source file='/root/Fedora-x86_64-20-20131211.1-sda.raw'/>
<target dev='sda' bus='scsi'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='file'/>
<source file='/root/build-f20-cidata.iso'/>
<target dev='sdb' bus='scsi'/>
<readonly/>
</disk>
<interface type='bridge'>
<mac address='00:16:3e:df:98:ef'/>
<source bridge='virbr0'/>
<script path='/etc/xen/scripts/vif-bridge'/>
<target dev='vif55.0'/>
<model type='virtio'/>
</interface>
<console type='pty' tty='/dev/pts/4'>
<source path='/dev/pts/4'/>
<target type='xen' port='0'/>
</console>
</devices>
</domain>
[Tue, 18 Mar 2014 15:55:36 virt-install 25981] DEBUG (cli:397)
Connecting to text console
[Tue, 18 Mar 2014 15:55:36 virt-install 25981] DEBUG (cli:344) Running:
/usr/bin/virsh --connect xen:/// console build-f20
Connected to domain build-f20
Escape character is ^]
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.11.10-301.fc20.x86_64
(mockbuild@bkernel01.phx2.fedoraproject.org) (gcc version 4.8.2 20131017
(Red Hat 4.8.2-1) (GCC) ) #1 SMP Thu Dec 5 14:01:17 UTC 2013
[ 0.000000] Command line: ro
root=UUID=e78f2b16-8836-4e6a-9e5e-fdc6c9d3cfc3 console=tty1
console=ttyS0,115200n8
[ 0.000000] ACPI in unprivileged domain disabled
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] Xen: [mem 0x0000000000000000-0x000000000009ffff] usable
[ 0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved
[ 0.000000] Xen: [mem 0x0000000000100000-0x00000000207fffff] usable
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] DMI not present or invalid.
[ 0.000000] No AGP bridge found
[ 0.000000] e820: last_pfn = 0x20800 max_arch_pfn = 0x400000000
[ 0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[ 0.000000] init_memory_mapping: [mem 0x1fe00000-0x1fffffff]
[ 0.000000] init_memory_mapping: [mem 0x1c000000-0x1fdfffff]
[ 0.000000] init_memory_mapping: [mem 0x00100000-0x1bffffff]
[ 0.000000] init_memory_mapping: [mem 0x20000000-0x207fffff]
[ 0.000000] NUMA turned off
[ 0.000000] Faking a node at [mem 0x0000000000000000-0x00000000207fffff]
[ 0.000000] Initmem setup node 0 [mem 0x00000000-0x207fffff]
[ 0.000000] NODE_DATA [mem 0x1ff02000-0x1ff15fff]
[ 0.000000] Zone ranges:
[ 0.000000] DMA [mem 0x00001000-0x00ffffff]
[ 0.000000] DMA32 [mem 0x01000000-0xffffffff]
[ 0.000000] Normal empty
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x00001000-0x0009ffff]
[ 0.000000] node 0: [mem 0x00100000-0x207fffff]
[ 0.000000] SFI: Simple Firmware Interface v0.81
http://simplefirmware.org
[ 0.000000] smpboot: Allowing 1 CPUs, 0 hotplug CPUs
[ 0.000000] No local APIC present
[ 0.000000] APIC: disable apic facility
[ 0.000000] APIC: switched to apic NOOP
[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000fffff]
[ 0.000000] e820: [mem 0x20800000-0xffffffff] available for PCI devices
[ 0.000000] Booting paravirtualized kernel on Xen
[ 0.000000] Xen version: 4.3.2 (preserve-AD)
[ 0.000000] setup_percpu: NR_CPUS:128 nr_cpumask_bits:128
nr_cpu_ids:1 nr_node_ids:1
[ 0.000000] PERCPU: Embedded 28 pages/cpu @ffff88001f800000 s85568
r8192 d20928 u2097152
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on.
Total pages: 130922
[ 0.000000] Policy zone: DMA32
[ 0.000000] Kernel command line: ro
root=UUID=e78f2b16-8836-4e6a-9e5e-fdc6c9d3cfc3 console=tty1
console=ttyS0,115200n8
[ 0.000000] PID hash table entries: 2048 (order: 2, 16384 bytes)
[ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[ 0.000000] Checking aperture...
[ 0.000000] No AGP bridge found
[ 0.000000] Memory: 494892K/532092K available (6493K kernel code,
990K rwdata, 2864K rodata, 1424K init, 1544K bss, 37200K reserved)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU restricting CPUs from NR_CPUS=128 to nr_cpu_ids=1.
[ 0.000000] NR_IRQS:8448 nr_irqs:256 16
[ 0.000000] Console: colour dummy device 80x25
[ 0.000000] console [tty0] enabled
[ 0.000000] console [hvc0] enabled
[ 0.000000] console [ttyS0] enabled
[ 0.000000] allocated 2621440 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't
want memory cgroups
[ 0.000000] installing Xen timer for CPU 0
[ 0.000000] tsc: Detected 3400.144 MHz processor
[ 0.001000] Calibrating delay loop (skipped), value calculated using
timer frequency.. 6800.28 BogoMIPS (lpj=3400144)
[ 0.001000] pid_max: default: 32768 minimum: 301
[ 0.001000] Security Framework initialized
[ 0.001000] SELinux: Initializing.
[ 0.001000] Dentry cache hash table entries: 65536 (order: 7, 524288
bytes)
[ 0.001000] Inode-cache hash table entries: 32768 (order: 6, 262144
bytes)
[ 0.001000] Mount-cache hash table entries: 256
[ 0.001000] Initializing cgroup subsys memory
[ 0.001000] Initializing cgroup subsys devices
[ 0.001000] Initializing cgroup subsys freezer
[ 0.001000] Initializing cgroup subsys net_cls
[ 0.001010] Initializing cgroup subsys blkio
[ 0.001046] Initializing cgroup subsys perf_event
[ 0.001090] Initializing cgroup subsys hugetlb
[ 0.001177] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[ 0.001177] ENERGY_PERF_BIAS: View and update with
x86_energy_perf_policy(8)
[ 0.001289] CPU: Physical Processor ID: 0
[ 0.001328] CPU: Processor Core ID: 0
[ 0.001631] Last level iTLB entries: 4KB 512, 2MB 0, 4MB 0
[ 0.001631] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32
[ 0.001631] tlb_flushall_shift: 1
[ 0.030632] Freeing SMP alternatives memory: 24K (ffffffff81e5d000 -
ffffffff81e63000)
[ 0.032322] ftrace: allocating 25129 entries in 99 pages
[ 0.036101] Performance Events: unsupported p6 CPU model 58 no PMU
driver, software events only.
[ 0.036902] Brought up 1 CPUs
[ 0.036953] NMI watchdog: disabled (cpu0): hardware events not enabled
[ 0.037005] devtmpfs: initialized
[ 0.037558] atomic64 test passed for x86-64 platform with CX8 and
with SSE
[ 0.037624] xen:grant_table: Grant tables using version 2 layout
[ 0.037675] Grant table initialized
[ 0.056717] RTC time: 165:165:165, date: 165/165/65
[ 0.056820] NET: Registered protocol family 16
[ 0.057339] PCI: setting up Xen PCI frontend stub
[ 0.057796] bio: create slab <bio-0> at 0
[ 0.057886] ACPI: Interpreter disabled.
[ 0.057917] xen:balloon: Initialising balloon driver
[ 0.058023] xen_balloon: Initialising balloon driver
[ 0.058084] vgaarb: loaded
[ 0.058142] SCSI subsystem initialized
[ 0.058240] usbcore: registered new interface driver usbfs
[ 0.058290] usbcore: registered new interface driver hub
[ 0.058352] usbcore: registered new device driver usb
[ 0.058442] PCI: System does not support PCI
[ 0.058483] PCI: System does not support PCI
[ 0.059051] NetLabel: Initializing
[ 0.059077] NetLabel: domain hash size = 128
[ 0.059107] NetLabel: protocols = UNLABELED CIPSOv4
[ 0.059149] NetLabel: unlabeled traffic allowed by default
[ 0.059212] Switched to clocksource xen
[ 0.061818] pnp: PnP ACPI: disabled
[ 0.062600] NET: Registered protocol family 2
[ 0.062711] TCP established hash table entries: 4096 (order: 4, 65536
bytes)
[ 0.062778] TCP bind hash table entries: 4096 (order: 4, 65536 bytes)
[ 0.062833] TCP: Hash tables configured (established 4096 bind 4096)
[ 0.062884] TCP: reno registered
[ 0.062910] UDP hash table entries: 256 (order: 1, 8192 bytes)
[ 0.062953] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
[ 0.063024] NET: Registered protocol family 1
[ 0.063111] platform rtc_cmos: registered platform RTC device (no PNP
device found)
[ 0.182976] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[ 0.183175] Initialise system trusted keyring
[ 0.183239] audit: initializing netlink socket (disabled)
[ 0.183289] type=2000 audit(1395172536.673:1): initialized
[ 0.197775] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 0.198334] zbud: loaded
[ 0.198457] VFS: Disk quotas dquot_6.5.2
[ 0.198522] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.198782] msgmni has been set to 966
[ 0.198854] Key type big_key registered
[ 0.199217] alg: No test for stdrng (krng)
[ 0.199256] NET: Registered protocol family 38
[ 0.199297] Key type asymmetric registered
[ 0.199334] Asymmetric key parser 'x509' registered
[ 0.199401] Block layer SCSI generic (bsg) driver version 0.4 loaded
(major 252)
[ 0.199476] io scheduler noop registered
[ 0.199509] io scheduler deadline registered
[ 0.199570] io scheduler cfq registered (default)
[ 0.199656] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 0.199708] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 0.199940] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 0.200181] Non-volatile memory driver v1.3
[ 0.200214] Linux agpgart interface v0.103
[ 0.200323] libphy: Fixed MDIO Bus: probed
[ 0.200392] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 0.200441] ehci-pci: EHCI PCI platform driver
[ 0.200486] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 0.200537] ohci-pci: OHCI PCI platform driver
[ 0.200583] uhci_hcd: USB Universal Host Controller Interface driver
[ 0.200656] usbcore: registered new interface driver usbserial
[ 0.200703] usbcore: registered new interface driver usbserial_generic
[ 0.200752] usbserial: USB Serial support registered for generic
[ 0.200801] i8042: PNP: No PS/2 controller found. Probing ports directly.
[ 1.208821] mousedev: PS/2 mouse device common for all mice
[ 1.268983] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0
[ 1.269076] rtc_cmos: probe of rtc_cmos failed with error -38
[ 1.269178] device-mapper: uevent: version 1.0.3
[ 1.269264] device-mapper: ioctl: 4.25.0-ioctl (2013-06-26)
initialised: dm-devel@redhat.com
[ 1.269355] Intel P-state driver initializing.
[ 1.269424] hidraw: raw HID events driver (C) Jiri Kosina
[ 1.269525] usbcore: registered new interface driver usbhid
[ 1.269567] usbhid: USB HID core driver
[ 1.269619] drop_monitor: Initializing network drop monitor service
[ 1.269743] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 1.269810] TCP: cubic registered
[ 1.269840] Initializing XFRM netlink socket
[ 1.269942] NET: Registered protocol family 10
[ 1.270100] mip6: Mobile IPv6
[ 1.270126] NET: Registered protocol family 17
[ 1.270235] Loading compiled-in X.509 certificates
[ 1.270831] Loaded X.509 cert 'Fedora kernel signing key:
03591dc57a690741401a1c202e2b3d9f4fed2a0e'
[ 1.270912] registered taskstats version 1
[ 1.270965] xenbus_probe_frontend: Device with no driver: device/vbd/2048
[ 1.271019] xenbus_probe_frontend: Device with no driver: device/vbd/2064
[ 1.271100] xenbus_probe_frontend: Device with no driver: device/vif/0
[ 1.271178] Magic number: 1:252:3141
[ 1.271222] drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
[ 1.271323] md: Waiting for all devices to be available before autodetect
[ 1.271373] md: If you don't use raid, use raid=noautodetect
[ 1.271504] md: Autodetecting RAID arrays.
[ 1.271536] md: Scanned 0 and added 0 devices.
[ 1.271576] md: autorun ...
[ 1.271604] md: ... autorun DONE.
[ 1.271671] VFS: Cannot open root device
"UUID=e78f2b16-8836-4e6a-9e5e-fdc6c9d3cfc3" or unknown-block(0,0): error -6
Domain creation completed. You can restart your domain by running:
virsh --connect xen:/// start build-f20
#
Re: [fedora-virt] Directory Passthrough
by James Harrison
I am experimenting with GlusterFS between two VM hosts on F20. The idea is that the VM images are stored on the Gluster file system with no VM disk cache.
You could perhaps extend this so the VMs mount a Gluster brick directly? Not sure. Maybe someone can confirm whether this would work?
James
Directory Passthrough
by Robert Locke
Looking for some recommendations as to when directory passthrough
started working or will become available.
I would like to mount a directory from the host read-only inside the
guest. Technically I'm looking to do this on RHEL 7, but I'm just curious
what the proper virsh define XML is to make it happen.
Any pointers to current documentation/methods?
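For reference, the virtio-9p style of passthrough is normally expressed with a <filesystem> element in the domain XML (a sketch only; the host path and mount tag are placeholders):
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/srv/export'/>
  <target dir='hostshare'/>
  <readonly/>
</filesystem>
and mounted from inside the guest with:
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt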
--Rob
Simplest way to notice a virtual machine shutdown?
by Tom Horsley
I've got one virtual machine that provides some services used
by another virtual machine. What's the simplest way to notice
when one of the machines goes down and automagically tell
the other to shutdown?
Can I get notifications of some kind from libvirt or
udev (maybe the vnet interface going away) or is it
simpler to just poll every so often to see which ones
are up?
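If polling turns out to be the simplest route, a minimal sketch (VM names hypothetical) would be something like:
while true; do
    if [ "$(virsh domstate service-vm)" != "running" ]; then
        virsh shutdown client-vm
        break
    fi
    sleep 30
done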
Isolate KVM from LAN, but not WAN?
by Tom Horsley
I came up with a nifty way to do this using VLANs in my router, but my
new router doesn't support VLANs. I keep thinking I really ought to be
able to do this with iptables, but nothing I try seems to work.
Here's my old technique:
http://home.comcast.net/~tomhorsley/game/isolate.html
Now I need to figure out some way to make everything
run on the host without any help from the router.
Any ideas?
Am I going to have to run a 2nd virtual machine just
to serve as a "router" for the isolated machine
and block all local LAN traffic inside the 2nd VM?
(I'm pretty sure I could get that to work, but it
seems like a much bigger hammer than I ought to need :).
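For what it's worth, a rough iptables sketch of the "block the LAN, leave the WAN alone" idea (this assumes the guest sits on libvirt's NAT'd default network virbr0 so its traffic traverses the host's FORWARD chain, and that the LAN is 192.168.1.0/24):
# reject anything the guest sends toward the local subnet; Internet-bound
# traffic has a non-LAN destination and is untouched
iptables -I FORWARD -i virbr0 -d 192.168.1.0/24 -j REJECT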
CfP 9th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '14)
by VHPC 15
we apologize if you receive multiple copies of this CfP
=================================================================
CALL FOR PAPERS
9th Workshop on Virtualization in High-Performance Cloud Computing (VHPC
'14)
held in conjunction with Euro-Par 2014, August 25-29, Porto, Portugal
=================================================================
Date: August 26, 2014
Workshop URL: http://vhpc.org
Paper Submission Deadline: May 30, 2014
CALL FOR PAPERS
Virtualization technologies constitute a key enabling factor for flexible
resource
management in modern data centers, and particularly in cloud environments.
Cloud providers need to dynamically manage complex infrastructures in a
seamless fashion for varying workloads and hosted applications,
independently of
the customers deploying software or users submitting highly dynamic and
heterogeneous workloads. Thanks to virtualization, we have the ability to
manage
vast computing and networking resources dynamically and close to the
marginal
cost of providing the services, which is unprecedented in the history of
scientific
and commercial computing.
Various virtualization technologies contribute to the overall picture in
different
ways: machine virtualization, with its capability to enable consolidation
of multiple
under-utilized servers with heterogeneous software and operating systems
(OSes),
and its capability to live-migrate a fully operating virtual machine (VM)
with a very
short downtime, enables novel and dynamic ways to manage physical servers;
OS-level virtualization, with its capability to isolate multiple user-space
environments and to allow for their co-existence within the same OS kernel,
promises to provide many of the advantages of machine virtualization with
high
levels of responsiveness and performance; I/O Virtualization allows physical
NICs/HBAs to take traffic from multiple VMs; network virtualization, with
its
capability to create logical network overlays that are independent of the
underlying physical topology and IP addressing, provides the fundamental
ground on top of which evolved network services can be realized with an
unprecedented level of dynamicity and flexibility; the increasingly adopted
paradigm of Software-Defined Networking (SDN) promises to extend this
flexibility to the control and data planes of network paths. These
technologies
have to be inter-mixed and integrated in an intelligent way, to support
workloads that are increasingly demanding in terms of absolute performance,
responsiveness and interactivity, and have to respect well-specified
Service-
Level Agreements (SLAs), as needed for industrial-grade provided services.
Indeed, among emerging and increasingly interesting application domains
for virtualization, we can find big-data application workloads in cloud
infrastructures, interactive and real-time multimedia services in the cloud,
including real-time big-data streaming platforms such as those used in
real-time analytics, which nowadays support a plethora of application domains. Distributed
cloud infrastructures promise to offer unprecedented responsiveness levels
for
hosted applications, but that is only possible if the underlying
virtualization
technologies can overcome most of the latency impairments typical of current
virtualized infrastructures (e.g., far worse tail-latency). What is more,
in data
communications Network Function Virtualization (NFV) is becoming a key
technology enabling a shift from supplying hardware-based network functions,
to providing them in a software-based and elastic way. In conjunction with
(public and private) cloud technologies, NFV may be used for constructing
the
foundation for cost-effective network functions that can easily and
seamlessly
adapt to demand, still keeping their major carrier-grade characteristics in
terms
of QoS and reliability.
The Workshop on Virtualization in High-Performance Cloud Computing (VHPC)
aims to bring together researchers and industrial practitioners facing the
challenges
posed by virtualization in order to foster discussion, collaboration,
mutual exchange
of knowledge and experience, enabling research to ultimately provide novel
solutions for virtualized computing systems of tomorrow.
The workshop will be one day in length, composed of 20 min paper
presentations,
each followed by 10 min discussion sections, and lightning talks, limited
to 5
minutes. Presentations may be accompanied by interactive demonstrations.
TOPICS
Topics of interest include, but are not limited to:
- Management, deployment and monitoring of virtualized environments
- Language-process virtual machines
- Performance monitoring for virtualized/cloud workloads
- Virtual machine monitor platforms
- Topology management and optimization for distributed virtualized
applications
- Paravirtualized I/O
- Improving I/O and network virtualization including use of RDMA,
Infiniband, PCIe
- Improving performance in VM access to GPUs, GPU clusters, GP-GPUs
- HPC storage virtualization
- Virtualized systems for big-data and analytics workloads
- Optimizations and enhancements to OS virtualization support
- Improving OS-level virtualization and its integration within cloud
management
- Performance modelling for virtualized/cloud applications
- Heterogeneous virtualized environments
- Network virtualization
- Software defined networking
- Network function virtualization
- Hypervisor and network virtualization QoS and SLAs
- Cloudbursting
- Evolved European grid architectures, including those based on network
virtualization
- Workload characterization for VM-based environments
- Optimized communication libraries/protocols in the cloud
- System and process/bytecode VM convergence
- Cloud frameworks and APIs
- Checkpointing/migration of VM-based large compute jobs
- Job scheduling/control/policy with VMs
- Instrumentation interfaces and languages
- VMM performance (auto-)tuning on various load types
- Cloud reliability, fault-tolerance, and security
- Research, industrial and educational use cases
- Virtualization in cloud, cluster and grid environments
- Cross-layer VM optimizations
- Cloud HPC use cases including optimizations
- Services in cloud HPC
- Hypervisor extensions and tools for cluster and grid computing
- Cluster provisioning in the cloud
- Performance and cost modelling
- Languages for describing highly-distributed compute jobs
- VM cloud and cluster distribution algorithms, load balancing
- Energy-aware virtualization
Important Dates
Rolling - Paper registration
May 30, 2014 - Full paper submission
July 4, 2014 - Acceptance notification
October 3, 2014 - Camera-ready version due
August 26, 2014 - Workshop Date
TPC
CHAIR
Michael Alexander (chair), TU Wien, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Tommaso Cucinotta (co-chair), Bell Labs, Dublin, Ireland
PROGRAM COMMITTEE
Costas Bekas, IBM
Jakob Blomer, CERN
Roberto Canonico, University of Napoli Federico II, Italy
Paolo Costa, MS Research Cambridge, England
Jorge Ejarque Artigas, Barcelona Supercomputing Center, Spain
William Gardner, University of Guelph, USA
Balazs Gerofi, University of Tokyo, Japan
Krishna Kant, Temple University, USA
Romeo Kinzler, IBM
Nectarios Koziris, National Technical University of Athens, Greece
Giuseppe Lettieri, University of Pisa, Italy
Jean-Marc Menaud, Ecole des Mines de Nantes, France
Christine Morin, INRIA, France
Dimitrios Nikolopoulos, Queen's University of Belfast, UK
Herbert Poetzl, VServer, Austria
Luigi Rizzo, University of Pisa, Italy
Josh Simons, VMWare, USA
Borja Sotomayor, University of Chicago, USA
Vangelis Tasoulas, Simula Research Lab, Norway
Yoshio Turner, HP Labs, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Chao-Tung Yang, Tunghai University, Taiwan
PAPER SUBMISSION-PUBLICATION
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include abstract, key words, the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables
and figures at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work.
Accepted papers will be published in the Springer LNCS series - the
format must be according to the Springer LNCS Style. Initial
submissions are in PDF; authors of accepted papers will be requested
to provide source files.
Format Guidelines:
http://www.springer.de/comp/lncs/authors.html
EasyChair Abstract Submission Link:
https://www.easychair.org/conferences/?conf=europar2014ws
GENERAL INFORMATION
The workshop is one day in length and will be held in conjunction with
Euro-Par 2014, 25-29 August, Porto, Portugal