Re: [vdsm] [Users] [ATN] oVirt 3.3 release, moving to RC [3.3 branching, blockers review]
by Dan Kenigsberg
On Tue, Jul 30, 2013 at 06:05:46PM +0300, Moran Goldboim wrote:
> We would like to go ahead with the oVirt 3.3 release process and issue an RC
>
> for tomorrow meeting:
>
> Maintainers/ Package owners
> ==================
> -branch your git repo: create a 3.3 branch from master (making master ready for 3.4)
> -before tomorrow's meeting, review [1] and see if you have any changes to it
> -if you don't have any blockers under your component, please create
> an RC1 branch from 3.3
>
> |
> -master
>    |
>    -engine_3.3
>       |
>       -RC1
> |
> -engine_3.2
> ...
>
> Users
> ====
> with your experience from the test day and with the nightlies, if you feel
> there are additional candidates that should block this version, please add
> them to the tracker bug [1].
>
> Suggested Schedule
> ============
> Wed Jul 31st - review of blockers for the version and component readiness
> Mon Aug 5th - RC1 release
> Wed Aug 7th - Release readiness review (in case of blockers an RC2
> will be issued)
>
> Thanks.
>
> [1]*Bug 918494* <https://bugzilla.redhat.com/show_bug.cgi?id=918494>
> -Tracker: oVirt 3.3 release
I've just tagged vdsm-4.12.0 and created the ovirt-3.3 branch for the vdsm
repository, starting at git hash 620343d6317c849fc985a5083cebb68c995f7c15.
Expect a deluge of non-ovirt-3.3 merges to the master branch soon.
Future ovirt-3.3 fixes would have to be backported and cherry-picked.
Dan.
10 years, 9 months
Exploiting domain specific offload features
by M. Mohan Kumar
Hello,
We are adding features such as server-offloaded cloning, snapshotting of
files (i.e. VM disks) and zeroed VM disk allocation in GlusterFS. As of
now only the BD xlator supports offloaded cloning & snapshots.
Server-offloaded zeroing of VM disks is supported by both the posix and BD xlators.
The initial approach is to use the xattr interface to trigger these
offload features, e.g.:
# setfattr -n "clone" -v "path-to-new-clone-file" path-to-source-file
will create a clone of path-to-source-file at path-to-new-clone-file.
Cloning is done on the GlusterFS server side, so it is a kind of
server-offloaded copy. Similarly, a snapshot can be taken using the same
setfattr approach.
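From the caller's side, the same request could be issued programmatically. The sketch below is a minimal illustration, assuming the mount-point paths and helper names are hypothetical; only the xattr name "clone" is taken from the setfattr example above:

```python
import os

def clone_xattr_args(src_path, clone_path):
    # Build the (path, name, value) triple for the server-offloaded
    # clone request; the xattr name "clone" comes from the setfattr
    # example above, the paths are whatever lives on the mount.
    return (src_path, "clone", clone_path.encode())

def request_clone(src_path, clone_path):
    # On a GlusterFS client mount, the BD xlator would intercept this
    # virtual xattr and perform the copy server-side, instead of
    # storing a real extended attribute on the file.
    os.setxattr(*clone_xattr_args(src_path, clone_path))
```

Calling request_clone() only makes sense on an actual GlusterFS mount with the BD xlator loaded; elsewhere the kernel would reject the unnamespaced xattr.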
The GlusterFS storage domain is already part of VDSM, and we want to exploit
the offload features provided by GlusterFS through VDSM. Is there any way to
do so from VDSM as of now?
Linking to bugs from oVirt wiki
by Allon Mureinik
Hi guys,
Since a lot of us will be busy opening bugs as part of the oVirt Test Day, I cooked up a quick template to make adding Bugzilla links easier.
You can just use {BZ|number} in your wiki markup, and you'll get a user-friendly link to Bugzilla with the given number (e.g., {BZ|123} will create a link to http://bugzilla.redhat.com/123).
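The template's effect can be sketched in Python; the expanded link label and the MediaWiki external-link syntax below are illustrative assumptions, only the target URL comes from the description above:

```python
import re

def expand_bz(wikitext):
    # Illustrative expansion of the {BZ|number} wiki template: each
    # occurrence becomes an external link to the Bugzilla entry with
    # that number. The "BZ#n" label is an assumed rendering.
    return re.sub(r"\{BZ\|(\d+)\}",
                  r"[http://bugzilla.redhat.com/\1 BZ#\1]",
                  wikitext)
```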
Enjoy,
Allon
migration progress feature
by peet@redhat.com
…
Goal
====
We have to implement a migration progress bar in the UI. This progress
bar should reflect not only the progress itself, but also whether the
migration is stalled, and so on.
Tasks
=====
* Get the information from libvirt: it provides job progress in the same
way for all migration-like jobs: migration, suspend, snapshot
* Feed this information to the engine
* Reflect it in the UI
API status
==========
Libvirt info is OK — it is available for any migration-like job, be it
migration, suspend or snapshot.
In VDSM we have a separate API verb to report the migration progress:
migrateStatus(). But we also have the getVmList() call, which the engine
polls every few seconds.
Proposal
========
We would propose to provide an optional field, `migrateStatus`, in the
report sent by getVmList(). This approach should save a good amount of
traffic and ease the engine-side logic.
Given that the separate verb already exists, this may sound odd, but I'm
sure the optimisation is worth it.
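A minimal sketch of what such a report entry could look like; the field names other than `migrateStatus` are illustrative, not the actual VDSM schema:

```python
def vm_report(vm_id, status, migrate_status=None):
    # Build one entry of the getVmList() report. The migrateStatus
    # field is optional: it is present only while a migration-like
    # job (migration, suspend, snapshot) is running, so the common
    # case adds no extra traffic.
    entry = {"vmId": vm_id, "status": status}
    if migrate_status is not None:
        entry["migrateStatus"] = migrate_status
    return entry
```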
Please, read the patchset [1] and discuss it here.
Thanks.
[1] http://gerrit.ovirt.org/#/c/15125/
--
Peter V. Saveliev
oVirt developer meeting @ KVM Forum
by Dave Neary
Hi everyone,
Put the date in your calendar! The oVirt developer meeting will be held
in Edinburgh on October 23rd alongside the KVM Forum.
As most of you know, the KVM Forum is happening alongside LinuxCon
Europe and CloudOpen Europe in Edinburgh this year, on October 21-23.
As we proposed to the oVirt board in January, we would like to take
advantage of this gathering of KVM core developers to plan the future of
the oVirt project too.
In addition to the numerous oVirt presentations which have been proposed
both for CloudOpen and the KVM Forum, we will be setting aside one day
for developer working sessions. The agenda for these sessions is not
(yet) set - we will have subject matter experts leading discussions on
the future of their component, where oVirt fits into the broader world
of virtualization and the cloud, and how we can grow the community.
Among the topics which may be on the table are:
* Storage - integration with Gluster, Ceph, Swift, Cinder, NetApp, EMC
* Core virtualization - what's missing to make oVirt the best
virtualization solution on the market? What's next? How can oVirt
best take advantage of the latest KVM features?
* Networking - Going beyond Quantum integration: L2 and L3 networking
in oVirt
* User interface & engine - making oVirt nicer to use and easier to
learn
* Ecosystem - Integration with OpenStack, CloudStack; migration
strategy from vSphere; integration with other 3rd party projects -
what is our place in the world?
* Community and marketing - Should we add a forum? How can we grow the
user base and community of oVirt?
(Note: these are just my ideas - the topics will be set by the session
leaders.)
What next?
First, if you are interested in the future of the oVirt project, please
plan to attend the developer meeting. If you are active in oVirt, but
cannot finance your travel to the event, please send an email to
dneary(a)redhat.com - I can't promise anything, but we do not want budget
constraints to be the main reason for someone missing the meeting.
Second, if you are interested in leading a working session on one of the
topics above, or a different topic which is important to you, please
send an email to the Workshop program committee at workshop-pc(a)ovirt.org
We will keep you posted with schedule updates and more details as
planning advances during the coming weeks.
Thanks for your interest, and for your support of oVirt!
Regards,
Dave Neary.
--
Dave Neary - Community Action and Impact
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13
Re: [vdsm] [Users] oVirt Weekly Meeting Minutes -- 2013-06-17
by Dan Kenigsberg
On Wed, Jul 17, 2013 at 11:00:01AM -0400, Mike Burns wrote:
> Minutes:
> http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-17-14.00.html
> Minutes (text):
> http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-17-14.00.txt
> Log:
> http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-17-14.00.log.html
>
> ============================
> #ovirt: oVirt Weekly Meeting
> ============================
>
>
> Meeting started by mburns at 14:00:11 UTC. The full logs are available
> at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-17-14.00.log.html
> .
>
>
>
> Meeting summary
> ---------------
> * agenda and roll call (mburns, 14:00:23)
> * 3.3 status update (mburns, 14:00:30)
> * Workshops and Conferences (mburns, 14:00:47)
> * infra update (mburns, 14:00:50)
> * Other Topics (mburns, 14:00:53)
>
>
>
>
> Action Items
> ------------
> * fsimonce to rebuild vdsm rc2 to include glance
I've tagged vdsm with rc2; however, minutes later it came to my attention
(thanks, Meni) that Vdsm ties itself into a knot when requested to
create a bridgeless (non-VM) network.
A fix has been posted,
http://gerrit.ovirt.org/17085/
but the master branch of vdsm is NOT of beta quality
at the moment.
VDSM / Guest Agent: Introduction of API versioning
by Vinzenz Feenstra
Hi,
with the increasing number of versions we support and will have to
support, it is time to introduce protocol versioning.
The motivation: currently, if VDSM sends a message that the guest agent
does not know, it ends up being logged as an error on the guest agent
side. If the guest agent sends a message that VDSM does not know about,
this is logged as an error as well. While one could argue that we should
lower the error level or remove the logging altogether, that would not
help with older versions anyway.
My proposed solution is as follows:
By default the guest agent assumes API version 0, which means all
messages implemented until now are supported. The same should be assumed
on the VDSM side: by default the API version shall be 0.
Whenever we add a new message (as is the case for FQDN reporting,
http://gerrit.ovirt.org/#/c/16572/ ), we will increase the version with
the new release.
Messages which require a higher API version are not supposed to be sent
via the VIO channels.
The negotiation of the API version in use would work like this: on
connect, VDSM sends the refresh command to the guest agent VIO channel
with an apiVersion argument. Older versions of the guest agent simply
ignore this additional parameter: it is not expected, but it does not
cause a failure either.
If the guest agent receives an API version, it will either use it or, if
it is higher than the maximum version it knows, fall back to the guest
agent's own maximum. This version is sent back to VDSM as a separate
message, and VDSM in turn will set its value to the one reported by the
guest agent (as long as it is greater than 0).
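The negotiation described above can be sketched as follows; the function and constant names are illustrative assumptions, not the actual vdsm or ovirt-guest-agent symbols:

```python
AGENT_MAX_API_VERSION = 1  # e.g. version 1 adds the FQDN message

def agent_effective_version(host_api_version):
    # Guest agent side: use the host's advertised version, capped at
    # the highest version this agent implementation knows about.
    return min(host_api_version, AGENT_MAX_API_VERSION)

def host_accept_version(reported_version, current=0):
    # VDSM side: adopt the version the agent reported back in its
    # separate reply message, as long as it is greater than 0.
    return reported_version if reported_version > 0 else current
```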
Here are the proposed patch sets:
Guest Side PatchSet: http://gerrit.ovirt.org/#/c/16995
VDSM Side PatchSet: http://gerrit.ovirt.org/#/c/17004/
Please let me know your thoughts about this.
In my opinion these changes are pretty safe and straightforward, and
they will allow us to avoid unnecessary messages being sent between
guest and host.
Regards,
--
Regards,
Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
vdsm sync meeting Monday July 15th
by Dan Kenigsberg
vdsm sync meeting monday July 15
==================================
Terribly low attendance at the meeting. Showing up can be helpful for
getting your patches reviewed!
Federico's diskSizeExtend patches are currently being pushed by Ayal
after a cursory review by Sergey
> + http://gerrit.ovirt.org/#/c/14589 - volume: add the BLOCK_SIZE constant +1
> + http://gerrit.ovirt.org/#/c/14590 - volume: add the extendVolumeSize method
> + http://gerrit.ovirt.org/#/c/15614 - vm: add the live diskSizeExtend method +1
Glance upload/download enjoyed several comments from Yeela
> + http://gerrit.ovirt.org/#/c/14955 - image: add support to upload/download images
- gluster service management verb is in!
- Neutron hook patches are in!
- so is "multiple gateways" feature!
- storage functional tests are in! Thanks, Zhou and reviewers!
- Toni has posted a [WIP] about unified network configuration persistence
http://gerrit.ovirt.org/16699/
It needs some love and care and rebase on top of
http://gerrit.ovirt.org/16847
- No interesting work has been done on the IPv6 patch for ifcfg
configurator which requires serious refactoring:
http://gerrit.ovirt.org/#/c/11741/1/vdsm/configNetwork.py
- Federico says that his "init: restart sanlock groups are missing"
http://gerrit.ovirt.org/16742 could have used the service-management
code in vdsm-tool. Unfortunately, it is inconvenient to re-use one
command's code in another. Danken prefers pushing it into a vdsm-tool
"common" module over exposing it in site_packages/vdsm/__init__.py.
- ovirt-3.3 is going beta oh-so-very-soon. This means that I am about to
tag v4.12.0-rc1 after basic tests.
I know that I am missing several patches regarding policy parameters,
which need to get into 3.3. NOW is the time to shout about other
release blockers, and THIS is the place.
Goodbye,
I hope to see more of you in two weeks!
Dan.
Disk monitoring in thin provisioning with qcow2: why does vdsm use both a passive monitor (libvirt's VIR_DOMAIN_EVENT_ID_IO_ERROR event) and an active monitor (a thread calling blockInfo())?
by Qixiaozhen
Hi, all
I am quite confused by the two kinds of disk monitoring: the passive
monitor (libvirt's VIR_DOMAIN_EVENT_ID_IO_ERROR event) and the active
monitor (a thread that calls blockInfo()).
Both serve the same goal of extending the LV volume in a block storage
domain: a node that wants to enlarge its actual disk size sends an
extend message to the SPM via the domain's mailbox.
Why do we use both ways?
I think either one would be sufficient, since both approaches suspend
the VM while the extension is in progress.
What is the scenario for the passive monitor? It seems to me that as
long as the active monitor exists, the passive monitor event should
never occur.
Can someone help me? Thank you.
Qi
Appendix:
1) VIR_DOMAIN_EVENT_ID_IO_ERROR event callback -----> passive monitor

if cif != None:
    for ev in (libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
               libvirt.VIR_DOMAIN_EVENT_ID_REBOOT,
               libvirt.VIR_DOMAIN_EVENT_ID_RTC_CHANGE,
               libvirt.VIR_DOMAIN_EVENT_ID_IO_ERROR_REASON,
               libvirt.VIR_DOMAIN_EVENT_ID_GRAPHICS,
               libvirt.VIR_DOMAIN_EVENT_ID_BLOCK_JOB):
        conn.domainEventRegisterAny(None, ev,
                                    __eventCallback, (cif, ev))

def _onAbnormalStop(self, blockDevAlias, err):
    ......
    if err.upper() == 'ENOSPC':
        for d in self._devices[vm.DISK_DEVICES]:
            if d.alias == blockDevAlias:
                ......
                capacity, alloc, physical = self._dom.blockInfo(d.path, 0)
                if physical > (alloc + config.getint('irs',
                        'volume_utilization_chunk_mb')):
                    ......
                    self._lvExtend(d.name)

2) A new thread samples statistics while the VM is running -----> active monitor

def _initVmStats(self):
    self._vmStats = VmStatsThread(self)
    self._vmStats.start()
    self._guestEventTime = self._startTime

----------------------------------------------------------------------

def _highWrite(self):
    if not self._vm._volumesPrepared:
        # Avoid queries from storage during recovery process
        return
    for vmDrive in self._vm._devices[vm.DISK_DEVICES]:
        if vmDrive.blockDev and vmDrive.format == 'cow':
            capacity, alloc, physical = \
                self._vm._dom.blockInfo(vmDrive.path, 0)
            if physical - alloc < self._vm._MIN_DISK_REMAIN:
                ......
                self._vm._onHighWrite(vmDrive.name, alloc)

def _onHighWrite(self, block_dev, offset):
    self.log.info('_onHighWrite: write above watermark on %s offset %s',
                  block_dev, offset)
    self._lvExtend(block_dev)

def _lvExtend(self, block_dev, newsize=None):
    ......
    self.cif.irs.sendExtendMsg(d.poolID, volDict, newsize * 2**20,
                               self._afterLvExtend)
    ......