Missing numactl
by Piotr Kliczewski
Hello,
I pulled the latest changes from master to test my code and during
getCaps I got:
Thread-14::ERROR::2014-04-28
12:11:44,394::__init__::484::jsonrpc.JsonRpcServer::(_serveRequest)
[Errno 2] No such file or directory
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 480, in _serveRequest
    res = method(**params)
  File "/usr/share/vdsm/Bridge.py", line 211, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 1184, in getCapabilities
    c = caps.get()
  File "/usr/share/vdsm/caps.py", line 459, in get
    caps['numaNodeDistance'] = _getNumaNodeDistance()
  File "/usr/lib64/python2.7/site-packages/vdsm/utils.py", line 1003, in __call__
    value = self.func(*args)
  File "/usr/share/vdsm/caps.py", line 209, in _getNumaNodeDistance
    retcode, out, err = utils.execCmd(['numactl', '--hardware'])
  File "/usr/lib64/python2.7/site-packages/vdsm/utils.py", line 732, in execCmd
    deathSignal=deathSignal, childUmask=childUmask)
  File "/usr/lib64/python2.7/site-packages/cpopen/__init__.py", line 50, in __init__
    stderr=PIPE)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/site-packages/cpopen/__init__.py", line 80, in _execute_child_v275
    self._childUmask)
OSError: [Errno 2] No such file or directory
After I installed the numactl rpm, getCaps works again.
Do we need to add this dependency to the spec file?
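If NUMA reporting is meant to be mandatory, adding a `Requires: numactl` to the spec file is the simple fix. For comparison, here is a minimal sketch (not vdsm's actual code; the helper name is made up) of how an optional capability probe could degrade gracefully when the binary is absent, instead of letting the OSError propagate out of getCaps:

```python
import errno
import subprocess

def run_optional(cmd):
    # Hypothetical defensive wrapper: a missing executable (ENOENT)
    # becomes a soft failure (None), so an optional probe such as
    # ['numactl', '--hardware'] cannot break the whole getCaps call.
    try:
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
    except OSError as e:
        if e.errno == errno.ENOENT:
            return None
        raise
    out, err = p.communicate()
    return p.returncode, out
```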
Thanks,
Piotr
10 years
cleaning statistics retrieval
by fromani@redhat.com
Hello VDSM developers,
I'd like to discuss and explain the plans for cleaning up Vm.getStats()
in vdsm/virt/vm.py, and how it affects a bug we have: https://bugzilla.redhat.com/show_bug.cgi?id=1073478
Let's start from the bug.
To make a long story short, there is a (small) race in VDSM, probably introduced by commit
8fedf8bde3c28edb07add23c3e9b72681cb48e49
The race can actually be triggered only if the VM is shutting down, so it is not that bad.
Fixing the reported issue in this specific case can be done with a trivial if, and that is what I did
in my initial http://gerrit.ovirt.org/#/c/25803/1/vdsm/vm.py,cm
However, this is just a bandaid and a proper solution is needed. This is the reason why
the following versions of http://gerrit.ovirt.org/#/c/25803 changed direction toward a more comprehensive
approach.
And this is the core of the issue.
My initial idea was to either return success with a complete, well-formed statistics set, or return an error.
However, the current Engine does not seem to cope properly with this, and we cannot break backward compatibility.
It looks like the only way to go is to always return success and to add a field describing the content of the
statistics (partial, complete...). This is admittedly a far cry from the ideal solution, but it is dictated
by the need to preserve compatibility with current/old Engines.
Moreover, I would like to take the chance and cleanup/refactor the statistics collection. I already started
adding the test infrastructure: http://gerrit.ovirt.org/#/c/26536/
To summarize, what I suggest to do is:
* fix https://bugzilla.redhat.com/show_bug.cgi?id=1073478 using a simple ugly fix like the original
http://gerrit.ovirt.org/#/c/25803/1/vdsm/vm.py,cm (which I'll resubmit as new change)
* refactor and clean getStats() and friends
* on the cleaner base, properly fix the statistics collection by letting getStats() always succeed and return
possibly partial stats, with a new field describing the content
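The last bullet could look roughly like this. A sketch only: the field name 'statsContent' and the per-collector split are illustrative, not the actual vdsm API:

```python
def getStats(vm, collectors):
    # Always report success; if any collector fails (e.g. the VM is
    # shutting down and its libvirt domain is already gone), return
    # whatever was gathered and mark the result as partial.
    stats = {}
    complete = True
    for collect in collectors:
        try:
            stats.update(collect(vm))
        except Exception:
            complete = False
    stats['statsContent'] = 'complete' if complete else 'partial'
    return {'status': {'code': 0, 'message': 'Done'},
            'statistics': stats}
```

An old Engine would ignore the extra field and keep working; a new Engine could check it before trusting the numbers.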
Please note that I'm not really happy with this solution but, given the constraints, I don't see better
alternatives.
Feedback welcome!
Bests,
--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
10 years
Fwd: [ovirt-devel] [Devel] Vdsm functional tests
by Vered Volansky
----- Forwarded Message -----
> From: "Vered Volansky" <vered(a)redhat.com>
> To: "Dan Kenigsberg" <danken(a)redhat.com>
> Cc: vdsm-devel(a)ovirt.org, devel(a)ovirt.org
> Sent: Thursday, April 10, 2014 7:37:51 AM
> Subject: Re: [ovirt-devel] [Devel] Vdsm functional tests
>
>
>
> ----- Original Message -----
> > From: "Dan Kenigsberg" <danken(a)redhat.com>
> > To: devel(a)ovirt.org
> > Cc: vdsm-devel(a)ovirt.org, vered(a)redhat.com, fromani(a)redhat.com
> > Sent: Thursday, April 3, 2014 6:08:31 PM
> > Subject: [Devel] Vdsm functional tests
> >
> > Functional tests are intended to verify that a running Vdsm instance
> > does what it should, when treated as a black box, over its public API.
> >
> > They should be comprehensive and representative of a typical field usage
> > of Vdsm. It is a sin to break such a test - but we must be able to know
> > when such a sin is committed.
> >
> > We currently have the following functional tests modules:
> >
> > - sosPluginTests.py
> > supervdsmFuncTests.py
> >
> > - storageTests.py
>
> Storage localfs will be good to go by May 8th.
> >
> > - momTests.py
> > virtTests.py
> >
> > - networkTests.py
> >
> > I'd like to have a designated developer per team (infra, storage, virt and
> > network), responsible for keeping these tests ever-running.
> >
> > When could we expect to have them running per commit on Jenkins slaves?
> >
> > Volunteers, please come forward.
> >
> > Dan.
> > _______________________________________________
> > Devel mailing list
> > Devel(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> >
> _______________________________________________
> Devel mailing list
> Devel(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
10 years
vdsm sync meeting minutes April 1st, 2014
by Dan Kenigsberg
Vdsm sync call April 1st 2014
=============================
- cpopen:
- Toni: there's a versioning mismatch: the version in pypi is higher
than the one on https://github.com/ficoos/cpopen
- Saggi: there shouldn't be any code difference
- Yaniv should sync the two.
- We use two options that are missing from Python3's Popen: umask and
deathSignal.
- Toni to re-send his Python3 port for cpopen, with tests running on
Python3, too.
- Saggi to accept it.
- Infra team to suggest missing features to Python.
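For context, the two missing options can be approximated on top of plain subprocess via preexec_fn, Linux only. This is a sketch of the idea, not cpopen's actual implementation:

```python
import ctypes
import os
import subprocess

PR_SET_PDEATHSIG = 1  # constant from <sys/prctl.h>

def popen_compat(cmd, umask=None, death_signal=None, **kw):
    # Load libc in the parent; the prctl call itself happens in the
    # child, between fork() and exec(), via preexec_fn.
    libc = (ctypes.CDLL('libc.so.6', use_errno=True)
            if death_signal is not None else None)

    def presetup():
        if umask is not None:
            os.umask(umask)
        if death_signal is not None:
            # Ask the kernel to deliver death_signal to the child
            # when the parent dies.
            libc.prctl(PR_SET_PDEATHSIG, int(death_signal))

    return subprocess.Popen(cmd, preexec_fn=presetup, **kw)
```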
- fromani described his attempts at consolidating the two
migration-monitoring threads into one. The motivation is a finer way
of controlling the migration downtime based on progress. Reducing
the thread count per se is not the main motivation.
- pep8 has changed. Again. Version 1.5.1 has even more draconian
requirements. We can comply with them, or include our own version of
pep8/pyflakes/pylint in our git repo. danken shudders at the thought
of copying the code, but having a git submodule is a reasonable
compromise.
- Infra team to take this task (unless someone else is faster)
- Until that happens, one needs to use pep8-1.4.6 or --ignore the offending
errors.
- We've been asked to kill the separate vdsm-devel mailing list and
merge it into devel(a)ovirt.org. There's some resistance in the audience,
so we'd do it softly.
Next week, I'll have vdsm-devel members mass-subscribed to
devel@ovirt. If you do NOT want to be subscribed, please send me a
private request.
Then, for several months, we'd see how it goes, and whether we drown
in unrelated Engine chatter.
- We had a very (too) heated debate about ignoring failures of
setDomainRegularRole() in http://gerrit.ovirt.org/24495/ and
http://gerrit.ovirt.org/25424.
The pain point is that relying on domain role (master/regular) is
faulty by design. We cannot avoid the cases where a pool has more than
one domain with a master role written in its metadata.
One side argued that oVirt should be fixed to handle this inescapable
truth, or at least enumerate the places where Vdsm and Engine, both
current and old, depend on master-role uniqueness.
The other side argued that this is not a priority task, and that we
should try to "garbage-collect" known-bad master roles as a courtesy
to people digging into domain metadata, and as a means to lessen the
problem until we kill the pool concept in the upcoming version.
I hope that I present the debate fairly enough.
Dan.
10 years
Re: [vdsm] Help Needed
by dnarayan@redhat.com
----- Original Message -----
From: "Dan Kenigsberg" <danken(a)redhat.com>
To: "Darshan Narayana Murthy" <dnarayan(a)redhat.com>
Sent: Monday, April 7, 2014 4:53:30 PM
Subject: Re: Help Needed
On Mon, Apr 07, 2014 at 06:52:36AM -0400, Darshan Narayana Murthy wrote:
> Hi Dan,
>
> I sent a patch for vdsm to get the gluster volume capacity
> statistics using libgfapi ( patch : http://gerrit.ovirt.org/#/c/26343 ),
> This patch requires glusterfs-devel package for build.
>
> It looks like jenkins does not have this package, so the build
> for this patch is failing. Does jenkins automatically pull the required
> package, or is there anything to be done to get this package onto jenkins?
>
> Can you please help me to resolve this.
Generally speaking, if you want a new package installed, you should ask
that on infra(a)ovirt.org.
However, I am not at all happy with adding C code into Vdsm. What is it?
A Python binding for glfs_statvfs? Could this be implemented elsewhere
(such as an independent python-glfs package)?
Dan.
Hi,
We are using libgfapi to get the statistics of a glusterfs
volume, as it is more efficient than mounting the volume
and reading the statistics from the mount.
libgfapi is a C API. Initially we tried using ctypes to wrap the
required functions in libgfapi, but because of a limitation in glusterfs,
these functions would break when invoked through supervdsm.
So we thought of having an extension module that makes use of libgfapi
and provides the statistics, which can be used in vdsm.
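For the record, the ctypes route looked roughly like this. The glfs_* functions are real libgfapi entry points, but the error handling, the port number, and the raw statvfs buffer here are simplified for illustration; the function returns None when libgfapi is not installed:

```python
import ctypes
import ctypes.util

def volume_statvfs(volname, server, path='/'):
    libname = ctypes.util.find_library('gfapi')
    if libname is None:
        return None  # libgfapi not installed
    api = ctypes.CDLL(libname)
    # glfs_new() returns an opaque glfs_t pointer; without an explicit
    # restype, ctypes would truncate it to int on 64-bit platforms.
    api.glfs_new.restype = ctypes.c_void_p
    fs = ctypes.c_void_p(api.glfs_new(volname.encode()))
    api.glfs_set_volfile_server(fs, b'tcp', server.encode(), 24007)
    if api.glfs_init(fs) != 0:
        return None  # could not reach the volume
    buf = ctypes.create_string_buffer(128)  # raw struct statvfs
    api.glfs_statvfs(fs, path.encode(), buf)
    api.glfs_fini(fs)
    return buf.raw
```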
What would be the better approach to resolve this? Please share
your suggestions.
Thanks,
Darshan N
10 years
Modeling graphics framebuffer device in VDSM
by fkobzik@redhat.com
Dear VDSM devels,
I've been working on refactoring graphics devices in engine and VDSM for some
time now and I'd like know your opinion of that.
The aim of this refactoring is to model the graphics framebuffer (SPICE, VNC) as
a device in the engine and VDSM. This is quite natural, since libvirt treats
graphics as a device and we have some kind of device infrastructure in both
projects. Another advantage (and actually the main reason for the refactoring) is
simplified support for multiple graphics framebuffers on a single VM.
Currently, passing information about graphics from engine to VDSM is done via
'display' param in conf. In the other direction VDSM informs the engine about
graphics parameters ('displayPort', 'displaySecurePort', 'displayIp' and
'displayNetwork') in conf as well.
What I'd like to achieve is to encapsulate all this information in the specParams
of the new graphics device and use specParams as the place for transferring data
about the graphics device between engine and vdsm. What do you think?
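Concretely, such a device entry could look something like this. The key names follow the existing 'display*' conf params described above; the exact schema is what's up for review, so treat this as illustrative only:

```python
# Illustrative shape only, not a finalized vdsm schema.
graphics_device = {
    'type': 'graphics',
    'device': 'spice',  # or 'vnc'; two entries would mean two framebuffers
    'specParams': {
        'displayPort': '5900',
        'displaySecurePort': '5901',
        'displayIp': '192.168.1.10',
        'displayNetwork': 'ovirtmgmt',
    },
}
```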
The draft patch is here:
http://gerrit.ovirt.org/#/c/23555/ (it's currently marked with '-1', but it sheds
some light on what the solution looks like, so feel free to take a look).
Thanks,
Franta.
10 years
VM xp is down. Exit message: unsupported configuration: spice graphics are not supported with this QEMU.
by 鬼丁
ovirt-engine version 3.4.0 RC3, qemu-kvm version 1.6.1, vdsm version
4.14.6, libvirt version 1.1.3
messages as follows:
Apr 3 22:30:48 localhost vdsm vm.Vm ERROR vmId=`a4ea786e-2c1c-4159-86e0-8744c54b3bbe`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 2249, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/vm.py", line 3170, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 92, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2920, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: unsupported configuration: spice graphics are not supported with this QEMU
Apr 3 22:30:50 localhost vdsm vm.Vm WARNING
vmId=`a4ea786e-2c1c-4159-86e0-8744c54b3bbe`::trying to set state to
Powering down when already Down
Apr 3 22:30:50 localhost vdsm root WARNING File:
/var/lib/libvirt/qemu/channels/a4ea786e-2c1c-4159-86e0-8744c54b3bbe.com.redhat.rhevm.vdsm
already removed
Apr 3 22:30:50 localhost vdsm root WARNING File:
/var/lib/libvirt/qemu/channels/a4ea786e-2c1c-4159-86e0-8744c54b3bbe.org.qemu.guest_agent.0
already removed
libvirtd.log as follows:
2014-04-04 02:30:48.363+0000: 988: debug : virStorageFileGetMetadata:1090 :
path=/rhev/data-center/mnt/192.168.1.130:_home_root_iso/64ac0f5a-82b2-4bae-8b74-794a0bcecf27/images/11111111-1111-1111-1111-111111111111/xp.iso
format=1 uid=107 gid=107 probe=0
2014-04-04 02:30:48.363+0000: 988: debug :
virStorageFileGetMetadataRecurse:1022 :
path=/rhev/data-center/mnt/192.168.1.130:_home_root_iso/64ac0f5a-82b2-4bae-8b74-794a0bcecf27/images/11111111-1111-1111-1111-111111111111/xp.iso
format=1 uid=107 gid=107 probe=0
2014-04-04 02:30:48.366+0000: 988: debug :
virStorageFileGetMetadataInternal:770 :
path=/rhev/data-center/mnt/192.168.1.130:_home_root_iso/64ac0f5a-82b2-4bae-8b74-794a0bcecf27/images/11111111-1111-1111-1111-111111111111/xp.iso,
fd=25, format=1
2014-04-04 02:30:48.422+0000: 988: debug : virStorageFileGetMetadata:1090 :
path=/rhev/data-center/mnt/192.168.1.130:_home_root_images/817df10f-ebe3-48ec-9e97-7c34aec1b00c/images/7eb012db-d3f3-4374-a24e-044946528f26/a2ec5f90-cc8a-43b1-bc8c-429536445a4d
format=1 uid=107 gid=107 probe=0
2014-04-04 02:30:48.422+0000: 988: debug :
virStorageFileGetMetadataRecurse:1022 :
path=/rhev/data-center/mnt/192.168.1.130:_home_root_images/817df10f-ebe3-48ec-9e97-7c34aec1b00c/images/7eb012db-d3f3-4374-a24e-044946528f26/a2ec5f90-cc8a-43b1-bc8c-429536445a4d
format=1 uid=107 gid=107 probe=0
2014-04-04 02:30:48.425+0000: 988: debug :
virStorageFileGetMetadataInternal:770 :
path=/rhev/data-center/mnt/192.168.1.130:_home_root_images/817df10f-ebe3-48ec-9e97-7c34aec1b00c/images/7eb012db-d3f3-4374-a24e-044946528f26/a2ec5f90-cc8a-43b1-bc8c-429536445a4d,
fd=25, format=1
2014-04-04 02:30:48.457+0000: 988: debug : qemuProcessStart:3710 :
Preparing monitor state
2014-04-04 02:30:48.457+0000: 988: debug : qemuProcessStart:3742 :
Assigning domain PCI addresses
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressReserveAddr:2108 : Reserving PCI slot 0000:00:03.0
(multifunction='off')
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressReserveAddr:2108 : Reserving PCI slot 0000:00:02.0
(multifunction='off')
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressReserveAddr:2108 : Reserving PCI slot 0000:00:04.0
(multifunction='off')
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressReserveAddr:2108 : Reserving PCI slot 0000:00:05.0
(multifunction='off')
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressReserveAddr:2108 : Reserving PCI slot 0000:00:01.0
(multifunction='off')
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressGetNextSlot:2264 : PCI slot 0000:00:01 already in use
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressGetNextSlot:2264 : PCI slot 0000:00:02 already in use
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressGetNextSlot:2264 : PCI slot 0000:00:03 already in use
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressGetNextSlot:2264 : PCI slot 0000:00:04 already in use
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressGetNextSlot:2264 : PCI slot 0000:00:05 already in use
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressGetNextSlot:2307 : Found free PCI slot 0000:00:06
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressReserveAddr:2108 : Reserving PCI slot 0000:00:06.0
(multifunction='off')
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressReserveAddr:2108 : Reserving PCI slot 0000:00:03.0
(multifunction='off')
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressReserveAddr:2108 : Reserving PCI slot 0000:00:02.0
(multifunction='off')
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressReserveAddr:2108 : Reserving PCI slot 0000:00:04.0
(multifunction='off')
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressReserveAddr:2108 : Reserving PCI slot 0000:00:05.0
(multifunction='off')
2014-04-04 02:30:48.457+0000: 988: debug :
qemuDomainPCIAddressReserveAddr:2108 : Reserving PCI slot 0000:00:01.0
(multifunction='off')
2014-04-04 02:30:48.457+0000: 988: debug : qemuProcessStart:3747 : Building
emulator command line
2014-04-04 02:30:48.457+0000: 988: debug : virArchFromHost:176 : Mapped
x86_64 to 29 (x86_64)
2014-04-04 02:30:48.457+0000: 988: debug : qemuBuildCommandLine:7655 :
conn=0x7fded815c5a0 driver=0x7fded814f0d0 def=0x7fdec800a040
mon=0x7fdec8006cf0 json=1 qemuCaps=0x7fdec80020b0 migrateFrom=(null)
migrateFD=-1 snapshot=(nil) vmop=0
2014-04-04 02:30:48.467+0000: 988: error :
qemuBuildGraphicsSPICECommandLine:7171 : unsupported configuration: spice
graphics are not supported with this QEMU
2014-04-04 02:30:48.467+0000: 988: debug : qemuProcessStop:4129 : Shutting
down VM 'xp' pid=0 flags=2
2014-04-04 02:30:48.482+0000: 988: debug : qemuProcessKill:4088 : vm=xp
pid=0 flags=5
2014-04-04 02:30:48.482+0000: 988: debug : virProcessKillPainfully:269 :
vpid=0 force=1
2014-04-04 02:30:48.482+0000: 988: debug : qemuDomainCleanupRun:2257 :
driver=0x7fded814f0d0, vm=xp
2014-04-04 02:30:48.482+0000: 988: debug :
qemuProcessAutoDestroyRemove:4656 : vm=xp
2014-04-04 02:30:48.482+0000: 988: debug : virCloseCallbacksUnset:165 :
vm=xp, uuid=a4ea786e-2c1c-4159-86e0-8744c54b3bbe, cb=0x7fdee1118c50
2014-04-04 02:30:48.482+0000: 988: debug : qemuDomainObjEndJob:1165 :
Stopping job: modify (async=none)
10 years
getting python sdk modules into pycharm
by Sven Kieske
Hi,
I'm just trying to figure out how to get the "hooking" module
into my pycharm IDE. It doesn't seem to resolve. Or was
the package renamed?
The ovirt.org wiki is very slow at the moment, so I don't
know if there is more up-to-date information on the site.
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
10 years
Re: [vdsm] Minor infra tasks
by smizrahi@redhat.com
----- Original Message -----
> From: "ybronhei" <ybronhei(a)redhat.com>
> To: "Saggi Mizrahi" <smizrahi(a)redhat.com>, "Barak Azulay" <bazulay(a)redhat.com>, "Dima Kuznetsov"
> <dkuznets(a)redhat.com>, "Mooli Tayer" <mtayer(a)redhat.com>, "Yeela Kaplan" <ykaplan(a)redhat.com>
> Sent: Sunday, April 6, 2014 12:05:17 PM
> Subject: Re: Minor infra tasks
>
> On 04/06/2014 11:39 AM, Saggi Mizrahi wrote:
> > I made a fake project on github for minor infra tasks.
> > I'm going to add more stuff later.
> > Feel free to pick up something to do if you have the
> > time.
> >
> > https://github.com/ficoos/infra-tracker/issues
> >
> I suggest forwarding it also to nsofer, pioter, martin, toni ... or even all of
> vdsm-devel. All of them can help with those.
>
> They can also add issues for us.
>
> --
> Yaniv Bronhaim.
>
10 years