boot order in vm.conf
by Sandro Bonazzola
Hi,
This is my vm.conf:
vmId=f785ebfd-a9dc-4453-be0e-ae7e57a804b0
memSize=4096
macAddr=00:16:3e:71:bb:e5
display=vnc
drive=pool:63570f35-35b9-4f82-aea5-7a34535a670d,domain:77a192ce-175e-4085-bd1f-342e4993b802,image:279076a8-a888-4ac2-8b40-b72fd488c173,volume:bf24cc76-ada5-428a-8ded-0662d08fe5d5
cdrom=/Fedora-18-x86_64-DVD.iso
boot=d
vmName=oVirt Hosted Engine
spiceSecureChannels=_main,_display,_inputs,_cursor,_playback,_record,_smartcard,_usbredir
bridge=ovirtmgmt
nicModel=virtio
I create the VM by calling vdsClient create.
After the Fedora installation I need to boot from the hard disk instead of the cdrom.
How can I specify the boot order as hd,cdrom, so that it boots from the hard
disk on reboot?
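A minimal sketch, assuming the boot key in vm.conf follows the QEMU -boot letter convention ('c' = first hard disk, 'd' = first CD-ROM, which is what 'boot=d' above suggests); this is an assumption about the accepted values, not confirmed vdsm behavior:

```
boot=c
```

If vdsm passes the string through unchanged, 'boot=dc' would try the CD-ROM first and fall back to the hard disk.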
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Fwd: Re: [libvirt] AttributeError in virConnect.__del__ at program exit
by Sandro Bonazzola
FYI
-------- Original Message --------
Subject: Re: [libvirt] AttributeError in virConnect.__del__ at program
exit
Date: Thu, 13 Jun 2013 09:10:13 +0200
From: Sandro Bonazzola <sbonazzo(a)redhat.com>
To: Cole Robinson <crobinso(a)redhat.com>
CC: libvir-list(a)redhat.com
On 13/06/2013 08:54, Sandro Bonazzola wrote:
> On 12/06/2013 19:31, Cole Robinson wrote:
>> On 06/12/2013 04:10 AM, Sandro Bonazzola wrote:
>>> On 11/06/2013 18:21, Cole Robinson wrote:
>>>> On 06/11/2013 07:58 AM, Sandro Bonazzola wrote:
>>>>> Hi,
>>>>> using the vdsm python code, I get the following error at program exit that
>>>>> seems to be related to the libvirt python code, something wrong in a destructor:
>>>>>
>>>>> Exception AttributeError: AttributeError("virConnect instance has no
>>>>> attribute 'domainEventCallbacks'",) in <bound method virConnect.__del__
>>>>> of <libvirt.virConnect instance at 0x4280f38>> ignored
>>>>>
>>>>> I'm using libvirt 1.0.6
>>>>>
>>>>> Is it a known issue? Is there any workaround or fix?
>>>>>
>>>> I've seen this too, sometimes via the virtinst test suite. Once upon a time I
>>>> tracked it down to whether the virtinst code did 'import selinux' or not, so
>>>> maybe it's just some weird race, or a side effect of something other
>>>> libraries do in their cleanup path. By inspection alone the __del__ handler
>>>> doesn't seem to be doing anything wrong.
>>>>
>>>> - Cole
>>> It seems related only to domainEventCallbacks so maybe it appears only
>>> after a domain creation / modification.
>>>
>> Oh, sorry, now that I look at the code it makes a bit more sense. The error I
>> was recalling was a __del__ exception, but it didn't involve domain events.
>>
>> But I still don't see why __del__ is complaining, since the whole thing is
>> wrapped in try: ... except AttributeError. So not sure what the issue is.
>>
>> - Cole
>
> According to the python documentation, you can't handle exceptions in
> __del__:
> http://docs.python.org/2/reference/datamodel.html#object.__del__
>
> Warning
>
> Due to the precarious circumstances under which __del__()
> <http://docs.python.org/2/reference/datamodel.html#object.__del__>
> methods are invoked, exceptions that occur during their execution are
> ignored, and a warning is printed to sys.stderr instead.
>
> So you have to test for the existence of the attribute before using it,
> rather than using it and trying to handle an exception that can't be handled.
You can use hasattr to check whether the object has the needed attribute.
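A minimal sketch (hypothetical class and attribute names, not libvirt's actual code) of the guarded-cleanup pattern suggested here:

```python
class Connection:
    """Toy stand-in for an object whose cleanup attribute may never be set."""

    def register(self):
        # The attribute only exists if registration actually happened.
        self.callbacks = []

    def close(self):
        # Guard with hasattr instead of relying on try/except inside
        # __del__, where raised exceptions are swallowed by the
        # interpreter and merely printed to sys.stderr.
        if hasattr(self, 'callbacks'):
            del self.callbacks

    def __del__(self):
        self.close()


c = Connection()
c.close()                        # safe even though 'callbacks' was never set
c.register()
c.close()
print(hasattr(c, 'callbacks'))   # False
```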
> --
> libvir-list mailing list
> libvir-list(a)redhat.com
> https://www.redhat.com/mailman/listinfo/libvir-list
Cancelled: ovirt network
by lpeer@redhat.com
The following meeting has been cancelled:
Subject: ovirt network
Organizer: "Livnat Peer" <lpeer(a)redhat.com>
Time: 4:00:00 PM - 5:00:00 PM GMT +02:00 Jerusalem
Recurrence: Every 5 week(s) on Wednesday, no end date, effective Aug 15, 2012
Invitees: engine-devel(a)ovirt.org; vdsm-devel(a)lists.fedorahosted.org; GARGYA(a)de.ibm.com; dyasny(a)redhat.com; simon(a)redhat.com; mkolesni(a)redhat.com; atal(a)redhat.com; ilvovsky(a)redhat.com; dfediuck(a)redhat.com; pradipta.banerjee(a)gmail.com; gkotton(a)redhat.com ...
*~*~*~*~*~*~*~*~*~*
Hi All,
As discussed previously on the list, I am adding a monthly discussion on Networking in oVirt.
In this meeting we'll discuss the general status of networking and the features we're missing.
Thanks, Livnat
Bridge ID: 972506565679
Dial-in information:
Reservationless-Plus Toll Free Dial-In Number (US & Canada): (800) 451-8679
Reservationless-Plus International Dial-In Number: (212) 729-5016
Conference code: 8425973915
Global Access Numbers Local:
Australia, Sydney Dial-In #: 0289852326
Austria, Vienna Dial-In #: 012534978196
Belgium, Brussels Dial-In #: 027920405
China Dial-In #: 4006205013
Denmark, Copenhagen Dial-In #: 32729215
Finland, Helsinki Dial-In #: 0923194436
France, Paris Dial-In #: 0170377140
Germany, Berlin Dial-In #: 030300190579
Ireland, Dublin Dial-In #: 014367793
Italy, Milan Dial-In #: 0236269529
Netherlands, Amsterdam Dial-In #: 0207975872
Norway, Oslo Dial-In #: 21033188
Singapore Dial-In #: 64840858
Spain, Barcelona Dial-In #: 935452328
Sweden, Stockholm Dial-In #: 0850513770
Switzerland, Geneva Dial-In #: 0225927881
United Kingdom Dial-In #: 02078970515
United Kingdom Dial-In #: 08445790676
United Kingdom, LocalCall Dial-In #: 08445790678
United States Dial-In #: 2127295016
Global Access Numbers Tollfree:
Argentina Dial-In #: 8004441016
Australia Dial-In #: 1800337169
Austria Dial-In #: 0800005898
Bahamas Dial-In #: 18002054776
Bahrain Dial-In #: 80004377
Belgium Dial-In #: 080048325
Brazil Dial-In #: 08008921002
Bulgaria Dial-In #: 008001100236
Chile Dial-In #: 800370228
Colombia Dial-In #: 018009134033
Costa Rica Dial-In #: 08000131048
Cyprus Dial-In #: 80095297
Czech Republic Dial-In #: 800700318
Denmark Dial-In #: 80887114
Dominican Republic Dial-In #: 18887512313
Estonia Dial-In #: 8000100232
Finland Dial-In #: 0800117116
France Dial-In #: 0805632867
Germany Dial-In #: 8006647541
Greece Dial-In #: 00800127562
Hong Kong Dial-In #: 800930349
Hungary Dial-In #: 0680016796
Iceland Dial-In #: 8008967
India Dial-In #: 0008006501533
Indonesia Dial-In #: 0018030179162
Ireland Dial-In #: 1800932401
Israel Dial-In #: 1809462557
Italy Dial-In #: 800985897
Jamaica Dial-In #: 18002050328
Japan Dial-In #: 0120934453
Korea (South) Dial-In #: 007986517393
Latvia Dial-In #: 80003339
Lithuania Dial-In #: 880030479
Luxembourg Dial-In #: 80026595
Malaysia Dial-In #: 1800814451
Mexico Dial-In #: 0018664590915
New Zealand Dial-In #: 0800888167
Norway Dial-In #: 80012994
Panama Dial-In #: 008002269184
Philippines Dial-In #: 180011100991
Poland Dial-In #: 008001210187
Portugal Dial-In #: 800814625
Russian Federation Dial-In #: 81080028341012
Saint Kitts and Nevis Dial-In #: 18002059252
Singapore Dial-In #: 8006162235
Slovak Republic Dial-In #: 0800001441
South Africa Dial-In #: 0800981148
Spain Dial-In #: 800300524
Sweden Dial-In #: 200896860
Switzerland Dial-In #: 800650077
Taiwan Dial-In #: 00801127141
Thailand Dial-In #: 001800656966
Trinidad and Tobago Dial-In #: 18002024615
United Arab Emirates Dial-In #: 8000650591
United Kingdom Dial-In #: 08006948057
United States Dial-In #: 8004518679
Uruguay Dial-In #: 00040190315
Venezuela Dial-In #: 08001627182
Re: [vdsm] [Engine-devel] Ovirt live build failure
by eedri@redhat.com
You need to send the email to the vdsm list (vdsm-devel(a)fedorahosted.org)
and ping danken/federici/juan etc.
Eyal.
----- Original Message -----
> From: "Ohad Basan" <obasan(a)redhat.com>
> To: "Eyal Edri" <eedri(a)redhat.com>
> Sent: Wednesday, June 12, 2013 9:25:59 AM
> Subject: Fwd: [Engine-devel] Ovirt live build failure
>
> didn't get a reply
> any idea?
>
>
> ----- Forwarded Message -----
> From: "Ohad Basan" <obasan(a)redhat.com>
> To: engine-devel(a)ovirt.org
> Sent: Tuesday, June 11, 2013 11:08:25 AM
> Subject: [Engine-devel] Ovirt live build failure
>
> Hello everyone.
>
> The jenkins ovirt live job is failing due to a libvirt dependency:
>
> Error creating Live CD : Failed to build transaction :
> vdsm-4.11.0-28.git634b9f4.fc18.x86_64 requires libvirt >= 1.0.2-1
>
> http://jenkins.ovirt.org/job/ovirt_live_create_iso/151/artifact/ovirt-liv...
>
>
> Does anyone have an idea how to solve it?
> The only thing I thought of is compiling libvirt from source in the job
> itself.
>
> Thanks
>
> Ohad
> _______________________________________________
> Engine-devel mailing list
> Engine-devel(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/engine-devel
>
Fwd: delNetwork issue
by asegurap@redhat.com
Forgot to CC the list :(
----- Forwarded Message -----
> From: "Antoni Segura Puimedon" <asegurap(a)redhat.com>
> To: "Sandro Bonazzola" <sbonazzo(a)redhat.com>
> Cc: "Dan Kenigsberg" <danken(a)redhat.com>
> Sent: Friday, June 7, 2013 8:19:28 PM
> Subject: Re: delNetwork issue
>
> Hi Sandro,
>
> I tried to reproduce on my machine without any success. If you are able to
> reproduce, I'd like you to ping me. We should probably add -xv to the
> /bin/bash line in /etc/sysconfig/network-scripts/ifup and log the stdout
> and stderr of the execCmd line to see where exactly ifup fails. Otherwise
> it's quite difficult to tell what happened, due to the lack of an error
> message in the exception.
>
> Best,
>
> Toni
>
> ----- Original Message -----
> > From: "Sandro Bonazzola" <sbonazzo(a)redhat.com>
> > To: vdsm-devel(a)lists.fedorahosted.org, "Giuseppe Vallarelli"
> > <gvallare(a)redhat.com>, "Antoni Segura Puimedon"
> > <asegurap(a)redhat.com>
> > Sent: Friday, June 7, 2013 10:05:12 AM
> > Subject: delNetwork issue
> >
> > After having created the engine bridge, trying to remove it leads to:
> >
> >
> > # /usr/share/vdsm/delNetwork engine '' '' em1
> > INFO:root:Removing network engine with vlan=None, bonding=None,
> > nics=['em1'],options={}
> > Traceback (most recent call last):
> > File "/usr/share/vdsm/configNetwork.py", line 666, in <module>
> > main()
> > File "/usr/share/vdsm/configNetwork.py", line 641, in main
> > delNetwork(bridge, **kwargs)
> > File "/usr/share/vdsm/configNetwork.py", line 355, in delNetwork
> > _removeUnusedNics(network, vlan, bonding, nics, configWriter)
> > File "/usr/share/vdsm/configNetwork.py", line 259, in _removeUnusedNics
> > ifup(nic)
> > File "/usr/share/vdsm/netconf/ifcfg.py", line 739, in ifup
> > rc, out, err = _ifup(iface)
> > File "/usr/share/vdsm/netconf/ifcfg.py", line 728, in _ifup
> > out[-1] if out else '')
> > ConfigNetworkError: (29, '')
> >
bridge name in vdsClient conf file
by Sandro Bonazzola
Hi,
Is there a way to specify a bridge name in a vm.conf file to be used by the
vdsClient create command?
The default is engine and I would like to specify ovirtmgmt.
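For reference, the vm.conf in the boot-order thread above sets the bridge with a plain key=value line; assuming vdsClient create honors the same key here, a sketch:

```
nicModel=virtio
bridge=ovirtmgmt
```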
Thanks
configNetwork issue
by Sandro Bonazzola
Hi,
I'm trying to create a bridge, adding bootproto='dhcp' to the config,
but I get the following error:
>>> import sys
>>> sys.path.append('/usr/share/vdsm/')
>>> import configNetwork
>>> configNetwork.addNetwork(network='engine', nics=['em1'],
bootproto='dhcp')
TypeError: objectivizeNetwork() got multiple values for keyword argument
'bootproto'
It seems that objectivizeNetwork already takes bootproto from **opts and fails.
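A hypothetical reduction of the failure (simplified names, not vdsm's real signatures): if a caller both forwards **opts and passes bootproto positionally, Python raises exactly this TypeError:

```python
def objectivize_network(network, bootproto=None, **opts):
    # Stand-in for objectivizeNetwork: bootproto is a named parameter.
    return network, bootproto

def add_network(network, **opts):
    # Bug pattern: bootproto is read from opts, but opts still contains
    # the key, so objectivize_network receives the value twice.
    bootproto = opts.get('bootproto')
    return objectivize_network(network, bootproto, **opts)

try:
    add_network('engine', bootproto='dhcp')
except TypeError as e:
    print(e)   # ... got multiple values for ... 'bootproto'
```

The usual fix is to remove the duplicate at the call site, e.g. `bootproto = opts.pop('bootproto', None)`, so **opts no longer carries the key.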
gerrit command line client
by asegurap@redhat.com
Hi all,
Having a slow Saturday yesterday, and inspired by a link sent by ewoud about
the gerrit ssh interface, I coded a very basic command line script to
retrieve information from gerrit. You can find it at:
https://github.com/celebdor/perryt
As of now it doesn't have a lot of options and the code is still quite
horrible. But it does allow some useful queries, like the ones at the end of
this email.
The classes in perryt.py come close to modelling all the information that
gerrit 2.4.2 exposes through this interface, so it should be easy to add
more complicated and useful queries (checking dependencies and such).
You might ask: "Why do that if gerrit already has quite extensive querying
capabilities?" The answers are twofold:
1. Because I can, and it was raining.
2. Because it is easier for me to use than the ssh gerrit thingy.
I welcome pull requests ;-)
./perryt.py reviewer apuimedo reviewed any verified markw
Results: 31(time: 537µs)
================================================================================
(vdsm)Iece96: ifcfg: preserve 'NM_CONTROLLED=no' on removal - (OUTDATED DEP) - wudxw
http://gerrit.ovirt.org/15148
P4 (v: 1, r: 2 - [wudxw(v:1), asegurap(r:1), gvallare(r:1)])
(vdsm)I32e8e: Simplify setNewMtu() - (UP TO DATE) - wudxw
http://gerrit.ovirt.org/15355
P6 (v: 1, r: 1 - [wudxw(v:1), asegurap(r:1)])
(vdsm)I9e11f: NetReload: netmodels for delNetwork - (OUTDATED DEP) - wudxw
http://gerrit.ovirt.org/14873
P10 (v: -1, r: 1 - [wudxw(v:-1), asegurap(r:1)])
(vdsm)Id254a: Move removing libvirt network to configurator - (UP TO DATE) - wudxw
http://gerrit.ovirt.org/15417
P1 (v: 0, r: -1 - [wudxw(v:1), oVirt Jenkins CI Server(v:-1), asegurap(r:-1)])
P3 (v: 1, r: 1 - [wudxw(v:1), asegurap(r:1)])
P5 (v: 1, r: 1 - [wudxw(v:1), asegurap(r:1)])
(vdsm)I798cc: Separate libvirt network configuration from ifcfg - (UP TO DATE) - wudxw
http://gerrit.ovirt.org/15178
P5 (v: 1, r: 1 - [wudxw(v:1), asegurap(r:1)])
P7 (v: 1, r: 1 - [wudxw(v:1), asegurap(r:1)])
P8 (v: 2, r: 1 - [wudxw(v:1), asegurap(r:1), asegurap(v:1)])
(vdsm)I9ab6f: NetReload: netmodels for editBonding/removeBonding - (UP TO DATE) - wudxw
http://gerrit.ovirt.org/15356
P4 (v: 1, r: -1 - [wudxw(v:1), asegurap(r:-1)])
P6 (v: 1, r: 1 - [wudxw(v:1), asegurap(r:1)])
P7 (v: 2, r: 1 - [wudxw(v:1), asegurap(r:1), asegurap(v:1)])
or another example:
./perryt.py owner alonbl status open
Results: 12(time: 161µs)
================================================================================
(ovirt-engine)I76019: packaging: periodically check if ovirt-engine upgrade available - (UP TO DATE) - alonbl
http://gerrit.ovirt.org/10976
P2 (v: 0, r: -1 - [oschreib(r:-1), alourie(r:1), alonbl(r:-1)])
(ovirt-engine)I017a5: packaging: setup: use firewalld implementation of otopi - (UP TO DATE) - alonbl
http://gerrit.ovirt.org/15114
P1 (v: 0, r: 0 - [alonbl(r:-1), sbonazzo(r:1)])
(ovirt-engine)I3570a: packaging: log rotate - (UP TO DATE) - alonbl
http://gerrit.ovirt.org/14961
P3 (v: 2, r: 4 - [yzaslavs(r:1), amureini(r:1), alonbl(v:1), sbonazzo(r:1), didi(r:1), didi(v:1)])
(ovirt-engine)I7f3ab: pki: set ownership of apache key to root - (UP TO DATE) - alonbl
http://gerrit.ovirt.org/15266
P1 (v: 1, r: 0 - [alonbl(v:1)])
(ovirt-engine)I638e9: pki: upgrade: do not overwrite apache certificate and key - (UP TO DATE) - alonbl
http://gerrit.ovirt.org/15267
P1 (v: 1, r: 0 - [alonbl(v:1)])
(ovirt-image-uploader)Id8600: core: modify copyright of base po - (UP TO DATE) - alonbl
http://gerrit.ovirt.org/15294
P1 (v: 0, r: 1 - [kroberts(r:1)])
(ovirt-iso-uploader)I11ea0: core: modify copyright of base po - (UP TO DATE) - alonbl
http://gerrit.ovirt.org/15295
P1 (v: 0, r: 1 - [kroberts(r:1)])
(ovirt-engine)Iec8bb: pki: remove database config upgrade - (UP TO DATE) - alonbl
http://gerrit.ovirt.org/15184
P1 (v: 1, r: 1 - [emesika(r:1), alonbl(v:1)])
(ovirt-engine-sdk-java)I2f390: build: support make out of tarball - (UP TO DATE) - alonbl
http://gerrit.ovirt.org/15332
P1 (v: 1, r: 0 - [alonbl(v:1)])
(ovirt-engine-sdk-java)I0eb58: build: spec: support rhel and centos - (UP TO DATE) - alonbl
http://gerrit.ovirt.org/15333
P1 (v: 1, r: 0 - [alonbl(v:1)])
(ovirt-log-collector)Ieef24: core: modify copyright of base po - (UP TO DATE) - alonbl
http://gerrit.ovirt.org/15293
P1 (v: 0, r: 1 - [kroberts(r:1)])
(ovirt-engine)Icf94f: packaging: setup: support older psycopg2 - (UP TO DATE) - alonbl
http://gerrit.ovirt.org/15358
P2 (v: 1, r: 1 - [alonbl(v:1), sbonazzo(r:1)])
issue while creating data domain over NFS
by Sandro Bonazzola
Hi, here is the relevant log from vdsm.log; vdsm is built from master,
commit 419cafb.
Thread-273::INFO::2013-06-07
09:49:24,305::logUtils::44::dispatcher::(wrapper) Run and protect:
createVolume(sdUUID='5569b4ce-c43b-434e-b0d2-7066a6e9489a',
spUUID='1fa7166d-ca70-4c02-acec-bd888aa8e5f9',
imgUUID='2fdbf1be-f028-492c-8741-a46789f8660a', size=41943040,
volFormat=5, preallocate=2, diskType=2,
volUUID='316faf9c-1780-4820-85bf-c886c3894a09', desc='Hosted Engine
Image', srcImgUUID='00000000-0000-0000-0000-000000000000',
srcVolUUID='00000000-0000-0000-0000-000000000000')
Thread-273::INFO::2013-06-07
09:49:24,305::fileSD::315::Storage.StorageDomain::(validate)
sdUUID=5569b4ce-c43b-434e-b0d2-7066a6e9489a
Thread-273::DEBUG::2013-06-07
09:49:24,316::persistentDict::234::Storage.PersistentDict::(refresh)
read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=local_storage',
'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=30',
'LOCKPOLICY=ON', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=1',
'POOL_DESCRIPTION=local_datacenter',
'POOL_DOMAINS=5569b4ce-c43b-434e-b0d2-7066a6e9489a:Active',
'POOL_SPM_ID=1', 'POOL_SPM_LVER=0',
'POOL_UUID=1fa7166d-ca70-4c02-acec-bd888aa8e5f9',
'REMOTE_PATH=192.168.1.104:/var/lib/images2', 'ROLE=Master',
'SDUUID=5569b4ce-c43b-434e-b0d2-7066a6e9489a', 'TYPE=NFS', 'VERSION=3',
'_SHA_CKSUM=b6e962017a3320b970ca08ba64cc23cb2b663197']
Thread-273::DEBUG::2013-06-07
09:49:24,317::resourceManager::197::ResourceManager.Request::(__init__)
ResName=`Storage.5569b4ce-c43b-434e-b0d2-7066a6e9489a`ReqID=`c52a859d-3d30-41d7-99cd-4ff8f3b5c98a`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '1404' at 'createVolume'
Thread-273::DEBUG::2013-06-07
09:49:24,318::resourceManager::541::ResourceManager::(registerResource)
Trying to register resource
'Storage.5569b4ce-c43b-434e-b0d2-7066a6e9489a' for lock type 'shared'
Thread-273::DEBUG::2013-06-07
09:49:24,318::resourceManager::600::ResourceManager::(registerResource)
Resource 'Storage.5569b4ce-c43b-434e-b0d2-7066a6e9489a' is free. Now
locking as 'shared' (1 active user)
Thread-273::DEBUG::2013-06-07
09:49:24,318::resourceManager::237::ResourceManager.Request::(grant)
ResName=`Storage.5569b4ce-c43b-434e-b0d2-7066a6e9489a`ReqID=`c52a859d-3d30-41d7-99cd-4ff8f3b5c98a`::Granted
request
Thread-273::DEBUG::2013-06-07
09:49:24,319::task::811::TaskManager.Task::(resourceAcquired)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::_resourcesAcquired:
Storage.5569b4ce-c43b-434e-b0d2-7066a6e9489a (shared)
Thread-273::DEBUG::2013-06-07
09:49:24,319::task::974::TaskManager.Task::(_decref)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::ref 1 aborting False
Thread-273::DEBUG::2013-06-07
09:49:24,335::task::736::TaskManager.Task::(_save)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::_save: orig
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b
temp
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b.temp
Thread-273::DEBUG::2013-06-07
09:49:24,389::taskManager::68::TaskManager::(scheduleJob) scheduled job
createVolume for task 5ee704f3-4834-4413-8e5b-9e22852fac5b
Thread-273::INFO::2013-06-07
09:49:24,390::logUtils::47::dispatcher::(wrapper) Run and protect:
createVolume, Return response: None
Thread-273::DEBUG::2013-06-07
09:49:24,390::task::1163::TaskManager.Task::(prepare)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::Prepare: 1 jobs exist, move
to acquiring
Thread-273::DEBUG::2013-06-07
09:49:24,390::task::579::TaskManager.Task::(_updateState)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::moving from state preparing
-> state acquiring
Thread-273::DEBUG::2013-06-07
09:49:24,391::task::736::TaskManager.Task::(_save)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::_save: orig
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b
temp
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b.temp
Thread-273::DEBUG::2013-06-07
09:49:24,488::task::579::TaskManager.Task::(_updateState)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::moving from state acquiring
-> state queued
Thread-273::DEBUG::2013-06-07
09:49:24,489::task::736::TaskManager.Task::(_save)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::_save: orig
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b
temp
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b.temp
Thread-273::DEBUG::2013-06-07
09:49:24,573::taskManager::50::TaskManager::(_queueTask) queuing task:
5ee704f3-4834-4413-8e5b-9e22852fac5b
Thread-273::DEBUG::2013-06-07
09:49:24,574::taskManager::56::TaskManager::(_queueTask) task queued:
5ee704f3-4834-4413-8e5b-9e22852fac5b
72ab072f-e451-4a82-99b2-b85ef36aba04::DEBUG::2013-06-07
09:49:24,574::threadPool::57::Misc.ThreadPool::(setRunningTask) Number
of running tasks: 1
Thread-273::DEBUG::2013-06-07
09:49:24,576::task::1165::TaskManager.Task::(prepare)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::returning
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,576::threadPool::205::Misc.ThreadPool.WorkerThread::(run) Task:
5ee704f3-4834-4413-8e5b-9e22852fac5b running: <bound method Task.commit
of <storage.task.Task instance at 0x7f9334103ea8>> with: None
Thread-273::DEBUG::2013-06-07
09:49:24,578::task::974::TaskManager.Task::(_decref)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::ref 0 aborting False
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,579::task::1176::TaskManager.Task::(commit)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::committing task:
5ee704f3-4834-4413-8e5b-9e22852fac5b
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,581::task::579::TaskManager.Task::(_updateState)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::moving from state queued ->
state running
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,583::task::736::TaskManager.Task::(_save)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::_save: orig
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b
temp
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b.temp
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,675::task::889::TaskManager.Task::(_runJobs)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::Task.run: running job 0:
createVolume: <bound method StoragePool.createVolume of
<storage.sp.StoragePool object at 0x2454dd0>> (args:
('5569b4ce-c43b-434e-b0d2-7066a6e9489a',
'2fdbf1be-f028-492c-8741-a46789f8660a', 41943040, 5, 2, 2,
'316faf9c-1780-4820-85bf-c886c3894a09', 'Hosted Engine Image',
'00000000-0000-0000-0000-000000000000',
'00000000-0000-0000-0000-000000000000') kwargs: {})
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,675::task::315::TaskManager.Task::(run)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::Job.run: running
createVolume: <bound method StoragePool.createVolume of
<storage.sp.StoragePool object at 0x2454dd0>> (args:
('5569b4ce-c43b-434e-b0d2-7066a6e9489a',
'2fdbf1be-f028-492c-8741-a46789f8660a', 41943040, 5, 2, 2,
'316faf9c-1780-4820-85bf-c886c3894a09', 'Hosted Engine Image',
'00000000-0000-0000-0000-000000000000',
'00000000-0000-0000-0000-000000000000') kwargs: {}) callback None
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,676::resourceManager::197::ResourceManager.Request::(__init__)
ResName=`5569b4ce-c43b-434e-b0d2-7066a6e9489a_imageNS.2fdbf1be-f028-492c-8741-a46789f8660a`ReqID=`ad9c6518-a498-4c53-b995-32ebdad2e934`::Request
was made in '/usr/share/vdsm/storage/sp.py' line '1973' at 'createVolume'
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,676::resourceManager::541::ResourceManager::(registerResource)
Trying to register resource
'5569b4ce-c43b-434e-b0d2-7066a6e9489a_imageNS.2fdbf1be-f028-492c-8741-a46789f8660a'
for lock type 'exclusive'
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,677::resourceFactories::125::Storage.ResourcesFactories::(__getResourceCandidatesList)
Image 2fdbf1be-f028-492c-8741-a46789f8660a does not exist in domain
5569b4ce-c43b-434e-b0d2-7066a6e9489a
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,677::resourceManager::600::ResourceManager::(registerResource)
Resource
'5569b4ce-c43b-434e-b0d2-7066a6e9489a_imageNS.2fdbf1be-f028-492c-8741-a46789f8660a'
is free. Now locking as 'exclusive' (1 active user)
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,677::resourceManager::237::ResourceManager.Request::(grant)
ResName=`5569b4ce-c43b-434e-b0d2-7066a6e9489a_imageNS.2fdbf1be-f028-492c-8741-a46789f8660a`ReqID=`ad9c6518-a498-4c53-b995-32ebdad2e934`::Granted
request
5ee704f3-4834-4413-8e5b-9e22852fac5b::INFO::2013-06-07
09:49:24,677::image::122::Storage.Image::(create) Create placeholder
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/5569b4ce-c43b-434e-b0d2-7066a6e9489a/images/2fdbf1be-f028-492c-8741-a46789f8660a
for image's volumes
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,678::task::736::TaskManager.Task::(_save)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::_save: orig
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b
temp
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b.temp
5ee704f3-4834-4413-8e5b-9e22852fac5b::INFO::2013-06-07
09:49:24,831::volume::464::Storage.Volume::(create) Creating volume
316faf9c-1780-4820-85bf-c886c3894a09
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:24,832::task::736::TaskManager.Task::(_save)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::_save: orig
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b
temp
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b.temp
5ee704f3-4834-4413-8e5b-9e22852fac5b::DEBUG::2013-06-07
09:49:31,268::task::736::TaskManager.Task::(_save)
Task=`5ee704f3-4834-4413-8e5b-9e22852fac5b`::_save: orig
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b
temp
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/mastersd/master/tasks/5ee704f3-4834-4413-8e5b-9e22852fac5b.temp
5ee704f3-4834-4413-8e5b-9e22852fac5b::INFO::2013-06-07
09:49:31,573::fileVolume::154::Storage.Volume::(_create) Request to
create RAW volume
/rhev/data-center/1fa7166d-ca70-4c02-acec-bd888aa8e5f9/5569b4ce-c43b-434e-b0d2-7066a6e9489a/images/2fdbf1be-f028-492c-8741-a46789f8660a/316faf9c-1780-4820-85bf-c886c3894a09
with size = 41943040 sectors
Thread-263::DEBUG::2013-06-07
09:49:44,063::fileSD::238::Storage.Misc.excCmd::(getReadDelay)
'/usr/bin/dd iflag=direct
if=/rhev/data-center/mnt/192.168.1.104:_var_lib_images2/5569b4ce-c43b-434e-b0d2-7066a6e9489a/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-263::DEBUG::2013-06-07
09:49:44,068::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n479 bytes (479 B) copied,
5.6832e-05 s, 8.4 MB/s\n'; <rc> = 0
Thread-263::ERROR::2013-06-07
09:49:44,069::misc::240::Storage.Misc::(readspeed) Unable to parse dd
output: '479 bytes (479 B) copied, 5.6832e-05 s, 8.4 MB/s'
Thread-263::ERROR::2013-06-07
09:49:44,069::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain)
Error while collecting domain 5569b4ce-c43b-434e-b0d2-7066a6e9489a
monitoring information
Traceback (most recent call last):
File "/usr/share/vdsm/storage/domainMonitor.py", line 203, in
_monitorDomain
self.nextStatus.readDelay = self.domain.getReadDelay()
File "/usr/share/vdsm/storage/fileSD.py", line 238, in getReadDelay
stats = misc.readspeed(self.metafile, 4096)
File "/usr/share/vdsm/storage/misc.py", line 241, in readspeed
raise se.MiscFileReadException(path)
MiscFileReadException: Internal file read failure:
('/rhev/data-center/mnt/192.168.1.104:_var_lib_images2/5569b4ce-c43b-434e-b0d2-7066a6e9489a/dom_md/metadata',)
Thread-263::DEBUG::2013-06-07
09:49:44,069::domainMonitor::233::Storage.DomainMonitorThread::(_monitorDomain)
Domain 5569b4ce-c43b-434e-b0d2-7066a6e9489a changed its status to Invalid
Thread-274::DEBUG::2013-06-07
09:49:44,070::misc::925::Event.Storage.DomainMonitor.onDomainConnectivityStateChange::(_emit)
Emitting event
Thread-274::DEBUG::2013-06-07
09:49:44,070::misc::935::Event.Storage.DomainMonitor.onDomainConnectivityStateChange::(_emit)
Calling registered method `_upgradePoolDomain`
Thread-274::DEBUG::2013-06-07
09:49:44,071::misc::945::Event.Storage.DomainMonitor.onDomainConnectivityStateChange::(_emit)
Event emitted
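The readspeed failure above comes from dd reporting the elapsed time in scientific notation. An illustration of the parse problem (the regexes here are illustrative, not vdsm's actual pattern):

```python
import re

# dd's stderr line from the log above: the elapsed time is '5.6832e-05 s'.
line = '479 bytes (479 B) copied, 5.6832e-05 s, 8.4 MB/s'

# A pattern that expects a plain decimal misses the exponent entirely.
naive = re.search(r'copied, (\d+\.\d+) s', line)

# Allowing an optional exponent makes the match succeed.
robust = re.search(r'copied, ([\d.]+(?:e[+-]?\d+)?) s', line)

print(naive)                    # None: exponent not matched
print(float(robust.group(1)))   # 5.6832e-05
```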