stale gerrit patches
by iheim@redhat.com
We have some very old gerrit patches.
I'm in favor of abandoning patches that have not been touched in over 60 days
(to begin with; I think the number should actually be lower).
They can always be re-opened by any interested party after they are closed.
That way, when looking at gerrit, the patch list would actually get attention,
rather than being a few patches worth looking at buried under a lot of old ones.
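For anyone who wants to gauge the backlog first, here is a rough sketch of how
such a sweep could be scripted against Gerrit's REST API (python 2; the host
and the 60-day cutoff mirror the proposal above, and the abandon call is only
noted, not performed):

# Sketch: list open gerrit changes untouched for 60+ days.
import json
import urllib2

resp = urllib2.urlopen(
    'http://gerrit.ovirt.org/changes/?q=status:open+age:60d&n=500')
body = resp.read()
# gerrit prefixes its JSON responses with ")]}'" on the first line
changes = json.loads(body.split('\n', 1)[1])

for change in changes:
    print change['_number'], change['updated'], change['subject']
# abandoning would be an authenticated POST to /a/changes/<id>/abandon,
# deliberately left out of this sketch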
Thoughts?
Thanks,
Itamar
[ATN] vdsm yum install is missing pkg for centos & f18
by eedri@redhat.com
FYI,
As part of adding more verification jobs for vdsm, infra recently added a new job [1] that installs vdsm.
Currently that job works only on F19; it fails on F18 and CentOS 6.4 [2].
Are there any plans to fix this soon? How are users supposed to install vdsm if those packages are missing?
Eyal.
[1]http://jenkins.ovirt.org/job/vdsm_install_rpm_sanity_gerrit/label=fedor...
[2]
centos:
---> Package lm_sensors-libs.x86_64 0:3.1.1-17.el6 will be installed
---> Package perl-hivex.x86_64 0:1.3.3-4.2.el6 will be installed
---> Package vdsm.x86_64 0:4.12.0-149.git44993c6.el6 will be installed
--> Processing Dependency: selinux-policy-targeted >= 3.7.19-213 for package: vdsm-4.12.0-149.git44993c6.el6.x86_64
---> Package vdsm-hook-vhostmd.noarch 0:4.12.0-149.git44993c6.el6 will be installed
--> Processing Dependency: vhostmd for package: vdsm-hook-vhostmd-4.12.0-149.git44993c6.el6.noarch
--> Finished Dependency Resolution
Error: Package: vdsm-hook-vhostmd-4.12.0-149.git44993c6.el6.noarch (/vdsm-hook-vhostmd-4.12.0-149.git44993c6.el6.noarch)
Requires: vhostmd
Error: Package: vdsm-4.12.0-149.git44993c6.el6.x86_64 (/vdsm-4.12.0-149.git44993c6.el6.x86_64)
Requires: selinux-policy-targeted >= 3.7.19-213
Installed: selinux-policy-targeted-3.7.19-195.el6_4.12.noarch (@updates)
selinux-policy-targeted = 3.7.19-195.el6_4.12
Available: selinux-policy-targeted-3.7.19-195.el6.noarch (base)
selinux-policy-targeted = 3.7.19-195.el6
Available: selinux-policy-targeted-3.7.19-195.el6_4.1.noarch (updates)
selinux-policy-targeted = 3.7.19-195.el6_4.1
Available: selinux-policy-targeted-3.7.19-195.el6_4.3.noarch (updates)
selinux-policy-targeted = 3.7.19-195.el6_4.3
Available: selinux-policy-targeted-3.7.19-195.el6_4.5.noarch (updates)
selinux-policy-targeted = 3.7.19-195.el6_4.5
Available: selinux-policy-targeted-3.7.19-195.el6_4.6.noarch (updates)
selinux-policy-targeted = 3.7.19-195.el6_4.6
Available: selinux-policy-targeted-3.7.19-195.el6_4.10.noarch (updates)
selinux-policy-targeted = 3.7.19-195.el6_4.10
f18:
--> Running transaction check
---> Package dracut.x86_64 0:024-18.git20130102.fc18 will be updated
---> Package dracut.x86_64 0:024-25.git20130205.fc18 will be an update
---> Package vdsm.x86_64 0:4.12.0-149.git44993c6.fc18 will be installed
--> Processing Dependency: libvirt-daemon >= 1.0.2-1 for package: vdsm-4.12.0-149.git44993c6.fc18.x86_64
--> Finished Dependency Resolution
Error: Package: vdsm-4.12.0-149.git44993c6.fc18.x86_64 (/vdsm-4.12.0-149.git44993c6.fc18.x86_64)
Requires: libvirt-daemon >= 1.0.2-1
Installed: libvirt-daemon-0.10.2.7-1.fc18.x86_64 (@updates)
libvirt-daemon = 0.10.2.7-1.fc18
Available: libvirt-daemon-0.10.2.2-3.fc18.x86_64 (fedora)
libvirt-daemon = 0.10.2.2-3.fc18
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Build step 'Execute shell' marked build as failure
Re: [vdsm] start vm which in pool failed-please help test the code
by bigclouds
At 2013-09-24 09:01:45,bigclouds <bigclouds(a)163.com> wrote:
My code is attached.
1. Modify hooks.py: in the function _runHooksDir, add scriptenv['M_vmName'] = vmconf.get('vmName', "") because the script needs the VM name.
2. Modify the vdsm service to call hooks.py instead of hooks.pyc, since you modified it in the step above.
3. Copy 40_guestname to /usr/.../.../hooks/before_vm_start/ (a sketch of such a hook follows after this list).
4. yum install
libguestfs-winsupport-1.0-7.el6.x86_64
libguestfs-tools-c-1.16.34-2.el6.x86_64
python-libguestfs-1.16.34-2.el6.x86_64
libguestfs-1.16.34-2.el6.x86_64
libguestfs-tools-1.16.34-2.el6.x86_64
5. Restart the vdsm service.
6. You need to create a VM in a pool.
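A minimal sketch of what such a before_vm_start hook could look like (an
illustration of steps 1-3 above, not the attached 40_guestname script;
M_vmName is the variable exported by the step-1 change):

#!/usr/bin/python
# Illustrative before_vm_start hook: read the domain XML vdsm hands the
# hook, pick up the VM name exported in step 1, and write the XML back.
# Any real rewrite of the hostname or backing-file paths (as described
# in this thread) would be site-specific.
import os
import sys

import hooking  # vdsm's hook helper module

domxml = hooking.read_domxml()
vm_name = os.environ.get('M_vmName', '')

for src in domxml.getElementsByTagName('source'):
    path = src.getAttribute('file')
    if path and vm_name:
        # hook stderr ends up in vdsm's log
        sys.stderr.write('40_guestname: %s uses %s\n' % (vm_name, path))

hooking.write_domxml(domxml)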
Thanks.
At 2013-09-23 16:51:40,"Dan Kenigsberg" <danken(a)redhat.com> wrote:
>On Sun, Sep 22, 2013 at 05:57:25PM +0800, bigclouds wrote:
>> hi, Dan
>> I am happy to contribute my code once it is tested.
>>
>
>Could you at least share the offending domxml?
>
>>
>> I am not sure it is related to selinux, because the error remains whether I enable or disable selinux (SELINUX=disabled mode).
>>
>>
>> I defined an XML according to the vdsm.log info and ran 'virsh start myvm'; the same error occurs.
>> I copied the command line recorded in libvirtd.log or myvm.log when starting a VM and launched it directly; it starts without error.
>>
>>
>> Do you see the confusing part: you can start it through the command line, but it fails through libvirt.
>> I have checked almost everything: permissions, ownership, LV activation, backing files, etc.
>>
>>
>> I am going to cry. (._.)
>
>No need for that ;-)
>
>Which libvirt version do you use? Which storage (nfs/block)?
>Could it be another case of the libvirt regression about
>supplementary groups? If so,
>https://rhn.redhat.com/errata/RHSA-2013-1272.html is out and a libvirt
>upgrade is most welcome.
>
>>
>> -----error log- testname-1.log---
>> 2013-09-22 00:50:15.430+0000: starting up
>> LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name testname-1 -S -M rhel6.4.0 -cpu Nehalem -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid 24f7e975-9aa5-4a14-b0f0-590add14c8b5 -smbios type=1,manufacturer=mcVdi,product=mcVdi Node,version=6-4.el6.centos.10,serial=25F59E10-794D-11E1-8835-3440B587CE3F_34:40:b5:87:ce:3f,uuid=24f7e975-9aa5-4a14-b0f0-590add14c8b5 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/testname-1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/ac025dc1-4e25-4b71-8c56-88dcb61b9f09/c04b1d4f-abeb-4e64-8932-2f325a0a5af4,if=none,id=drive-ide0-0-0,format=qcow2,serial=ac025dc1-4e25-4b71-8c56-88dcb61b9f09,cache=none,werror=stop,rerror=stop,aio=native -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=31,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:05:b9,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/testname-1.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/testname-1.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -spice port=5904,tls-port=5905,addr=192.168.5.100,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
>> char device redirected to /dev/pts/4
>> qemu-kvm: -drive file=/rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/ac025dc1-4e25-4b71-8c56-88dcb61b9f09/c04b1d4f-abeb-4e64-8932-2f325a0a5af4,if=none,id=drive-ide0-0-0,format=qcow2,serial=ac025dc1-4e25-4b71-8c56-88dcb61b9f09,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/ac025dc1-4e25-4b71-8c56-88dcb61b9f09/c04b1d4f-abeb-4e64-8932-2f325a0a5af4: Operation not permitted
Keeping VDSM Compatible with Ubuntu
by zhshzhou@linux.vnet.ibm.com
Hi all,
Recently we merged some patches to make VDSM run on Ubuntu. We also have
some packaging scripts in the debian/ sub-dir. You can either build .deb
packages manually or find binary packages in the VDSM PPA [1] on
launchpad.net. Once you add that PPA, you can use apt-get to install
VDSM and its dependencies.
I'll set up a Jenkins instance on my laptop to test the master branch
automatically by building and installing VDSM on Ubuntu, then running the
unit and functional tests. When adding a new change, please make sure it
is covered by unit or functional tests. If you change the packaging code,
for example adding new options to configure.ac, editing vdsm.spec.in, or
changing VDSM daemon startup behavior, I am happy to be invited to review
your patch.
I'd also like to hear your suggestions on this topic. Thanks!
[1] https://launchpad.net/~zhshzhou/+archive/vdsm-ubuntu
--
Thanks and best regards!
Zhou Zheng Sheng / 周征晟
E-mail: zhshzhou(a)linux.vnet.ibm.com
Telephone: 86-10-82454397
Re: [vdsm] [Users] vdsm live migration errors in latest master
by Dan Kenigsberg
On Mon, Sep 23, 2013 at 04:05:34PM -0500, Dead Horse wrote:
> Seeing failed live migrations and these errors in the vdsm logs with latest
> VDSM/Engine master.
> Hosts are EL6.4
Thanks for posting this report.
The log is from the source of migration, right?
Could you trace the history of the hosts of this VM? Could it be that it
was started on an older version of vdsm (say ovirt-3.3.0) and then (due
to migration or vdsm upgrade) got into a host with a much newer vdsm?
Would you share the vmCreate (or vmMigrationCreate) line for this VM in
your log? It smells like an unintended regression of
http://gerrit.ovirt.org/17714
vm: extend shared property to support locking
Solving it may not be trivial, as we should not call
_normalizeDriveSharedAttribute() automatically on the migration destination,
as it may well still be a part of a 3.3 clusterLevel.
Also, migration from a vdsm with the extended shared property to an oVirt 3.3
vdsm is going to explode (in a different way), since the destination
does not expect the extended values.
Federico, do we have a choice but to revert that patch and use
something like a "shared3" property instead?
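To make the failure below concrete, here is a simplified illustration (not
the actual vm.py code) of how a drive conf from an older vdsm can leave the
destination's Drive object without the attribute:

# Simplified sketch of the suspected regression: a VM migrated in from
# an older vdsm carries drive confs that never had the extended
# 'shared' value, so the attribute can be missing on the destination.
class Drive(object):
    def __init__(self, conf):
        # hypothetical: attributes come straight from the conf dict
        for key, value in conf.items():
            setattr(self, key, value)

def hasVolumeLeases(drive):
    # the newer code path assumes 'shared' always exists...
    return drive.shared != 'exclusive'

legacy = Drive({'format': 'cow'})   # no 'shared' key, as from old vdsm
try:
    hasVolumeLeases(legacy)
except AttributeError as e:
    print 'boom:', e   # 'Drive' object has no attribute 'shared'

# one defensive variant, a sketch rather than the agreed fix:
def hasVolumeLeasesSafe(drive):
    return getattr(drive, 'shared', 'none') != 'exclusive'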
>
> Thread-1306::ERROR::2013-09-23
> 16:02:42,422::BindingXMLRPC::993::vds::(wrapper) unexpected error
> Traceback (most recent call last):
> File "/usr/share/vdsm/BindingXMLRPC.py", line 979, in wrapper
> res = f(*args, **kwargs)
> File "/usr/share/vdsm/BindingXMLRPC.py", line 211, in vmDestroy
> return vm.destroy()
> File "/usr/share/vdsm/API.py", line 323, in destroy
> res = v.destroy()
> File "/usr/share/vdsm/vm.py", line 4326, in destroy
> response = self.releaseVm()
> File "/usr/share/vdsm/vm.py", line 4292, in releaseVm
> self._cleanup()
> File "/usr/share/vdsm/vm.py", line 2750, in _cleanup
> self._cleanupDrives()
> File "/usr/share/vdsm/vm.py", line 2482, in _cleanupDrives
> drive, exc_info=True)
> File "/usr/lib64/python2.6/logging/__init__.py", line 1329, in error
> self.logger.error(msg, *args, **kwargs)
> File "/usr/lib64/python2.6/logging/__init__.py", line 1082, in error
> self._log(ERROR, msg, args, **kwargs)
> File "/usr/lib64/python2.6/logging/__init__.py", line 1082, in error
> self._log(ERROR, msg, args, **kwargs)
> File "/usr/lib64/python2.6/logging/__init__.py", line 1173, in _log
> self.handle(record)
> File "/usr/lib64/python2.6/logging/__init__.py", line 1183, in handle
> self.callHandlers(record)
> File "/usr/lib64/python2.6/logging/__init__.py", line 1220, in
> callHandlers
> hdlr.handle(record)
> File "/usr/lib64/python2.6/logging/__init__.py", line 679, in handle
> self.emit(record)
> File "/usr/lib64/python2.6/logging/handlers.py", line 780, in emit
> msg = self.format(record)
> File "/usr/lib64/python2.6/logging/__init__.py", line 654, in format
> return fmt.format(record)
> File "/usr/lib64/python2.6/logging/__init__.py", line 436, in format
> record.message = record.getMessage()
> File "/usr/lib64/python2.6/logging/__init__.py", line 306, in getMessage
> msg = msg % self.args
> File "/usr/share/vdsm/vm.py", line 107, in __str__
> if not a.startswith('__')]
> File "/usr/share/vdsm/vm.py", line 1344, in hasVolumeLeases
> if self.shared != DRIVE_SHARED_TYPE.EXCLUSIVE:
> AttributeError: 'Drive' object has no attribute 'shared'
>
> - DHC
start vm which in pool failed
by bigclouds
Starting a VM that belongs to a pool fails. In the hook, I modify the guest VM hostname and the path of the backing file of the chain; I do nothing else.
I can manually define an XML (after the hooks run) and start it without error.
env:
libvirt-0.10.2-18.el6_4.5.x86_64
2.6.32-358.6.2.el6.x86_64
centos6.4
1. Where does this error message come from?
Storage.StorageDomain WARNING Could not find mapping for lv d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/cae1bb2d-0529-4287-95a3-13dcb14f082f
2. The VM start process fails with:
Thread-6291::ERROR::2013-09-18 12:44:21,205::vm::683::vm.Vm::(_startUnderlyingVm) vmId=`24f7e975-9aa5-4a14-b0f0-590add14c8b5`::The vm start process failed
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 645, in _startUnderlyingVm
self._run()
File "/usr/share/vdsm/libvirtvm.py", line 1529, in _run
self._connection.createXML(domxml, flags),
File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 83, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2645, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/4
qemu-kvm: -drive file=/rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/ac025dc1-4e25-4b71-8c56-88dcb61b9f09/cae1bb2d-0529-4287-95a3-13dcb14f082f,if=none,id=drive-ide0-0-0,format=qcow2,serial=ac025dc1-4e25-4b71-8c56-88dcb61b9f09,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/ac025dc1-4e25-4b71-8c56-88dcb61b9f09/cae1bb2d-0529-4287-95a3-13dcb14f082f: Operation not permitted
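One way to narrow down the "Operation not permitted" is to retry the open the
way libvirt runs qemu, including the supplementary-groups drop suspected in
the reply earlier in this digest. A diagnostic sketch (not vdsm code; the
image path is copied from the log above; run as root):

# Try opening the failing image as the qemu user with supplementary
# groups dropped, mimicking the suspected libvirt regression.
import os
import pwd

IMAGE = ('/rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/'
         'd028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/'
         'ac025dc1-4e25-4b71-8c56-88dcb61b9f09/'
         'cae1bb2d-0529-4287-95a3-13dcb14f082f')

qemu = pwd.getpwnam('qemu')
os.setgroups([])        # drop supplementary groups first (needs root)
os.setgid(qemu.pw_gid)
os.setuid(qemu.pw_uid)

try:
    open(IMAGE).close()
    print 'open succeeded as qemu -- look at the hook/chain instead'
except IOError as e:
    print 'open failed as qemu:', e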
VDSM RPM Build failed with error SecureXMLRPCServer.py:143:1: E303 too many blank lines
by tjeyasin@redhat.com
Hi All,
The vdsm (master branch) rpm build failed with the following error:
find . -path './.git' -prune -type f -o \
-name '*.py' -o -name '*.py.in' | xargs /usr/bin/pyflakes | \
grep -w -v "\./vdsm/storage/lvm\.py.*: list comprehension redefines 'lv' from line .*" | \
while read LINE; do echo "$LINE"; false; done
/usr/bin/pep8 --version
1.4.6
/usr/bin/pep8 --exclude="config.py,constants.py" --filename '*.py,*.py.in' \
client lib/cpopen/*.py lib/vdsm/*.py lib/vdsm/*.py.in tests vds_bootstrap vdsm-tool vdsm/*.py vdsm/*.py.in vdsm/netconf vdsm/sos/vdsm.py.in vdsm/storage vdsm/vdsm vdsm_api vdsm_hooks vdsm_reg
lib/vdsm/SecureXMLRPCServer.py:143:1: E303 too many blank lines (3)
make[3]: *** [check-local] Error 1
make[3]: Leaving directory `/home/timothy/rpmbuild/BUILD/vdsm-4.12.0'
make[2]: *** [check-am] Error 2
make[2]: Leaving directory `/home/timothy/rpmbuild/BUILD/vdsm-4.12.0'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/home/timothy/rpmbuild/BUILD/vdsm-4.12.0'
error: Bad exit status from /var/tmp/rpm-tmp.PcZNhb (%check)
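For reference, E303 just means more than two consecutive blank lines, so the
fix is deleting the extra blank line around SecureXMLRPCServer.py:143. A
made-up snippet showing what pep8 flags (not the actual file content):

def first():
    pass



def second():   # pep8 reports: E303 too many blank lines (3)
    pass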
Regards,
Tim
Re: [vdsm] rebasing for oVirt 3.3.1?
by Dan Kenigsberg
On Mon, Sep 09, 2013 at 03:17:35PM +0300, Itamar Heim wrote:
> With 3.3.0 coming soon, one of the questions I heard is "what about
> 3.3.1?", considering the number of patches fixing bugs that went into
> the master branch since we branched to stabilize 3.3.0
> (i.e., most of the work in the master branch has been focused on bug fixes).
>
> So my suggestion for 3.3.1 is that we rebase from master, then move
> to backporting patches to that branch for the rest of the 3.3 time
> frame.
>
> While this poses a small risk, I believe it's the best course forward
> to making oVirt 3.3 a more robust and stable version.
>
> This is mostly about ovirt-engine, and probably vdsm. For the other
> projects, it's up to the maintainer, based on risk/benefit.
To make this happen for Vdsm, we need to slow things down a bit,
stabilize what we have, and test it out.
Most of our work since ovirt-3.3 was bug fixing (23 patches), but some
of the 101 patches we've got are related to refactoring (19), cleanups
(27), test improvements (21), behind-the-scenes features (6), and
visible features (5).
Refactoring included Zhou Zheng Sheng's Ubuntu-readiness patches, which
may still spring surprises in the sysV/systemd/upstart service framework, and
changes to how network configurators are to be used.
Behind-the-scenes features include speedup to block-based storage:
- One shot teardown.
- Avoid Img and Vol produces in fileVolume.getV*Size
- Make lvm.listPVNames() be based on vgs information.
- One shot prepare.
- Introduce lvm short filters.
Visible features are few, and only one of them carries some risk to a
timely release:
- clientIF: automatically unpause vms in EIO when SD becomes active
The rest of them are:
- Support for multiple heads for Qxl display device
- Add support for direct setting of cpu_shares when creating a VM
- Introducing hidden_vlans configurable.
- macspoof hooks: new hook script to enable macspoof filtering per vnic.
I think we can release vdsm-4.13.0 within a week if we put a hold on new
features and big changes, and put enough effort into testing the
most-changed areas:
- service framework
- VM lifecycle over block storage (including auto unpause)
- network configuration
Then, we could release vdsm-4.13.z without risking the stability of
ovirt-3.3.1.
Let's do it!
Dan.
Re: [vdsm] [Engine-devel] fake VDSM as oVirt project?
by eedri@redhat.com
Shouldn't this be on vdsm-devel?
[Adding relevant groups]
----- Original Message -----
> From: "Liran Zelkha" <liran.zelkha(a)gmail.com>
> To: "Tomas Jelinek" <tjelinek(a)redhat.com>
> Cc: "engine-devel" <engine-devel(a)ovirt.org>
> Sent: Friday, September 13, 2013 9:52:42 AM
> Subject: Re: [Engine-devel] fake VDSM as oVirt project?
>
> +1 I use it constantly.
>
>
> On Fri, Sep 13, 2013 at 8:48 AM, Tomas Jelinek < tjelinek(a)redhat.com > wrote:
>
>
> Hi all,
>
> some time ago Libor Spevak created a simple web app called vdsm fake:
> documented: http://www.ovirt.org/VDSM_Fake
> published: https://github.com/lspevak/ovirt-vdsmfake
>
> It is basically a simple, hackable Java web application which can emulate
> VDSM so you can connect the engine to it. It is especially useful for:
> - having tons of cheap fake hosts on one machine to stress your engine
> - doing some experiments with the VDSM API (e.g. vfeenstr proposes a new
> VDSM API to lower the network traffic between engine <-> VDSM and uses the
> vdsm fake to implement it and run some tests to get numbers on how this
> changes things)
>
> Omer came up with the idea of making this app one of oVirt's subprojects
> (http://www.ovirt.org/Subprojects), maybe with a repository on oVirt's
> gerrit, making it more accessible for the whole community to use and
> contribute to.
>
> What do you think about it?
>
> Tomas