Change in vdsm[master]: [WIP]add simple balloon functional testcase
by lvroyce@linux.vnet.ibm.com
Royce Lv has uploaded a new change for review.
Change subject: [WIP]add simple balloon functional testcase
......................................................................
[WIP]add simple balloon functional testcase
Change-Id: Ie8140fe1c754d9d4026c503a19420e6552a3f4fe
Signed-off-by: Royce Lv <lvroyce(a)linux.vnet.ibm.com>
---
M tests/functional/xmlrpcTests.py
1 file changed, 34 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/20/12820/1
diff --git a/tests/functional/xmlrpcTests.py b/tests/functional/xmlrpcTests.py
index 3eb65e4..88dd2c5 100644
--- a/tests/functional/xmlrpcTests.py
+++ b/tests/functional/xmlrpcTests.py
@@ -19,6 +19,7 @@
#
import os
+import time
import tempfile
import pwd
import grp
@@ -29,6 +30,7 @@
from testrunner import VdsmTestCase as TestCaseBase
from testrunner import permutations, expandPermutations
from nose.plugins.skip import SkipTest
+from momTests import skipNoMOM
try:
import rtslib
except ImportError:
@@ -169,6 +171,38 @@
with RollbackContext() as rollback:
self._runVMKernelBootTemplate(rollback, customization)
+ @skipNoKVM
+ @skipNoMOM
+ def testSmallVMBallooning(self):
+ policyStr = """
+ (def set_guest (guest)
+ {
+ (guest.Control "balloon_target" 0)
+ })
+ (with Guests guest (set_guest guest))"""
+ balloonSpec = {'device': 'memballoon',
+ 'type': 'balloon',
+ 'specParams': {'model': 'virtio'}}
+ customization = {'vmId': '77777777-ffff-3333-bbbb-555555555555',
+ 'vmName': 'vdsm_testBalloonVM',
+ 'devices': [balloonSpec]}
+ policy = {'balloon': policyStr}
+
+ with RollbackContext() as rollback:
+ self._runVMKernelBootTemplate(rollback, customization)
+ self._enableBalloonPolicy(policy, rollback)
+ time.sleep(12)  # MOM policy engine wakes up every 10s
+ balloonInf = self.s.getVmStats(
+ customization['vmId'])['statsList'][0]['balloonInfo']
+ self.assertEqual(balloonInf['balloon_cur'], 0)
+
+ def _enableBalloonPolicy(self, policy, rollback):
+ r = self.s.setMOMPolicy(policy)
+ self.assertVdsOK(r)
+ undo = lambda: \
+ self.assertVdsOK(self.s.resetMOMPolicy())
+ rollback.prependDefer(undo)
+
def _runVMKernelBootTemplate(self, rollback, vmDef={}, distro='fedora'):
kernelArgsDistro = {
# Fedora: The initramfs is generated by dracut. The following
--
To view, visit http://gerrit.ovirt.org/12820
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ie8140fe1c754d9d4026c503a19420e6552a3f4fe
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Royce Lv <lvroyce(a)linux.vnet.ibm.com>
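A fixed `time.sleep(12)` ties the test above to MOM's wake-up timing. A more robust pattern is to poll the stat until the target is reached. The sketch below is illustrative only; the `get_balloon_cur` callable stands in for the test's `getVmStats` lookup and is not part of the patch:

```python
import time

def wait_for_balloon(get_balloon_cur, target, timeout=30, interval=2):
    """Poll the current balloon value until it reaches target or we time out.

    Returns True on success, False on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_balloon_cur() == target:
            return True
        time.sleep(interval)
    return False
```

In the test, `get_balloon_cur` could be something like `lambda: self.s.getVmStats(vmId)['statsList'][0]['balloonInfo']['balloon_cur']`.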
Change in vdsm[master]: testing storageTest.py as CI job.
by vvolansk@redhat.com
Vered Volansky has uploaded a new change for review.
Change subject: testing storageTest.py as CI job.
......................................................................
testing storageTest.py as CI job.
Change-Id: I4d0caab1749e075f3650c91161b473e66b19977d
Signed-off-by: Vered Volansky <vvolansk(a)redhat.com>
---
M vdsm/storage/sp.py
1 file changed, 1 insertion(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/43/23343/1
diff --git a/vdsm/storage/sp.py b/vdsm/storage/sp.py
index 0bab95d..0cb7164 100644
--- a/vdsm/storage/sp.py
+++ b/vdsm/storage/sp.py
@@ -1,6 +1,7 @@
#
# Copyright 2009-2011 Red Hat, Inc.
#
+#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
--
To view, visit http://gerrit.ovirt.org/23343
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I4d0caab1749e075f3650c91161b473e66b19977d
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Vered Volansky <vvolansk(a)redhat.com>
Change in vdsm[master]: ifcfg: Log unhandled exception for new Thread
by Maor Lipchuk
Maor Lipchuk has uploaded a new change for review.
Change subject: ifcfg: Log unhandled exception for new Thread
......................................................................
ifcfg: Log unhandled exception for new Thread
Adding a traceback log for unhandled exceptions
when opening a new thread, so it will not die silently.
Since the log in the ifcfg instance is inaccessible from the decorator,
we use the default root logger.
Change-Id: I2ce44b4586e85438898fcdcd2d62d80813caa5ba
Signed-off-by: Maor Lipchuk <mlipchuk(a)redhat.com>
---
M vdsm/netconf/ifcfg.py
1 file changed, 1 insertion(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/85/23085/1
diff --git a/vdsm/netconf/ifcfg.py b/vdsm/netconf/ifcfg.py
index 52b95b1..8431e80 100644
--- a/vdsm/netconf/ifcfg.py
+++ b/vdsm/netconf/ifcfg.py
@@ -738,6 +738,7 @@
def ifup(iface, async=False):
"Bring up an interface"
+ @utils.traceback()
def _ifup(netIf):
rc, out, err = utils.execCmd([constants.EXT_IFUP, netIf], raw=False)
--
To view, visit http://gerrit.ovirt.org/23085
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I2ce44b4586e85438898fcdcd2d62d80813caa5ba
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Maor Lipchuk <mlipchuk(a)redhat.com>
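The `@utils.traceback()` decorator applied in this patch is vdsm's own helper. A minimal sketch of such a decorator, logging through the root logger as the commit message describes (the names and behavior here are an assumption, not vdsm's actual implementation):

```python
import functools
import logging

def traceback():
    """Decorator factory: log any unhandled exception with its traceback,
    then re-raise so the failure remains visible to the caller."""
    def wrapper(f):
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except Exception:
                # Root logger, since the instance logger is out of scope here.
                logging.exception("Unhandled exception in %s", f.__name__)
                raise
        return wrapped
    return wrapper
```

Applied to a thread target such as `_ifup`, the exception is logged before the thread dies instead of disappearing silently.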
Change in vdsm[master]: Allow GZIP transport on XMLRPC
by Vinzenz Feenstra
Vinzenz Feenstra has uploaded a new change for review.
Change subject: Allow GZIP transport on XMLRPC
......................................................................
Allow GZIP transport on XMLRPC
Change-Id: Iebdaa04b17e2c1df1c1852ed536c5d6d8ec8d88b
Signed-off-by: Vinzenz Feenstra <vfeenstr(a)redhat.com>
---
M lib/vdsm/utils.py
M lib/vdsm/vdscli.py.in
2 files changed, 113 insertions(+), 2 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/01/25201/1
diff --git a/lib/vdsm/utils.py b/lib/vdsm/utils.py
index b682dec..81397ef 100644
--- a/lib/vdsm/utils.py
+++ b/lib/vdsm/utils.py
@@ -37,6 +37,7 @@
import fcntl
import functools
import glob
+import gzip
import io
import itertools
import logging
@@ -44,6 +45,7 @@
import os
import platform
import pwd
+import re
import select
import shutil
import signal
@@ -148,6 +150,28 @@
raise
+##
+# Encode a string using the gzip content encoding, as specified by the
+# Content-Encoding: gzip
+# in the HTTP header, as described in RFC 1952
+#
+# @param data the unencoded data
+# @return the encoded data
+
+def gzip_encode(data):
+ """data -> gzip encoded data
+
+ Encode data using the gzip content encoding as described in RFC 1952
+ """
+ f = StringIO()
+ gzf = gzip.GzipFile(mode="wb", fileobj=f, compresslevel=1)
+ gzf.write(data)
+ gzf.close()
+ encoded = f.getvalue()
+ f.close()
+ return encoded
+
+
class IPXMLRPCRequestHandler(SimpleXMLRPCRequestHandler):
if config.getboolean('vars', 'xmlrpc_http11'):
@@ -171,6 +195,26 @@
# the methods only on Python 2.6.
if sys.version_info[:2] == (2, 6):
+ def __init__(self, *args, **kwargs):
+ self.encode_threshold = 1400
+ SimpleXMLRPCRequestHandler.__init__(self, *args, **kwargs)
+
+ # a re to match a gzip Accept-Encoding
+ aepattern = re.compile(r"""
+ \s* ([^\s;]+) \s* #content-coding
+ (;\s* q \s*=\s* ([0-9\.]+))? #q
+ """, re.VERBOSE | re.IGNORECASE)
+
+ def accept_encodings(self):
+ r = {}
+ ae = self.headers.get("Accept-Encoding", "")
+ for e in ae.split(","):
+ match = self.aepattern.match(e)
+ if match:
+ v = match.group(3)
+ v = float(v) if v else 1.0
+ r[match.group(1)] = v
+ return r
def do_POST(self):
# Check that the path is legal
@@ -183,7 +227,7 @@
# We read this in chunks to avoid straining
# socket.read(); around the 10 or 15Mb mark, some platforms
# begin to have problems (bug #792570).
- max_chunk_size = 10*1024*1024
+ max_chunk_size = 10 * 1024 * 1024
size_remaining = int(self.headers["content-length"])
L = []
while size_remaining:
@@ -218,9 +262,20 @@
# got a valid XML RPC response
self.send_response(200)
self.send_header("Content-type", "text/xml")
+ pureLen = len(response)
+ if self.encode_threshold is not None:
+ if pureLen > self.encode_threshold:
+ q = self.accept_encodings().get("gzip", 0)
+ if q:
+ response = gzip_encode(response)
+ self.send_header("Content-encoding", "gzip")
self.send_header("Content-length", str(len(response)))
self.end_headers()
self.wfile.write(response)
+ import datetime
+ import re
+ with open('/tmp/rpc-stats.log', 'a') as statslog:
+ statslog.write("%s - %s - %d - %d\n" % (datetime.datetime.utcnow().isoformat(), re.search(r'(?<=<methodName>)(.+)(?=<\/methodName)', data).group(0), len(response), pureLen))
def report_404(self):
self.send_response(404)
diff --git a/lib/vdsm/vdscli.py.in b/lib/vdsm/vdscli.py.in
index 5fa7528..e9e5c26 100644
--- a/lib/vdsm/vdscli.py.in
+++ b/lib/vdsm/vdscli.py.in
@@ -19,10 +19,12 @@
# Refer to the README and COPYING files for full details of the license
#
+import gzip
import xmlrpclib
import subprocess
import os
import re
+import StringIO
import sys
from xml.parsers.expat import ExpatError
from . import SecureXMLRPCServer
@@ -34,7 +36,61 @@
d_port = '54321'
+if sys.version_info[:2] == (2, 6):
+ class GzipDecodedResponse(gzip.GzipFile if gzip else object):
+ """a file-like object to decode a response encoded with the gzip
+ method, as described in RFC 1952.
+ """
+ def __init__(self, response):
+ data = response.read()
+ with open("/tmp/client.dump", "wb") as f:
+ f.write(data)
+ self.stringio = StringIO.StringIO(data)
+ gzip.GzipFile.__init__(self, mode="rb", fileobj=self.stringio)
+
+ def close(self):
+ gzip.GzipFile.close(self)
+ self.stringio.close()
+
+ def wrap_request(transport):
+ self = transport
+
+ def wrapped_request(host, handler, request_body, verbose=0):
+ # issue XML-RPC request
+ h = self.make_connection(host)
+ if verbose:
+ h.set_debuglevel(1)
+
+ self.send_request(h, handler, request_body)
+ self.send_host(h, host)
+ self.send_user_agent(h)
+ h.putheader("Accept-Encoding", "gzip")
+ self.send_content(h, request_body)
+
+ errcode, errmsg, headers = h.getreply()
+
+ if errcode != 200:
+ raise xmlrpclib.ProtocolError(host + handler, errcode, errmsg,
+ headers)
+
+ try:
+ sock = h._conn.sock
+ except AttributeError:
+ sock = None
+
+ self.verbose = verbose
+ #print headers.dict
+ if headers.get('Content-encoding', '') == 'gzip':
+ return self.parse_response(GzipDecodedResponse(h.getfile()))
+ else:
+ return self._parse_response(h.getfile(), sock)
+
+ transport.request = wrapped_request
+
+
def wrap_transport(transport):
+ if sys.version_info[:2] == (2, 6):
+ wrap_request(transport)
old_parse_response = transport.parse_response
def wrapped_parse_response(*args, **kwargs):
@@ -42,7 +98,7 @@
return old_parse_response(*args, **kwargs)
except ExpatError:
sys.stderr.write('Parsing error was thrown during parsing '
- 'response when provided: {}'.format(args[1]))
+ 'response when provided: {0}'.format(args))
raise
transport.parse_response = wrapped_parse_response
return transport
--
To view, visit http://gerrit.ovirt.org/25201
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Iebdaa04b17e2c1df1c1852ed536c5d6d8ec8d88b
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Vinzenz Feenstra <vfeenstr(a)redhat.com>
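The patch's `gzip_encode` writes through a `StringIO`, which works on Python 2 where that buffer holds bytes. A self-contained round-trip sketch of the same idea, shown with `io.BytesIO` (an adaptation for clarity, not the patch's exact code; the `gzip_decode` counterpart mirrors what the client-side `GzipDecodedResponse` must do):

```python
import gzip
import io

def gzip_encode(data):
    """Compress bytes using the gzip format described in RFC 1952."""
    buf = io.BytesIO()
    gzf = gzip.GzipFile(mode="wb", fileobj=buf, compresslevel=1)
    gzf.write(data)
    gzf.close()  # close() flushes the gzip trailer into buf
    return buf.getvalue()

def gzip_decode(data):
    """Decompress gzip-encoded bytes back to the original payload."""
    return gzip.GzipFile(mode="rb", fileobj=io.BytesIO(data)).read()
```

As in the patch, a server would send `gzip_encode(response)` with a `Content-Encoding: gzip` header only when the response exceeds the size threshold and the client advertised `Accept-Encoding: gzip`.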
Change in vdsm[master]: [WIP] lvm: Add an option to replace locking type 4
by ykaplan@redhat.com
Yeela Kaplan has uploaded a new change for review.
Change subject: [WIP] lvm: Add an option to replace locking type 4
......................................................................
[WIP] lvm: Add an option to replace locking type 4
Replace LVM locking type 4 with locking type 1,
but only for LVM cluster-safe commands.
Change-Id: I9a67a7fa20145763d8ab5cdbf293a9c3eb070067
Signed-off-by: Yeela Kaplan <ykaplan(a)redhat.com>
---
M vdsm/storage/lvm.py
1 file changed, 13 insertions(+), 7 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/45/23645/1
diff --git a/vdsm/storage/lvm.py b/vdsm/storage/lvm.py
index 4f3416f..932d69e 100644
--- a/vdsm/storage/lvm.py
+++ b/vdsm/storage/lvm.py
@@ -257,13 +257,15 @@
return self._extraCfg
- def _addExtraCfg(self, cmd, devices=tuple()):
+ def _addExtraCfg(self, cmd, devices=tuple(), safe):
newcmd = [constants.EXT_LVM, cmd[0]]
if devices:
conf = _buildConfig(devices)
else:
conf = self._getCachedExtraCfg()
+ if not safe:
+ conf = conf.replace("locking_type=4", "locking_type=1")
newcmd += ["--config", conf]
if len(cmd) > 1:
@@ -290,13 +292,17 @@
self._vgs = {}
self._lvs = {}
- def cmd(self, cmd, devices=tuple()):
- finalCmd = self._addExtraCfg(cmd, devices)
+ def cmd(self, cmd, devices=tuple(), safe=True):
+ """
+ Use safe as False only for lvm cluster safe commands.
+ These are cmds that don't change metadata of an existing VG.
+ """
+ finalCmd = self._addExtraCfg(cmd, devices, safe)
rc, out, err = misc.execCmd(finalCmd, sudo=True)
if rc != 0:
# Filter might be stale
self.invalidateFilter()
- newCmd = self._addExtraCfg(cmd)
+ newCmd = self._addExtraCfg(cmd, safe)
# Before blindly trying again make sure
# that the commands are not identical, because
# the devlist is sorted there is no fear
@@ -717,7 +723,7 @@
cmd.extend(("--metadatasize", metadatasize, "--metadatacopies", "2",
"--metadataignore", "y"))
cmd.extend(devices)
- rc, out, err = _lvminfo.cmd(cmd, devices)
+ rc, out, err = _lvminfo.cmd(cmd, devices, False)
return rc, out, err
@@ -929,7 +935,7 @@
# Activate the 1st PV metadata areas
cmd = ["pvchange", "--metadataignore", "n"]
cmd.append(pvs[0])
- rc, out, err = _lvminfo.cmd(cmd, tuple(pvs))
+ rc, out, err = _lvminfo.cmd(cmd, tuple(pvs), False)
if rc != 0:
raise se.PhysDevInitializationError(pvs[0])
@@ -937,7 +943,7 @@
if initialTag:
options.extend(("--addtag", initialTag))
cmd = ["vgcreate"] + options + [vgName] + pvs
- rc, out, err = _lvminfo.cmd(cmd, tuple(pvs))
+ rc, out, err = _lvminfo.cmd(cmd, tuple(pvs), False)
if rc == 0:
_lvminfo._invalidatepvs(pvs)
_lvminfo._invalidatevgs(vgName)
--
To view, visit http://gerrit.ovirt.org/23645
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I9a67a7fa20145763d8ab5cdbf293a9c3eb070067
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Yeela Kaplan <ykaplan(a)redhat.com>
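The core of this change is a plain string substitution on the LVM `--config` blob before the command is run. A sketch of just that substitution step, following the patch's `safe` convention (the config string below is a toy example, not vdsm's generated config):

```python
def adjust_locking(conf, safe):
    """For cluster-safe commands (safe=False), fall back from the read-only
    cluster locking type 4 to local locking type 1, as the patch does."""
    if not safe:
        return conf.replace("locking_type=4", "locking_type=1")
    return conf

# Illustrative config blob in the style LVM accepts via --config.
conf = 'global { locking_type=4 } devices { filter=["a|.*|"] }'
```

The default `safe=True` keeps locking type 4, so only call sites that are known to be cluster-safe (such as the `pvcreate`/`vgcreate` paths changed here) opt out.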
Change in vdsm[master]: Avoid redundant volume produces.
by ewarszaw@redhat.com
Eduardo has uploaded a new change for review.
Change subject: Avoid redundant volume produces.
......................................................................
Avoid redundant volume produces.
Add sd.getVolumePath(), which returns the volume path without producing the volume.
Deprecate hsm.getVolumePath() and hsm.prepareVolume().
When they are removed, also remove API.prepare(), BindingXMLRPC.volumePrepare(),
API.getPath(), BindingXMLRPC.volumeGetPath(), etc.
Change-Id: I3ad53a7e8a66d7f9bdd62048f2bf1f722a490c5c
Signed-off-by: Eduardo <ewarszaw(a)redhat.com>
---
M vdsm/storage/fileSD.py
M vdsm/storage/hsm.py
M vdsm/storage/sd.py
3 files changed, 11 insertions(+), 6 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/91/17991/1
diff --git a/vdsm/storage/fileSD.py b/vdsm/storage/fileSD.py
index 9d1493d..8cbea23 100644
--- a/vdsm/storage/fileSD.py
+++ b/vdsm/storage/fileSD.py
@@ -302,8 +302,7 @@
Return the volume lease (leasePath, leaseOffset)
"""
if self.hasVolumeLeases():
- vol = self.produceVolume(imgUUID, volUUID)
- volumePath = vol.getVolumePath()
+ volumePath = self.getVolumePath(imgUUID, volUUID)
leasePath = volumePath + fileVolume.LEASE_FILEEXT
return leasePath, fileVolume.LEASE_FILEOFFSET
return None, None
@@ -426,8 +425,9 @@
# NFS volumes. In theory it is necessary to fix the permission
# of the leaf only but to not introduce an additional requirement
# (ordered volUUIDs) we fix them all.
- for vol in [self.produceVolume(imgUUID, x) for x in volUUIDs]:
- self.oop.fileUtils.copyUserModeToGroup(vol.getVolumePath())
+ for volUUID in volUUIDs:
+ volPath = self.getVolumePath(imgUUID, volUUID)
+ self.oop.fileUtils.copyUserModeToGroup(volPath)
@classmethod
def format(cls, sdUUID):
diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
index c754ee8..3545677 100644
--- a/vdsm/storage/hsm.py
+++ b/vdsm/storage/hsm.py
@@ -3076,6 +3076,7 @@
volUUID=volUUID).getInfo()
return dict(info=info)
+ @deprecated
@public
def getVolumePath(self, sdUUID, spUUID, imgUUID, volUUID, options=None):
"""
@@ -3100,8 +3101,7 @@
"""
vars.task.getSharedLock(STORAGE, sdUUID)
path = sdCache.produce(
- sdUUID=sdUUID).produceVolume(imgUUID=imgUUID,
- volUUID=volUUID).getVolumePath()
+ sdUUID=sdUUID).getVolumePath(imgUUID, volUUID)
return dict(path=path)
@public
@@ -3127,6 +3127,7 @@
if fails:
self.log.error("Failed to remove the following rules: %s", fails)
+ @deprecated
@public
def prepareVolume(self, sdUUID, spUUID, imgUUID, volUUID, rw=True,
options=None):
diff --git a/vdsm/storage/sd.py b/vdsm/storage/sd.py
index 36c4877..dde7832 100644
--- a/vdsm/storage/sd.py
+++ b/vdsm/storage/sd.py
@@ -640,6 +640,10 @@
# If it has a repo we don't have multiple domains. Assume single pool
return os.path.join(self.storage_repository, self.getPools()[0])
+ def getVolumePath(self, imgUUID, volUUID):
+ return os.path.join(self.mountpoint, self.sdUUID, 'images', imgUUID,
+ volUUID)
+
def getIsoDomainImagesDir(self):
"""
Get 'images' directory from Iso domain
--
To view, visit http://gerrit.ovirt.org/17991
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I3ad53a7e8a66d7f9bdd62048f2bf1f722a490c5c
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Eduardo <ewarszaw(a)redhat.com>
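The new `sd.getVolumePath()` avoids producing a full volume object just to compute a path; the path itself is a simple join over the domain layout. A standalone sketch of that layout (the mountpoint and UUID values are made up for illustration):

```python
import os

def get_volume_path(mountpoint, sd_uuid, img_uuid, vol_uuid):
    """Mirror of the patch's file-domain layout:
    <mountpoint>/<sdUUID>/images/<imgUUID>/<volUUID>"""
    return os.path.join(mountpoint, sd_uuid, "images", img_uuid, vol_uuid)
```

This is why the lease path computation and the permission fixup in fileSD.py no longer need `produceVolume()` at all.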
Change in vdsm[master]: [WIP] destroy storage pool using command type 1
by ykaplan@redhat.com
Yeela Kaplan has uploaded a new change for review.
Change subject: [WIP] destroy storage pool using command type 1
......................................................................
[WIP] destroy storage pool using command type 1
Change-Id: I67cda9abd0bbc01d7d0642d5d3327f8687d7f728
Signed-off-by: Yeela Kaplan <ykaplan(a)redhat.com>
---
M vdsm/storage/blockSD.py
M vdsm/storage/sd.py
M vdsm/storage/sp.py
3 files changed, 29 insertions(+), 6 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/98/24398/1
diff --git a/vdsm/storage/blockSD.py b/vdsm/storage/blockSD.py
index 799ee01..0bbe5dd 100644
--- a/vdsm/storage/blockSD.py
+++ b/vdsm/storage/blockSD.py
@@ -559,9 +559,9 @@
raise se.VolumesZeroingError(path)
if version in VERS_METADATA_LV:
- md = LvBasedSDMetadata(vgName, sd.METADATA)
+ LvBasedSDMetadata(vgName, sd.METADATA)
elif version in VERS_METADATA_TAG:
- md = TagBasedSDMetadata(vgName)
+ TagBasedSDMetadata(vgName)
logBlkSize, phyBlkSize = lvm.getVGBlockSizes(vgName)
@@ -1327,10 +1327,10 @@
vgName = vg.name
toAdd = encodeVgTags(leaseParams)
toAdd += encodeVgTags({sd.DMDK_POOLS: spUUID,
- sd.DMDK_ROLE: sd.MASTER_DOMAIN})
+ sd.DMDK_ROLE: sd.MASTER_DOMAIN})
toDel = encodeVgTags({sd.DMDK_ROLE: sd.REGULAR_DOMAIN,
- sd.DMDK_POOLS: spUUID,
- sd.DMDK_POOLS: ''})
+ sd.DMDK_POOLS: spUUID,
+ sd.DMDK_POOLS: ''})
lvm.changeVGTags(vgName, delTags=toDel, addTags=toAdd, safe=False)
def refreshDirTree(self):
@@ -1357,6 +1357,26 @@
finally:
self._extendlock.release()
+ def detachMaster(self, spUUID):
+ self.invalidateMetadata()
+ pools = self.getPools()
+ try:
+ pools.remove(spUUID)
+ except ValueError:
+ self.log.error(
+ "Can't remove pool %s from domain %s pool list %s, "
+ "it does not exist",
+ spUUID, self.sdUUID, str(pools))
+ return
+ vgUUID = self.getInfo()['vguuid']
+ vg = lvm.getVGbyUUID(vgUUID)
+ vgName = vg.name
+ toAdd = encodeVgTags({sd.DMDK_POOLS: '',
+ sd.DMDK_ROLE: sd.REGULAR_DOMAIN})
+ toDel = encodeVgTags({sd.DMDK_POOLS: spUUID,
+ sd.DMDK_ROLE: sd.MASTER_DOMAIN})
+ lvm.changeVGTags(vgName, delTags=toDel, addTags=toAdd, safe=False)
+
def refresh(self):
self.refreshDirTree()
lvm.invalidateVG(self.sdUUID)
diff --git a/vdsm/storage/sd.py b/vdsm/storage/sd.py
index 23ca112..6849545 100644
--- a/vdsm/storage/sd.py
+++ b/vdsm/storage/sd.py
@@ -524,6 +524,9 @@
# Last thing to do is to remove pool from domain
# do any required cleanup
+ def detachMaster(self, spUUID):
+ self.detach(spUUID)
+
# I personally don't think there is a reason to pack these
# but I already changed too much.
def changeLeaseParams(self, leaseParamPack):
diff --git a/vdsm/storage/sp.py b/vdsm/storage/sp.py
index d228a9d..6ec941d 100644
--- a/vdsm/storage/sp.py
+++ b/vdsm/storage/sp.py
@@ -963,7 +963,7 @@
# Forced detach master domain
self.forcedDetachSD(self.masterDomain.sdUUID)
- self.masterDomain.detach(self.spUUID)
+ self.masterDomain.detachMaster(self.spUUID)
@unsecured
def _convertDomain(self, domain, targetFormat=None):
--
To view, visit http://gerrit.ovirt.org/24398
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I67cda9abd0bbc01d7d0642d5d3327f8687d7f728
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Yeela Kaplan <ykaplan(a)redhat.com>
Change in vdsm[master]: [WIP] Create storage pool using command type 1
by ykaplan@redhat.com
Yeela Kaplan has uploaded a new change for review.
Change subject: [WIP] Create storage pool using command type 1
......................................................................
[WIP] Create storage pool using command type 1
Change-Id: Ia64f6dd2df38d2968f03ce66094f3ba7b4343503
Signed-off-by: Yeela Kaplan <ykaplan(a)redhat.com>
---
M vdsm/storage/blockSD.py
M vdsm/storage/hsm.py
M vdsm/storage/lvm.py
M vdsm/storage/sd.py
M vdsm/storage/sp.py
5 files changed, 71 insertions(+), 74 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/47/23647/1
diff --git a/vdsm/storage/blockSD.py b/vdsm/storage/blockSD.py
index 7980c80..bb7f365 100644
--- a/vdsm/storage/blockSD.py
+++ b/vdsm/storage/blockSD.py
@@ -92,6 +92,12 @@
VERS_METADATA_TAG = (2, 3)
+def encodeVgTags(tagsDict):
+ return [VGTagMetadataRW.METADATA_TAG_PREFIX +
+ lvmTagEncode("%s=%s" % (k, v))
+ for k, v in tagsDict.items()]
+
+
def encodePVInfo(pvInfo):
return (
"pv:%s," % pvInfo["guid"] +
@@ -130,6 +136,13 @@
def lvmTagDecode(s):
return LVM_ENC_ESCAPE.sub(lambda c: unichr(int(c.groups()[0])), s)
+
+
+def encodeVgTags(tagsDict):
+ tags = [VGTagMetadataRW.METADATA_TAG_PREFIX +
+ lvmTagEncode("%s=%s" % (k, v))
+ for k, v in tagsDict.items()]
+ return tuple(tags)
def _tellEnd(devPath):
@@ -523,7 +536,7 @@
# least SDMETADATA/METASIZE units, we know we can use the first
# SDMETADATA bytes of the metadata volume for the SD metadata.
# pass metadata's dev to ensure it is the first mapping
- mapping = cls.getMetaDataMapping(vgName)
+ #mapping = cls.getMetaDataMapping(vgName)
# Create the rest of the BlockSD internal volumes
lvm.createLV(vgName, sd.LEASES, sd.LEASES_SIZE, safe=False)
@@ -558,6 +571,7 @@
logBlkSize, phyBlkSize = lvm.getVGBlockSizes(vgName)
+ mapping = cls.getMetaDataMapping(vgName)
# create domain metadata
# FIXME : This is 99% like the metadata in file SD
# Do we really need to keep the VGUUID?
@@ -565,11 +579,11 @@
initialMetadata = {
sd.DMDK_VERSION: version,
sd.DMDK_SDUUID: sdUUID,
- sd.DMDK_TYPE: storageType,
- sd.DMDK_CLASS: domClass,
+ sd.DMDK_TYPE: sd.storageType(storageType),
+ sd.DMDK_CLASS: sd.class2name(domClass),
sd.DMDK_DESCRIPTION: domainName,
sd.DMDK_ROLE: sd.REGULAR_DOMAIN,
- sd.DMDK_POOLS: [],
+ sd.DMDK_POOLS: '',
sd.DMDK_LOCK_POLICY: '',
sd.DMDK_LOCK_RENEWAL_INTERVAL_SEC: sd.DEFAULT_LEASE_PARAMS[
sd.DMDK_LOCK_RENEWAL_INTERVAL_SEC],
@@ -585,8 +599,8 @@
}
initialMetadata.update(mapping)
-
- md.update(initialMetadata)
+ toAdd = encodeVgTags(initialMetadata)
+ lvm.changeVGTags(vgName, delTags=(), addTags=toAdd, safe=False)
# Mark VG with Storage Domain Tag
try:
@@ -1302,6 +1316,22 @@
# It is time to deactivate the master LV now
lvm.deactivateLVs(self.sdUUID, MASTERLV)
+ def initMasterParams(self, poolMD, params):
+ vgUUID = self.getInfo()['vguuid']
+ vg = lvm.getVGbyUUID(vgUUID)
+ vgName = vg.name
+ toAdd = encodeVgTags(params)
+ lvm.changeVGTags(vgName, addTags=toAdd, safe=False)
+
+ def setMasterDomainParams(self, spUUID, leaseParams):
+ vgUUID = self.getInfo()['vguuid']
+ vg = lvm.getVGbyUUID(vgUUID)
+ vgName = vg.name
+ toAdd = encodeVgTags(leaseParams)
+ toAdd += encodeVgTags({sd.DMDK_POOLS: [spUUID],
+ sd.DMDK_ROLE: sd.MASTER_DOMAIN})
+ lvm.changeVGTags(vgName, delTags=(), addTags=toAdd, safe=False)
+
def refreshDirTree(self):
# create domain images folder
imagesPath = os.path.join(self.domaindir, sd.DOMAIN_IMAGES)
diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
index 5c73dd9..ff27d53 100644
--- a/vdsm/storage/hsm.py
+++ b/vdsm/storage/hsm.py
@@ -942,35 +942,15 @@
if masterDom not in domList:
raise se.InvalidParameterException("masterDom", str(masterDom))
+ if len(domList) > 1:
+ raise NotImplementedError("Create storage pool "
+ "only with master domain")
+
if len(poolName) > sp.MAX_POOL_DESCRIPTION_SIZE:
raise se.StoragePoolDescriptionTooLongError()
- msd = sdCache.produce(sdUUID=masterDom)
- msdType = msd.getStorageType()
- msdVersion = msd.getVersion()
- if (msdType in sd.BLOCK_DOMAIN_TYPES and
- msdVersion in blockSD.VERS_METADATA_LV and
- len(domList) > sp.MAX_DOMAINS):
- raise se.TooManyDomainsInStoragePoolError()
-
- for sdUUID in domList:
- try:
- dom = sdCache.produce(sdUUID=sdUUID)
- # TODO: consider removing validate() from here, as the domains
- # are going to be accessed much later, and may loose validity
- # until then.
- dom.validate()
- except:
- raise se.StorageDomainAccessError(sdUUID)
- # If you remove this condition, remove it from
- # StoragePool.attachSD() too.
- if dom.isData() and (dom.getVersion() > msdVersion):
- raise se.MixedSDVersionError(dom.sdUUID, dom.getVersion(),
- msd.sdUUID, msdVersion)
-
vars.task.getExclusiveLock(STORAGE, spUUID)
- for dom in sorted(domList):
- vars.task.getExclusiveLock(STORAGE, dom)
+ vars.task.getExclusiveLock(STORAGE, masterDom)
return sp.StoragePool(spUUID, self.domainMonitor, self.taskMng).create(
poolName, masterDom, domList, masterVersion, leaseParams)
diff --git a/vdsm/storage/lvm.py b/vdsm/storage/lvm.py
index 0f96df6..c1a0b92 100644
--- a/vdsm/storage/lvm.py
+++ b/vdsm/storage/lvm.py
@@ -302,7 +302,7 @@
if rc != 0:
# Filter might be stale
self.invalidateFilter()
- newCmd = self._addExtraCfg(cmd, safe)
+ newCmd = self._addExtraCfg(cmd, tuple(), safe)
# Before blindly trying again make sure
# that the commands are not identical, because
# the devlist is sorted there is no fear
diff --git a/vdsm/storage/sd.py b/vdsm/storage/sd.py
index 7f00533..c968d7b 100644
--- a/vdsm/storage/sd.py
+++ b/vdsm/storage/sd.py
@@ -766,6 +766,15 @@
def isMaster(self):
return self.getMetaParam(DMDK_ROLE).capitalize() == MASTER_DOMAIN
+ @classmethod
+ def initMasterParams(cls, poolMD, params):
+ poolMD.update(params)
+
+ def setMasterDomainParams(self, spUUID, leaseParams):
+ self.changeLeaseParams(leaseParams)
+ self.setMetaParam(DMDK_POOLS, [spUUID])
+ self.changeRole(MASTER_DOMAIN)
+
def initMaster(self, spUUID, leaseParams):
self.invalidateMetadata()
pools = self.getPools()
@@ -774,9 +783,7 @@
raise se.StorageDomainAlreadyAttached(pools[0], self.sdUUID)
with self._metadata.transaction():
- self.changeLeaseParams(leaseParams)
- self.setMetaParam(DMDK_POOLS, [spUUID])
- self.changeRole(MASTER_DOMAIN)
+ self.setMasterDomainParams(spUUID, leaseParams)
def isISO(self):
return self.getMetaParam(DMDK_CLASS) == ISO_DOMAIN
diff --git a/vdsm/storage/sp.py b/vdsm/storage/sp.py
index 50e29ef..0b00264 100644
--- a/vdsm/storage/sp.py
+++ b/vdsm/storage/sp.py
@@ -588,9 +588,8 @@
@unsecured
def create(self, poolName, msdUUID, domList, masterVersion, leaseParams):
"""
- Create new storage pool with single/multiple image data domain.
- The command will create new storage pool meta-data attach each
- storage domain to that storage pool.
+ Create new storage pool with single image data domain.
+ The command will create new storage pool meta-data
At least one data (images) domain must be provided
'poolName' - storage pool name
'msdUUID' - master domain of this pool (one of domList)
@@ -600,27 +599,20 @@
"masterVersion=%s %s", self.spUUID, poolName, msdUUID,
domList, masterVersion, leaseParams)
- if msdUUID not in domList:
- raise se.InvalidParameterException("masterDomain", msdUUID)
+ # Check the master domain before pool creation
+ try:
+ msd = sdCache.produce(msdUUID)
+ msd.validate()
+ except se.StorageException:
+ self.log.error("Unexpected error", exc_info=True)
+ raise se.StorageDomainAccessError(msdUUID)
- # Check the domains before pool creation
- for sdUUID in domList:
- try:
- domain = sdCache.produce(sdUUID)
- domain.validate()
- if sdUUID == msdUUID:
- msd = domain
- except se.StorageException:
- self.log.error("Unexpected error", exc_info=True)
- raise se.StorageDomainAccessError(sdUUID)
-
- # Validate unattached domains
- if not domain.isISO():
- domain.invalidateMetadata()
- spUUIDs = domain.getPools()
- # Non ISO domains have only 1 pool
- if len(spUUIDs) > 0:
- raise se.StorageDomainAlreadyAttached(spUUIDs[0], sdUUID)
+ # Validate unattached domains
+ msd.invalidateMetadata()
+ spUUIDs = msd.getPools()
+ # Non ISO domains have only 1 pool
+ if len(spUUIDs) > 0:
+ raise se.StorageDomainAlreadyAttached(spUUIDs[0], msdUUID)
fileUtils.createdir(self.poolPath)
self._acquireTemporaryClusterLock(msdUUID, leaseParams)
@@ -629,23 +621,10 @@
self._setSafe()
# Mark 'master' domain
# We should do it before actually attaching this domain to the pool
- # During 'master' marking we create pool metadata and each attached
- # domain should register there
+ # During 'master' marking we create pool metadata
self.createMaster(poolName, msd, masterVersion, leaseParams)
self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
- # Attach storage domains to the storage pool
- # Since we are creating the pool then attach is done from the hsm
- # and not the spm therefore we must manually take the master domain
- # lock
- # TBD: create will receive only master domain and further attaches
- # should be done under SPM
- # Master domain was already attached (in createMaster),
- # no need to reattach
- for sdUUID in domList:
- # No need to attach the master
- if sdUUID != msdUUID:
- self.attachSD(sdUUID)
except Exception:
self.log.error("Create pool %s canceled ", poolName, exc_info=True)
try:
@@ -716,13 +695,14 @@
@unsecured
def initParameters(self, poolName, domain, masterVersion):
- self._getPoolMD(domain).update({
+ params = {
PMDK_SPM_ID: SPM_ID_FREE,
PMDK_LVER: LVER_INVALID,
PMDK_MASTER_VER: masterVersion,
PMDK_POOL_DESCRIPTION: poolName,
PMDK_DOMAINS: {domain.sdUUID: sd.DOM_ACTIVE_STATUS},
- })
+ }
+ domain.initMasterParams(self._getPoolMD(domain), params)
@unsecured
def createMaster(self, poolName, domain, masterVersion, leaseParams):
--
To view, visit http://gerrit.ovirt.org/23647
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ia64f6dd2df38d2968f03ce66094f3ba7b4343503
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Yeela Kaplan <ykaplan(a)redhat.com>
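This patch stores pool metadata as VG tags via `encodeVgTags`. A simplified sketch of the encoding idea, with a toy prefix and escape function standing in for vdsm's `VGTagMetadataRW.METADATA_TAG_PREFIX` and `lvmTagEncode` (both stand-ins are assumptions, not vdsm's real constants or escaping rules):

```python
MDTAG_PREFIX = "MDT_"  # illustrative stand-in for vdsm's metadata tag prefix

def lvm_tag_encode(s):
    """Toy stand-in for vdsm's lvmTagEncode: escape characters that LVM
    tags cannot contain (here, anything outside alphanumerics and +_.-/)."""
    return "".join(c if c.isalnum() or c in "+_.-/" else "&%d&" % ord(c)
                   for c in s)

def encode_vg_tags(tags_dict):
    """Encode a metadata dict as a tuple of VG tags, one 'key=value' pair
    per tag, as the patch's encodeVgTags does."""
    return tuple(MDTAG_PREFIX + lvm_tag_encode("%s=%s" % (k, v))
                 for k, v in tags_dict.items())
```

A matching decoder would strip the prefix, reverse the escaping, and split on the first `=` to rebuild the metadata dict from `vgs -o tags` output.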
Change in vdsm[master]: [WIP] Create storage domain using command type 1
by ykaplan@redhat.com
Yeela Kaplan has uploaded a new change for review.
Change subject: [WIP] Create storage domain using command type 1
......................................................................
[WIP] Create storage domain using command type 1
All bootstrap operations are executed using command type 1.
Change-Id: I127af299086ec5572d29686451d4892c9ff0330d
Signed-off-by: Yeela Kaplan <ykaplan(a)redhat.com>
---
M vdsm/storage/blockSD.py
M vdsm/storage/lvm.py
2 files changed, 15 insertions(+), 14 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/46/23646/1
diff --git a/vdsm/storage/blockSD.py b/vdsm/storage/blockSD.py
index 55bd796..7980c80 100644
--- a/vdsm/storage/blockSD.py
+++ b/vdsm/storage/blockSD.py
@@ -517,7 +517,7 @@
# Create metadata service volume
metasize = cls.metaSize(vgName)
- lvm.createLV(vgName, sd.METADATA, "%s" % (metasize))
+ lvm.createLV(vgName, sd.METADATA, "%s" % (metasize), safe=False)
# Create the mapping right now so the index 0 is guaranteed
# to belong to the metadata volume. Since the metadata is at
# least SDMETADATA/METASIZE units, we know we can use the first
@@ -526,11 +526,11 @@
mapping = cls.getMetaDataMapping(vgName)
# Create the rest of the BlockSD internal volumes
- lvm.createLV(vgName, sd.LEASES, sd.LEASES_SIZE)
- lvm.createLV(vgName, sd.IDS, sd.IDS_SIZE)
- lvm.createLV(vgName, sd.INBOX, sd.INBOX_SIZE)
- lvm.createLV(vgName, sd.OUTBOX, sd.OUTBOX_SIZE)
- lvm.createLV(vgName, MASTERLV, MASTERLV_SIZE)
+ lvm.createLV(vgName, sd.LEASES, sd.LEASES_SIZE, safe=False)
+ lvm.createLV(vgName, sd.IDS, sd.IDS_SIZE, safe=False)
+ lvm.createLV(vgName, sd.INBOX, sd.INBOX_SIZE, safe=False)
+ lvm.createLV(vgName, sd.OUTBOX, sd.OUTBOX_SIZE, safe=False)
+ lvm.createLV(vgName, MASTERLV, MASTERLV_SIZE, safe=False)
# Create VMS file system
_createVMSfs(os.path.join("/dev", vgName, MASTERLV))
@@ -591,7 +591,7 @@
# Mark VG with Storage Domain Tag
try:
lvm.replaceVGTag(vgName, STORAGE_UNREADY_DOMAIN_TAG,
- STORAGE_DOMAIN_TAG)
+ STORAGE_DOMAIN_TAG, safe=False)
except se.StorageException:
raise se.VolumeGroupUninitialized(vgName)
diff --git a/vdsm/storage/lvm.py b/vdsm/storage/lvm.py
index 932d69e..0f96df6 100644
--- a/vdsm/storage/lvm.py
+++ b/vdsm/storage/lvm.py
@@ -257,7 +257,7 @@
return self._extraCfg
- def _addExtraCfg(self, cmd, devices=tuple(), safe):
+ def _addExtraCfg(self, cmd, devices=tuple(), safe=True):
newcmd = [constants.EXT_LVM, cmd[0]]
if devices:
conf = _buildConfig(devices)
@@ -656,6 +656,7 @@
globals()["_current_lvmconf"] = _current_lvmconf.replace("locking_type=4",
"locking_type=1")
log.debug("### _current_lvmconf %s", globals()["_current_lvmconf"])
+
def bootstrap(refreshlvs=()):
"""
@@ -1061,7 +1062,7 @@
def createLV(vgName, lvName, size, activate=True, contiguous=False,
- initialTag=None):
+ initialTag=None, safe=True):
"""
Size units: MB (1024 ** 2 = 2 ** 20)B.
"""
@@ -1078,7 +1079,7 @@
if initialTag is not None:
cmd.extend(("--addtag", initialTag))
cmd.extend(("--name", lvName, vgName))
- rc, out, err = _lvminfo.cmd(cmd, _lvminfo._getVGDevs((vgName, )))
+ rc, out, err = _lvminfo.cmd(cmd, _lvminfo._getVGDevs((vgName, )), safe)
if rc == 0:
_lvminfo._invalidatevgs(vgName)
@@ -1280,7 +1281,7 @@
return os.path.exists(lvPath(vgName, lvName))
-def changeVGTags(vgName, delTags=(), addTags=()):
+def changeVGTags(vgName, delTags=(), addTags=(), safe=True):
delTags = set(delTags)
addTags = set(addTags)
if delTags.intersection(addTags):
@@ -1296,7 +1297,7 @@
cmd.extend(("--addtag", tag))
cmd.append(vgName)
- rc, out, err = _lvminfo.cmd(cmd, _lvminfo._getVGDevs((vgName, )))
+ rc, out, err = _lvminfo.cmd(cmd, _lvminfo._getVGDevs((vgName, )), safe)
_lvminfo._invalidatevgs(vgName)
if rc != 0:
raise se.VolumeGroupReplaceTagError(
@@ -1321,8 +1322,8 @@
raise se.VolumeGroupRemoveTagError(vgName)
-def replaceVGTag(vg, oldTag, newTag):
- changeVGTags(vg, [oldTag], [newTag])
+def replaceVGTag(vg, oldTag, newTag, safe=True):
+ changeVGTags(vg, [oldTag], [newTag], safe)
def addVGTags(vgName, tags):
--
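The lvm.py hunks above all follow one pattern: a `safe` keyword defaulting to True is added to each public helper (createLV, changeVGTags, replaceVGTag) and threaded down to _addExtraCfg, so bootstrap-time callers in blockSD.py can opt out with safe=False. (The hunk also fixes the old `def _addExtraCfg(self, cmd, devices=tuple(), safe)` signature, which is a SyntaxError in Python since a non-default parameter cannot follow a default one.) A hedged sketch of the threading, with placeholder command strings and config values rather than vdsm's real ones:

```python
# Illustrative sketch (not actual vdsm code) of threading a `safe` keyword
# from public helpers down to the command builder, as in the hunks above.

def _addExtraCfg(cmd, devices=(), safe=True):
    # Pick a config fragment based on `safe`; the real code swaps the LVM
    # locking_type inside its --config string.
    cfg = "locking_type=4" if safe else "locking_type=1"
    newcmd = ["lvm", cmd[0], "--config", cfg]
    if devices:
        newcmd += ["--devices", ",".join(devices)]
    return newcmd + cmd[1:]

def createLV(vgName, lvName, size, safe=True):
    cmd = ["lvcreate", "--size", "%sm" % size, "--name", lvName, vgName]
    return _addExtraCfg(cmd, safe=safe)

def changeVGTags(vgName, delTags=(), addTags=(), safe=True):
    cmd = ["vgchange"]
    for tag in delTags:
        cmd += ["--deltag", tag]
    for tag in addTags:
        cmd += ["--addtag", tag]
    cmd.append(vgName)
    return _addExtraCfg(cmd, safe=safe)

def replaceVGTag(vg, oldTag, newTag, safe=True):
    # Thin wrapper forwards `safe` unchanged, mirroring the diff above.
    return changeVGTags(vg, [oldTag], [newTag], safe)
```

Defaulting to safe=True means every existing caller keeps the stricter behavior, and only the explicitly audited bootstrap paths pass safe=False.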
To view, visit http://gerrit.ovirt.org/23646
Gerrit-MessageType: newchange
Gerrit-Change-Id: I127af299086ec5572d29686451d4892c9ff0330d
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Yeela Kaplan <ykaplan(a)redhat.com>
9 years, 9 months
Change in vdsm[master]: [WIP] Towards a more (block) secure HSM.
by ewarszaw@redhat.com
Eduardo has uploaded a new change for review.
Change subject: [WIP] Towards a more (block) secure HSM.
......................................................................
[WIP] Towards a more (block) secure HSM.
Change-Id: I30df4ee5cdb6b44cf14d8cb155436aac7442a07d
---
M vdsm/storage/hsm.py
M vdsm/storage/lvm.py
M vdsm/storage/sp.py
3 files changed, 25 insertions(+), 5 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/18/2218/1
--
To view, visit http://gerrit.ovirt.org/2218
Gerrit-MessageType: newchange
Gerrit-Change-Id: I30df4ee5cdb6b44cf14d8cb155436aac7442a07d
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Eduardo <ewarszaw(a)redhat.com>
9 years, 9 months