Change in vdsm[master]: [WIP] Added gluster geo-replication support
by barumuga@redhat.com
Hello Ayal Baron, Timothy Asir, Saggi Mizrahi, Federico Simoncelli, Dan Kenigsberg,
I'd like you to do a code review. Please visit
http://gerrit.ovirt.org/8375
to review the following change.
Change subject: [WIP] Added gluster geo-replication support
......................................................................
[WIP] Added gluster geo-replication support
The following new verbs are added:
glusterValidateSshConnection
glusterSetupSshConnection
Change-Id: Ic783abd5f1b63bc5116ce4ff2a3c7be92001a387
Signed-off-by: Bala.FA <barumuga(a)redhat.com>
---
M vdsm.spec.in
M vdsm/gluster/api.py
M vdsm/gluster/exception.py
3 files changed, 137 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/75/8375/1
diff --git a/vdsm.spec.in b/vdsm.spec.in
index 22e48fb..020344f 100644
--- a/vdsm.spec.in
+++ b/vdsm.spec.in
@@ -352,6 +352,7 @@
Requires: %{name} = %{version}-%{release}
Requires: glusterfs glusterfs-server glusterfs-fuse
+Requires: python-paramiko
%description gluster
Gluster plugin enables VDSM to serve Gluster functionalities.
diff --git a/vdsm/gluster/api.py b/vdsm/gluster/api.py
index a825de2..338cb06 100644
--- a/vdsm/gluster/api.py
+++ b/vdsm/gluster/api.py
@@ -19,11 +19,24 @@
#
from functools import wraps
+import socket
+import paramiko
+import re
from vdsm.define import doneCode
import supervdsm as svdsm
+from vdsm.config import config
+from vdsm import utils
+import exception as ge
_SUCCESS = {'status': doneCode}
+_KEYFILE = config.get('vars', 'trust_store_path') + '/keys/vdsmkey.pem'
+_sshKeyGenCommandPath = utils.CommandPath("ssh-keygen",
+ "/usr/bin/ssh-keygen",
+ )
+_SSH_COPY_ID_CMD = "umask 077; test -d ~/.ssh || mkdir ~/.ssh ; " \
+ "cat >> ~/.ssh/authorized_keys && (test -x /sbin/restorecon && " \
+ "/sbin/restorecon ~/.ssh ~/.ssh/authorized_keys >/dev/null 2>&1 || true)"
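For illustration, the remote-side effect of the shell snippet in _SSH_COPY_ID_CMD (modulo the SELinux restorecon step) can be mirrored in standalone Python; this is a sketch of what the command does, not code from the patch, and the function name is mine:

```python
import os
import tempfile

def install_pubkey(pubkey, home):
    # Mirror _SSH_COPY_ID_CMD's effect: create ~/.ssh with mode 0700 if
    # missing and append the public key to authorized_keys (the real
    # command additionally runs restorecon for SELinux labeling).
    ssh_dir = os.path.join(home, '.ssh')
    if not os.path.isdir(ssh_dir):
        os.makedirs(ssh_dir, 0o700)
    auth_path = os.path.join(ssh_dir, 'authorized_keys')
    with open(auth_path, 'a') as f:
        f.write(pubkey.rstrip('\n') + '\n')
    return auth_path

home = tempfile.mkdtemp()
path = install_pubkey('ssh-rsa AAAA... vdsm@host', home)
print(open(path).read())
```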
def exportAsVerb(func):
@@ -43,6 +56,52 @@
class VolumeStatus():
ONLINE = 'ONLINE'
OFFLINE = 'OFFLINE'
+
+
+class HostKeyMatchException(paramiko.SSHException):
+ def __init__(self, hostname, fingerprint, expected_fingerprint):
+ err = 'Fingerprint of Host key ' \
+ '%s for server %s does not match with %s' % \
+ (fingerprint, hostname, expected_fingerprint)
+ paramiko.SSHException.__init__(self, err)
+ self.hostname = hostname
+ self.fingerprint = fingerprint
+ self.expected_fingerprint = expected_fingerprint
+
+
+class HostKeyMatchPolicy(paramiko.AutoAddPolicy):
+ def __init__(self, expected_fingerprint):
+ self.expected_fingerprint = expected_fingerprint
+
+ def missing_host_key(self, client, hostname, key):
+ s = paramiko.util.hexlify(key.get_fingerprint())
+ fingerprint = ':'.join(re.findall('..', s))
+        if fingerprint.upper() == self.expected_fingerprint.upper():
+ paramiko.AutoAddPolicy.missing_host_key(self, client, hostname,
+ key)
+ else:
+ raise HostKeyMatchException(hostname, fingerprint,
+ self.expected_fingerprint)
+
+
+class GlusterSsh(paramiko.SSHClient):
+ def __init__(self, hostname, fingerprint, port=22, username=None,
+ password=None, pkey=None, key_filename=None, timeout=None,
+ allow_agent=True, look_for_keys=True, compress=False):
+ paramiko.SSHClient.__init__(self)
+ key_file_list = [_KEYFILE]
+        if key_filename:
+            # key_filename may be a single path or a list of paths
+            if isinstance(key_filename, list):
+                key_file_list.extend(key_filename)
+            else:
+                key_file_list.append(key_filename)
+ self.set_missing_host_key_policy(HostKeyMatchPolicy(fingerprint))
+ try:
+ paramiko.SSHClient.connect(self, hostname, port, username,
+ password, pkey, key_file_list, timeout,
+ allow_agent, look_for_keys, compress)
+ except socket.error, e:
+ err = ['%s: %s' % (hostname, e)]
+ raise ge.GlusterSshConnectionFailedException(err=err)
+ except HostKeyMatchException, e:
+            raise ge.GlusterSshHostKeyMismatchException(err=[str(e)])
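The fingerprint string that HostKeyMatchPolicy compares is the key's MD5 digest, hex-encoded with a colon inserted every two characters. A standalone sketch of that formatting (the function name is mine, not part of the patch):

```python
import hashlib
import re

def format_fingerprint(key_blob):
    # Hex-encode the key's MD5 digest and insert a colon every two
    # characters ("ab:cd:ef:..."), like HostKeyMatchPolicy does with
    # paramiko.util.hexlify(key.get_fingerprint()).
    digest = hashlib.md5(key_blob).hexdigest()
    return ':'.join(re.findall('..', digest))

fp = format_fingerprint(b'example-public-key-blob')
print(fp)
```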
class GlusterApi(object):
@@ -236,6 +295,47 @@
def volumeProfileStop(self, volumeName, options=None):
self.svdsmProxy.glusterVolumeProfileStop(volumeName)
+ def _validateSshConnection(self, hostname, fingerprint, username):
+ try:
+ ssh = GlusterSsh(hostname,
+ fingerprint,
+ username=username)
+ ssh.close()
+ return True
+ except paramiko.AuthenticationException, e:
+ raise ge.GlusterSshHostKeyAuthException(err=[str(e)])
+
+ @exportAsVerb
+ def validateSshConnection(self, hostname, fingerprint, username):
+ return self._validateSshConnection(hostname, fingerprint, username)
+
+ @exportAsVerb
+ def setupSshConnection(self, hostname, fingerprint, username, password):
+ rc, out, err = utils.execCmd([_sshKeyGenCommandPath.cmd, '-y', '-f',
+ _KEYFILE])
+ if rc != 0:
+ raise ge.GlusterSshPubKeyGenerationFailedException(rc=rc, err=err)
+
+ try:
+ ssh = GlusterSsh(hostname,
+ fingerprint,
+ username=username,
+ password=password)
+            c = ssh.get_transport().open_session()
+            c.exec_command(_SSH_COPY_ID_CMD)
+            # Channel.exec_command() returns None (unlike
+            # SSHClient.exec_command); use the channel's file-like
+            # wrappers for stdin/stderr instead.
+            stdin = c.makefile('wb')
+            stdin.write('\n'.join(out) + '\n')
+            stdin.flush()
+            c.shutdown_write()
+            rc = c.recv_exit_status()
+            ssh.close()
+            if rc != 0:
+                err = c.makefile_stderr('rb').read().splitlines()
+                raise ge.GlusterSshSetupExecFailedException(rc=rc, err=err)
+ except paramiko.AuthenticationException, e:
+ raise ge.GlusterSshHostAuthException(err=[str(e)])
+
+ return self._validateSshConnection(hostname, fingerprint, username)
+
def getGlusterMethods(gluster):
l = []
diff --git a/vdsm/gluster/exception.py b/vdsm/gluster/exception.py
index 6d94ae3..0143a5e 100644
--- a/vdsm/gluster/exception.py
+++ b/vdsm/gluster/exception.py
@@ -377,3 +377,39 @@
class GlusterHostsListFailedException(GlusterHostException):
code = 4407
message = "Hosts list failed"
+
+
+# Ssh
+class GlusterSshException(GlusterException):
+ code = 4500
+ message = "Gluster ssh exception"
+
+
+class GlusterSshConnectionFailedException(GlusterSshException):
+ code = 4501
+ message = "SSH connection failed"
+
+
+class GlusterSshHostKeyMismatchException(GlusterSshException):
+ code = 4502
+ message = "Host key match failed"
+
+
+class GlusterSshHostKeyAuthException(GlusterSshException):
+ code = 4503
+ message = "SSH host key authentication failed"
+
+
+class GlusterSshHostAuthException(GlusterSshException):
+ code = 4504
+ message = "SSH host authentication failed"
+
+
+class GlusterSshPubKeyGenerationFailedException(GlusterSshException):
+ code = 4505
+ message = "SSH public key generation failed"
+
+
+class GlusterSshSetupExecFailedException(GlusterSshException):
+ code = 4506
+ message = "SSH key setup execution failed"
--
To view, visit http://gerrit.ovirt.org/8375
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic783abd5f1b63bc5116ce4ff2a3c7be92001a387
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Bala.FA <barumuga(a)redhat.com>
Gerrit-Reviewer: Ayal Baron <abaron(a)redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken(a)redhat.com>
Gerrit-Reviewer: Federico Simoncelli <fsimonce(a)redhat.com>
Gerrit-Reviewer: Saggi Mizrahi <smizrahi(a)redhat.com>
Gerrit-Reviewer: Timothy Asir <tjeyasin(a)redhat.com>
Change in vdsm[master]: vdsm-gluster: Added gluster volume geo-replication create pu...
by tjeyasin@redhat.com
Hello Ayal Baron, Bala.FA, Saggi Mizrahi, Dan Kenigsberg,
I'd like you to do a code review. Please visit
http://gerrit.ovirt.org/17650
to review the following change.
Change subject: vdsm-gluster: Added gluster volume geo-replication create push-pem verb
......................................................................
vdsm-gluster: Added gluster volume geo-replication create push-pem verb
Create the geo-replication session. The push-pem option is needed to
perform the necessary pem-file setup on the slave nodes.
Change-Id: I4f0b7ba685918bf147eb291c2bbe90527b965416
Signed-off-by: Timothy Asir <tjeyasin(a)redhat.com>
---
M client/vdsClientGluster.py
M vdsm/gluster/api.py
M vdsm/gluster/cli.py
M vdsm/gluster/exception.py
4 files changed, 69 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/50/17650/1
diff --git a/client/vdsClientGluster.py b/client/vdsClientGluster.py
index 90af83e..e5f036d 100644
--- a/client/vdsClientGluster.py
+++ b/client/vdsClientGluster.py
@@ -424,6 +424,27 @@
pp.pprint(status)
return status['status']['code'], status['status']['message']
+ def do_glusterVolumeGeoRepCreatePushPem(self, args):
+ params = self._eqSplit(args)
+        masterVolName = params.get('masterVolName', '')
+        slaveHost = params.get('slaveHost', '')
+        slaveVolName = params.get('slaveVolName', '')
+        force = (params.get('force', 'no').upper() == 'YES')
+
+ if not (masterVolName and slaveHost and slaveVolName):
+ raise ValueError
+
+ status = self.s.glusterVolumeGeoRepCreatePushPem(masterVolName,
+ slaveHost,
+ slaveVolName,
+ force)
+ pp.pprint(status)
+ return status['status']['code'], status['status']['message']
+
def getGlusterCmdDict(serv):
return \
@@ -705,4 +726,15 @@
'not set'
'(swift, glusterd, smb, memcached)'
)),
+ 'glusterVolumeGeoRepCreatePushPem': (
+ serv.do_glusterVolumeGeoRepCreatePushPem,
+ ('masterVolName=<master_volume_name> '
+ 'slaveHost=<slave_host_name> '
+ 'slaveVolName=<slave_volume_name> '
+ '[force={yes|no}]\n\t'
+ '<master_volume_name> is an existing volume name in the master node\n\t'
+ '<slave_host_name> is remote slave host name or ip\n\t'
+ '<slave_volume_name> is an available existing volume name in the slave node',
+ 'Create the geo-replication session'
+ )),
}
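The _eqSplit helper used by these do_gluster* handlers (presumably defined in vdsClient.py) splits key=value arguments into a dict. A minimal approximation of it, to show how the force flag above is derived; this is my sketch, not the actual implementation:

```python
def eq_split(args):
    # Approximate vdsClient._eqSplit: split each argument on the first
    # '=' into a dict entry; a bare key maps to the empty string.
    params = {}
    for arg in args:
        key, _, value = arg.partition('=')
        params[key] = value
    return params

params = eq_split(['masterVolName=vol1', 'slaveHost=remote.example.com',
                   'slaveVolName=vol1-backup', 'force=yes'])
force = (params.get('force', 'no').upper() == 'YES')
print(force)
```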
diff --git a/vdsm/gluster/api.py b/vdsm/gluster/api.py
index 4bd8308..422ab0f 100644
--- a/vdsm/gluster/api.py
+++ b/vdsm/gluster/api.py
@@ -287,6 +287,17 @@
status = self.svdsmProxy.glusterServicesGet(serviceNames)
return {'services': status}
+ @exportAsVerb
+    def volumeGeoRepCreatePushPem(self, masterVolName, slaveHost,
+                                  slaveVolName, force=False,
+                                  options=None):
+        status = self.svdsmProxy.glusterVolumeGeoRepCreatePushPem(
+            masterVolName,
+            slaveHost,
+            slaveVolName,
+            force=force)
+ return {'geo-rep': status}
+
def getGlusterMethods(gluster):
l = []
diff --git a/vdsm/gluster/cli.py b/vdsm/gluster/cli.py
index bac6d1c..33a06ec 100644
--- a/vdsm/gluster/cli.py
+++ b/vdsm/gluster/cli.py
@@ -897,3 +897,18 @@
return _parseVolumeProfileInfo(xmltree, nfs)
except _etreeExceptions:
raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
+
+@makePublic
+def volumeGeoRepCreatePushPem(masterVolName, slaveHost, slaveVolName,
+                              force=False):
+ command = _getGlusterVolCmd() + ["geo-replication", masterVolName,
+ "%s::%s" % (slaveHost, slaveVolName),
+ "create", "push-pem"]
+ try:
+ if force:
+ xmltree = _execGlusterXml(command + ["force"])
+ else:
+ xmltree = _execGlusterXml(command)
+ return True
+ except ge.GlusterCmdFailedException as e:
+ raise ge.GlusterGeoRepCreatePushPemFailedException(rc=e.rc, err=e.err)
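Ignoring the gluster binary path and --xml handling done by _getGlusterVolCmd, the argument list assembled above can be sketched standalone (function name is mine):

```python
def build_georep_create_cmd(master_vol, slave_host, slave_vol, force=False):
    # Assemble the same argument tail as volumeGeoRepCreatePushPem;
    # the real code prefixes this with the gluster binary path and
    # XML output options via _getGlusterVolCmd().
    cmd = ['volume', 'geo-replication', master_vol,
           '%s::%s' % (slave_host, slave_vol), 'create', 'push-pem']
    if force:
        cmd.append('force')
    return cmd

print(' '.join(build_georep_create_cmd('master-vol', 'slave.example.com',
                                       'slave-vol', force=True)))
```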
diff --git a/vdsm/gluster/exception.py b/vdsm/gluster/exception.py
index c569a9e..bcc0eb5 100644
--- a/vdsm/gluster/exception.py
+++ b/vdsm/gluster/exception.py
@@ -484,3 +484,14 @@
prefix = "%s: " % (action)
self.message = prefix + "Service action is not supported"
self.err = [self.message]
+
+
+# geo-replication
+class GlusterGeoRepException(GlusterException):
+ code = 4560
+ message = "Gluster Geo-Replication Exception"
+
+
+class GlusterGeoRepCreatePushPemFailedException(GlusterGeoRepException):
+ code = 4562
+    message = "Geo-replication session creation (push-pem) failed"
--
To view, visit http://gerrit.ovirt.org/17650
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I4f0b7ba685918bf147eb291c2bbe90527b965416
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Timothy Asir <tjeyasin(a)redhat.com>
Gerrit-Reviewer: Ayal Baron <abaron(a)redhat.com>
Gerrit-Reviewer: Bala.FA <barumuga(a)redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken(a)redhat.com>
Gerrit-Reviewer: Saggi Mizrahi <smizrahi(a)redhat.com>
Change in vdsm[master]: gluster: Add gluster volume geo-replication configuration fe...
by tjeyasin@redhat.com
Hello Ayal Baron, Bala.FA, Saggi Mizrahi, Dan Kenigsberg,
I'd like you to do a code review. Please visit
http://gerrit.ovirt.org/18303
to review the following change.
Change subject: gluster: Add gluster volume geo-replication configuration feature
......................................................................
gluster: Add gluster volume geo-replication configuration feature
Configure geo-replication options between the hosts specified by
MASTER and SLAVE. This provides list, get, set and delete operations
for geo-rep configuration options.
New verbs:
* glusterVolumeGeoRepConfigList
- return value structure:
{{NAME: VALUE,
writable: BOOL,
'description': DESCRIPTION}... }
* glusterVolumeGeoRepConfigGet
- return value structure:
{NAME: VALUE,
writable: BOOL,
'description': DESCRIPTION}
* glusterVolumeGeoRepConfigSet
* glusterVolumeGeoRepConfigDelete
Change-Id: I9c43f0950bbaa215cfe22ba18cc02e5c5851c347
Signed-off-by: Timothy Asir <tjeyasin(a)redhat.com>
---
M client/vdsClientGluster.py
M vdsm/gluster/api.py
M vdsm/gluster/cli.py
M vdsm/gluster/exception.py
M vdsm/gluster/vdsmapi-gluster-schema.json
5 files changed, 471 insertions(+), 0 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/03/18303/1
diff --git a/client/vdsClientGluster.py b/client/vdsClientGluster.py
index 0e8ebd1..8258c26 100644
--- a/client/vdsClientGluster.py
+++ b/client/vdsClientGluster.py
@@ -453,6 +453,70 @@
pp.pprint(status)
return status['status']['code'], status['status']['message']
+ def do_glusterVolumeGeoRepConfigList(self, args):
+ params = self._eqSplit(args)
+ masterVolName = params.get('masterVolName', '')
+ slaveHost = params.get('slaveHost', '')
+ slaveVolName = params.get('slaveVolName', '')
+ if not(masterVolName and slaveHost and slaveVolName):
+ raise ValueError
+
+ status = self.s.glusterVolumeGeoRepConfigList(masterVolName,
+ slaveHost,
+ slaveVolName)
+ pp.pprint(status)
+ return status['status']['code'], status['status']['message']
+
+ def do_glusterVolumeGeoRepConfigSet(self, args):
+ params = self._eqSplit(args)
+ masterVolName = params.get('masterVolName', '')
+ slaveHost = params.get('slaveHost', '')
+ slaveVolName = params.get('slaveVolName', '')
+ key = params.get('key', '')
+ value = params.get('value', '')
+ if not(masterVolName and slaveHost and slaveVolName and key and value):
+ raise ValueError
+
+ status = self.s.glusterVolumeGeoRepConfigSet(masterVolName,
+ slaveHost,
+ slaveVolName,
+ key,
+ value)
+ pp.pprint(status)
+ return status['status']['code'], status['status']['message']
+
+ def do_glusterVolumeGeoRepConfigGet(self, args):
+ params = self._eqSplit(args)
+ masterVolName = params.get('masterVolName', '')
+ slaveHost = params.get('slaveHost', '')
+ slaveVolName = params.get('slaveVolName', '')
+ key = params.get('key', '')
+ if not(masterVolName and slaveHost and slaveVolName and key):
+ raise ValueError
+
+ status = self.s.glusterVolumeGeoRepConfigGet(masterVolName,
+ slaveHost,
+ slaveVolName,
+ key)
+ pp.pprint(status)
+ return status['status']['code'], status['status']['message']
+
+ def do_glusterVolumeGeoRepConfigDelete(self, args):
+ params = self._eqSplit(args)
+ masterVolName = params.get('masterVolName', '')
+ slaveHost = params.get('slaveHost', '')
+ slaveVolName = params.get('slaveVolName', '')
+ key = params.get('key', '')
+ if not(masterVolName and slaveHost and slaveVolName and key):
+ raise ValueError
+
+ status = self.s.glusterVolumeGeoRepConfigDelete(masterVolName,
+ slaveHost,
+ slaveVolName,
+ key)
+ pp.pprint(status)
+ return status['status']['code'], status['status']['message']
+
def getGlusterCmdDict(serv):
return \
@@ -754,5 +818,56 @@
'<slave_host_name> is remote slave host name or ip\n\t'
'<slave_volume_name> is an existing volume name in the slave node',
'Delete the geo-replication session'
+ )),
+ 'glusterVolumeGeoRepConfigList': (
+ serv.do_glusterVolumeGeoRepConfigList,
+ ('masterVolName=<master_volume_name> slaveHost=<slave_host> '
+ 'slaveVolName=<slave_volume_name>\n\t'
+ '<master_volume_name> is an existing volume name in '
+ 'the master node\n\t'
+ '<slave_host> is slave host name\n\t'
+ '<slave_volume_name> is an existing volume name '
+ 'in the slave node',
+ 'list volume geo-replication configurations'
+ )),
+ 'glusterVolumeGeoRepConfigSet': (
+ serv.do_glusterVolumeGeoRepConfigSet,
+ ('masterVolName=<master_volume_name> slaveHost=<slave_host> '
+ 'slaveVolName=<slave_volume_name> keyName=<key> '
+ 'value=<value>\n\t'
+ '<master_volume_name> is an existing volume name '
+ 'in the master node\n\t'
+ '<slave_host> is slave host name\n\t'
+ '<slave_volume_name> is an existing volume name in '
+ 'the slave node\n\t'
+ '<key> is the key name\n\t'
+ '<value> is the key value',
+ 'set volume geo-replication configuration'
+ )),
+ 'glusterVolumeGeoRepConfigGet': (
+ serv.do_glusterVolumeGeoRepConfigGet,
+ ('masterVolName=<master_volume_name> slaveHost=<slave_host> '
+         'slaveVolName=<slave_volume_name> keyName=<key>\n\t'
+ '<master_volume_name> is an existing volume name in '
+ 'the master node\n\t'
+ '<slave_host> is slave host name\n\t'
+ '<slave_volume_name> is an existing volume name in '
+ 'the slave node\n\t'
+ '<key> is the key name',
+ 'get volume geo-replication configuration'
+ )),
+ 'glusterVolumeGeoRepConfigDelete': (
+ serv.do_glusterVolumeGeoRepConfigDelete,
+ ('masterVolName=<master_volume_name> slaveHost=<slave_host> '
+         'slaveVolName=<slave_volume_name> keyName=<key>\n\t'
+ '<master_volume_name> is an existing volume name in '
+ 'the master node\n\t'
+ '<slave_host> is slave host name\n\t'
+ '<slave_volume_name> is an existing volume name in '
+ 'the slave node\n\t'
+ '<key> is the key name',
+ 'Delete volume geo-replication configuration'
))
}
diff --git a/vdsm/gluster/api.py b/vdsm/gluster/api.py
index d65a8a2..6a58ece 100644
--- a/vdsm/gluster/api.py
+++ b/vdsm/gluster/api.py
@@ -304,6 +304,47 @@
slaveVolName)
return {'geo-rep': status}
+ @exportAsVerb
+ def volumeGeoRepConfigList(self, masterVolName, slaveHost, slaveVolName,
+ options=None):
+ status = self.svdsmProxy.glusterVolumeGeoRepConfigList(masterVolName,
+ slaveHost,
+ slaveVolName)
+ return {'geoRepConfig': status}
+
+ @exportAsVerb
+ def volumeGeoRepConfigSet(self, masterVolName, slaveHost, slaveVolName,
+ key, value, options=None):
+ status = self.svdsmProxy.glusterVolumeGeoRepConfigSet(masterVolName,
+ slaveHost,
+ slaveVolName,
+ key,
+ value)
+ return {'geoRepSet': status}
+
+ @exportAsVerb
+ def volumeGeoRepConfigGet(self, masterVolName, slaveHost, slaveVolName,
+ key, options=None):
+
+ status = self.svdsmProxy.glusterVolumeGeoRepConfigGet(masterVolName,
+ slaveHost,
+ slaveVolName,
+ key)
+ return {'geoRepGet': status}
+
+ @exportAsVerb
+ def volumeGeoRepConfigDelete(self, masterVolName, slaveHost, slaveVolName,
+ key, options=None):
+
+        status = self.svdsmProxy.glusterVolumeGeoRepConfigDelete(masterVolName,
+ slaveHost,
+ slaveVolName,
+ key)
+ return {'geoRepDelete': status}
+
def getGlusterMethods(gluster):
l = []
diff --git a/vdsm/gluster/cli.py b/vdsm/gluster/cli.py
index 4529a44..d16d88f 100644
--- a/vdsm/gluster/cli.py
+++ b/vdsm/gluster/cli.py
@@ -73,6 +73,95 @@
RDMA = 'RDMA'
+class GeoRepConf:
+ EDITABLE = 'True'
+ READONLY = 'False'
+
+
+GEOREP_CONFIG = {
+ 'gluster_log_file': (
+ 'glusterLogFile',
+ 'Path to the geo-replication glusterfs log file',
+ GeoRepConf.EDITABLE),
+ 'gluster_log_level': (
+ 'glusterLogLevel',
+ 'The log level for glusterfs processes',
+ GeoRepConf.EDITABLE),
+ 'log_file': (
+ 'logFile',
+ 'The path to the geo-replication log file',
+ GeoRepConf.EDITABLE),
+ 'ssh_command': (
+ 'sshCommand',
+ 'Command to connect to the remote machine (the default is ssh)',
+ GeoRepConf.EDITABLE),
+ 'rsync_command': (
+ 'rsyncCommand',
+        'rsync command for synchronizing (default - rsync)',
+ GeoRepConf.EDITABLE),
+ 'volume_id': (
+ 'volumeID',
+        'Volume ID which can be used to delete the existing master UID '
+ 'for the intermediate/slave node',
+ GeoRepConf.EDITABLE),
+ 'timeout': (
+ 'timeout',
+ 'The timeout period', GeoRepConf.EDITABLE),
+ 'sync_jobs': (
+ 'syncJobs',
+ 'The number of simultaneous files/directories '
+ 'that can be synchronized',
+ GeoRepConf.EDITABLE),
+ 'ignore_deletes': (
+ 'ignoreDeletes',
+        'Files deleted on the master will not trigger a delete '
+        'operation on the slave. Hence, the slave remains a '
+        'superset of the master and can be used to recover the master '
+        'in case of a crash or accidental delete',
+ GeoRepConf.EDITABLE),
+ 'checkpoint': (
+ 'checkPoint',
+ 'Sets the checkpoint with the given value (label). If the value is '
+ 'set as now, then the current time will be used as value (label)',
+ GeoRepConf.EDITABLE),
+ 'gluster_command_dir': (
+ 'glusterCommandDir',
+ 'gluster command path',
+ GeoRepConf.READONLY),
+ 'gluster_params': (
+ 'glusterParams',
+ 'gluster parameters',
+ GeoRepConf.READONLY),
+ 'pid_file': (
+ 'pidFile',
+ 'geo replication session pid file',
+ GeoRepConf.READONLY),
+ 'session_owner': (
+ 'sessionOwner',
+ 'session owner uuid',
+ GeoRepConf.READONLY),
+ 'socketdir': (
+ 'socketDir',
+ 'socket directory',
+ GeoRepConf.READONLY),
+ 'special_sync_mode': (
+ 'specialSyncMode',
+ 'special sync mode',
+ GeoRepConf.READONLY),
+ 'state_detail_file': (
+ 'stateDetailFile',
+ 'state detail file',
+ GeoRepConf.READONLY),
+ 'state_file': (
+ 'stateFile',
+ 'state file',
+ GeoRepConf.READONLY),
+ 'working_dir': (
+ 'workingDir',
+ 'working directory',
+ GeoRepConf.READONLY)}
+
+
def _execGluster(cmd):
return utils.execCmd(cmd)
@@ -938,3 +1027,89 @@
return True
except ge.GlusterCmdFailedException as e:
raise ge.GlusterGeoRepDeletionFailedException(rc=e.rc, err=e.err)
+
+
+def _parseVolumeGeoRepConfigListXml(tree):
+ config = {}
+
+ for el in tree.findall('geoRepConfig'):
+ try:
+ key = el.find('name').text
+            config[GEOREP_CONFIG[key][0]] = {
+ 'value': el.find('value').text,
+ 'description': GEOREP_CONFIG[key][1],
+ 'editable': GEOREP_CONFIG[key][2]}
+ except KeyError:
+ pass # omitting unwanted items
+ return config
+
+
+def _parseVolumeGeoRepConfigList(out):
+ config = {}
+
+ for line in out:
+        k, v = line.split(':', 1)
+ try:
+ config[GEOREP_CONFIG[k][0]] = {'value': v.strip(),
+ 'description': GEOREP_CONFIG[k][1],
+ 'editable': GEOREP_CONFIG[k][2]}
+ except KeyError:
+ pass # omitting unwanted items
+ return config
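Given raw "name: value" lines from gluster's config output, the mapping above works like this. A self-contained sketch with a one-entry stand-in for the GEOREP_CONFIG table:

```python
# One-entry stand-in for the GEOREP_CONFIG lookup table in cli.py.
GEOREP_CONFIG_SKETCH = {
    'log_file': ('logFile', 'The path to the geo-replication log file',
                 'True'),
}

def parse_config_list(out):
    # Same shape as _parseVolumeGeoRepConfigList: keep only lines whose
    # option name appears in the lookup table, renaming to camelCase.
    config = {}
    for line in out:
        k, v = line.split(':', 1)
        try:
            config[GEOREP_CONFIG_SKETCH[k][0]] = {
                'value': v.strip(),
                'description': GEOREP_CONFIG_SKETCH[k][1],
                'editable': GEOREP_CONFIG_SKETCH[k][2]}
        except KeyError:
            pass  # omit options not in the table
    return config

result = parse_config_list(['log_file: /var/log/georep.log',
                            'session_owner: 1234'])
print(result)
```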
+
+
+@makePublic
+def volumeGeoRepConfigList(masterVolName, slaveHost, slaveVolName):
+ """Returns
+ {{OPTIONNAME: VALUE,
+ 'writable': BOOL,
+ 'description': DESCRIPTION}... }
+ """
+ command = _getGlusterVolCmd() + ["geo-replication", masterVolName,
+ "%s::%s" % (slaveHost, slaveVolName),
+ "config"]
+
+ try:
+ xmltree = _execGlusterXml(command)
+ return _parseVolumeGeoRepConfigListXml(xmltree)
+ except ge.GlusterCmdFailedException as e:
+ rc, out, err = _execGluster(command)
+ if rc:
+            raise ge.GlusterGeoRepConfigListFailedException(rc=rc, err=err)
+ return _parseVolumeGeoRepConfigList(out)
+
+
+@makePublic
+def volumeGeoRepConfigSet(masterVolName, slaveHost, slaveVolName, key, value):
+ command = _getGlusterVolCmd() + ["geo-replication", masterVolName,
+ "%s::%s" % (slaveHost, slaveVolName),
+ "config", key, value]
+ rc, out, err = _execGluster(command)
+ if rc:
+        raise ge.GlusterGeoRepConfigSetFailedException(rc=rc, err=err)
+ return True
+
+
+@makePublic
+def volumeGeoRepConfigGet(masterVolName, slaveHost, slaveVolName, key):
+ command = _getGlusterVolCmd() + ["geo-replication", masterVolName,
+ "%s::%s" % (slaveHost, slaveVolName),
+ "config", key]
+ rc, out, err = _execGluster(command)
+ if rc:
+        raise ge.GlusterGeoRepConfigGetFailedException(rc=rc, err=err)
+
+    # execCmd returns out as a list of lines; join it rather than
+    # calling strip() on the list, and expose the result as 'value'
+    # to match the structure returned by volumeGeoRepConfigList.
+    return {key: {'value': '\n'.join(out).strip(),
+                  'description': GEOREP_CONFIG[key][1],
+                  'editable': GEOREP_CONFIG[key][2]}}
+
+
+@makePublic
+def volumeGeoRepConfigDelete(masterVolName, slaveHost, slaveVolName, key):
+ command = _getGlusterVolCmd() + ["geo-replication", masterVolName,
+ "%s::%s" % (slaveHost, slaveVolName),
+ "config", "!%s" % key]
+ rc, out, err = _execGluster(command)
+ if rc:
+        raise ge.GlusterGeoRepConfigDeleteFailedException(rc=rc, err=err)
+ return True
diff --git a/vdsm/gluster/exception.py b/vdsm/gluster/exception.py
index 0e98c06..846a70e 100644
--- a/vdsm/gluster/exception.py
+++ b/vdsm/gluster/exception.py
@@ -511,3 +511,23 @@
class GlusterGeoRepDeletionFailedException(GlusterGeoRepException):
code = 4564
message = "Geo Rep session deletion failed"
+
+
+class GlusterGeoRepConfigListFailedException(GlusterGeoRepException):
+    code = 4565
+    message = "Get volume geo-replication config list failed"
+
+
+class GlusterGeoRepConfigSetFailedException(GlusterGeoRepException):
+    code = 4566
+    message = "Set volume geo-replication config failed"
+
+
+class GlusterGeoRepConfigGetFailedException(GlusterGeoRepException):
+    code = 4567
+    message = "Get volume geo-replication config failed"
+
+
+class GlusterGeoRepConfigDeleteFailedException(GlusterGeoRepException):
+    code = 4568
+    message = "Delete volume geo-replication config failed"
diff --git a/vdsm/gluster/vdsmapi-gluster-schema.json b/vdsm/gluster/vdsmapi-gluster-schema.json
index e4dad64..da4177e 100644
--- a/vdsm/gluster/vdsmapi-gluster-schema.json
+++ b/vdsm/gluster/vdsmapi-gluster-schema.json
@@ -414,3 +414,123 @@
{'command': {'class': 'GlusterGeoRep', 'name': 'delete'},
'data': {'masterVolName': 'str', 'slaveHost': 'str', 'slaveVolName': 'str'},
'returns': 'bool'}
+
+##
+# @GeoRepConfig:
+#
+# Geo replication config details.
+#
+# @optionname: Config option name
+#
+# @editable: Editable option or not
+#
+# @description: Option details
+#
+# Since: 4.10.3
+##
+{'type': 'GeoRepConfig',
+ 'data': {'optionname': 'str', 'editable': 'bool', 'description': 'str'}}
+
+##
+# @GlusterGeoRep.geoRepConfigList:
+#
+# List Geo Replication configuration
+#
+# @mastervolname: is an existing volume name in the master node
+#
+# @slavehost: is remote slave host name or ip
+#
+# @slavevolname: is an available existing volume name in the slave node
+#
+# Returns:
+# List of geo replication configurations
+#
+# Since: 4.10.3
+##
+{'command': {'class': 'GlusterGeoRep', 'name': 'geoRepConfigList'},
+ 'data': {'mastervolname': 'str', 'slavehost': 'str', 'slavevolname': 'str'},
+ 'returns': 'GeoRepConfig'}
+
+##
+# @GlusterGeoRep.geoRepConfigSet:
+#
+# Set Geo Replication config option
+#
+# @mastervolname: is an existing volume name in the master node
+#
+# @slavehost: is remote slave host name or ip
+#
+# @slavevolname: is an available existing volume name in the slave node
+#
+# @key: valid configuration option name
+#
+# @value: value to the option
+#
+# Returns:
+# True if it sets value to the option successfully
+#
+# Since: 4.10.3
+##
+{'command': {'class': 'GlusterGeoRep', 'name': 'geoRepConfigSet'},
+ 'data': {'mastervolname': 'str', 'slavehost': 'str', 'slavevolname': 'str', 'key': 'str', 'value': 'str'},
+ 'returns': 'bool'}
+
+##
+# @GlusterGeoRep.geoRepConfigGet:
+#
+# Get value of the Geo Replication config option
+#
+# @mastervolname: is an existing volume name in the master node
+#
+# @slavehost: is remote slave host name or ip
+#
+# @slavevolname: is an available existing volume name in the slave node
+#
+# @key: valid configuration option name
+#
+# Returns:
+# The value of the Geo Replication config option
+#
+# Since: 4.10.3
+##
+{'command': {'class': 'GlusterGeoRep', 'name': 'geoRepConfigGet'},
+ 'data': {'mastervolname': 'str', 'slavehost': 'str', 'slavevolname': 'str', 'key': 'str'},
+ 'returns': 'GeoRepConfig'}
+
+##
+# @GlusterGeoRep.geoRepConfigDelete:
+#
+# Delete Geo Replication config option
+#
+# @mastervolname: is an existing volume name in the master node
+#
+# @slavehost: is remote slave host name or ip
+#
+# @slavevolname: is an available existing volume name in the slave node
+#
+# @key: valid configuration option name
+#
+# Returns:
+# True if it deletes the option successfully
+#
+# Since: 4.10.3
+##
+{'command': {'class': 'GlusterGeoRep', 'name': 'geoRepConfigDelete'},
+ 'data': {'mastervolname': 'str', 'slavehost': 'str', 'slavevolname': 'str', 'key': 'str'},
+ 'returns': 'bool'}
--
To view, visit http://gerrit.ovirt.org/18303
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I9c43f0950bbaa215cfe22ba18cc02e5c5851c347
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Timothy Asir <tjeyasin(a)redhat.com>
Gerrit-Reviewer: Ayal Baron <abaron(a)redhat.com>
Gerrit-Reviewer: Bala.FA <barumuga(a)redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken(a)redhat.com>
Gerrit-Reviewer: Saggi Mizrahi <smizrahi(a)redhat.com>
Change in vdsm[master]: Add qemu's memory usage to VM statistics.
by ghammer@redhat.com
Gal Hammer has uploaded a new change for review.
Change subject: Add qemu's memory usage to VM statistics.
......................................................................
Add qemu's memory usage to VM statistics.
Change-Id: Ibeb35759454c4a9b41e1303956267e93ca3545a0
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=799285
Signed-off-by: Gal Hammer <ghammer(a)redhat.com>
---
M vdsm/config.py.in
M vdsm/libvirtvm.py
2 files changed, 14 insertions(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/06/9006/1
diff --git a/vdsm/config.py.in b/vdsm/config.py.in
index df85e7e..ee1627b 100644
--- a/vdsm/config.py.in
+++ b/vdsm/config.py.in
@@ -111,6 +111,8 @@
('vm_sample_net_interval', '5', None),
('vm_sample_net_window', '2', None),
+
+ ('vm_sample_memory_interval', '2', None),
('trust_store_path', '@TRUSTSTORE@',
'Where the certificates and keys are situated.'),
diff --git a/vdsm/libvirtvm.py b/vdsm/libvirtvm.py
index 86e39a3..f76f35c 100644
--- a/vdsm/libvirtvm.py
+++ b/vdsm/libvirtvm.py
@@ -91,10 +91,13 @@
self._sampleNet,
config.getint('vars', 'vm_sample_net_interval'),
config.getint('vars', 'vm_sample_net_window')))
+ self.sampleMem = (utils.AdvancedStatsFunction(self._sampleMem,
+ config.getint('vars', 'vm_sample_memory_interval')))
self.addStatsFunction(
self.highWrite, self.updateVolumes, self.sampleCpu,
- self.sampleDisk, self.sampleDiskLatency, self.sampleNet)
+ self.sampleDisk, self.sampleDiskLatency, self.sampleNet,
+ self.sampleMem)
def _highWrite(self):
if not self._vm.isDisksStatsCollectionEnabled():
@@ -168,6 +171,14 @@
netSamples[nic.name] = self._vm._dom.interfaceStats(nic.name)
return netSamples
+    def _sampleMem(self):
+        memUsage = {}
+        # The pid lives in the VM's conf (as a string, so use %s), and
+        # the stats thread reaches the VM via self._vm as the other
+        # sampling functions do.
+        with open('/proc/%s/status' % self._vm.conf['pid']) as f:
+            for line in f:
+                var, value = line.strip().split()[0:2]
+                if var in ('VmSize:', 'VmRSS:', 'VmData:'):
+                    memUsage[var[:-1]] = long(value)
+        return memUsage
+
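The /proc parsing in _sampleMem can be exercised standalone against the current process (Linux-only; field availability varies by kernel):

```python
import os

def sample_mem(pid):
    # Read VmSize, VmRSS and VmData (values in kB) from
    # /proc/<pid>/status, mirroring _sampleMem; pid may be an int or a
    # numeric string.
    mem_usage = {}
    with open('/proc/%s/status' % pid) as f:
        for line in f:
            fields = line.strip().split()
            if fields and fields[0] in ('VmSize:', 'VmRSS:', 'VmData:'):
                mem_usage[fields[0][:-1]] = int(fields[1])
    return mem_usage

usage = sample_mem(os.getpid())
print(usage)
```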
def _diff(self, prev, curr, val):
return prev[val] - curr[val]
--
To view, visit http://gerrit.ovirt.org/9006
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ibeb35759454c4a9b41e1303956267e93ca3545a0
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Gal Hammer <ghammer(a)redhat.com>
Change in vdsm[master]: [WIP] Start moving proc parsing to it's own module
by smizrahi@redhat.com
Saggi Mizrahi has uploaded a new change for review.
Change subject: [WIP] Start moving proc parsing to it's own module
......................................................................
[WIP] Start moving proc parsing to it's own module
Change-Id: I7ba84c7ece95bdef7448a7c7af277e7f58695401
Signed-off-by: Saggi Mizrahi <smizrahi(a)redhat.com>
---
M vdsm.spec.in
M vdsm/API.py
M vdsm/Makefile.am
M vdsm/caps.py
A vdsm/procfs.py
M vdsm/utils.py
M vdsm/vm.py
7 files changed, 53 insertions(+), 40 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/13/7513/1
diff --git a/vdsm.spec.in b/vdsm.spec.in
index bd01c2a..1f01961 100644
--- a/vdsm.spec.in
+++ b/vdsm.spec.in
@@ -573,6 +573,7 @@
%{_datadir}/%{vdsm_name}/supervdsmServer.py*
%{_datadir}/%{vdsm_name}/vmChannels.py*
%{_datadir}/%{vdsm_name}/vmContainer.py*
+%{_datadir}/%{vdsm_name}/procfs.py*
%{_datadir}/%{vdsm_name}/tc.py*
%{_datadir}/%{vdsm_name}/vdsm
%{_datadir}/%{vdsm_name}/vdsm-restore-net-config
diff --git a/vdsm/API.py b/vdsm/API.py
index 720c3b9..aab69cd 100644
--- a/vdsm/API.py
+++ b/vdsm/API.py
@@ -37,6 +37,7 @@
from vdsm.define import doneCode, errCode, Kbytes, Mbytes
import caps
from vdsm.config import config
+import procfs
import supervdsm
@@ -864,7 +865,7 @@
"""
def _readSwapTotalFree():
- meminfo = utils.readMemInfo()
+ meminfo = procfs.meminfo()
return meminfo['SwapTotal'] / 1024, meminfo['SwapFree'] / 1024
stats = {}
@@ -1111,17 +1112,16 @@
memCommitted = self._memCommitted()
resident = 0
for v in self._cif.vmContainer.getVMs():
- if v.conf['pid'] == '0':
- continue
try:
- statmfile = file('/proc/' + v.conf['pid'] + '/statm')
- resident += int(statmfile.read().split()[1])
+ resident += v.statm().resident
except:
pass
+
resident *= PAGE_SIZE_BYTES
- meminfo = utils.readMemInfo()
- freeOrCached = (meminfo['MemFree'] +
- meminfo['Cached'] + meminfo['Buffers']) * Kbytes
+
+ meminfo = procfs.meminfo()
+ freeOrCached = (meminfo['MemFree'] + meminfo['Cached'] +
+ meminfo['Buffers']) * Kbytes
return freeOrCached + resident - memCommitted - \
config.getint('vars', 'host_mem_reserve') * Mbytes
diff --git a/vdsm/Makefile.am b/vdsm/Makefile.am
index 574d762..1a3ac43 100644
--- a/vdsm/Makefile.am
+++ b/vdsm/Makefile.am
@@ -47,6 +47,7 @@
momIF.py \
neterrors.py \
parted_utils.py \
+ procfs.py \
pthread.py \
supervdsm.py \
supervdsmServer.py \
diff --git a/vdsm/caps.py b/vdsm/caps.py
index f1641ff..39fc837 100644
--- a/vdsm/caps.py
+++ b/vdsm/caps.py
@@ -41,6 +41,7 @@
from vdsm import utils
from vdsm import constants
import storage.hba
+import procfs
# For debian systems we can use python-apt if available
try:
@@ -271,7 +272,7 @@
caps['HBAInventory'] = storage.hba.HBAInventory()
caps['vmTypes'] = ['kvm']
- caps['memSize'] = str(utils.readMemInfo()['MemTotal'] / 1024)
+ caps['memSize'] = str(procfs.meminfo()['MemTotal'] / 1024)
caps['reservedMem'] = str(
config.getint('vars', 'host_mem_reserve') +
config.getint('vars', 'extra_mem_reserve'))
diff --git a/vdsm/procfs.py b/vdsm/procfs.py
new file mode 100644
index 0000000..29fc973
--- /dev/null
+++ b/vdsm/procfs.py
@@ -0,0 +1,31 @@
+from collections import namedtuple
+
+buffsize = 4096
+
+MemStat = namedtuple("MemStat",
+ "size, resident, share, text, UNUSED1, data, UNUSED2")
+
+
+def statm(pid):
+ """
+ Parses statm for a pid. Note all results are in pages.
+ """
+ with open("/proc/%d/statm" % pid, "rb") as f:
+ return MemStat(*(int(val) for val in f.read().split()))
+
+
+def meminfo():
+ """
+ Parse ``/proc/meminfo`` and return its content as a dictionary.
+
+ .. note:: All values are in kB.
+ """
+ meminfo = {}
+ with open("/proc/meminfo", "rb") as f:
+ f.seek(0)
+ lines = f.readlines()
+ for var, val in (l.split()[0:2] for l in lines):
+ meminfo[var[:-1]] = int(val)
+
+ return meminfo
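The two parsers in the new module compose as below. This sketch inlines them in text mode so it is self-contained; note the patch's `"rb"` mode only round-trips under Python 2, since `int()` rejects bytes on Python 3:

```python
import os
from collections import namedtuple

MemStat = namedtuple("MemStat",
                     "size, resident, share, text, UNUSED1, data, UNUSED2")


def statm(pid):
    """Parse /proc/<pid>/statm; all fields are in pages."""
    with open("/proc/%d/statm" % pid) as f:
        return MemStat(*(int(val) for val in f.read().split()))


def meminfo():
    """Parse /proc/meminfo into a dict; all values are in kB."""
    info = {}
    with open("/proc/meminfo") as f:
        for var, val in (line.split()[0:2] for line in f):
            info[var[:-1]] = int(val)
    return info


# The same arithmetic API.py performs with these helpers:
mi = meminfo()
free_or_cached = mi['MemFree'] + mi['Cached'] + mi['Buffers']
resident_bytes = statm(os.getpid()).resident * os.sysconf('SC_PAGE_SIZE')
```

Converting `statm` pages to bytes via `SC_PAGE_SIZE` mirrors the `resident *= PAGE_SIZE_BYTES` step in the API.py hunk.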
diff --git a/vdsm/utils.py b/vdsm/utils.py
index 5e2d4e5..048a528 100644
--- a/vdsm/utils.py
+++ b/vdsm/utils.py
@@ -19,7 +19,8 @@
#
"""
-A module containing miscellaneous functions and classes that are user plentifuly around vdsm.
+A module containing miscellaneous functions and classes that are used
+plentifully around vdsm.
.. attribute:: utils.symbolerror
@@ -28,7 +29,8 @@
from SimpleXMLRPCServer import SimpleXMLRPCServer
import SocketServer
import threading
-import os, time
+import os
+import time
import logging
import errno
import subprocess
@@ -42,6 +44,7 @@
import constants
from config import config
import netinfo
+import procfs
_THP_STATE_PATH = '/sys/kernel/mm/transparent_hugepage/enabled'
if not os.path.exists(_THP_STATE_PATH):
@@ -63,34 +66,6 @@
os.unlink(fileToRemove)
except:
pass
-
-def readMemInfo():
- """
- Parse ``/proc/meminfo`` and return its content as a dictionary.
-
- For a reason unknown to me, ``/proc/meminfo`` is is sometime
- empty when opened. If that happens, the function retries to open it
- 3 times.
-
- :returns: a dictionary representation of ``/proc/meminfo``
- """
- # FIXME the root cause for these retries should be found and fixed
- tries = 3
- meminfo = {}
- while True:
- tries -= 1
- try:
- lines = []
- lines = file('/proc/meminfo').readlines()
- for line in lines:
- var, val = line.split()[0:2]
- meminfo[var[:-1]] = int(val)
- return meminfo
- except:
- logging.warning(lines, exc_info=True)
- if tries <= 0:
- raise
- time.sleep(0.1)
#Threaded version of SimpleXMLRPCServer
class SimpleThreadedXMLRPCServer(SocketServer.ThreadingMixIn, SimpleXMLRPCServer):
@@ -225,7 +200,7 @@
"""
BaseSample.__init__(self, pid, ifids)
self.totcpu = TotalCpuSample()
- meminfo = readMemInfo()
+ meminfo = procfs.meminfo()
freeOrCached = (meminfo['MemFree'] +
meminfo['Cached'] + meminfo['Buffers'])
self.memUsed = 100 - int(100.0 * (freeOrCached) / meminfo['MemTotal'])
diff --git a/vdsm/vm.py b/vdsm/vm.py
index c1a22b0..bd436e0 100644
--- a/vdsm/vm.py
+++ b/vdsm/vm.py
@@ -36,6 +36,7 @@
import libvirt
from vdsm import vdscli
import caps
+import procfs
DEFAULT_BRIDGE = config.get("vars", "default_bridge")
@@ -693,6 +694,9 @@
load = len(self.cif.vmContainer.getVMs())
return base * (doubler + load) / doubler
+ def statm(self):
+ return procfs.statm(int(self.conf['pid']))
+
def saveState(self):
if self.destroyed:
return
--
To view, visit http://gerrit.ovirt.org/7513
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I7ba84c7ece95bdef7448a7c7af277e7f58695401
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Saggi Mizrahi <smizrahi(a)redhat.com>
Change in vdsm[master]: Refactor prepareVolumePath
by smizrahi@redhat.com
Saggi Mizrahi has uploaded a new change for review.
Change subject: Refactor prepareVolumePath
......................................................................
Refactor prepareVolumePath
Change-Id: I57bb8684fd11a47843a158d13fcc2815147fa7ef
Signed-off-by: Saggi Mizrahi <smizrahi(a)redhat.com>
---
M vdsm/API.py
M vdsm/clientIF.py
M vdsm/libvirtvm.py
M vdsm/storage/devicemapper.py
M vdsm/vm.py
5 files changed, 93 insertions(+), 63 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/55/7755/1
diff --git a/vdsm/API.py b/vdsm/API.py
index 720c3b9..c324d79 100644
--- a/vdsm/API.py
+++ b/vdsm/API.py
@@ -173,8 +173,7 @@
# NOTE: pickled params override command-line params. This
# might cause problems if an upgrade took place since the
# params were stored.
- fname = self._cif.prepareVolumePath(paramFilespec)
- try:
+ with self._cif.preparedDrive(paramFilespec) as fname:
with file(fname) as f:
pickledMachineParams = pickle.load(f)
@@ -183,8 +182,6 @@
+ str(pickledMachineParams))
self.log.debug('former conf ' + str(vmParams))
vmParams.update(pickledMachineParams)
- finally:
- self._cif.teardownVolumePath(paramFilespec)
except:
self.log.error("Error restoring VM parameters",
exc_info=True)
@@ -299,9 +296,15 @@
:param hiberVolHandle: opaque string, indicating the location of
hibernation images.
"""
- params = {'vmId': self._UUID, 'mode': 'file',
- 'hiberVolHandle': hibernationVolHandle}
- response = self.migrate(params)
+ v = self._getVmObject()
+ if v is None:
+ return errCode['noVM']
+
+ try:
+ response = self.hibernate(hibernationVolHandle)
+ except vm.WrongStateError:
+ response = errCode['noVM']
+
if not response['status']['code']:
response['status']['message'] = 'Hibernation process starting'
return response
diff --git a/vdsm/clientIF.py b/vdsm/clientIF.py
index 55a7fc9..0446eb2 100644
--- a/vdsm/clientIF.py
+++ b/vdsm/clientIF.py
@@ -25,6 +25,7 @@
from xml.dom import minidom
import uuid
import errno
+from contextlib import contextmanager
from storage.dispatcher import Dispatcher
from storage.hsm import HSM
@@ -44,6 +45,7 @@
import blkid
import supervdsm
import vmContainer
+from storage import devicemapper
try:
import gluster.api as gapi
_glusterEnabled = True
@@ -239,50 +241,74 @@
self.log.info('Error finding path for device', exc_info=True)
raise vm.VolumeError(uuid)
+ def _preparePoolImage(self, drive):
+ res = self.irs.prepareImage(
+ drive['domainID'], drive['poolID'],
+ drive['imageID'], drive['volumeID'])
+
+ if res['status']['code']:
+ raise vm.VolumeError(drive)
+
+ drive['volumeChain'] = res['chain']
+ return res['path']
+
+ def _prepareDmDevice(self, drive, vmId):
+ volPath = devicemapper.getDevicePathByGuid(drive["GUID"])
+
+ if not os.path.exists(volPath):
+ raise vm.VolumeError(drive)
+
+ res = self.irs.appropriateDevice(drive["GUID"], vmId)
+ if res['status']['code']:
+ raise vm.VolumeError(drive)
+
+ return volPath
+
+ def _prepareScsiDevice(self, drive):
+ return self._getUUIDSpecPath(drive["UUID"])
+
+ def _prepareVmPayload(self, drive, vmId):
+ '''
+ vmPayload is a key in specParams
+ 'vmPayload': {'file': {'filename': 'content'}}
+ '''
+ for key, files in drive['specParams']['vmPayload'].iteritems():
+ if key == 'file':
+ svdsm = supervdsm.getProxy()
+ if drive['device'] == 'cdrom':
+ return svdsm.mkIsoFs(vmId, files)
+ elif drive['device'] == 'floppy':
+ return svdsm.mkFloppyFs(vmId, files)
+
+ raise vm.VolumeError(drive)
+
+ def _preparePath(self, drive):
+ return drive['path']
+
+ @contextmanager
+ def preparedDrive(self, drive, vmId=None):
+ path = self.prepareVolumePath(drive, vmId)
+ try:
+ yield path
+ finally:
+ self.teardownVolumePath(drive)
+
def prepareVolumePath(self, drive, vmId=None):
if type(drive) is dict:
- # PDIV drive format
if drive['device'] == 'disk' and vm.isVdsmImage(drive):
- res = self.irs.prepareImage(
- drive['domainID'], drive['poolID'],
- drive['imageID'], drive['volumeID'])
+ volPath = self._preparePoolImage(drive)
- if res['status']['code']:
- raise vm.VolumeError(drive)
-
- volPath = res['path']
- drive['volumeChain'] = res['chain']
-
- # GUID drive format
elif "GUID" in drive:
- volPath = os.path.join("/dev/mapper", drive["GUID"])
+ volPath = self._prepareDmDevice(drive, vmId)
- if not os.path.exists(volPath):
- raise vm.VolumeError(drive)
-
- res = self.irs.appropriateDevice(drive["GUID"], vmId)
- if res['status']['code']:
- raise vm.VolumeError(drive)
-
- # UUID drive format
elif "UUID" in drive:
- volPath = self._getUUIDSpecPath(drive["UUID"])
+ volPath = self._prepareScsiDevice(drive)
elif 'specParams' in drive and 'vmPayload' in drive['specParams']:
- '''
- vmPayload is a key in specParams
- 'vmPayload': {'file': {'filename': 'content'}}
- '''
- for key, files in drive['specParams']['vmPayload'].iteritems():
- if key == 'file':
- if drive['device'] == 'cdrom':
- volPath = supervdsm.getProxy().mkIsoFs(vmId, files)
- elif drive['device'] == 'floppy':
- volPath = \
- supervdsm.getProxy().mkFloppyFs(vmId, files)
+ volPath = self._prepareVmPayload(drive, vmId)
elif "path" in drive:
- volPath = drive['path']
+ volPath = self._preparePath(drive)
else:
raise vm.VolumeError(drive)
@@ -301,17 +327,22 @@
self.log.info("prepared volume path: %s", volPath)
return volPath
- def teardownVolumePath(self, drive):
- res = {'status': doneCode}
- if type(drive) == dict:
- try:
- res = self.irs.teardownImage(drive['domainID'],
- drive['poolID'], drive['imageID'])
- except KeyError:
- #This drive is not a vdsm image (quartet)
- self.log.info("Avoiding tear down drive %s", str(drive))
+ def _teardownPoolImage(self, drive):
+ try:
+ res = self.irs.teardownImage(drive['domainID'],
+ drive['poolID'], drive['imageID'])
+ return res['status']['code']
+ except KeyError:
+ #This drive is not a vdsm image (quartet)
+ self.log.info("Avoiding tear down drive %s", str(drive))
+ return doneCode['code']
- return res['status']['code']
+ def teardownVolumePath(self, drive):
+ if type(drive) == dict:
+ return self._teardownPoolImage(drive)
+ else:
+ # Other types don't require tear down
+ return 0
def createVm(self, vmParams):
try:
@@ -320,6 +351,7 @@
except vmContainer.VmContainerError as e:
if e.errno == errno.EEXIST:
return errCode['exist']
+
return
def waitForShutdown(self, timeout=None):
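The `preparedDrive` context manager introduced above is the standard `contextlib` prepare/teardown idiom that the later hunks (libvirtvm.py, vm.py) switch to. A self-contained sketch with stand-in names (no vdsm dependencies), showing that teardown runs even when the body raises:

```python
from contextlib import contextmanager


class FakeClientIF(object):
    """Stand-in for clientIF with the same prepare/teardown shape."""

    def __init__(self):
        self.torn_down = False

    def prepareVolumePath(self, drive, vmId=None):
        return '/run/fake/%s' % drive['volumeID']

    def teardownVolumePath(self, drive):
        self.torn_down = True

    @contextmanager
    def preparedDrive(self, drive, vmId=None):
        path = self.prepareVolumePath(drive, vmId)
        try:
            yield path
        finally:
            self.teardownVolumePath(drive)


cif = FakeClientIF()
try:
    with cif.preparedDrive({'volumeID': 'vol-1'}) as fname:
        raise IOError("dom.save() failed")  # teardown must still run
except IOError:
    pass
```

This is exactly what the refactor buys the call sites: the `try/finally` pairs around `prepareVolumePath`/`teardownVolumePath` collapse into one `with` statement.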
diff --git a/vdsm/libvirtvm.py b/vdsm/libvirtvm.py
index a530228..ea0d017 100644
--- a/vdsm/libvirtvm.py
+++ b/vdsm/libvirtvm.py
@@ -404,11 +404,8 @@
hooks.before_vm_hibernate(self._vm._dom.XMLDesc(0), self._vm.conf)
try:
self._vm._vmStats.pause()
- fname = self._vm.cif.prepareVolumePath(self._dst)
- try:
+ with self._vm.cif.preparedDrive(self._dst) as fname:
self._vm._dom.save(fname)
- finally:
- self._vm.cif.teardownVolumePath(self._dst)
except:
self._vm._vmStats.cont()
raise
@@ -1397,11 +1394,8 @@
elif 'restoreState' in self.conf:
hooks.before_vm_dehibernate(self.conf.pop('_srcDomXML'), self.conf)
- fname = self.cif.prepareVolumePath(self.conf['restoreState'])
- try:
+ with self.cif.preparedDrive(self.conf['restoreState']) as fname:
self._connection.restore(fname)
- finally:
- self.cif.teardownVolumePath(self.conf['restoreState'])
self._dom = NotifyingVirDomain(
self._connection.lookupByUUIDString(self.id),
diff --git a/vdsm/storage/devicemapper.py b/vdsm/storage/devicemapper.py
index a1651e0..388c1cd 100644
--- a/vdsm/storage/devicemapper.py
+++ b/vdsm/storage/devicemapper.py
@@ -46,6 +46,10 @@
(major, minor)))
+def getDevicePathByGuid(devGuid):
+ return DMPATH_FORMAT % devGuid
+
+
def getSysfsPath(devName):
if "/" in devName:
raise ValueError("devName has an illegal format. "
diff --git a/vdsm/vm.py b/vdsm/vm.py
index 3aa9f52..49193d3 100644
--- a/vdsm/vm.py
+++ b/vdsm/vm.py
@@ -210,12 +210,9 @@
if ignoreParam in self._machineParams:
del self._machineParams[ignoreParam]
- fname = self._vm.cif.prepareVolumePath(self._dstparams)
- try:
- with file(fname, "w") as f:
+ with self._vm.cif.preparedDrive(self._dstparams) as fname:
+ with file(fname, "wb") as f:
pickle.dump(self._machineParams, f)
- finally:
- self._vm.cif.teardownVolumePath(self._dstparams)
self._vm.setDownStatus(NORMAL, "SaveState succeeded")
self.status = {
--
To view, visit http://gerrit.ovirt.org/7755
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I57bb8684fd11a47843a158d13fcc2815147fa7ef
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Saggi Mizrahi <smizrahi(a)redhat.com>
Change in vdsm[master]: [wip] hsm: remove superfluous refreshes at startup
by Federico Simoncelli
Federico Simoncelli has uploaded a new change for review.
Change subject: [wip] hsm: remove superfluous refreshes at startup
......................................................................
[wip] hsm: remove superfluous refreshes at startup
During startup it is not mandatory to refresh the iSCSI connections
(the sdcache is already stale), and the lvm module can handle lazy
initialization.
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=870768
Change-Id: I8386d40c644c99a52f04b6b41b392abf16e3a2a6
Signed-off-by: Federico Simoncelli <fsimonce(a)redhat.com>
---
M vdsm/storage/hsm.py
1 file changed, 0 insertions(+), 3 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/76/9276/1
diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
index 46d1605..6a5040a 100644
--- a/vdsm/storage/hsm.py
+++ b/vdsm/storage/hsm.py
@@ -364,9 +364,6 @@
self.log.warn("Failed to clean Storage Repository.", exc_info=True)
def storageRefresh():
- lvm._lvminfo.bootstrap()
- sdCache.refreshStorage()
-
fileUtils.createdir(self.tasksDir)
# TBD: Should this be run in connectStoragePool? Should tasksDir
# exist under pool link as well (for hsm tasks)
--
To view, visit http://gerrit.ovirt.org/9276
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I8386d40c644c99a52f04b6b41b392abf16e3a2a6
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Federico Simoncelli <fsimonce(a)redhat.com>
Change in vdsm[master]: Remove unnecessary preparePaths.
by ewarszaw@redhat.com
Eduardo has uploaded a new change for review.
Change subject: Remove unnecessary preparePaths.
......................................................................
Remove unnecessary preparePaths.
We are recovering running VMs, therefore their paths are already prepared.
preparePaths no longer locks the volumes (per Federico).
Change-Id: I35890d36227633ca147387d670c152b9be357e50
---
M vdsm/clientIF.py
1 file changed, 0 insertions(+), 16 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/86/786/1
--
To view, visit http://gerrit.ovirt.org/786
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I35890d36227633ca147387d670c152b9be357e50
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Eduardo <ewarszaw(a)redhat.com>
Change in vdsm[master]: Avoid template deactivation and lock.
by ewarszaw@redhat.com
Eduardo has uploaded a new change for review.
Change subject: Avoid template deactivation and lock.
......................................................................
Avoid template deactivation and lock.
Change-Id: Ieedf863ac967f34405f038201bac324c52fbbe89
---
M vdsm/storage/blockVolume.py
M vdsm/storage/volume.py
2 files changed, 39 insertions(+), 18 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/63/863/1
--
To view, visit http://gerrit.ovirt.org/863
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ieedf863ac967f34405f038201bac324c52fbbe89
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Eduardo <ewarszaw(a)redhat.com>
Change in vdsm[master]: [WIP] Add the validateImage command to the SPM
by Federico Simoncelli
Federico Simoncelli has uploaded a new change for review.
Change subject: [WIP] Add the validateImage command to the SPM
......................................................................
[WIP] Add the validateImage command to the SPM
Signed-off-by: Federico Simoncelli <fsimonce(a)redhat.com>
Change-Id: I095362e7d1eb91045569bd9526a102392e7adbe8
---
M vdsm/API.py
M vdsm/BindingXMLRPC.py
M vdsm/storage/hsm.py
M vdsm/storage/image.py
M vdsm/storage/sp.py
M vdsm/storage/volume.py
6 files changed, 60 insertions(+), 1 deletion(-)
git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/91/3491/1
--
To view, visit http://gerrit.ovirt.org/3491
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: I095362e7d1eb91045569bd9526a102392e7adbe8
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Federico Simoncelli <fsimonce(a)redhat.com>