Does anyone object to adding the vdsm lists to gmane?
by agl@us.ibm.com
Hi everyone. I just wanted to survey the group to see whether anyone objects to my
adding the VDSM mailing lists to the gmane mailing list archive. Any problems with
this?
--
Adam Litke <agl(a)us.ibm.com>
IBM Linux Technology Center
Sep, 26 - VDSM Sync call meeting minutes
by abaron@redhat.com
Attendees: Jon Benedict, Adam Litke, Sanjay Mehrotra, Jon Choate, Andrew Cathrow, Tony Asleson, Dan Kenigsberg
(if I missed anyone, I apologize and feel free to reply)
Discussed plans for integration with Netapp.
Jon (Benedict) gave an overview of the various snapshot capabilities. Mainly discussed:
a. Readonly snapshots - snapshots of an entire datastore, supported on all Netapp machines without additional licensing.
b. R/W snapshots (FlexClone) of an entire volume (NFS export / 1 or more LUNs) or of specific files, plus sub-LUN cloning.
- Need to determine how this maps to vdsm's imaging scheme.
Tony investigated the Netapp SDK and it appears as though there is a license compatibility issue. Jon will try to help with that.
Tony further investigated Netapp's SMI-S integration, which exposes only one type of snapshot (readonly?); we need to understand whether there is a way to utilize both (it looks like SMI-S will limit integration with Netapp).
Another question that arose was whether Netapp supports generating a list of changed blocks - need to ask Ricardo.
Adam is working on a RESTful API which, at least at first, will try to resemble the oVirt REST API; we need to get Michael (cc'd) to join the call and the discussions.
Sanjay asked about clustered storage support feature and will send further details on list.
Regards,
Ayal.
fileSD: Fix remotePath in SD metadata (V2)
by agl@us.ibm.com
Changes since V1:
- Derive the remotePath from self.mountpoint instead of using the metadata
The current method for gathering a LOCALFS Storage Domain's remotePath property
does not work because these domains are connected with a symlink, not a mount.
Fix up the current code so that it handles links and mountpoints.
In the code I have noticed some sentiments that path information should be
removed from the storage domain metadata. I strongly disagree with this idea.
The path is a critical piece of information. End users will care a lot about
the path because it is where their images are located. This information is also
useful for calling connectStorageServer() and disconnectStorageServer().
Signed-off-by: Adam Litke <agl(a)us.ibm.com>
diff --git a/vdsm/storage/fileSD.py b/vdsm/storage/fileSD.py
index 35f7ab3..e904bfe 100644
--- a/vdsm/storage/fileSD.py
+++ b/vdsm/storage/fileSD.py
@@ -38,7 +38,9 @@ import time
REMOTE_PATH = "REMOTE_PATH"
FILE_SD_MD_FIELDS = sd.SD_MD_FIELDS.copy()
-# TBD: Do we really need this key?
+# Do we really need this key?
+# Answer: Yes. The path is an important part of the SD metadata and is useful
+# information for end-users. It can also be used to manage storage connections.
FILE_SD_MD_FIELDS[REMOTE_PATH] = (str, str)
def getDomUuidFromMetafilePath(metafile):
@@ -224,6 +226,19 @@ class FileStorageDomain(sd.StorageDomain):
def getRemotePath(self):
return self.remotePath
+ def mountToRemotePath(self):
+ """
+ Get the remote path based on the storage mountpoint.
+ Handle symlinks and mounts.
+ """
+ if os.path.islink(self.mountpoint):
+ return os.readlink(self.mountpoint)
+ elif os.path.isdir(self.mountpoint):
+ for mount in fileUtils.getMounts():
+ if self.mountpoint == mount[1]:
+ return mount[0]
+ return ''
+
def getInfo(self):
"""
Get storage domain info
@@ -232,13 +247,7 @@ class FileStorageDomain(sd.StorageDomain):
# First call parent getInfo() - it fills in all the common details
info = sd.StorageDomain.getInfo(self)
# Now add fileSD specific data
- info['remotePath'] = ''
- mounts = fileUtils.getMounts()
- for mount in mounts:
- if self.mountpoint == mount[1]:
- info['remotePath'] = mount[0]
- break
-
+ info['remotePath'] = self.mountToRemotePath()
return info
def getStats(self):
--
Adam Litke <agl(a)us.ibm.com>
IBM Linux Technology Center
fileSD: Fix remotePath in SD metadata
by agl@us.ibm.com
The current method for gathering a LOCALFS Storage Domain's remotePath
property does not work because these domains are connected with a symlink,
not a mount. Since this info is already stored in the metadata, just get
it from there.
In the code I have noticed some sentiments that path information should be
removed from the storage domain metadata. I strongly disagree with this
idea. The path is a critical piece of information. End users will care a
lot about the path because it is where their images are located. This
information is also useful for calling connectStorageServer() and
disconnectStorageServer().
Signed-off-by: Adam Litke <agl(a)us.ibm.com>
diff --git a/vdsm/storage/fileSD.py b/vdsm/storage/fileSD.py
index 35f7ab3..01fabd1 100644
--- a/vdsm/storage/fileSD.py
+++ b/vdsm/storage/fileSD.py
@@ -24,7 +24,6 @@ import logging
import glob
import sd
-import fileUtils
import storage_exception as se
import fileVolume
import image
@@ -38,7 +37,9 @@ import time
REMOTE_PATH = "REMOTE_PATH"
FILE_SD_MD_FIELDS = sd.SD_MD_FIELDS.copy()
-# TBD: Do we really need this key?
+# Do we really need this key?
+# Answer: Yes. The path is an important part of the SD metadata and is useful
+# information for end-users. It can also be used to manage storage connections.
FILE_SD_MD_FIELDS[REMOTE_PATH] = (str, str)
def getDomUuidFromMetafilePath(metafile):
@@ -232,12 +233,7 @@ class FileStorageDomain(sd.StorageDomain):
# First call parent getInfo() - it fills in all the common details
info = sd.StorageDomain.getInfo(self)
# Now add fileSD specific data
- info['remotePath'] = ''
- mounts = fileUtils.getMounts()
- for mount in mounts:
- if self.mountpoint == mount[1]:
- info['remotePath'] = mount[0]
- break
+ info['remotePath'] = self.getMetaParam(REMOTE_PATH)
return info
--
Adam Litke <agl(a)us.ibm.com>
IBM Linux Technology Center
Is it possible to remove a storage domain from vdsm
by agl@us.ibm.com
I have created several Storage domains that I no longer need/want. Does the
VDSM API provide a method for me to remove them? I see the APIs for
deactivateStorageDomain and detachStorageDomain, but these do not seem to cause
the domain to go away.
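For reference, here is roughly what I am calling, as a minimal sketch over the
xmlrpc interface (the endpoint, UUIDs, and masterVersion are placeholders, and
I am going from memory on the argument order):
import xmlrpclib
# Placeholders - substitute real values from your setup.
sdUUID = '11111111-1111-1111-1111-111111111111'
spUUID = '22222222-2222-2222-2222-222222222222'
msdUUID = '33333333-3333-3333-3333-333333333333'
masterVersion = 1
s = xmlrpclib.ServerProxy('http://localhost:54321')
print s.deactivateStorageDomain(sdUUID, spUUID, msdUUID, masterVersion)
print s.detachStorageDomain(sdUUID, spUUID, msdUUID, masterVersion)
print s.getStorageDomainsList()  # the domain still shows up here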
Thanks for your help.
--
Adam Litke <agl(a)us.ibm.com>
IBM Linux Technology Center
Add new API: createImage()
by agl@us.ibm.com
As it currently stands, images are created implicitly when a volume is created
using a new imageUUID. Although this is practical for the current use cases, it
doesn't present a symmetrical API: other vdsm objects have create/delete
methods.
Furthermore, it presents challenges when trying to model VDSM in a REST API.
Let's say I want to create a new disk for a VM. I would want to issue the
following sequence of commands:
POST -> /vdsm-api/storagedomains/<sdUUID>/images/create
<- HTTP/202 Accepted: /vdsm-api/tasks/<task-uuid>
GET -> /vdsm-api/tasks/<task-uuid>
<- HTTP/200 OK: /vdsm-api/storagedomains/<sdUUID>/images/<imgUUID>
POST -> /vdsm-api/storagedomains/<sdUUID>/images/<imgUUID>/volumes/create
<- HTTP/202 Accepted: /vdsm-api/tasks/<task-uuid>
GET -> /vdsm-api/tasks/<task-uuid>
<- HTTP/200 OK:
/vdsm-api/storagedomains/<sdUUID>/images/<imgUUID>/volumes/<volUUID>
The only way to support this pattern is to allow for the explicit creation of an
image with an empty volume chain.
Signed-off-by: Adam Litke <agl(a)us.ibm.com>
diff --git a/vdsm/storage/spm.py b/vdsm/storage/spm.py
index 849b9d2..1222752 100644
--- a/vdsm/storage/spm.py
+++ b/vdsm/storage/spm.py
@@ -820,6 +820,23 @@ class SPM:
repoPath = os.path.join(self.storage_repository, spUUID)
image.Image(repoPath).multiMove(srcDomUUID, dstDomUUID, imgDict, vmUUID, force)
+ def createImage(self, sdUUID, spUUID, imgUUID):
+ """
+ Create a new, empty image
+
+ :param sdUUID: The UUID of the storage domain that contains the images.
+ :type sdUUID: UUID
+ :param spUUID: The UUID of the storage pool that contains the images.
+ :type spUUID: UUID
+ :param imgUUID: The UUID of the image you want to create.
+ :type imgUUID: UUID
+
+ """
+ hsm.HSM.getPool(spUUID) #Validates that the pool is connected. WHY?
+ hsm.HSM.validateSdUUID(sdUUID)
+ repoPath = os.path.join(self.storage_repository, spUUID)
+ image.Image(repoPath).create(sdUUID, imgUUID)
+ return dict(uuid=imgUUID)
def deleteImage(self, sdUUID, spUUID, imgUUID, postZero, force):
"""
@@ -1470,6 +1487,18 @@ class SPM:
imgUUID, volumes, misc.parseBool(postZero), misc.parseBool(force)
)
+ def public_createImage(self, sdUUID, spUUID, imgUUID):
+ """
+ Create a new, empty image
+ """
+ argsStr = "sdUUID=%s, spUUID=%s, imgUUID=%s" % \
+ (str(sdUUID), str(spUUID), str(imgUUID))
+ vars.task.setDefaultException(se.ImagePathError(argsStr))
+ hsm.HSM.getPool(spUUID) #Validates that the pool is connected. WHY?
+ hsm.HSM.validateSdUUID(sdUUID)
+ misc.validateUUID(imgUUID, 'imgUUID')
+ vars.task.getSharedLock(STORAGE, sdUUID)
+ self._schedule("createImage", self.createImage, sdUUID, spUUID, imgUUID)
def public_deleteImage(self, sdUUID, spUUID, imgUUID, postZero=False, force=False):
"""
--
Adam Litke <agl(a)us.ibm.com>
IBM Linux Technology Center
[PATCH] deployUtil.py.in: fix typo
by Douglas Schilling Landgraf
- Replace "Faild" with "Failed"
Signed-off-by: Douglas Schilling Landgraf <dougsland(a)redhat.com>
---
vdsm_reg/deployUtil.py.in | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/vdsm_reg/deployUtil.py.in b/vdsm_reg/deployUtil.py.in
index b2e04dc..e8163b9 100644
--- a/vdsm_reg/deployUtil.py.in
+++ b/vdsm_reg/deployUtil.py.in
@@ -314,7 +314,7 @@ def getMGTIP(vdsmDir, vdcHostName):
sys.path.append(vdsmDir)
import netinfo # taken from vdsm rpm
except:
- logging.error("getMGTIP: Faild to find vdsm modules!")
+ logging.error("getMGTIP: Failed to find vdsm modules!")
return strReturn
arNICs = None
@@ -762,7 +762,7 @@ def makeBridge(vdcName, vdsmDir):
try:
imp.find_module('netinfo') # taken from vdsm rpm
except ImportError:
- logging.error("makeBridge Faild to find vdsm modules!")
+ logging.error("makeBridge Failed to find vdsm modules!")
return False
fReturn = True
--
1.7.1
createVolume in recovered state
by Thang Pham
Hi,
I got VDSM installed by building and installing the RPM. Thank you! I
am now trying out the CLI and attempting to create a virtual machine. I was
able to create a storage domain, storage pool, and volume, but was
unsuccessful at creating the virtual machine.
I used the following code:
...
print "Creating volume..."
print s.createVolume(sdUUID, spUUID, imgUUID, sizeGiB, COW_FORMAT,
SPARSE_VOL, LEAF_VOL, volUUID, "My volume")
print "Creating VM..."
print s.create(dict(vmId=vmId, drives=[dict(poolID=spUUID,
domainID=sdUUID, imageID=imgUUID, volumeID=volUUID)],
...
I get the following output:
Creating volume...
{'status': {'message': 'OK', 'code': 0}, 'uuid':
'e38311cd-b872-48dd-a426-d4c4668a0cb5'}
Creating VM...
{'status': {'message': 'Recovering from crash or Initializing',
'code': 99}}
I browsed through the task list and found that the task to create the
volume is in "recovered state":
# cat e38311cd-b872-48dd-a426-d4c4668a0cb5.task
persistPolicy = auto
nrecoveries = 0
recoveryPolicy = auto
tag = spm
njobs = 1
id = e38311cd-b872-48dd-a426-d4c4668a0cb5
name = createVolume
cleanPolicy = manual
metadataVersion = 1
priority = low
state = recovered
store = /rhev/data-center/ce132359-6405-4ab3-8d2b-7d1a27a625f9/tasks
Is there a reason why creating the volume would crash and recover, but not
complete? The task seems to be just hanging around.
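In case it helps, this is how I am checking on the task afterwards, continuing
the snippet above (a rough sketch; I am guessing that getTaskStatus is the
right verb and at the shape of its reply):
import time
# createVolume is async, so the returned uuid is a task id that should be
# polled before using the volume. 'taskStatus'/'taskState' and the terminal
# state names below are my guesses at the reply layout.
taskUUID = s.createVolume(sdUUID, spUUID, imgUUID, sizeGiB, COW_FORMAT,
                          SPARSE_VOL, LEAF_VOL, volUUID, "My volume")['uuid']
while True:
    st = s.getTaskStatus(taskUUID)['taskStatus']
    if st['taskState'] in ('finished', 'recovered', 'aborting'):
        break
    time.sleep(1)
print st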
Thank you,
-------------------------------------
Thang Pham
IBM Poughkeepsie
Unwanted process creation when importing storage.sd
by agl@us.ibm.com
I have a python program that is making use of the vdscli and storage python
modules. I noticed that my program was creating a bunch of sub-processes that I
did not request. After some investigation I determined the chain of imports
that is causing these processes to be created:
Traceback (most recent call last):
File "./vdsmi", line 7, in <module>
from backend.Backend import Backend
File "/home/aglitke/src/vdsm-interface/backend/Backend.py", line 5, in <module>
from storage.sd import LOCALFS_DOMAIN, DATA_DOMAIN
File "/usr/share/vdsm/storage/sd.py", line 30, in <module>
import resourceFactories
File "/usr/share/vdsm/storage/resourceFactories.py", line 29, in <module>
import image
File "/usr/share/vdsm/storage/image.py", line 27, in <module>
import volume
File "/usr/share/vdsm/storage/volume.py", line 33, in <module>
import task
File "/usr/share/vdsm/storage/task.py", line 58, in <module>
import outOfProcess as oop
File "/usr/share/vdsm/storage/outOfProcess.py", line 38, in <module>
_globalPool = ProcessPool(MAX_HELPERS, GRACE_PERIOD, DEFAULT_TIMEOUT)
outOfProcess.py is initializing a global ProcessPool at module import which is
causing the unwanted processes. In order to prevent this, I believe we need to
either clean up this mess of cross-dependencies or require that users of
outOfProcess call a global processPoolInit() function if the ProcessPool is
needed. I pulled in an awful lot of code in an effort to load two integer
constants :)
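For the second option, something like this lazy-initialization pattern inside
outOfProcess.py would avoid the import-time side effect (a sketch only;
_getGlobalPool is a name I am making up):
# Create the pool on first use instead of at import time; the module-level
# names are the ones already defined in outOfProcess.py.
_globalPool = None
def _getGlobalPool():
    global _globalPool
    if _globalPool is None:
        _globalPool = ProcessPool(MAX_HELPERS, GRACE_PERIOD, DEFAULT_TIMEOUT)
    return _globalPool
Callers inside the module would then fetch the pool via _getGlobalPool()
instead of touching the module-level name directly.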
--
Adam Litke <agl(a)us.ibm.com>
IBM Linux Technology Center
Supporting persistent virtual machines
by agl@us.ibm.com
If VDSM is to be functional in a stand-alone configuration, one of the first
features that must be added is support for persistent domains. Since there will
not be a management server around to "remember" domain configurations, there
should be an option to have VDSM store it. Here is my idea for how it could be
implemented. Your comments and advice are greatly appreciated.
Selecting a location to store VM metadata:
------------------------------------------
The first design decision that needs to be made is where the VM metadata should
be stored. Based on my investigation, there are two potential locations: 1) In
a new directory under '/rhev/data-center', or 2) Inside the Master Storage
Domain ('/rhev/data-center/<spUUID>/mastersd/master/vms/').
Option #2 seems the most promising at first but I have a few concerns. The
'/rhev/data-center/<spUUID>/mastersd/master/vms/' directory already exists
today. What is it currently being used for? Even though I have some VMs
created I do not see any information here. Also, if the concept of Master
Storage Domain is going away, in the future how will we select a SD in which to
store VM definitions?
Option #1 may be an acceptable fallback if issues with #2 cannot be resolved.
However, I would prefer that VM metadata be stored on managed storage so that it
can be recovered if the host OS needs to be reinstalled.
New VDSM APIs required:
-----------------------
Once we have a place to store persistent VM configurations, we will need APIs to
define and undefine VMs. I propose the following:
define(vm_params): Create a persistent VM.
This API will create a persistent VM that will start in the 'Down' state.
The initial user parameters will be saved along with any VDSM derived
parameters to a metadata file.
undefine(vmUUID): Delete a persistent VM.
This API will un-persist a VM that has been previously created with 'define'.
This operation is valid only for Down VMs. The VM will be destroyed and its
persistent config removed.
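To make the intended usage concrete, here is a hypothetical client-side sketch
over xmlrpc (the endpoint, UUID, and exact vm_params keys are placeholders that
just mirror what create() takes today):
import xmlrpclib
s = xmlrpclib.ServerProxy('http://localhost:54321')  # placeholder endpoint
vmId = '44444444-4444-4444-4444-444444444444'  # placeholder UUID
# define() persists the config; the VM then appears in list() as 'Down'.
print s.define({'vmId': vmId, 'vmName': 'myvm', 'memSize': 1024})
# create() on a defined vmId loads the remaining params from stored metadata.
print s.create({'vmId': vmId})
# undefine() is only valid while the VM is Down, so destroy it first.
print s.destroy(vmId)
print s.undefine(vmId)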
Future APIs:
In the future, a new class of APIs for modifying the state of an existing VM
will be required. For example: add/remove disks, change memory, change
cpus/topology, etc. It would not be very user friendly to force redefinition
of a VM just to change some simple hardware details.
Other implications:
-------------------
Adding this feature will have implications on a number of other APIs. I am sure
I am missing many here so please feel free to add information if you know it.
create():
- If passed in a vmId corresponding to a persistent VM, parameters are loaded
from stored metadata.
list(), listNames(), getAllVmStats(), getVmsList():
- These functions must also list persistent VMs.
migrate():
- After successful migration, remove local VM metadata
Comments? What have I missed?
--
Adam Litke <agl(a)us.ibm.com>
IBM Linux Technology Center