Now that we have a demo UI put together, Scott and I wanted to start a serious discussion around what to implement next for the dcloud portal. Here's a list of things to discuss; please comment in-line.
deltacloud-portal-design-points
===============================
Date: 2009-08-25 16:47:26 EDT
1 What's missing in the models
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ Storage (and the whole boatload of issues around image management)
+ Custom attributes
+ Actions

2 Polling and updating
~~~~~~~~~~~~~~~~~~~~~~
+ Lots to do here figuring out how we're going to poll the different driver/providers for updates.
+ Also we need to work out a parallel monitoring API -- do we need separate monitoring drivers?

3 Permissions model
~~~~~~~~~~~~~~~~~~~
+ We know portal users will need and be granted permissions on pools
+ Do we need single-user ownership of pools as well?
+ Do we need to track images and realms on the cloud providers and relate them back to our users, or is the user->pool->account mapping sufficient for this?

4 Validation infrastructure, going down to the drivers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ Names (validity, length, uniqueness)

5 Explicit mods to "Flavor"
~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ RAM
+ CPU
+ Storage

6 oVirt driver
~~~~~~~~~~~~~~
+ dcloud pool would map to a single VM pool in oVirt

7 Quotas
~~~~~~~~
+ We need to be aware of cloud-side quotas
+ We need to manage portal-side quotas on pools

8 Proxy vs. Portal
~~~~~~~~~~~~~~~~~~
+ Is the proxy API the same as the deltacloud API, or is there more going on there

9 What additional non-REST APIs if any do we support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ QMF?
+ Web services (SOAP)?

10 Monitoring/stats display
~~~~~~~~~~~~~~~~~~~~~~~~~~~
How do we fit the widget in?
On 08/25/2009 04:50 PM, Hugh O. Brock wrote:
Now that we have a demo UI put together, Scott and I wanted to start a serious discussion around what to implement next for the dcloud portal. Here's a list of things to discuss; please comment in-line.
deltacloud-portal-design-points
===============================
Date: 2009-08-25 16:47:26 EDT
1 What's missing in the models
+ Storage (and the whole boatload of issues around image management)
+ Custom attributes
+ Actions
- Error Code / Exception processing.
- Validation rules to enable front end validation.
2 Polling and updating
+ Lots to do here figuring out how we're going to poll the different driver/providers for updates.
+ Also we need to work out a parallel monitoring API -- do we need separate monitoring drivers?

3 Permissions model
~~~~~~~~~~~~~~~~~~~
+ We know portal users will need and be granted permissions on pools
+ Do we need single-user ownership of pools as well?
+ Do we need to track images and realms on the cloud providers and relate them back to our users, or is the user->pool->account mapping sufficient for this?

4 Validation infrastructure, going down to the drivers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Names (validity, length, uniqueness)
Ok.. from above.
5 Explicit mods to "Flavor"
+ RAM
+ CPU
+ Storage

6 oVirt driver
~~~~~~~~~~~~~~
+ dcloud pool would map to a single VM pool in oVirt
ovirt as the backend?
7 Quotas
+ We need to be aware of cloud-side quotas
+ We need to manage portal-side quotas on pools

8 Proxy vs. Portal
~~~~~~~~~~~~~~~~~~
- Is the proxy API the same as the deltacloud API, or is there more going on there
9 What additional non-REST APIs if any do we support
+ QMF?
+ Web services (SOAP)?

10 Monitoring/stats display
~~~~~~~~~~~~~~~~~~~~~~~~~~~
How do we fit the widget in?
- Support for creating items based on realms.
- I would get another driver so that other issues can be found.
- Do we talk about provisioning into the cloud?
On Aug 25, 2009, at 4:58 PM, Bryan Kearney wrote:
On 08/25/2009 04:50 PM, Hugh O. Brock wrote:
Now that we have a demo UI put together, Scott and I wanted to start a serious discussion around what to implement next for the dcloud portal. Here's a list of things to discuss; please comment in-line.
deltacloud-portal-design-points
===============================
Date: 2009-08-25 16:47:26 EDT
1 What's missing in the models
+ Storage (and the whole boatload of issues around image management)
+ Custom attributes
+ Actions
- Error Code / Exception processing.
- Validation rules to enable front end validation.
2 Polling and updating
+ Lots to do here figuring out how we're going to poll the different driver/providers for updates.
+ Also we need to work out a parallel monitoring API -- do we need separate monitoring drivers?

3 Permissions model
~~~~~~~~~~~~~~~~~~~
+ We know portal users will need and be granted permissions on pools
+ Do we need single-user ownership of pools as well?
+ Do we need to track images and realms on the cloud providers and relate them back to our users, or is the user->pool->account mapping sufficient for this?

4 Validation infrastructure, going down to the drivers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Names (validity, length, uniqueness)
Ok.. from above.
5 Explicit mods to "Flavor"
+ RAM
+ CPU
+ Storage

6 oVirt driver
~~~~~~~~~~~~~~
+ dcloud pool would map to a single VM pool in oVirt
ovirt as the backend?
7 Quotas
+ We need to be aware of cloud-side quotas
+ We need to manage portal-side quotas on pools

8 Proxy vs. Portal
~~~~~~~~~~~~~~~~~~
- Is the proxy API the same as the deltacloud API, or is there more going on there
9 What additional non-REST APIs if any do we support
+ QMF?
+ Web services (SOAP)?

10 Monitoring/stats display
~~~~~~~~~~~~~~~~~~~~~~~~~~~
How do we fit the widget in?
- Support for creating items based on realms.
- I would get another driver so that other issues can be found.
- Do we talk about provisioning into the cloud?
What about accounting/billing?
Libcloud-list mailing list
Libcloud-list@redhat.com
https://www.redhat.com/mailman/listinfo/libcloud-list
Bryan Kearney wrote:
5 Explicit mods to "Flavor"
+ RAM
+ CPU
+ Storage
i.e. per-instance flavor mods -- since RHEV-M and oVirt both allow these to be customized on a per-instance basis.
6 oVirt driver
+ dcloud pool would map to a single VM pool in oVirt
ovirt as the backend?
Not "the" backend, but another driver in parallel to RHEV-M and EC2. I understood this point a bit differently in the earlier discussion with Hugh, though. When we built the model for dcloud we explicitly avoided the oVirt-like notion of separate resource pools for a single account (i.e. what oVirt calls VM Pools), since these didn't exist in RHEV-M and EC2. To support an oVirt driver, as I understand we are doing now, these need to be accounted for somewhere. However, the dcloud portal pools are internal to the portal and not exposed to the driver. In addition, drivers handle resource limitations on an account basis, whereas oVirt handles them on a VM-pool basis.
To get around this, I was thinking that for the oVirt driver we'd scope accounts around VM pools. We would add an optional "context ID" to the portal account object and make accounts unique on the (username, context_id) pair rather than just on username. For oVirt the context ID would be the VM pool. This would mean that for an oVirt user with access to two VM pools, there would be two Account objects in the portal.
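The (username, context_id) uniqueness rule could be sketched roughly like this in plain Ruby -- all class and field names here are illustrative, not the actual portal model:

```ruby
# Hypothetical sketch of context-scoped portal accounts.
# Uniqueness is on the (username, context_id) pair, so one user can
# hold several accounts, e.g. one per oVirt VM pool.
Account = Struct.new(:username, :context_id, :provider) do
  def key
    [username, context_id]
  end
end

class AccountRegistry
  def initialize
    @accounts = {}
  end

  # Reject a second account with the same (username, context_id) pair,
  # but allow the same username under different contexts.
  def add(account)
    if @accounts.key?(account.key)
      raise ArgumentError, "duplicate account #{account.key.inspect}"
    end
    @accounts[account.key] = account
  end

  def for_user(username)
    @accounts.values.select { |a| a.username == username }
  end
end
```

So a user with access to two VM pools simply registers twice, once per context, and a provider like EC2 that has no such sub-pools would just leave the context ID nil.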
10 Monitoring/stats display
How do we fit the widget in?
- Support for creating items based on realms.
I think we've got this already.
- I would get another driver so that other issues can be found.
i.e. oVirt driver.
- Do we talk about provisioning into the cloud?
So far the only provisioning we've done is image selection -- do we need more than this for version "1.0" ?
Scott
On Tue, Aug 25, 2009 at 05:28:36PM -0400, Scott Seago wrote:
Bryan Kearney wrote:
5 Explicit mods to "Flavor"
+ RAM
+ CPU
+ Storage
i.e. per-instance flavor mods -- since RHEV-M and oVirt both allow these to be customized on a per-instance basis.
6 oVirt driver
+ dcloud pool would map to a single VM pool in oVirt
ovirt as the backend?
Not "the" backend, but another driver in parallel to RHEV-M and EC2. I understood this point a bit differently in the earlier discussion with Hugh, though. When we built the model for dcloud we explicitly avoided the oVirt-like notion of separate resource pools for a single account (i.e. what oVirt calls VM Pools), since these didn't exist in RHEV-M and EC2. To support an oVirt driver, as I understand we are doing now, these need to be accounted for somewhere. However, the dcloud portal pools are internal to the portal and not exposed to the driver. In addition, drivers handle resource limitations on an account basis, whereas oVirt handles them on a VM-pool basis.
To get around this, I was thinking that for the oVirt driver we'd scope accounts around VM pools. We would add an optional "context ID" to the portal account object and make accounts unique on the (username, context_id) pair rather than just on username. For oVirt the context ID would be the VM pool. This would mean that for an oVirt user with access to two VM pools, there would be two Account objects in the portal.
Providers for VMware ESX and Rackspace are a higher priority, BTW. I left this out of my notes originally.
10 Monitoring/stats display
How do we fit the widget in?
- Support for creating items based on realms.
I think we've got this already.
- I would get another driver so that other issues can be found.
i.e. oVirt driver.
- Do we talk about provisioning into the cloud?
So far the only provisioning we've done is image selection -- do we need more than this for version "1.0" ?
Yeah, I think we do, but I'm not sure what. Certainly we need a local image library and a way to move those images around.
--Hugh
Hugh O. Brock wrote:
10 Monitoring/stats display
How do we fit the widget in?
- Support for creating items based on realms.
I think we've got this already.
- I would get another driver so that other issues can be found.
i.e. oVirt driver.
- Do we talk about provisioning into the cloud?
So far the only provisioning we've done is image selection -- do we need more than this for version "1.0" ?
Yeah, I think we do, but I'm not sure what. Certainly we need a local image library and a way to move those images around.
--Hugh
So to do this we'd need the drivers to support image upload, and the local image library would need to define the superset of required image attributes and metadata for all supported drivers. On the portal modeling side, this local image object will be a separate object type from the cloud image object we've already got. The local image object contains the actual image and can't be used directly to create an instance -- it's got to be uploaded into the cloud first. The cloud image object simply holds metadata for an image already hosted in a particular cloud provider -- with user permissions, etc.
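As a rough illustration of the split described above -- the type names and the driver's upload interface are hypothetical, not the portal's actual code:

```ruby
# Illustrative sketch only: two distinct image object types.
# A LocalImage holds the actual bits in the local library and cannot
# launch an instance directly; a CloudImage is just metadata for an
# image already hosted at a particular cloud provider.
LocalImage = Struct.new(:name, :path, :metadata)
CloudImage = Struct.new(:name, :provider, :provider_image_id, :owner)

# Hypothetical driver interface: uploading a local image into the
# cloud yields a CloudImage handle, which is what instances are
# created from.
class Driver
  def initialize(provider)
    @provider = provider
    @counter = 0
  end

  def upload_image(local_image, owner)
    @counter += 1
    CloudImage.new(local_image.name, @provider, "img-#{@counter}", owner)
  end
end
```

The key point the sketch captures is that the two types never merge: the local library object always has to pass through a driver upload before anything instance-related can happen.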
Scott
So to do this we'd need the drivers to support image upload, and the local image library would need to define the superset of required image attributes and metadata for all supported drivers. On the portal modeling side, this local image object will be a separate object type from the cloud image object we've already got. The local image object contains the actual image and can't be used directly to create an instance -- it's got to be uploaded into the cloud first. The cloud image object simply holds metadata for an image already hosted in a particular cloud provider -- with user permissions, etc.
I also wonder if we need an ingestion pipeline, to crack open images and tweak copies before sending them to the final provider. Things like installing the ec2-tools RPM when pushing to amazon, vmware-tools when pushing to a VMware provider, etc.
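A minimal sketch of that pipeline idea -- provider names and tweak steps here are placeholders, assuming nothing about the real tooling:

```ruby
# Hedged sketch of an ingestion pipeline: a chain of per-provider
# tweaks applied to a working copy of an image description before
# upload, leaving the original untouched.
class IngestionPipeline
  def initialize
    @steps = Hash.new { |h, k| h[k] = [] }
  end

  # Register a tweak step (a block) for one target provider.
  def register(provider, &step)
    @steps[provider] << step
  end

  # Apply each registered step for the target provider to a copy
  # of the image; each step returns the transformed image.
  def prepare(image, provider)
    @steps[provider].inject(image.dup) { |img, step| step.call(img) }
  end
end
```

Real steps would shell out to image-surgery tooling rather than edit a hash, but the shape -- copy, transform per target, then push -- is the same.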
-Bob
On Wed, Aug 26, 2009 at 09:55:42AM -0400, Bob McWhirter wrote:
So to do this we'd need the drivers to support image upload, and the local image library would need to define the superset of required image attributes and metadata for all supported drivers. On the portal modeling side, this local image object will be a separate object type from the cloud image object we've already got. The local image object contains the actual image and can't be used directly to create an instance -- it's got to be uploaded into the cloud first. The cloud image object simply holds metadata for an image already hosted in a particular cloud provider -- with user permissions, etc.
I also wonder if we need an ingestion pipeline, to crack open images and tweak copies before sending them to the final provider. Things like installing the ec2-tools RPM when pushing to amazon, vmware-tools when pushing to a VMware provider, etc.
Yeah, I think we do. I hope we can leverage libguestfs and virt-v2v for at least some of this.
--H
Yeah, I think we do. I hope we can leverage libguestfs and virt-v2v for at least some of this.
fwiw, Marek's been thinking along the lines of libguestfs, even for our normal appliance creation. Instead of starting from scratch with each appliance, just copy and crack a base JeOS/AOS appliance, then start adding packages to customize.
Sorta thinking this could go towards our response to susestudio.com also, to improve appliance creation times.
-Bob
On 08/26/2009 10:13 AM, Bob McWhirter wrote:
Yeah, I think we do. I hope we can leverage libguestfs and virt-v2v for at least some of this.
fwiw, Marek's been thinking along the lines of libguestfs, even for our normal appliance creation. Instead of starting from scratch with each appliance, just copy and crack a base JeOS/AOS appliance, then start adding packages to customize.
Sorta thinking this could go towards our response to susestudio.com also, to improve appliance creation times.
-Bob
Bob.. have him take a look at:
http://github.com/bkearney/adk2/tree/master
It ties together appliance creator, ec2, pungi, and a few others. If there are things missing I would love to hear about them.
I had a Ruby version earlier, but Python was easier to integrate with the upstream tools. It currently runs on F11.
-- bk
Bob McWhirter wrote:
So to do this we'd need the drivers to support image upload, and the local image library would need to define the superset of required image attributes and metadata for all supported drivers. On the portal modeling side, this local image object will be a separate object type from the cloud image object we've already got. The local image object contains the actual image and can't be used directly to create an instance -- it's got to be uploaded into the cloud first. The cloud image object simply holds metadata for an image already hosted in a particular cloud provider -- with user permissions, etc.
I also wonder if we need an ingestion pipeline, to crack open images and tweak copies before sending them to the final provider. Things like installing the ec2-tools RPM when pushing to amazon, vmware-tools when pushing to a VMware provider, etc.
Speaking from personal opinion, and borrowing from a little grid-computing experience: you'll need this approach, and some generalized tools around it, to be successful.
For some in-house lessons learned from MRG, take a look at Condor's file declarations in job submission, its use of Stork to separate data-transfer management (via a URI/plugin system) from job-execution logic, its use of job routing to transform job descriptions for the new/different grid they're going into, etc. You might also want to look at other grant-funded research in datagrid systems like SRB or iRODS (irods.org). You'll need to either build or leverage existing technology for:
* A plug-in oriented data transfer management service that understands things like network bandwidth limiting, concurrency limits associated with various file service implementations, etc.
* A rules engine for transforming the images.
* A resource monitoring/management lingua franca that the rules engine can use.
* An audit trail for proof of control over the disk images (auditing both transfers and changes made).
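The first bullet might look something like this in skeletal form; the URI schemes, limits, and handler interface are assumptions for illustration only:

```ruby
# Sketch of a plug-in oriented transfer manager: per-scheme handlers
# plus a per-scheme cap on in-flight transfers. In this single-threaded
# sketch the cap is trivially satisfied; a real service would enforce
# it across concurrent workers.
class TransferManager
  def initialize
    @handlers = {}
    @limits = Hash.new(4)  # assumed default concurrency cap
    @active = Hash.new(0)
  end

  def register(scheme, limit: 4, &handler)
    @handlers[scheme] = handler
    @limits[scheme] = limit
  end

  def transfer(uri, dest)
    scheme = uri.split("://").first
    handler = @handlers.fetch(scheme) { raise "no handler for #{scheme}" }
    raise "concurrency limit reached for #{scheme}" if @active[scheme] >= @limits[scheme]
    @active[scheme] += 1
    begin
      handler.call(uri, dest)
    ensure
      @active[scheme] -= 1
    end
  end
end
```

Bandwidth limiting and audit logging would hang off the same choke point: every transfer already passes through one method where throttling and record-keeping can live.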
Again, my $.02. Just glad you guys are looking at addressing the problem set.
-- Lans Carstensen