Some notes on initial integration of Aeolus and Matahari
by Perry Myers
cross-posting to Aeolus and Matahari upstream projects...
We discussed a pilot project for integrating Matahari into
Aeolus. The driver for this is that we need a way for folks upstream
to deploy a single-node Aeolus environment managing one or more Fedora
hosts using kvm/libvirt (or VMware).
The problem is that we don't have a way in this environment to
bootstrap; specifically, for Aeolus to get the IP addresses of the guests
as they are booting on the libvirt/kvm/VMware hosts.
I thought that this would be a good simple first use case for running
Matahari in Aeolus managed guests. So here's what I propose:
* Aeolus uses Image Factory to build images... When those images are
Fedora, RHEL, or Windows, IF will include Matahari and puppet in
the core images
* Aeolus will inject basic bootstrapping info for Matahari, like the
broker IP address and broker auth info. (This ignores Carl's more
complex post-boot config process for the time being; this process is
quick and dirty just to get stuff working, and then we can expand to
use Carl's process as a second step.)
* This injection could happen in a few ways:
- During image build
- During image deployment (using guestfish to inject a config file)
- Other? Could rely on DNS SRV records w/ no broker auth (again this
is meant for developer environments, not production usage)
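A guest-side sketch of the DNS SRV option (the service name
_matahari._tcp is purely illustrative, not an established convention):
per RFC 2782 the SRV record with the lowest priority value wins, and
Ruby's stdlib resolver can fetch the records.

```ruby
require 'resolv'

# Pick the broker from a list of SRV records: the lowest priority value
# wins, per RFC 2782. Records are anything responding to #priority,
# #port and #target.
def pick_broker(records)
  best = records.min_by { |r| r.priority }
  best && "#{best.target}:#{best.port}"
end

# On a real guest this would be (domain is illustrative):
#   records = Resolv::DNS.open do |dns|
#     dns.getresources('_matahari._tcp.example.com',
#                      Resolv::DNS::Resource::IN::SRV)
#   end

# Stub records for illustration:
Record = Struct.new(:priority, :port, :target)
records = [Record.new(10, 5672, 'broker2.example.com'),
           Record.new(0,  5672, 'broker1.example.com')]
puts pick_broker(records)   # -> broker1.example.com:5672
```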
* When an instance is booted, Matahari comes up, uses the injected
config to connect to the broker, and sends out a "hey, I'm a new guest
on the bus" event that includes the IP address of the guest
* A simple QMF console, written to run as part of the Conductor
infrastructure, monitors the bus for new 'I'm here' events; when it
sees one, it grabs the IP address and stores it in the database
* It then calls a processConfig method (on which agent?) which hands a
puppet file to a guest agent to process via a call to the puppet tool.
The return from that method should be success or failure (probably with
logging info). This means puppet needs to be baked into the image as
well, along with Matahari.
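The injected bootstrap data could be as small as a sysconfig-style
fragment; the file path and variable names below are purely
illustrative, not Matahari's actual configuration keys:

```sh
# /etc/sysconfig/matahari (hypothetical keys, for illustration only)
MATAHARI_BROKER_HOST=192.0.2.10   # broker running alongside Conductor
MATAHARI_BROKER_PORT=5672
MATAHARI_BROKER_USER=aeolus
MATAHARI_BROKER_PASSWORD=changeme
```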
TODOs:
Matahari Team:
* Write host agent event for 'I'm here, configure me, here's my IP
address'
* Write method (which agent?) for processConfig(puppet-conf)
Aeolus Team:
* Need to write a simple QMF console app, part of the Conductor, that
registers for 'new guest' events and, when it sees one, reports it to
Conductor and calls the processConfig method with the right puppet
script
* Run a broker alongside the Conductor and other Aeolus core
infrastructure. The IP address of the host running this is what needs
to be pushed into images for bootstrapping
The end goal here should be making it EASY for someone to grab Aeolus
and set it up quickly to manage local resources (i.e. we don't want to
_require_ people playing with Aeolus to need an EC2 account)
Andrew/Adam: Random question, what links all of the top level objects on
a single host together? Is there a common/shared UUID exposed by each
Agent? If a guest comes up and the Host agent appears, I can query that
Host object for UUID. But how do I know which Network Agent is the one
on the same guest as the Host Agent with the UUID that I just got?
Perry
[PATCH configure] add timestamp to configure rpm release (rev 2)
by Mo Morsi
---
contrib/deltacloud-configure.spec | 2 +-
rake/rpmtask.rb | 14 +++++++++++---
2 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/contrib/deltacloud-configure.spec b/contrib/deltacloud-configure.spec
index 670d401..d88d43a 100644
--- a/contrib/deltacloud-configure.spec
+++ b/contrib/deltacloud-configure.spec
@@ -4,7 +4,7 @@
Summary: DeltaCloud Configure Puppet Recipe
Name: deltacloud-configure
Version: 2.0.0
-Release: 2%{?dist}
+Release: 2%{?dist}%{timestamp}
Group: Applications/Internet
License: GPLv2+
diff --git a/rake/rpmtask.rb b/rake/rpmtask.rb
index ff2ee66..0dd361c 100644
--- a/rake/rpmtask.rb
+++ b/rake/rpmtask.rb
@@ -18,6 +18,9 @@ module Rake
# RPM build dir
attr_accessor :topdir
+ # Include a timestamp in the rpm
+ attr_accessor :include_timestamp
+
def initialize(rpm_spec)
init(rpm_spec)
yield self if block_given?
@@ -25,6 +28,7 @@ module Rake
end
def init(rpm_spec)
+ @include_timestamp = false
@rpm_spec = rpm_spec
# parse this out of the rpmbuild macros,
@@ -63,13 +67,17 @@ module Rake
directory "#{@topdir}/SPECS"
desc "Build the rpms"
- task :rpms => [rpm_file]
+ task :rpms, [:include_timestamp] => [rpm_file]
# FIXME properly determine :package build artifact(s) to copy to sources dir
- file rpm_file => [:package, "#{@topdir}/SOURCES", "#{@topdir}/SPECS"] do
+ file rpm_file, [:include_timestamp] => [:package, "#{@topdir}/SOURCES", "#{@topdir}/SPECS"] do |t,args|
+ @include_timestamp = args.include_timestamp != "false"
cp "#{package_dir}/#{@name}-#{@version}.tgz", "#{@topdir}/SOURCES/"
cp @rpm_spec, "#{@topdir}/SPECS"
- sh "#{@rpmbuild_cmd} --define '_topdir #{@topdir}' -ba #{@rpm_spec}"
+ sh "#{@rpmbuild_cmd} " +
+ "--define '_topdir #{@topdir}' " +
+ "--define 'timestamp .#{@include_timestamp ? Time.now.strftime("%Y%m%d%k%M%s").gsub(/\s/, '') : ''}' " +
+ "-ba #{@rpm_spec}"
end
end
--
1.7.2.3
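With this patch, `rake rpms` appends a timestamp to the Release tag and
`rake rpms[false]` skips it. A quick check of the timestamp expression
the patch uses (confirming that the blank padding from %k really is
stripped, leaving digits only):

```ruby
# Same expression as in rpmtask.rb: date + blank-padded hour (%k) +
# minutes + epoch seconds, with any whitespace stripped out.
stamp = Time.now.strftime("%Y%m%d%k%M%s").gsub(/\s/, '')
puts stamp   # digits only, e.g. 20110215154212978...
```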
Stateful Vs Stateless Instances Findings
by Ian Main
We had a meeting this morning and we managed to hash some things out on
this subject. Here is the result. I am posting this to both
aeolus-devel and deltacloud-devel as it impacts both.
Stateful vs Stateless instances and the cloud
---------------------------------------------
In an effort to support private cloud, we had to sort out what to do about
'stateful' instances. We're defining a stateful instance as one where
the image for the instance does not get destroyed when the instance is
stopped and it is possible to restart it with the same state.
Also, the model that Aeolus Conductor is using regarding images is the
same as EC2's, where a given image can be used to launch as many instances
as needed. This is not the same as the model used by many private cloud
providers, where an image is used directly by an instance/VM and it is
not possible for it to be used by multiple running instances.
We want to tackle these questions in two different phases. Stateless
cloud on supported providers is presently the most important requirement,
but we will need both soon.
STATELESS CLOUD
---------------
The way that some providers (such as rhevm and vmware) launch instances
now is not what we need. Cloud launch must start an instance in such a
way that multiple launches can be performed from the same image. To
support that, what really needs to happen is that prior to launch the
image is either cloned or snapshotted.
Below are a couple of ways we thought of to do this:
- Deltacloud API provider driver start call must clone disk on startup
if the provider does not already do that.
- On destroy, API must clean up cloned disk image if the provider does
not already do it.
- On shutdown, the API leaves the disk alone.
OR
- Deltacloud API provides introspection into the driver and specifies
  whether or not an instance start will clone the disk for us.
- It also needs to specify if the storage needs to be destroyed on
instance shutdown (some clouds do, some don't).
- Deltacloud API provides APIs for performing the clone and then cleanup
of cloned images.
- It is then up to the client to perform the right sequence of actions
to get the desired behavior.
Either of these will work and allow us to make the Conductor model
consistent with existing public cloud providers.
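The client-driven sequence under the second option could look roughly
like this (all the method names here, such as clones_on_start? and the
log strings, are hypothetical illustrations, not existing Deltacloud
API calls):

```ruby
# Hypothetical driver capability introspection; a provider like rhevm
# would answer false to both questions, a public cloud true.
class StubDriver
  def clones_on_start?;       false; end
  def destroys_disk_on_stop?; false; end
end

# Client-driven launch: clone the disk ourselves only when the
# provider's own start call would not do it for us.
def launch(driver, image, log = [])
  log << "clone #{image}" unless driver.clones_on_start?
  log << "start #{image}"
  log
end

# Client-driven teardown: clean up the cloned disk only when the
# provider does not do it on its own.
def destroy(driver, image, log = [])
  log << "stop #{image}"
  log << "cleanup #{image}" unless driver.destroys_disk_on_stop?
  log
end

puts launch(StubDriver.new, 'f14-img').inspect
puts destroy(StubDriver.new, 'f14-img').inspect
```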
STATEFUL CLOUD INSTANCES
------------------------
Stateful instances have the ability to be stopped without destroying the
disk image and then resumed from the same state. In order to support
this we need:
Condor:
- Condor must have "suspend"/stop/restart support.
- Image must be built to target stateful - we need a toggle at image
build time and at launch time.
Deltacloud API:
- Deltacloud API must support a stop that does not clean up the disk
  image, but only where the provider supports it.
- Introspection as to whether or not a stop is valid. Currently stop
  and destroy are the same op on EC2, but 'stop' should be invalid
  there. Stop is only valid for stateful instances; destroy cleans up
  the disk image. This acts as introspection as to whether the instance
  is stateful or not. It may not be needed depending on how we support
  disk clone/cleanup as defined above.
Conductor:
- If an instance on a stateful-only cloud is designated stateless, we must
track that extra condition in the conductor (because stateful/stateless
will act the same). This lets us know if we need to clean up disk
images etc.
- We could provide a 'stateful/stateless' toggle at launch time that
  warns the user when there is no match and that they must build a
  stateful instance for it to work.
Thanks!
Ian
[PATCH conductor] Delayed Job
by Jan Provazník
From: Jan Provaznik <jprovazn@redhat.com>
Delayed_job is used for running background tasks (for now, copying an ssh
key to a running instance). Because it runs as a separate process, we need
to start it when starting the appliance.
The delayed_job package, version 2.0.6, should be added to our rpm and can be
downloaded from koji: http://koji.fedoraproject.org/koji/taskinfo?taskID=2863673
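For reference, delayed_job stores each job as a YAML-serialized handler
object in the delayed_jobs table this patch creates; the worker
deserializes the handler and calls #perform on it. A minimal sketch of
that mechanism (the CopySshKeyJob class is a made-up stand-in for the
real ssh-key task):

```ruby
require 'yaml'

# Hypothetical job class: delayed_job calls #perform on the
# deserialized handler object (here: copying an ssh key, per the patch).
class CopySshKeyJob
  attr_reader :instance_id
  def initialize(instance_id)
    @instance_id = instance_id
  end
  def perform
    "copied key to instance #{@instance_id}"
  end
end

# Enqueue side: the job object is YAML-serialized into the
# delayed_jobs.handler column created by the migration.
handler = YAML.dump(CopySshKeyJob.new(42))

# Worker side: deserialize the handler and invoke it.
job = YAML.safe_load(handler, permitted_classes: [CopySshKeyJob])
puts job.perform   # -> copied key to instance 42
```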
---
aeolus-conductor.spec.in | 8 ++
conf/conductor-delayed_job | 72 ++++++++++++++++++++
src/Rakefile | 7 ++
src/config/environment.rb | 1 +
src/config/initializers/delayed_job.rb | 2 +
.../migrate/20110124103216_create_delayed_jobs.rb | 21 ++++++
src/script/delayed_job | 5 ++
7 files changed, 116 insertions(+), 0 deletions(-)
create mode 100755 conf/conductor-delayed_job
create mode 100644 src/config/initializers/delayed_job.rb
create mode 100644 src/db/migrate/20110124103216_create_delayed_jobs.rb
create mode 100755 src/script/delayed_job
diff --git a/aeolus-conductor.spec.in b/aeolus-conductor.spec.in
index 45bbe9f..7af22c1 100644
--- a/aeolus-conductor.spec.in
+++ b/aeolus-conductor.spec.in
@@ -33,6 +33,7 @@ Requires: rubygem(deltacloud-image-builder-agent)
Requires: rubygem(imagebuilder-console)
Requires: rubygem(rack-restful_submit)
Requires: rubygem(sunspot_rails)
+Requires: rubygem(delayed_job)
Requires: postgresql
Requires: postgresql-server
Requires: ruby-postgres
@@ -123,6 +124,7 @@ mv %{buildroot}/%{app_root}/doc %{buildroot}/%{app_root}/test %{buildroot}/%{doc
%{__cp} conf/conductor-dbomatic %{buildroot}%{_initrddir}
%{__cp} conf/conductor-condor_refreshd %{buildroot}%{_initrddir}
%{__cp} conf/conductor-image_builder_service %{buildroot}%{_initrddir}
+%{__cp} conf/conductor-delayed_job %{buildroot}%{_initrddir}
%{__cp} conf/aeolus-conductor-httpd.conf %{buildroot}%{_sysconfdir}/httpd/conf.d/aeolus-conductor.conf
%{__cp} conf/aeolus-conductor.logrotate %{buildroot}%{_sysconfdir}/logrotate.d/aeolus-conductor
%{__cp} conf/aeolus-conductor.sysconf %{buildroot}%{_sysconfdir}/sysconfig/aeolus-conductor
@@ -153,6 +155,7 @@ touch %{buildroot}%{_localstatedir}/log/%{name}/dbomatic.log
touch %{buildroot}%{_localstatedir}/run/%{name}/event_log_position
touch %{buildroot}%{_localstatedir}/log/%{name}/condor_refreshd.log
touch %{buildroot}%{_localstatedir}/log/%{name}/image_builder_service.log
+touch %{buildroot}%{app_root}/log/delayed_job.log
# remove the files not needed for the installation
%{__rm} -f %{buildroot}%{app_root}/vendor/plugins/will_paginate/.gitignore
@@ -186,6 +189,7 @@ getent passwd aeolus >/dev/null || \
/sbin/chkconfig --add conductor-dbomatic
/sbin/chkconfig --add conductor-condor_refreshd
/sbin/chkconfig --add conductor-image_builder_service
+/sbin/chkconfig --add conductor-delayed_job
%preun daemons
if [ $1 = 0 ]; then
@@ -197,6 +201,8 @@ if [ $1 = 0 ]; then
/sbin/chkconfig --del conductor-condor_refreshd
/sbin/service conductor-image_builder_service stop > /dev/null 2>&1
/sbin/chkconfig --del conductor-image_builder_service
+/sbin/service conductor-delayed_job stop > /dev/null 2>&1
+/sbin/chkconfig --del conductor-delayed_job
fi
%files
@@ -217,6 +223,7 @@ fi
%{_initrddir}/conductor-dbomatic
%{_initrddir}/conductor-condor_refreshd
%{_initrddir}/conductor-image_builder_service
+%{_initrddir}/conductor-delayed_job
%config(noreplace) %{_sysconfdir}/logrotate.d/%{name}
%config(noreplace) %{_sysconfdir}/sysconfig/aeolus-conductor
%config(noreplace) %{_sysconfdir}/sysconfig/conductor-rails
@@ -224,6 +231,7 @@ fi
%attr(-, aeolus, aeolus) %{_localstatedir}/lib/%{name}
%attr(-, aeolus, aeolus) %{_localstatedir}/run/%{name}
%attr(-, aeolus, aeolus) %{_localstatedir}/log/%{name}
+%attr(-, aeolus, aeolus) %{app_root}/log/delayed_job.log
%doc AUTHORS COPYING
%files doc
diff --git a/conf/conductor-delayed_job b/conf/conductor-delayed_job
new file mode 100755
index 0000000..b0fdc2f
--- /dev/null
+++ b/conf/conductor-delayed_job
@@ -0,0 +1,72 @@
+#!/bin/bash
+#
+#
+# conductor-delayed_job startup script for conductor-delayed_job
+#
+# chkconfig: - 99 01
+# description: conductor-delayed_job is service for running conductor
+# background jobs
+
+[ -r /etc/sysconfig/conductor-rails ] && . /etc/sysconfig/conductor-rails
+
+[ -r /etc/sysconfig/aeolus-conductor ] && . /etc/sysconfig/aeolus-conductor
+
+RAILS_ENV="${RAILS_ENV:-production}"
+CONDUCTOR_DIR="${CONDUCTOR_DIR:-/usr/share/aeolus-conductor}"
+AEOLUS_USER="${AEOLUS_USER:-aeolus}"
+DJOB_LOCKFILE="${DJOB_LOCKFILE:-/var/lock/subsys/conductor-delayed_job}"
+DJOB_PIDFILE="${DJOB_PIDFILE:-/var/run/aeolus-conductor/}"
+
+DJOB_PATH=$CONDUCTOR_DIR/script/
+DJOB_PROG=delayed_job
+
+. /etc/init.d/functions
+
+start() {
+ echo -n "Starting conductor-delayed_job: "
+
+ daemon --user=$AEOLUS_USER $DJOB_PATH/$DJOB_PROG start --pid-dir=$DJOB_PIDFILE
+ RETVAL=$?
+ if [ $RETVAL -eq 0 ] && touch $DJOB_LOCKFILE ; then
+ echo_success
+ echo
+ else
+ echo_failure
+ echo
+ fi
+}
+
+stop() {
+ echo -n "Shutting down conductor-delayed_job: "
+ $DJOB_PATH/$DJOB_PROG stop --pid-dir=$DJOB_PIDFILE
+ RETVAL=$?
+ if [ $RETVAL -eq 0 ] && rm -f $DJOB_LOCKFILE ; then
+ echo_success
+ echo
+ else
+ echo_failure
+ echo
+ fi
+}
+
+case "$1" in
+ start)
+ start
+ ;;
+ stop)
+ stop
+ ;;
+ restart)
+ stop
+ start
+ ;;
+ force-reload)
+ restart
+ ;;
+ *)
+ echo "Usage: conductor-delayed_job {start|stop|restart}"
+ exit 1
+ ;;
+esac
+
+exit $RETVAL
diff --git a/src/Rakefile b/src/Rakefile
index f2d8d21..ad7f788 100644
--- a/src/Rakefile
+++ b/src/Rakefile
@@ -12,4 +12,11 @@ begin
rescue LoadError
end
+begin
+ #gem 'delayed_job', :version => '~>2.0.4'
+ require 'delayed/tasks'
+rescue LoadError
+ STDERR.puts "Run `rake gems:install` to install delayed_job"
+end
+
require 'tasks/rails'
diff --git a/src/config/environment.rb b/src/config/environment.rb
index 986cfe4..cedcb32 100644
--- a/src/config/environment.rb
+++ b/src/config/environment.rb
@@ -54,6 +54,7 @@ Rails::Initializer.run do |config|
config.gem "rb-inotify"
config.gem 'rack-restful_submit', :version => '1.1.2'
config.gem 'sunspot_rails', :lib => 'sunspot/rails'
+ config.gem 'delayed_job', :version => '~>2.0.4'
config.middleware.swap Rack::MethodOverride, 'Rack::RestfulSubmit'
diff --git a/src/config/initializers/delayed_job.rb b/src/config/initializers/delayed_job.rb
new file mode 100644
index 0000000..a8684b4
--- /dev/null
+++ b/src/config/initializers/delayed_job.rb
@@ -0,0 +1,2 @@
+Delayed::Worker.backend = :active_record
+Delayed::Worker.max_attempts = 1
diff --git a/src/db/migrate/20110124103216_create_delayed_jobs.rb b/src/db/migrate/20110124103216_create_delayed_jobs.rb
new file mode 100644
index 0000000..943ff9b
--- /dev/null
+++ b/src/db/migrate/20110124103216_create_delayed_jobs.rb
@@ -0,0 +1,21 @@
+class CreateDelayedJobs < ActiveRecord::Migration
+ def self.up
+ create_table :delayed_jobs, :force => true do |table|
+ table.integer :priority, :default => 0 # Allows some jobs to jump to the front of the queue
+ table.integer :attempts, :default => 0 # Provides for retries, but still fail eventually.
+ table.text :handler # YAML-encoded string of the object that will do work
+ table.text :last_error # reason for last failure (See Note below)
+ table.datetime :run_at # When to run. Could be Time.zone.now for immediately, or sometime in the future.
+ table.datetime :locked_at # Set when a client is working on this object
+ table.datetime :failed_at # Set when all retries have failed (actually, by default, the record is deleted instead)
+ table.string :locked_by # Who is working on this object (if locked)
+ table.timestamps
+ end
+
+ add_index :delayed_jobs, [:priority, :run_at], :name => 'delayed_jobs_priority'
+ end
+
+ def self.down
+ drop_table :delayed_jobs
+ end
+end
diff --git a/src/script/delayed_job b/src/script/delayed_job
new file mode 100755
index 0000000..edf1959
--- /dev/null
+++ b/src/script/delayed_job
@@ -0,0 +1,5 @@
+#!/usr/bin/env ruby
+
+require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
+require 'delayed/command'
+Delayed::Command.new(ARGV).daemonize
--
1.7.4
[PATCH aeolus] Completes test coverage on Redmine Feature 408
by steve linabery
From: Steve Linabery <slinabery@gmail.com>
I confirmed that existing rspec/cuke tests already cover new pool creation,
as well as creating a new instance (mock) associated with a new pool. This
feature step just makes sure that it's possible to create more than one pool
at the same time.
---
src/features/pool.feature | 26 ++++++++++++++++++++++++++
1 files changed, 26 insertions(+), 0 deletions(-)
diff --git a/src/features/pool.feature b/src/features/pool.feature
index 1eced2a..23260a2 100644
--- a/src/features/pool.feature
+++ b/src/features/pool.feature
@@ -54,3 +54,29 @@ Feature: Manage Pools
And I should be on the resources pools page
And I should not see "Redhat Voyager Pool"
And I should not see "Amazon Startrek Pool"
+
+
+ Scenario: Create multiple pools
+ Given I am on the pools page
+ And there is not a pool named "mockpool"
+ And there is not a pool named "foopool"
+ When I follow "New Pool"
+ Then I should be on the new resources pool page
+ And I should see "Create a new Pool"
+ When I fill in "pool_name" with "mockpool"
+ And I press "Save"
+ Then I should be on the resources pool page
+ And I should see "Pool added"
+ And I should see "mockpool"
+ And I should have a pool named "mockpool"
+ When I follow "New Pool"
+ Then I should be on the new resources pool page
+ And I should see "Create a new Pool"
+ When I fill in "pool_name" with "foopool"
+ And I press "Save"
+ Then I should be on the resources pool page
+ And I should see "Pool added"
+ And I should see "mockpool"
+ And I should see "foopool"
+ And I should have a pool named "mockpool"
+ And I should have a pool named "foopool"
--
1.7.4
[PATCH conductor] Group packages fix
by Jan Provazník
From: Jan Provaznik <jprovazn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=678597
When preparing repos, packages which don't exist in the repo are removed
from the group packages list.
---
.../util/repository_manager/comps_repository.rb | 8 +++++++-
1 files changed, 7 insertions(+), 1 deletions(-)
diff --git a/src/app/util/repository_manager/comps_repository.rb b/src/app/util/repository_manager/comps_repository.rb
index 494fbac..2d8ab37 100644
--- a/src/app/util/repository_manager/comps_repository.rb
+++ b/src/app/util/repository_manager/comps_repository.rb
@@ -23,10 +23,11 @@ class CompsRepository < AbstractRepository
end
def prepare_repo
+ @all_packages = get_packages
Dir.mkdir(@cache_dir) unless File.directory?(@cache_dir)
File.open(@cache_file, 'w') do |f|
Marshal.dump({
- :packages => get_packages,
+ :packages => @all_packages,
:groups => get_groups,
:categories => get_categories
}, f)
@@ -46,6 +47,9 @@ class CompsRepository < AbstractRepository
end
def get_groups
+ @all_packages_hash = {}
+ @all_packages.each {|p| @all_packages_hash[p[:name]] = true}
+
group_nodes.map do |g|
pkgs = group_packages(g)
next if pkgs.empty?
@@ -138,6 +142,8 @@ class CompsRepository < AbstractRepository
pkgs = {}
group_node.xpath('packagelist/packagereq').each do |p|
pkg_name = p.text
+ # skip pkg if it's not in all packages list
+ next unless @all_packages_hash[pkg_name]
(pkgs[pkg_name] ||= {})[:type] = p.attr('type')
end
return pkgs
--
1.7.4
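The core of the fix, stripped down to plain Ruby: build a lookup hash of
the packages that actually exist in the repo, then drop group package
entries that are not in it (the sample package names are illustrative):

```ruby
# Lookup hash of packages present in the repo, as in get_groups.
all_packages = [{:name => 'bash'}, {:name => 'ruby'}]
all_packages_hash = {}
all_packages.each { |p| all_packages_hash[p[:name]] = true }

# A group's packagelist may reference packages the repo lacks;
# keep only the ones that exist, as in group_packages.
group_packagelist = ['bash', 'no-such-package', 'ruby']
kept = group_packagelist.select { |name| all_packages_hash[name] }
puts kept.inspect   # -> ["bash", "ruby"]
```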
[PATCH] Image Warehouse API library for conductor
by Maros Zatko
From: Maros Zatko <mzatko@redhat.com>
---
src/lib/warehouse_client.rb | 163 +++++++++++++++++++++++++++++++++++++++++++
1 files changed, 163 insertions(+), 0 deletions(-)
create mode 100644 src/lib/warehouse_client.rb
diff --git a/src/lib/warehouse_client.rb b/src/lib/warehouse_client.rb
new file mode 100644
index 0000000..55affdf
--- /dev/null
+++ b/src/lib/warehouse_client.rb
@@ -0,0 +1,163 @@
+require 'rubygems'
+
+require 'rest-client'
+require 'nokogiri'
+
+#TODO: perform iwhd version-dependent URI mapping
+
+module Warehouse
+
+ class BucketObject
+ attr_reader :key
+
+ def initialize(connection, key, bucket)
+ @connection = connection
+ @key = key
+ @bucket = bucket
+ @path = "/#{@bucket.name}/#{@key}"
+ end
+
+ def self.create(connection, key, bucket, body, attrs = {})
+ obj = new(connection, key, bucket)
+ obj.set_body(body)
+ obj.set_attrs(attrs)
+ obj
+ end
+
+ def body
+ @connection.do_request @path, :plain => true
+ end
+
+ def set_body(body)
+ @connection.do_request @path, :content => body, :method => :put
+ end
+
+ def attr_list
+ result = @connection.do_request @path, :content => 'op=parts', :method => :post
+ return result.xpath('/object/object_attr/@name').to_a.map {|item| item.value}
+ end
+
+ def attrs(list)
+ attrs = {}
+ list.each do |att|
+ attrs[att] = (@connection.do_request("#{@path}/#{att}", :plain => true) rescue nil)
+ end
+ attrs
+ end
+
+ def attr(name)
+ attrs([name]).first
+ end
+
+ def set_attrs(hash)
+ hash.each do |name, content|
+ set_attr(name, content)
+ end
+ end
+
+ def set_attr(name, content)
+ path = "#{@path}/#{name}"
+ @connection.do_request path, :content => content, :method => :put
+ end
+
+ def delete!
+ @connection.do_request @path, :method => :delete
+ true
+ end
+
+ end
+
+ class Bucket
+ attr_accessor :name
+
+ def initialize(name, connection)
+ @name = name
+ @connection = connection
+ end
+
+ def to_s
+ "Bucket: #{@name}"
+ end
+
+ def object_names
+ result = @connection.do_request "/#{@name}"
+ result.xpath('/objects/object').map do |obj|
+ obj.at_xpath('./key/text()').to_s
+ end
+ end
+
+ def objects
+ object_names.map do |name|
+ object(name)
+ end
+ end
+
+ def object(key)
+ BucketObject.new @connection, key, self
+ end
+
+ def create_object(key, body, attrs)
+ BucketObject.create(@connection, key, self, body, attrs)
+ end
+ end
+
+ class Connection
+ attr_accessor :uri
+
+ def initialize(uri)
+ @uri = uri
+ end
+
+ def do_request(path = '', opts={})
+ method = opts[:method] || :get
+ content = opts[:content] || ''
+ plain = opts[:plain] || false
+
+ #$stderr.puts "Connection.do_request #{method} @ #{uri}#{path}"
+
+ result = RestClient.get @uri + path, :accept => '*/xml' if method == :get
+ result = RestClient.put @uri + path, content if method == :put #:accept => '*/xml' if method == :put
+ result = RestClient.post @uri + path, content if method == :post
+ result = RestClient.delete @uri + path if method == :delete
+
+ return Nokogiri::XML result unless plain
+ return result
+ end
+
+ end
+
+ class Client
+
+ def initialize(uri)
+ @connection = Connection.new(uri)
+ end
+
+ def create_bucket(bucket)
+ @connection.do_request "/#{bucket}", :method => :put
+ Bucket.new(bucket, @connection)
+ rescue RestClient::ExceptionWithResponse => e
+# puts "Error #{e.http_code.to_s}"
+ raise
+ end
+
+ def bucket(bucket)
+# result = @connection.do_request "/#{bucket}"
+ Bucket.new bucket, @connection
+ end
+
+ def buckets
+ @connection.do_request.xpath('/api/link[@rel="bucket"]').map do |obj|
+ obj.at_xpath('./@href').to_s.gsub(/.*\//, '')
+ end
+ end
+
+ def get_iwhd_version
+ result = @connection.do_request.at_xpath('/api[@service="image_warehouse"]/@version')
+ # raise "Found more than one <api> tag" if r.size > 1
+ raise "Response does not contain <api> tag or version information" if result == nil
+ return result.value
+ end
+
+ end
+
+end
--
1.7.4
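For reference, the REST path scheme the client builds (mirroring the
interpolation in BucketObject#initialize and #set_attr); the base URI
and port in the commented usage are hypothetical, not iwhd's confirmed
defaults:

```ruby
# Objects live at /<bucket>/<key>; each attribute gets its own
# sub-path at /<bucket>/<key>/<attr> for GET/PUT.
bucket = "images"
key    = "fedora-14-x86_64"

object_path = "/#{bucket}/#{key}"
attr_path   = "#{object_path}/status"

puts object_path   # -> /images/fedora-14-x86_64
puts attr_path     # -> /images/fedora-14-x86_64/status

# Against a running iwhd (URI hypothetical), usage would then be:
#   client = Warehouse::Client.new('http://localhost:9090')
#   b   = client.create_bucket(bucket)
#   obj = b.create_object(key, File.read('image.img'),
#                         'status' => 'new')
```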
Provider Type model
by jzigmund@redhat.com
The patchset implements a new model for Provider Types, does all refactoring
according to the Provider Type model, and fixes tests after the refactoring.
[PATCH] Rename deltacloud->conductor in the classad plugin.
by Chris Lalancette
Signed-off-by: Chris Lalancette <clalance@redhat.com>
---
src/classad_plugin/conductor_classad_plugin.cpp | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/src/classad_plugin/conductor_classad_plugin.cpp b/src/classad_plugin/conductor_classad_plugin.cpp
index c7e8487..c44c65a 100644
--- a/src/classad_plugin/conductor_classad_plugin.cpp
+++ b/src/classad_plugin/conductor_classad_plugin.cpp
@@ -162,7 +162,7 @@ conductor_quota_check(const char *name, const ArgumentList &arglist,
rest_call << "resources/instances/" << instance_id << "/can_start/" << account_id;
// Call rest API to get answer on quota..
- proxy = rest_proxy_new ("http://localhost:3000/deltacloud", FALSE);
+ proxy = rest_proxy_new ("http://localhost:3000/conductor", FALSE);
call = rest_proxy_new_call (proxy);
rest_proxy_call_set_function (call, rest_call.str().c_str());
--
1.7.4