[PATCH aeolus] Add search for Users
by Tomas Sedovic
From: Tomas Sedovic <tsedovic@redhat.com>
---
src/app/controllers/admin/users_controller.rb | 16 +++++++++++++++-
src/app/models/user.rb | 7 +++++++
2 files changed, 22 insertions(+), 1 deletions(-)
diff --git a/src/app/controllers/admin/users_controller.rb b/src/app/controllers/admin/users_controller.rb
index 0e854a9..1c48dcb 100644
--- a/src/app/controllers/admin/users_controller.rb
+++ b/src/app/controllers/admin/users_controller.rb
@@ -1,7 +1,21 @@
class Admin::UsersController < ApplicationController
before_filter :require_user
before_filter :only_admin, :only => [:index, :multi_destroy]
- before_filter :load_users, :only => [:index, :show]
+ before_filter :load_users, :only => [:show]
+
+ def index
+ @params = params
+ @search_term = params[:q]
+ if @search_term.blank?
+ load_users
+ return
+ end
+
+ search = User.search do
+ keywords(params[:q])
+ end
+ @users = search.results
+ end
def new
@user = User.new
diff --git a/src/app/models/user.rb b/src/app/models/user.rb
index 7942ae4..cc77d9d 100644
--- a/src/app/models/user.rb
+++ b/src/app/models/user.rb
@@ -19,7 +19,14 @@
# Filters added to this controller apply to all controllers in the application.
# Likewise, all the methods added will be available for all controllers.
+require 'sunspot_rails'
class User < ActiveRecord::Base
+ searchable do
+ text :login, :as => :code_substring
+ text :last_name, :as => :code_substring
+ text :first_name, :as => :code_substring
+ text :email, :as => :code_substring
+ end
acts_as_authentic
has_many :permissions
--
1.7.3.4
Re: [deltacloud-devel] Rename of deltacloud aggregator
by Chris Lalancette
On 01/16/11 - 11:23:51AM, Peter Robinson wrote:
> On Wed, Jan 12, 2011 at 6:26 PM, Chris Lalancette <clalance@redhat.com> wrote:
> > All,
> > A few months ago we transferred the deltacloud API and core over to the
> > apache foundation. Along with that we have also transferred the name
> > "deltacloud" to apache (at least in spirit, I don't know what the legal
> > situation is).
> > To reduce confusion between the API and what is currently called the
> > deltacloud aggregator, and to give a new home to the sub-projects that the
> > aggregator depends on, we are launching the http://www.aeolusproject.org
> > website. Aeolus is the Greek god of wind, so we think it fits in nicely with
> > the cloud theme. The aggregator will be formally renamed to
> > "Aeolus Conductor", and several of the subprojects that the Conductor depends
> > on will be added to the website. The old deltacloud git repositories will be
> > marked read-only, and new development will happen in the aeolus repositories.
> > Additionally, http://deltacloud.org will be modified to redirect to the
> > apache incubator page.
> > We plan on making the changes to the mailing lists, git repositories, and
> > website on Monday, Jan 17. The code inside the git repositories will probably
> > continue to use the old names for a little while longer, but we will patch them
> > as quickly as possible.
> > As usual, any questions, comments, or criticisms are welcome.
>
> Is there any packages already in Fedora that will need to be renamed
> as a result of this?
There shouldn't be. Due to our long chain of dependencies (along with some
not-yet-upstream code), we have not yet packaged Aeolus Conductor (nee
Deltacloud Aggregator) for Fedora. Given the Fedora timeline, it is
unlikely that we will make it in for F-15, but it is our goal to get the thing
packaged and included in F-16.
Thanks,
--
Chris Lalancette
[PATCH aeolus] Hardware Profiles UI
by Martyn Taylor
From: Martyn Taylor <mtaylor@redhat.com>
---
.../admin/hardware_profiles_controller.rb | 114 +++++++++++++++++++-
src/app/models/hardware_profile.rb | 13 ++-
src/app/models/hardware_profile_property.rb | 2 +-
src/app/models/property_enum_entry.rb | 3 +
src/app/stylesheets/newui.scss | 5 +
src/app/views/admin/hardware_profiles/_form.haml | 28 +++++
src/app/views/admin/hardware_profiles/_list.haml | 37 ++++---
.../_matching_provider_hardware_profiles.haml | 8 +-
.../views/admin/hardware_profiles/_properties.haml | 1 +
src/app/views/admin/hardware_profiles/create.haml | 6 +
src/app/views/admin/hardware_profiles/edit.haml | 7 ++
src/app/views/admin/hardware_profiles/new.haml | 7 ++
src/app/views/admin/provider_accounts/new.haml | 2 +-
src/config/routes.rb | 3 +-
.../20090804135630_create_hardware_profiles.rb | 2 +-
src/features/hardware_profile.feature | 63 +++++++++++-
.../step_definitions/hardware_profile_steps.rb | 21 ++++
src/features/support/paths.rb | 6 +
18 files changed, 295 insertions(+), 33 deletions(-)
create mode 100644 src/app/views/admin/hardware_profiles/_form.haml
create mode 100644 src/app/views/admin/hardware_profiles/create.haml
create mode 100644 src/app/views/admin/hardware_profiles/edit.haml
create mode 100644 src/app/views/admin/hardware_profiles/new.haml
diff --git a/src/app/controllers/admin/hardware_profiles_controller.rb b/src/app/controllers/admin/hardware_profiles_controller.rb
index 67c655d..b57bd37 100644
--- a/src/app/controllers/admin/hardware_profiles_controller.rb
+++ b/src/app/controllers/admin/hardware_profiles_controller.rb
@@ -2,6 +2,9 @@ class Admin::HardwareProfilesController < ApplicationController
before_filter :require_user
before_filter :load_hardware_profiles, :only => [:index, :show]
before_filter :load_hardware_profile, :only => [:show]
+ before_filter :setup_new_hardware_profile, :only => [:new]
+ before_filter :setup_hardware_profile, :only => [:new, :create, :edit, :update]
+
def index
end
@@ -29,12 +32,73 @@ class Admin::HardwareProfilesController < ApplicationController
end
def create
+ build_hardware_profile(params[:hardware_profile])
+ if params[:commit] == 'Save'
+ if @hardware_profile.save!
+ redirect_to admin_hardware_profiles_path
+ else
+ params.delete :commit
+ render :action => 'create'
+ end
+ else
+ matching_provider_hardware_profiles
+ render :action => 'new'
+ end
end
def delete
end
+ def edit
+ unless @hardware_profile
+ @hardware_profile = HardwareProfile.find(params[:id])
+ end
+ matching_provider_hardware_profiles
+ end
+
+ def update
+ if params[:commit] == "Reset"
+ redirect_to edit_admin_hardware_profile_url(@hardware_profile) and return
+ end
+
+ if params[:id]
+ @hardware_profile = HardwareProfile.find(params[:id])
+ build_hardware_profile(params[:hardware_profile])
+ end
+
+ if params[:commit] == "Check Matches"
+ matching_provider_hardware_profiles
+ render :edit and return
+ end
+
+ unless @hardware_profile.save!
+ render :action => 'edit' and return
+ else
+ flash[:notice] = "Hardware Profile updated!"
+ redirect_to admin_hardware_profiles_path
+ end
+ end
+
+ def multi_destroy
+ HardwareProfile.destroy(params[:hardware_profile_selected])
+ redirect_to admin_hardware_profiles_path
+ end
+
private
+ def setup_new_hardware_profile
+ if params[:hardware_profile]
+ begin
+ @hardware_profile = HardwareProfile.new(remove_irrelevant_params(params[:hardware_profile]))
+ end
+ else
+ @hardware_profile = HardwareProfile.new(:memory => HardwareProfileProperty.new(:name => "memory", :unit => "MB"),
+ :cpu => HardwareProfileProperty.new(:name => "cpu", :unit => "count"),
+ :storage => HardwareProfileProperty.new(:name => "storage", :unit => "GB"),
+ :architecture => HardwareProfileProperty.new(:name => "architecture", :unit => "label"))
+ end
+ matching_provider_hardware_profiles
+ end
+
def properties
@properties_header = [
{ :name => "Name", :sort_attr => :name},
@@ -58,8 +122,26 @@ class Admin::HardwareProfilesController < ApplicationController
{ :name => "Storage", :sort_attr => :storage },
{ :name => "Virtual CPU", :sort_attr => :cpus}
]
- @matching_hwps = HardwareProfile.all(:include => "aggregator_hardware_profiles",
- :conditions => {:hardware_profile_map => { :aggregator_hardware_profile_id => params[:id] }})
+
+ begin
+ @matching_hwps = HardwareProfile.matching_hwps(@hardware_profile).map { |hwp| hwp[:hardware_profile] }
+ rescue
+ @matching_hwps = []
+ end
+ end
+
+ def setup_hardware_profile
+ @tab_captions = ['Matched Provider Hardware Profiles']
+ @details_tab = 'matching_provider_hardware_profiles'
+ @url_params = params
+ @header = [
+ { :name => "Name", :sort_attr => :name},
+ { :name => "Unit", :sort_attr => :unit},
+ { :name => "Kind", :sort_attr => :kind },
+ { :name => "Value (Default)", :sort_attr => :value},
+ { :name => "Enum Entries", :sort_attr => :false },
+ { :name => "Range First", :sort_attr => :range_first},
+ { :name => "Range Last", :sort_attr => :range_last }]
end
def load_hardware_profiles
@@ -75,7 +157,33 @@ class Admin::HardwareProfilesController < ApplicationController
end
def load_hardware_profile
- @hardware_profile = HardwareProfile.find((params[:id] || []).first)
+ @hardware_profile = HardwareProfile.find(params[:id])
end
+ def build_hardware_profile(params)
+ hwpps = [:memory_attributes, :cpu_attributes, :storage_attributes, :architecture_attributes]
+ enum_values = {}
+ hwpps.each do |attr|
+ unless params[attr][:kind] == "range"
+ params[attr].delete(:range_first)
+ params[attr].delete(:range_last)
+ end
+
+ unless params[attr][:kind] == "enum"
+ params[attr].delete(:enum)
+ else
+ enum_values[params[attr][:name]] = params[attr][:property_enum_entries].split(%r{,\s*})
+ end
+ params[attr].delete(:property_enum_entries)
+ end
+
+ @hardware_profile.nil? ? @hardware_profile = HardwareProfile.new(params) : @hardware_profile.update_attributes(params)
+
+ # Set Property Enum Entries on enum types
+ [@hardware_profile.memory, @hardware_profile.cpu, @hardware_profile.architecture, @hardware_profile.storage].each do |hwpp|
+ if hwpp.kind == "enum"
+ hwpp.property_enum_entries = enum_values[hwpp.name].map { |value| PropertyEnumEntry.new(:value => value) }
+ end
+ end
+ end
end
\ No newline at end of file
diff --git a/src/app/models/hardware_profile.rb b/src/app/models/hardware_profile.rb
index d7ae995..10f1283 100644
--- a/src/app/models/hardware_profile.rb
+++ b/src/app/models/hardware_profile.rb
@@ -29,13 +29,18 @@ class HardwareProfile < ActiveRecord::Base
belongs_to :memory, :class_name => "HardwareProfileProperty",
:dependent => :destroy
+
belongs_to :storage, :class_name => "HardwareProfileProperty",
:dependent => :destroy
+
belongs_to :cpu, :class_name => "HardwareProfileProperty",
:dependent => :destroy
+
belongs_to :architecture, :class_name => "HardwareProfileProperty",
:dependent => :destroy
+ accepts_nested_attributes_for :memory, :cpu, :storage, :architecture
+
has_and_belongs_to_many :aggregator_hardware_profiles,
:class_name => "HardwareProfile",
:join_table => "hardware_profile_map",
@@ -48,8 +53,8 @@ class HardwareProfile < ActiveRecord::Base
:foreign_key => "aggregator_hardware_profile_id",
:association_foreign_key => "provider_hardware_profile_id"
- validates_presence_of :external_key
- validates_uniqueness_of :external_key, :scope => [:provider_id]
+ #validates_presence_of :external_key
+ #validates_uniqueness_of :external_key, :scope => [:provider_id]
validates_presence_of :name
validates_uniqueness_of :name, :scope => [:provider_id]
@@ -67,8 +72,8 @@ class HardwareProfile < ActiveRecord::Base
def validate
if provider.nil?
if !aggregator_hardware_profiles.empty?
- errors.add(:aggregator_hardware_profiles,
- "Aggregator profiles only allowed for provider profiles")
+ #errors.add(:aggregator_hardware_profiles,
+ #"Aggregator profiles only allowed for provider profiles")
end
else
if !provider_hardware_profiles.empty?
diff --git a/src/app/models/hardware_profile_property.rb b/src/app/models/hardware_profile_property.rb
index 46e4b8e..3f6bdfe 100644
--- a/src/app/models/hardware_profile_property.rb
+++ b/src/app/models/hardware_profile_property.rb
@@ -101,7 +101,7 @@ class HardwareProfileProperty < ActiveRecord::Base
def to_s
case kind
when FIXED
- value
+ value.to_s
when RANGE
range_first.to_s + " - " + range_last.to_s
when ENUM
diff --git a/src/app/models/property_enum_entry.rb b/src/app/models/property_enum_entry.rb
index ff497ef..81bb67e 100644
--- a/src/app/models/property_enum_entry.rb
+++ b/src/app/models/property_enum_entry.rb
@@ -32,4 +32,7 @@ class PropertyEnumEntry < ActiveRecord::Base
HardwareProfileProperty::STORAGE or
p.hardware_profile_property.name ==
HardwareProfileProperty::CPU }
+ def to_s
+ value.to_s + ", "
+ end
end
diff --git a/src/app/stylesheets/newui.scss b/src/app/stylesheets/newui.scss
index 513d510..d0fbcc9 100644
--- a/src/app/stylesheets/newui.scss
+++ b/src/app/stylesheets/newui.scss
@@ -1371,6 +1371,11 @@ $content-left: 180px;
float: left;
}
+#list {
+ float: left;
+ width: 100%;
+}
+
#details-view {
border: 1px solid;
position: absolute;
diff --git a/src/app/views/admin/hardware_profiles/_form.haml b/src/app/views/admin/hardware_profiles/_form.haml
new file mode 100644
index 0000000..65278f2
--- /dev/null
+++ b/src/app/views/admin/hardware_profiles/_form.haml
@@ -0,0 +1,28 @@
+=hwp_form.label :name
+=hwp_form.text_field :name
+%table
+ = sortable_table_header @header
+ - [:memory, :cpu, :storage, :architecture].each do |type|
+ - hwp_form.fields_for type do |hwpp_form|
+ %tr
+ %td
+ =hwpp_form.text_field(:name, :readonly => "readonly")
+ %td
+ =hwpp_form.text_field(:unit, :size => 5, :readonly => "readonly")
+ %td
+ -unless type == :architecture
+ =hwpp_form.select("kind", ["fixed", "range", "enum"], {})
+ -else
+ =hwpp_form.select("kind", ["fixed", "enum"], {})
+ %td
+ =hwpp_form.text_field(:value)
+ %td
+ =hwpp_form.text_field(:property_enum_entries)
+ %td
+ -unless type == :architecture
+ =hwpp_form.text_field(:range_first)
+ %td
+ -unless type == :architecture
+ =hwpp_form.text_field(:range_last)
+= hwp_form.submit 'Check Matches', :class => "submit formbutton"
+= hwp_form.submit 'Save', :class => 'submit formbutton'
\ No newline at end of file
diff --git a/src/app/views/admin/hardware_profiles/_list.haml b/src/app/views/admin/hardware_profiles/_list.haml
index 7399549..e2f344e 100644
--- a/src/app/views/admin/hardware_profiles/_list.haml
+++ b/src/app/views/admin/hardware_profiles/_list.haml
@@ -1,7 +1,7 @@
- form_tag do
#object-actions
- = restful_submit_tag "Create", "create", admin_hardware_profiles_path, "PUT"
- = restful_submit_tag "Delete", "delete", admin_hardware_profiles_path, "DELETE"
+ = link_to "New Hardware Profile", new_admin_hardware_profile_path, :class => 'button'
+ = restful_submit_tag "Delete", "destroy", multi_destroy_admin_hardware_profiles_path, "DELETE", :id => 'delete_button'
#selections
%p
@@ -10,19 +10,20 @@
%span> ,
= link_to "None", @url_params.merge(:select => 'none')
-%table
- = sortable_table_header @header
- - @hardware_profiles.each do |hwp|
- %tr
- %td
- - selected = @url_params[:select] == 'all'
- = check_box(:pool, "selected[#{hwp.id}]", :checked => selected)
- = link_to hwp.name, admin_hardware_profile_path(hwp)
- %td
- =hwp.architecture.to_s
- %td
- =hwp.memory.to_s
- %td
- =hwp.storage.to_s
- %td
- =hwp.cpu.to_s
+ #list
+ %table
+ = sortable_table_header @header
+ - @hardware_profiles.each do |hwp|
+ %tr
+ %td
+ - selected = @url_params[:select] == 'all'
+ %input{:name => "hardware_profile_selected[]", :type => "checkbox", :value => hwp.id, :id => "hardware_profile_checkbox_#{hwp.id}", :checked => selected }
+ = link_to hwp.name, admin_hardware_profile_path(hwp)
+ %td
+ =hwp.architecture.to_s
+ %td
+ =hwp.memory.to_s
+ %td
+ =hwp.storage.to_s
+ %td
+ =hwp.cpu.to_s
diff --git a/src/app/views/admin/hardware_profiles/_matching_provider_hardware_profiles.haml b/src/app/views/admin/hardware_profiles/_matching_provider_hardware_profiles.haml
index cfbb856..0f4ba7b 100644
--- a/src/app/views/admin/hardware_profiles/_matching_provider_hardware_profiles.haml
+++ b/src/app/views/admin/hardware_profiles/_matching_provider_hardware_profiles.haml
@@ -1,11 +1,13 @@
-%h3
- = @hardware_profile.name
+- if @hardware_profile
+ %h3
+ =@hardware_profile.name
%table
= sortable_table_header @provider_hwps_header
- @matching_hwps.each do |hwp|
%tr
%td
- = link_to hwp.provider.name, admin_provider_path(hwp.provider)
+ - if hwp.provider
+ = link_to hwp.provider.name, admin_provider_path(hwp.provider)
%td
= link_to hwp.name, admin_hardware_profile_path(hwp)
%td
diff --git a/src/app/views/admin/hardware_profiles/_properties.haml b/src/app/views/admin/hardware_profiles/_properties.haml
index 590edc5..bc91219 100644
--- a/src/app/views/admin/hardware_profiles/_properties.haml
+++ b/src/app/views/admin/hardware_profiles/_properties.haml
@@ -1,5 +1,6 @@
%h3
= @hardware_profile.name + "(" + (@hardware_profile.provider_id.nil? ? "Front End" : "Provider" ) + ")"
+= link_to 'Edit', edit_admin_hardware_profile_path(@hardware_profile), :class => 'button'
%table
= sortable_table_header @properties_header
- @hwp_properties.each do |hwpp|
diff --git a/src/app/views/admin/hardware_profiles/create.haml b/src/app/views/admin/hardware_profiles/create.haml
new file mode 100644
index 0000000..c26bbe9
--- /dev/null
+++ b/src/app/views/admin/hardware_profiles/create.haml
@@ -0,0 +1,6 @@
+%h3
+ Check Matching Hardware Profiles
+- content_for :list do
+ = render :partial => "form"
+- content_for :details do
+ = render :partial => 'layouts/details_pane'
\ No newline at end of file
diff --git a/src/app/views/admin/hardware_profiles/edit.haml b/src/app/views/admin/hardware_profiles/edit.haml
new file mode 100644
index 0000000..e32eebd
--- /dev/null
+++ b/src/app/views/admin/hardware_profiles/edit.haml
@@ -0,0 +1,7 @@
+- content_for :list do
+ %h3
+ Edit Hardware Profile
+ -form_for @hardware_profile, :url => admin_hardware_profile_path(@hardware_profile), :html => { :multipart => true } do |hwp_form|
+ = render :partial => "form", :locals => { :hwp_form => hwp_form }
+- content_for :details do
+ = render :partial => 'layouts/details_pane'
diff --git a/src/app/views/admin/hardware_profiles/new.haml b/src/app/views/admin/hardware_profiles/new.haml
new file mode 100644
index 0000000..3355c87
--- /dev/null
+++ b/src/app/views/admin/hardware_profiles/new.haml
@@ -0,0 +1,7 @@
+- content_for :list do
+ %h3
+ New Hardware Profile
+ -form_for @hardware_profile, :url => admin_hardware_profiles_path, :html => { :multipart => true } do |hwp_form|
+ = render :partial => "form", :locals => { :hwp_form => hwp_form }
+- content_for :details do
+ = render :partial => 'layouts/details_pane'
\ No newline at end of file
diff --git a/src/app/views/admin/provider_accounts/new.haml b/src/app/views/admin/provider_accounts/new.haml
index e567bce..55e469c 100644
--- a/src/app/views/admin/provider_accounts/new.haml
+++ b/src/app/views/admin/provider_accounts/new.haml
@@ -13,4 +13,4 @@
%p.requirement
%span.required *
\-
- = t('cloud_accounts.new.required_field')
+ = t('cloud_accounts.new.required_field')
\ No newline at end of file
diff --git a/src/config/routes.rb b/src/config/routes.rb
index 6d9b5cf..463031d 100644
--- a/src/config/routes.rb
+++ b/src/config/routes.rb
@@ -47,7 +47,8 @@ ActionController::Routing::Routes.draw do |map|
map.connect '/set_layout', :controller => 'application', :action => 'set_layout'
map.namespace 'admin' do |r|
- r.resources :hardware_profiles, :pool_families, :realms
+ r.resources :hardware_profiles, :collection => { :multi_destroy => :delete }
+ r.resources :pool_families, :realms
r.resources :providers, :collection => { :multi_destroy => :delete }
r.resources :users, :collection => { :multi_destroy => :delete }
r.resources :provider_accounts, :collection => { :multi_destroy => :delete }
diff --git a/src/db/migrate/20090804135630_create_hardware_profiles.rb b/src/db/migrate/20090804135630_create_hardware_profiles.rb
index 5ad0435..ec4a50f 100644
--- a/src/db/migrate/20090804135630_create_hardware_profiles.rb
+++ b/src/db/migrate/20090804135630_create_hardware_profiles.rb
@@ -40,7 +40,7 @@ class CreateHardwareProfiles < ActiveRecord::Migration
end
create_table :hardware_profiles do |t|
- t.string :external_key, :null => false
+ t.string :external_key
t.string :name, :null => false, :limit => 1024
t.integer :memory_id
t.integer :storage_id
diff --git a/src/features/hardware_profile.feature b/src/features/hardware_profile.feature
index 6f1cc81..e6fa89e 100644
--- a/src/features/hardware_profile.feature
+++ b/src/features/hardware_profile.feature
@@ -48,4 +48,65 @@ Feature: Manage Pools
And I follow "Matching Provider Hardware Profiles"
Then I should see the following:
| Name | Memory | CPU | Storage | Architecture |
- | m1-small | 1740 | 2 | 160 | i386 |
\ No newline at end of file
+ | m1-small | 1740 | 2 | 160 | i386 |
+
+ Scenario: Create a new Hardware Profile
+ Given I am an authorised user
+ And I am on the hardware profiles page
+ When I follow "New Hardware Profile"
+ Then I should be on the new hardware profile page
+ When I fill in "name" with "Test Hardware Profile"
+ And I enter the following details for the Hardware Profile Properties
+ | name | kind | range_first | range_last | property_enum_entries | value | unit |
+ | memory | fixed | | | | 1740 | MB |
+ | cpu | range | 1 | 4 | | 2 | count |
+ | storage | range | 250 | 500 | | 300 | GB |
+ | architecture | enum | | | i386, x86_64 | i386 | label |
+ And I press "Save"
+ Then I should be on the hardware profiles page
+ And I should see the following:
+ | Name | memory | cpu | storage | architecture |
+ | Test Hardware Profile | 1740 | 1 - 4 | 250 - 500 | i386, x86_64 |
+
+ Scenario: Check New Hardware Profile matching Provider Hardware Profiles
+ Given I am an authorised user
+ And there are the following provider hardware profiles:
+ | name | memory | cpu |storage | architecture |
+ | small | 1740 | 1 | 250 | i386 |
+ | medium | 1740 | 2 | 500 | i386 |
+ | large | 2048 | 4 | 850 | x86_64 |
+ And I am on the new hardware profile page
+ When I fill in "name" with "Test Hardware Profile"
+ And I enter the following details for the Hardware Profile Properties
+ | name | kind | range_first | range_last | property_enum_entries | value | unit |
+ | memory | fixed | | | | 1740 | MB |
+ | cpu | range | 1 | 4 | | 2 | count |
+ | storage | range | 250 | 500 | | 300 | GB |
+ | architecture | enum | | | i386, x86_64 | i386 | label |
+ And I press "Check Matches"
+ Then I should see the following:
+ | Name | Memory | CPU | Storage | Architecture |
+ | small | 1740 | 1 | 300 | i386 |
+ | medium | 1740 | 2 | 500 | i386 |
+
+ Scenario: Update a HardwareProfile
+ Given I am an authorised user
+ And there are the following aggregator hardware profiles:
+ | name | memory | cpu |storage | architecture |
+ | m1-small | 1740 | 2 | 160 | i386 |
+ And I am on the hardware profiles page
+ When I follow "m1-small"
+ Then I should see "Properties"
+ When I follow "edit"
+ Then I should be on the edit hardware profiles page
+ When I enter the following details for the Hardware Profile Properties
+ | name | kind | range_first | range_last | property_enum_entries | value |
+ | memory | fixed | | | | 1740 |
+ | cpu | range | 1 | 4 | | 1 |
+ | storage | range | 250 | 500 | | 300 |
+ | architecture | enum | | | i386, x86_64 | i386 |
+ And I press "Save"
+ Then I should be on the hardware profiles page
+ Then I should see the following:
+ | Name | Memory | CPU | Storage | Architecture |
+ | m1-small | 1740 | 1 - 4 | 250 - 500 | i386, x86_64 |
\ No newline at end of file
diff --git a/src/features/step_definitions/hardware_profile_steps.rb b/src/features/step_definitions/hardware_profile_steps.rb
index d29d136..8965517 100644
--- a/src/features/step_definitions/hardware_profile_steps.rb
+++ b/src/features/step_definitions/hardware_profile_steps.rb
@@ -19,4 +19,25 @@ def create_hwp(hash, provider=nil)
cpu = Factory(:mock_hwp1_cpu, :value => hash[:cpu])
arch = Factory(:mock_hwp1_arch, :value => hash[:architecture])
Factory(:mock_hwp1, :name => hash[:name], :memory => memory, :cpu => cpu, :storage => storage, :architecture => arch, :provider => provider)
+end
+
+When /^I enter the following details for the Hardware Profile Properties$/ do |table|
+ table.hashes.each do |hash|
+ hash.each_pair do |key, value|
+ unless (hash[:name] == "architecture" && (key == "range_first" || key == "range_last")) || key == "name"
+ When "I fill in \"#{"hardware_profile_" + hash[:name] + "_attributes_" + key}\" with \"#{value}\""
+ end
+ end
+ end
+end
+
+Given /^there are the following provider hardware profiles:$/ do |table|
+ provider = Factory :mock_provider
+ table.hashes do |hash|
+ memory = Factory(:mock_hwp1_memory, :value => hash[:memory])
+ cpu = Factory(:mock_hwp1_cpu, :value => hash[:cpu])
+ storage = Factory(:mock_hwp1_storage, :value => hash[:storage])
+ architecture = Factory(:mock_hwp1_architecture, :value => hash[:architecture])
+ Factory(:hardware_profile, :provider => provider, :memory => memory, :storage => storage, :cpu => cpu, :architecture => architecture)
+ end
end
\ No newline at end of file
diff --git a/src/features/support/paths.rb b/src/features/support/paths.rb
index 66de0d2..5dd1116 100644
--- a/src/features/support/paths.rb
+++ b/src/features/support/paths.rb
@@ -98,6 +98,12 @@ module NavigationHelpers
when /the hardware profiles page/
url_for admin_hardware_profiles_path
+ when /the new hardware profile page/
+ url_for new_admin_hardware_profile_path
+
+ when /the edit hardware profiles page/
+ url_for :action => 'edit', :controller => 'hardware_profiles', :only_path => true
+
when /^(.*)'s provider account page$/
admin_provider_account_path(CloudAccount.find_by_label($1))
--
1.7.2.3
Search
by Tomas Sedovic
This is a new revision of the previous patches.
It features simpler models, views and controllers plus some Cucumber tests.
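
The patch indexes login, first/last name, and email as full-text fields so a single keyword query matches substrings across all of them. Conceptually the search behaves like a case-insensitive substring filter over those fields; here is a plain-Ruby approximation of that behaviour (the `User` struct and `search_users` helper are hypothetical stand-ins for illustration, not the actual Sunspot/Solr machinery):

```ruby
# Plain-Ruby approximation of the substring search the patch wires up
# via Sunspot; no Solr involved, purely illustrative.
User = Struct.new(:login, :first_name, :last_name, :email)

def search_users(users, term)
  q = term.to_s.downcase
  # A blank query falls back to the full list, mirroring the
  # controller's load_users fallback in the patch.
  return users if q.empty?
  users.select do |u|
    [u.login, u.first_name, u.last_name, u.email].any? do |field|
      field.to_s.downcase.include?(q)
    end
  end
end

users = [
  User.new('tsedovic', 'Tomas', 'Sedovic', 'tsedovic@redhat.com'),
  User.new('clalance', 'Chris', 'Lalancette', 'clalance@redhat.com'),
]
search_users(users, 'sedo').map(&:login)  # => ["tsedovic"]
```

In the real patch the matching is delegated to Solr, with `:as => :code_substring` mapping each field to a dynamic Solr field configured for substring matching, so the controller only has to call `User.search { keywords(params[:q]) }` and read `search.results`.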
[PATCH 0/2]: Remove bogus references
by Chris Lalancette
As mother always said, late is better than never. Since we are in the process
of renaming the whole project, I figured that I would remove any references
to absolutely ancient things. This series does that.
I did a run of these through the cucumber tests, and no new errors popped up.
That doesn't mean that they are perfect, just that they should be relatively
safe to commit. Please review and ACK.
[PATCH aggregator] Implement Assemblies basic UI
by lmartinc@redhat.com
From: Ladislav Martincik <lmartinc@redhat.com>
---
.../image_factory/assemblies_controller.rb | 63 ++++++++++++++++++++
src/app/models/assembly.rb | 2 +
src/app/views/image_factory/assemblies/_form.haml | 7 ++
src/app/views/image_factory/assemblies/_list.haml | 28 +++++++++
.../image_factory/assemblies/_properties.haml | 4 +
src/app/views/image_factory/assemblies/edit.haml | 4 +
src/app/views/image_factory/assemblies/index.haml | 3 +-
src/app/views/image_factory/assemblies/new.haml | 3 +
src/app/views/image_factory/assemblies/show.haml | 5 ++
src/config/routes.rb | 2 +-
src/db/migrate/20110114111158_create_assemblies.rb | 13 ++++
src/features/assembly.feature | 50 ++++++++++++++++
src/features/step_definitions/assembly_steps.rb | 20 ++++++
src/features/support/paths.rb | 3 +
14 files changed, 205 insertions(+), 2 deletions(-)
create mode 100644 src/app/models/assembly.rb
create mode 100644 src/app/views/image_factory/assemblies/_form.haml
create mode 100644 src/app/views/image_factory/assemblies/_list.haml
create mode 100644 src/app/views/image_factory/assemblies/_properties.haml
create mode 100644 src/app/views/image_factory/assemblies/edit.haml
create mode 100644 src/app/views/image_factory/assemblies/new.haml
create mode 100644 src/app/views/image_factory/assemblies/show.haml
create mode 100644 src/db/migrate/20110114111158_create_assemblies.rb
create mode 100644 src/features/assembly.feature
create mode 100644 src/features/step_definitions/assembly_steps.rb
diff --git a/src/app/controllers/image_factory/assemblies_controller.rb b/src/app/controllers/image_factory/assemblies_controller.rb
index ce67f14..6627656 100644
--- a/src/app/controllers/image_factory/assemblies_controller.rb
+++ b/src/app/controllers/image_factory/assemblies_controller.rb
@@ -1,6 +1,69 @@
class ImageFactory::AssembliesController < ApplicationController
before_filter :require_user
+ before_filter :load_assemblies, :only => [:index, :show]
def index
end
+
+ def show
+ @assembly = Assembly.find(params[:id])
+ @url_params = params.clone
+ @tab_captions = ['Properties']
+ @details_tab = params[:details_tab].blank? ? 'properties' : params[:details_tab]
+ respond_to do |format|
+ format.js do
+ if @url_params.delete :details_pane
+ render :partial => 'layouts/details_pane' and return
+ end
+ render :partial => @details_tab and return
+ end
+ format.html { render :action => 'show'}
+ end
+ end
+
+ def new
+ @assembly = Assembly.new
+ end
+
+ def create
+ @assembly = Assembly.new(params[:assembly])
+ if @assembly.save
+ flash[:notice] = "Assembly added."
+ redirect_to image_factory_assembly_url(@assembly)
+ else
+ render :action => :new
+ end
+ end
+
+ def edit
+ @assembly = Assembly.find(params[:id])
+ end
+
+ def update
+ @assembly = Assembly.find(params[:id])
+ if @assembly.update_attributes(params[:assembly])
+ flash[:notice] = "Assembly updated."
+ redirect_to image_factory_assembly_url(@assembly)
+ else
+ render :action => :edit
+ end
+ end
+
+ def multi_destroy
+ Assembly.destroy(params[:assemblies_selected])
+ redirect_to image_factory_assemblies_url
+ end
+
+ protected
+
+ def load_assemblies
+ @header = [
+ { :name => "Assembly name", :sort_attr => :name }
+ ]
+ @assemblies = Assembly.paginate(:all,
+ :page => params[:page] || 1,
+ :order => (params[:order_field] || 'name') +' '+ (params[:order_dir] || 'asc')
+ )
+ @url_params = params.clone
+ end
end
diff --git a/src/app/models/assembly.rb b/src/app/models/assembly.rb
new file mode 100644
index 0000000..5eac9de
--- /dev/null
+++ b/src/app/models/assembly.rb
@@ -0,0 +1,2 @@
+class Assembly < ActiveRecord::Base
+end
diff --git a/src/app/views/image_factory/assemblies/_form.haml b/src/app/views/image_factory/assemblies/_form.haml
new file mode 100644
index 0000000..39556e5
--- /dev/null
+++ b/src/app/views/image_factory/assemblies/_form.haml
@@ -0,0 +1,7 @@
+= form.error_messages
+%fieldset.clear
+ = form.label :name, t(:name), :class => "grid_3 alpha"
+ = form.text_field :name, :class => "grid_5"
+%fieldset.clearfix
+ = form.submit "Save", :class => "submit formbutton"
+ = link_to t(:cancel), image_factory_assemblies_path, :class => 'button formbutton'
diff --git a/src/app/views/image_factory/assemblies/_list.haml b/src/app/views/image_factory/assemblies/_list.haml
new file mode 100644
index 0000000..f29b322
--- /dev/null
+++ b/src/app/views/image_factory/assemblies/_list.haml
@@ -0,0 +1,28 @@
+- form_tag do
+ = link_to "Create", new_image_factory_assembly_url, :class => 'button'
+ = restful_submit_tag "Delete", 'destroy', multi_destroy_image_factory_assemblies_path, 'DELETE', :id => 'delete_button'
+
+ %table#assemblies_table
+ %thead
+ %tr
+ %th
+ %th= link_to "Name", image_factory_assemblies_url(:sort_by => "name")
+ -@assemblies.each do |assembly|
+ %tr
+ %td
+ %input{:name => "assemblies_selected[]", :type => "checkbox", :value => assembly.id, :id => "assembly_checkbox_#{assembly.id}" }
+ %td= link_to assembly.name, image_factory_assembly_path(assembly)
+
+:javascript
+ $(document).ready(function () {
+ $('#delete_button').click(function(e) {
+ if ($("#assemblies_table input[type=checkbox]:checked").length == 0) {
+ alert('Please select at least one assembly to delete.');
+ e.preventDefault();
+ } else {
+ if (!confirm("Are you sure you want to delete this assembly?")) {
+ e.preventDefault();
+ }
+ }
+ });
+ });
diff --git a/src/app/views/image_factory/assemblies/_properties.haml b/src/app/views/image_factory/assemblies/_properties.haml
new file mode 100644
index 0000000..5f91f6e
--- /dev/null
+++ b/src/app/views/image_factory/assemblies/_properties.haml
@@ -0,0 +1,4 @@
+.grid_13
+ %h2 #{@assembly.name}
+
+ = link_to 'Edit', edit_image_factory_assembly_path(@assembly), :class => 'button'
diff --git a/src/app/views/image_factory/assemblies/edit.haml b/src/app/views/image_factory/assemblies/edit.haml
new file mode 100644
index 0000000..04871c3
--- /dev/null
+++ b/src/app/views/image_factory/assemblies/edit.haml
@@ -0,0 +1,4 @@
+%h2 Editing Assembly: #{@assembly.name}
+
+- form_for @assembly, :url => image_factory_assembly_path(@assembly), :html => { :method => :put } do |f|
+ = render :partial => "form", :locals => { :form => f }
diff --git a/src/app/views/image_factory/assemblies/index.haml b/src/app/views/image_factory/assemblies/index.haml
index 766d92c..62ccbc6 100644
--- a/src/app/views/image_factory/assemblies/index.haml
+++ b/src/app/views/image_factory/assemblies/index.haml
@@ -1 +1,2 @@
-image_factory/assemblies/index.haml
+- content_for :list do
+ = render :partial => 'list'
diff --git a/src/app/views/image_factory/assemblies/new.haml b/src/app/views/image_factory/assemblies/new.haml
new file mode 100644
index 0000000..4763405
--- /dev/null
+++ b/src/app/views/image_factory/assemblies/new.haml
@@ -0,0 +1,3 @@
+%h2 New Assembly
+- form_for @assembly, :url => image_factory_assemblies_path do |f|
+ = render :partial => "form", :locals => { :form => f }
diff --git a/src/app/views/image_factory/assemblies/show.haml b/src/app/views/image_factory/assemblies/show.haml
new file mode 100644
index 0000000..05eeedd
--- /dev/null
+++ b/src/app/views/image_factory/assemblies/show.haml
@@ -0,0 +1,5 @@
+- content_for :list do
+ = render :partial => 'list'
+
+- content_for :details do
+ = render :partial => 'layouts/details_pane'
diff --git a/src/config/routes.rb b/src/config/routes.rb
index ad9fb67..00aaade 100644
--- a/src/config/routes.rb
+++ b/src/config/routes.rb
@@ -39,7 +39,7 @@ ActionController::Routing::Routes.draw do |map|
end
map.namespace 'image_factory' do |r|
- r.resources :assemblies
+ r.resources :assemblies, :collection => { :multi_destroy => :delete }
r.resources :deployables, :collection => { :multi_destroy => :delete }
r.resources :templates, :collection => {:collections => :get, :add_selected => :get, :metagroup_packages => :get, :remove_package => :get, :multi_destroy => :delete}
r.resources :builds
diff --git a/src/db/migrate/20110114111158_create_assemblies.rb b/src/db/migrate/20110114111158_create_assemblies.rb
new file mode 100644
index 0000000..25aaf7b
--- /dev/null
+++ b/src/db/migrate/20110114111158_create_assemblies.rb
@@ -0,0 +1,13 @@
+class CreateAssemblies < ActiveRecord::Migration
+ def self.up
+ create_table :assemblies do |t|
+ t.string :name
+
+ t.timestamps
+ end
+ end
+
+ def self.down
+ drop_table :assemblies
+ end
+end
diff --git a/src/features/assembly.feature b/src/features/assembly.feature
new file mode 100644
index 0000000..970f1b5
--- /dev/null
+++ b/src/features/assembly.feature
@@ -0,0 +1,50 @@
+Feature: Manage assemblies
+ In order to manage my cloud infrastructure
+ As a user
+ I want to manage assemblies
+
+ Background:
+ Given I am an authorised user
+ And I am logged in
+ And I am using new UI
+
+ Scenario: List assemblies
+ Given I am on the homepage
+ And there is a assembly named "MySQL cluster"
+ When I go to the image factory assemblies page
+ Then I should see "MySQL cluster"
+
+ Scenario: Create a new Assembly
+ Given there is a assembly named "MySQL cluster"
+ And I am on the image factory assemblies page
+ When I follow "Create"
+ Then I should be on the new image factory assembly page
+ And I should see "New Assembly"
+ When I fill in "assembly[name]" with "App"
+ And I press "Save"
+ Then I should be on App's image factory assembly page
+ And I should see "Assembly added"
+ And I should have a assembly named "App"
+ And I should see "App"
+
+ Scenario: Edit an assembly
+ Given there is a assembly named "MySQL cluster"
+ And I am on the image factory assemblies page
+ When I follow "MySQL cluster"
+ And I follow "Edit"
+ Then I should be on the edit image factory assembly page
+ And I should see "Editing Assembly"
+ When I fill in "assembly[name]" with "AppModified"
+ And I press "Save"
+ Then I should be on AppModified's image factory assembly page
+ And I should see "Assembly updated"
+ And I should have a assembly named "AppModified"
+ And I should see "AppModified"
+
+ Scenario: Delete an assembly
+ Given there is a assembly named "App"
+ And I am on the image factory assemblies page
+ When I check the "App" assembly
+ And I press "Delete"
+ Then I should be on the image factory assemblies page
+ And there should be no assemblies
diff --git a/src/features/step_definitions/assembly_steps.rb b/src/features/step_definitions/assembly_steps.rb
new file mode 100644
index 0000000..cb9c5f0
--- /dev/null
+++ b/src/features/step_definitions/assembly_steps.rb
@@ -0,0 +1,20 @@
+Then /^there should be no assemblies$/ do
+ Assembly.count.should == 0
+end
+
+Given /^there are no assemblies$/ do
+ Assembly.count.should == 0
+end
+
+Then /^I should have a assembly named "([^"]*)"$/ do |name|
+ Assembly.find_by_name(name).should_not be_nil
+end
+
+Given /^there is a assembly named "([^"]*)"$/ do |name|
+ Assembly.create!(:name => name)
+end
+
+When /^I check the "([^"]*)" assembly$/ do |name|
+ assembly = Assembly.find_by_name(name)
+ check("assembly_checkbox_#{assembly.id}")
+end
diff --git a/src/features/support/paths.rb b/src/features/support/paths.rb
index c0c77dc..dbe0077 100644
--- a/src/features/support/paths.rb
+++ b/src/features/support/paths.rb
@@ -104,6 +104,9 @@ module NavigationHelpers
when /^(.*)'s image factory deployable page$/
image_factory_deployable_path(Deployable.find_by_name($1))
+ when /^(.*)'s image factory assembly page$/
+ image_factory_assembly_path(Assembly.find_by_name($1))
+
# Add more mappings here.
# Here is an example that pulls values out of the Regexp:
#
--
1.7.3.5
[PATCH appliance] postgres/ssl support for the aeolus database
by Mo Morsi
---
bin/deltacloud-cleanup | 1 +
bin/deltacloud-configure | 1 +
contrib/deltacloud-configure.spec | 6 +-
recipes/deltacloud_recipe/files/pg_hba-ssl.conf | 5 +
recipes/deltacloud_recipe/files/postgresql.conf | 503 +++++++++++++++++++++
recipes/deltacloud_recipe/manifests/aggregator.pp | 34 ++-
recipes/deltacloud_recipe/manifests/deltacloud.pp | 7 +
recipes/openssl/manifests/init.pp | 34 ++
8 files changed, 587 insertions(+), 4 deletions(-)
create mode 100644 recipes/deltacloud_recipe/files/pg_hba-ssl.conf
create mode 100644 recipes/deltacloud_recipe/files/postgresql.conf
create mode 100644 recipes/openssl/manifests/init.pp
diff --git a/bin/deltacloud-cleanup b/bin/deltacloud-cleanup
index a359fce..a988f52 100755
--- a/bin/deltacloud-cleanup
+++ b/bin/deltacloud-cleanup
@@ -1,4 +1,5 @@
#!/bin/sh
+export FACTER_DELTACLOUD_ENABLE_SECURITY=true
puppet /usr/share/deltacloud-configure/deltacloud_uninstall.pp \
--modulepath=/usr/share/deltacloud-configure/modules/
diff --git a/bin/deltacloud-configure b/bin/deltacloud-configure
index c034d4c..bfd340e 100755
--- a/bin/deltacloud-configure
+++ b/bin/deltacloud-configure
@@ -1,4 +1,5 @@
#!/bin/sh
+export FACTER_DELTACLOUD_ENABLE_SECURITY=true
puppet /usr/share/deltacloud-configure/deltacloud_recipe.pp \
--modulepath=/usr/share/deltacloud-configure/modules/
diff --git a/contrib/deltacloud-configure.spec b/contrib/deltacloud-configure.spec
index 04ff82d..e49877f 100644
--- a/contrib/deltacloud-configure.spec
+++ b/contrib/deltacloud-configure.spec
@@ -4,7 +4,7 @@
Summary: DeltaCloud Configure Puppet Recipe
Name: deltacloud-configure
Version: 2.0.0
-Release: 1%{?dist}
+Release: 2%{?dist}
Group: Applications/Internet
License: GPLv2+
@@ -36,6 +36,7 @@ rm -rf %{buildroot}
%{__cp} -R %{pbuild}/recipes/firewall/ %{buildroot}/%{dchome}/modules/firewall
%{__cp} -R %{pbuild}/recipes/ntp/ %{buildroot}/%{dchome}/modules/ntp
%{__cp} -R %{pbuild}/recipes/postgres/ %{buildroot}/%{dchome}/modules/postgres
+%{__cp} -R %{pbuild}/recipes/openssl/ %{buildroot}/%{dchome}/modules/openssl
%{__cp} -R %{pbuild}/bin/deltacloud-configure %{buildroot}/%{_sbindir}/
%{__cp} -R %{pbuild}/bin/deltacloud-cleanup %{buildroot}/%{_sbindir}/
@@ -49,6 +50,9 @@ rm -rf %{buildroot}
%{dchome}
%changelog
+* Thu Jan 14 2011 Mohammed Morsi <mmorsi@redhat.com> 2.0.0-2
+- include openssl module
+
* Mon Jan 10 2011 Mike Orazi <morazi@redhat.com> 2.0.0-1
- Make this a drop in replacement for the old deltacloud-configure scripts
diff --git a/recipes/deltacloud_recipe/files/pg_hba-ssl.conf b/recipes/deltacloud_recipe/files/pg_hba-ssl.conf
new file mode 100644
index 0000000..94a64d0
--- /dev/null
+++ b/recipes/deltacloud_recipe/files/pg_hba-ssl.conf
@@ -0,0 +1,5 @@
+# We are still leaving Unix-domain sockets open; if we want to disable
+# them, make sure to append "sslmode=require" and "-h localhost" to all
+# psql commands.
+local all all trust
+hostssl all all 127.0.0.1/32 md5
diff --git a/recipes/deltacloud_recipe/files/postgresql.conf b/recipes/deltacloud_recipe/files/postgresql.conf
new file mode 100644
index 0000000..cf97fce
--- /dev/null
+++ b/recipes/deltacloud_recipe/files/postgresql.conf
@@ -0,0 +1,503 @@
+# -----------------------------
+# PostgreSQL configuration file
+# -----------------------------
+#
+# This file consists of lines of the form:
+#
+# name = value
+#
+# (The "=" is optional.) Whitespace may be used. Comments are introduced with
+# "#" anywhere on a line. The complete list of parameter names and allowed
+# values can be found in the PostgreSQL documentation.
+#
+# The commented-out settings shown in this file represent the default values.
+# Re-commenting a setting is NOT sufficient to revert it to the default value;
+# you need to reload the server.
+#
+# This file is read on server startup and when the server receives a SIGHUP
+# signal. If you edit the file on a running system, you have to SIGHUP the
+# server for the changes to take effect, or use "pg_ctl reload". Some
+# parameters, which are marked below, require a server shutdown and restart to
+# take effect.
+#
+# Any parameter can also be given as a command-line option to the server, e.g.,
+# "postgres -c log_connections=on". Some parameters can be changed at run time
+# with the "SET" SQL command.
+#
+# Memory units: kB = kilobytes Time units: ms = milliseconds
+# MB = megabytes s = seconds
+# GB = gigabytes min = minutes
+# h = hours
+# d = days
+
+
+#------------------------------------------------------------------------------
+# FILE LOCATIONS
+#------------------------------------------------------------------------------
+
+# The default values of these variables are driven from the -D command-line
+# option or PGDATA environment variable, represented here as ConfigDir.
+
+#data_directory = 'ConfigDir' # use data in another directory
+ # (change requires restart)
+#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
+ # (change requires restart)
+#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
+ # (change requires restart)
+
+# If external_pid_file is not explicitly set, no extra PID file is written.
+#external_pid_file = '(none)' # write an extra PID file
+ # (change requires restart)
+
+
+#------------------------------------------------------------------------------
+# CONNECTIONS AND AUTHENTICATION
+#------------------------------------------------------------------------------
+
+# - Connection Settings -
+
+#listen_addresses = 'localhost' # what IP address(es) to listen on;
+ # comma-separated list of addresses;
+ # defaults to 'localhost', '*' = all
+ # (change requires restart)
+#port = 5432 # (change requires restart)
+max_connections = 100 # (change requires restart)
+# Note: Increasing max_connections costs ~400 bytes of shared memory per
+# connection slot, plus lock space (see max_locks_per_transaction).
+#superuser_reserved_connections = 3 # (change requires restart)
+#unix_socket_directory = '' # (change requires restart)
+#unix_socket_group = '' # (change requires restart)
+#unix_socket_permissions = 0777 # begin with 0 to use octal notation
+ # (change requires restart)
+#bonjour_name = '' # defaults to the computer name
+ # (change requires restart)
+
+# - Security and Authentication -
+
+#authentication_timeout = 1min # 1s-600s
+#ssl = off # (change requires restart)
+#ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphers
+ # (change requires restart)
+#ssl_renegotiation_limit = 512MB # amount of data between renegotiations
+#password_encryption = on
+#db_user_namespace = off
+
+# Kerberos and GSSAPI
+#krb_server_keyfile = ''
+#krb_srvname = 'postgres' # (Kerberos only)
+#krb_caseins_users = off
+
+# - TCP Keepalives -
+# see "man 7 tcp" for details
+
+#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
+ # 0 selects the system default
+#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
+ # 0 selects the system default
+#tcp_keepalives_count = 0 # TCP_KEEPCNT;
+ # 0 selects the system default
+
+
+#------------------------------------------------------------------------------
+# RESOURCE USAGE (except WAL)
+#------------------------------------------------------------------------------
+
+# - Memory -
+
+shared_buffers = 24MB # min 128kB
+ # (change requires restart)
+#temp_buffers = 8MB # min 800kB
+#max_prepared_transactions = 0 # zero disables the feature
+ # (change requires restart)
+# Note: Increasing max_prepared_transactions costs ~600 bytes of shared memory
+# per transaction slot, plus lock space (see max_locks_per_transaction).
+# It is not advisable to set max_prepared_transactions nonzero unless you
+# actively intend to use prepared transactions.
+#work_mem = 1MB # min 64kB
+#maintenance_work_mem = 16MB # min 1MB
+#max_stack_depth = 2MB # min 100kB
+
+# - Kernel Resource Usage -
+
+#max_files_per_process = 1000 # min 25
+ # (change requires restart)
+#shared_preload_libraries = '' # (change requires restart)
+
+# - Cost-Based Vacuum Delay -
+
+#vacuum_cost_delay = 0ms # 0-100 milliseconds
+#vacuum_cost_page_hit = 1 # 0-10000 credits
+#vacuum_cost_page_miss = 10 # 0-10000 credits
+#vacuum_cost_page_dirty = 20 # 0-10000 credits
+#vacuum_cost_limit = 200 # 1-10000 credits
+
+# - Background Writer -
+
+#bgwriter_delay = 200ms # 10-10000ms between rounds
+#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round
+#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
+
+# - Asynchronous Behavior -
+
+#effective_io_concurrency = 1 # 1-1000. 0 disables prefetching
+
+
+#------------------------------------------------------------------------------
+# WRITE AHEAD LOG
+#------------------------------------------------------------------------------
+
+# - Settings -
+
+#fsync = on # turns forced synchronization on or off
+#synchronous_commit = on # immediate fsync at commit
+#wal_sync_method = fsync # the default is the first option
+ # supported by the operating system:
+ # open_datasync
+ # fdatasync
+ # fsync
+ # fsync_writethrough
+ # open_sync
+#full_page_writes = on # recover from partial page writes
+#wal_buffers = 64kB # min 32kB
+ # (change requires restart)
+#wal_writer_delay = 200ms # 1-10000 milliseconds
+
+#commit_delay = 0 # range 0-100000, in microseconds
+#commit_siblings = 5 # range 1-1000
+
+# - Checkpoints -
+
+#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each
+#checkpoint_timeout = 5min # range 30s-1h
+#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
+#checkpoint_warning = 30s # 0 disables
+
+# - Archiving -
+
+#archive_mode = off # allows archiving to be done
+ # (change requires restart)
+#archive_command = '' # command to use to archive a logfile segment
+#archive_timeout = 0 # force a logfile segment switch after this
+ # number of seconds; 0 disables
+
+
+#------------------------------------------------------------------------------
+# QUERY TUNING
+#------------------------------------------------------------------------------
+
+# - Planner Method Configuration -
+
+#enable_bitmapscan = on
+#enable_hashagg = on
+#enable_hashjoin = on
+#enable_indexscan = on
+#enable_mergejoin = on
+#enable_nestloop = on
+#enable_seqscan = on
+#enable_sort = on
+#enable_tidscan = on
+
+# - Planner Cost Constants -
+
+#seq_page_cost = 1.0 # measured on an arbitrary scale
+#random_page_cost = 4.0 # same scale as above
+#cpu_tuple_cost = 0.01 # same scale as above
+#cpu_index_tuple_cost = 0.005 # same scale as above
+#cpu_operator_cost = 0.0025 # same scale as above
+#effective_cache_size = 128MB
+
+# - Genetic Query Optimizer -
+
+#geqo = on
+#geqo_threshold = 12
+#geqo_effort = 5 # range 1-10
+#geqo_pool_size = 0 # selects default based on effort
+#geqo_generations = 0 # selects default based on effort
+#geqo_selection_bias = 2.0 # range 1.5-2.0
+
+# - Other Planner Options -
+
+#default_statistics_target = 100 # range 1-10000
+#constraint_exclusion = partition # on, off, or partition
+#cursor_tuple_fraction = 0.1 # range 0.0-1.0
+#from_collapse_limit = 8
+#join_collapse_limit = 8 # 1 disables collapsing of explicit
+ # JOIN clauses
+
+
+#------------------------------------------------------------------------------
+# ERROR REPORTING AND LOGGING
+#------------------------------------------------------------------------------
+
+# - Where to Log -
+
+#log_destination = 'stderr' # Valid values are combinations of
+ # stderr, csvlog, syslog and eventlog,
+ # depending on platform. csvlog
+ # requires logging_collector to be on.
+
+# This is used when logging to stderr:
+logging_collector = on # Enable capturing of stderr and csvlog
+ # into log files. Required to be on for
+ # csvlogs.
+ # (change requires restart)
+
+# These are only used if logging_collector is on:
+log_directory = 'pg_log' # directory where log files are written,
+ # can be absolute or relative to PGDATA
+log_filename = 'postgresql-%a.log' # log file name pattern,
+ # can include strftime() escapes
+log_truncate_on_rotation = on # If on, an existing log file of the
+ # same name as the new log file will be
+ # truncated rather than appended to.
+ # But such truncation only occurs on
+ # time-driven rotation, not on restarts
+ # or size-driven rotation. Default is
+ # off, meaning append to existing files
+ # in all cases.
+log_rotation_age = 1d # Automatic rotation of logfiles will
+ # happen after that time. 0 disables.
+log_rotation_size = 0 # Automatic rotation of logfiles will
+ # happen after that much log output.
+ # 0 disables.
+
+# These are relevant when logging to syslog:
+#syslog_facility = 'LOCAL0'
+#syslog_ident = 'postgres'
+
+#silent_mode = off # Run server silently.
+ # DO NOT USE without syslog or
+ # logging_collector
+ # (change requires restart)
+
+
+# - When to Log -
+
+#client_min_messages = notice # values in order of decreasing detail:
+ # debug5
+ # debug4
+ # debug3
+ # debug2
+ # debug1
+ # log
+ # notice
+ # warning
+ # error
+
+#log_min_messages = warning # values in order of decreasing detail:
+ # debug5
+ # debug4
+ # debug3
+ # debug2
+ # debug1
+ # info
+ # notice
+ # warning
+ # error
+ # log
+ # fatal
+ # panic
+
+#log_error_verbosity = default # terse, default, or verbose messages
+
+#log_min_error_statement = error # values in order of decreasing detail:
+ # debug5
+ # debug4
+ # debug3
+ # debug2
+ # debug1
+ # info
+ # notice
+ # warning
+ # error
+ # log
+ # fatal
+ # panic (effectively off)
+
+#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
+ # and their durations, > 0 logs only
+ # statements running at least this number
+ # of milliseconds
+
+
+# - What to Log -
+
+#debug_print_parse = off
+#debug_print_rewritten = off
+#debug_print_plan = off
+#debug_pretty_print = on
+#log_checkpoints = off
+#log_connections = off
+#log_disconnections = off
+#log_duration = off
+#log_hostname = off
+#log_line_prefix = '' # special values:
+ # %u = user name
+ # %d = database name
+ # %r = remote host and port
+ # %h = remote host
+ # %p = process ID
+ # %t = timestamp without milliseconds
+ # %m = timestamp with milliseconds
+ # %i = command tag
+ # %c = session ID
+ # %l = session line number
+ # %s = session start timestamp
+ # %v = virtual transaction ID
+ # %x = transaction ID (0 if none)
+ # %q = stop here in non-session
+ # processes
+ # %% = '%'
+ # e.g. '<%u%%%d> '
+#log_lock_waits = off # log lock waits >= deadlock_timeout
+#log_statement = 'none' # none, ddl, mod, all
+#log_temp_files = -1 # log temporary files equal or larger
+ # than the specified size in kilobytes;
+ # -1 disables, 0 logs all temp files
+#log_timezone = unknown # actually, defaults to TZ environment
+ # setting
+
+
+#------------------------------------------------------------------------------
+# RUNTIME STATISTICS
+#------------------------------------------------------------------------------
+
+# - Query/Index Statistics Collector -
+
+#track_activities = on
+#track_counts = on
+#track_functions = none # none, pl, all
+#track_activity_query_size = 1024
+#update_process_title = on
+#stats_temp_directory = 'pg_stat_tmp'
+
+
+# - Statistics Monitoring -
+
+#log_parser_stats = off
+#log_planner_stats = off
+#log_executor_stats = off
+#log_statement_stats = off
+
+
+#------------------------------------------------------------------------------
+# AUTOVACUUM PARAMETERS
+#------------------------------------------------------------------------------
+
+#autovacuum = on # Enable autovacuum subprocess? 'on'
+ # requires track_counts to also be on.
+#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
+ # their durations, > 0 logs only
+ # actions running at least this number
+ # of milliseconds.
+#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
+#autovacuum_naptime = 1min # time between autovacuum runs
+#autovacuum_vacuum_threshold = 50 # min number of row updates before
+ # vacuum
+#autovacuum_analyze_threshold = 50 # min number of row updates before
+ # analyze
+#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
+#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
+#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
+ # (change requires restart)
+#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for
+ # autovacuum, in milliseconds;
+ # -1 means use vacuum_cost_delay
+#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
+ # autovacuum, -1 means use
+ # vacuum_cost_limit
+
+
+#------------------------------------------------------------------------------
+# CLIENT CONNECTION DEFAULTS
+#------------------------------------------------------------------------------
+
+# - Statement Behavior -
+
+#search_path = '"$user",public' # schema names
+#default_tablespace = '' # a tablespace name, '' uses the default
+#temp_tablespaces = '' # a list of tablespace names, '' uses
+ # only default tablespace
+#check_function_bodies = on
+#default_transaction_isolation = 'read committed'
+#default_transaction_read_only = off
+#session_replication_role = 'origin'
+#statement_timeout = 0 # in milliseconds, 0 is disabled
+#vacuum_freeze_min_age = 50000000
+#vacuum_freeze_table_age = 150000000
+#xmlbinary = 'base64'
+#xmloption = 'content'
+
+# - Locale and Formatting -
+
+datestyle = 'iso, mdy'
+#intervalstyle = 'postgres'
+#timezone = unknown # actually, defaults to TZ environment
+ # setting
+#timezone_abbreviations = 'Default' # Select the set of available time zone
+ # abbreviations. Currently, there are
+ # Default
+ # Australia
+ # India
+ # You can create your own file in
+ # share/timezonesets/.
+#extra_float_digits = 0 # min -15, max 2
+#client_encoding = sql_ascii # actually, defaults to database
+ # encoding
+
+# These settings are initialized by initdb, but they can be changed.
+lc_messages = 'en_US.UTF-8' # locale for system error message
+ # strings
+lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
+lc_numeric = 'en_US.UTF-8' # locale for number formatting
+lc_time = 'en_US.UTF-8' # locale for time formatting
+
+# default configuration for text search
+default_text_search_config = 'pg_catalog.english'
+
+# - Other Defaults -
+
+#dynamic_library_path = '$libdir'
+#local_preload_libraries = ''
+
+
+#------------------------------------------------------------------------------
+# LOCK MANAGEMENT
+#------------------------------------------------------------------------------
+
+#deadlock_timeout = 1s
+#max_locks_per_transaction = 64 # min 10
+ # (change requires restart)
+# Note: Each lock table slot uses ~270 bytes of shared memory, and there are
+# max_locks_per_transaction * (max_connections + max_prepared_transactions)
+# lock table slots.
+
+
+#------------------------------------------------------------------------------
+# VERSION/PLATFORM COMPATIBILITY
+#------------------------------------------------------------------------------
+
+# - Previous PostgreSQL Versions -
+
+#add_missing_from = off
+#array_nulls = on
+#backslash_quote = safe_encoding # on, off, or safe_encoding
+#default_with_oids = off
+#escape_string_warning = on
+#regex_flavor = advanced # advanced, extended, or basic
+#sql_inheritance = on
+#standard_conforming_strings = off
+#synchronize_seqscans = on
+
+# - Other Platforms and Clients -
+
+#transform_null_equals = off
+
+
+#------------------------------------------------------------------------------
+# CUSTOMIZED OPTIONS
+#------------------------------------------------------------------------------
+
+#custom_variable_classes = '' # list of custom variable class names
+
+ssl = on
diff --git a/recipes/deltacloud_recipe/manifests/aggregator.pp b/recipes/deltacloud_recipe/manifests/aggregator.pp
index 2be247c..b86b0e0 100644
--- a/recipes/deltacloud_recipe/manifests/aggregator.pp
+++ b/recipes/deltacloud_recipe/manifests/aggregator.pp
@@ -42,9 +42,37 @@ class deltacloud::aggregator inherits deltacloud {
# Right now we configure and start postgres, at some point I want
# to make the db that gets setup configurable
include postgres::server
- file { "/var/lib/pgsql/data/pg_hba.conf":
- source => "puppet:///modules/deltacloud_recipe/pg_hba.conf",
- require => Exec["pginitdb"] }
+ if $enable_security {
+ openssl::certificate{"/var/lib/pgsql/data/server":
+ user => 'postgres',
+ group => 'postgres',
+ require => Exec["pginitdb"],
+ notify => Service['postgresql']}
+ # since we're self signing for now, use the same certificate for the root
+ file { "/var/lib/pgsql/data/root.crt":
+ require => Openssl::Certificate["/var/lib/pgsql/data/server"],
+ source => "/var/lib/pgsql/data/server.crt",
+ owner => 'postgres',
+ group => 'postgres',
+ notify => Service['postgresql'] }
+ file { "/var/lib/pgsql/data/pg_hba.conf":
+ source => "puppet:///modules/deltacloud_recipe/pg_hba-ssl.conf",
+ require => Exec["pginitdb"],
+ owner => 'postgres',
+ group => 'postgres',
+ notify => Service['postgresql']}
+ file { "/var/lib/pgsql/data/postgresql.conf":
+ source => "puppet:///modules/deltacloud_recipe/postgresql.conf",
+ require => Exec["pginitdb"],
+ owner => 'postgres',
+ group => 'postgres',
+ notify => Service['postgresql']}
+ } else {
+ file { "/var/lib/pgsql/data/pg_hba.conf":
+ source => "puppet:///modules/deltacloud_recipe/pg_hba.conf",
+ require => Exec["pginitdb"],
+ notify => Service['postgresql']}
+ }
postgres::user{"dcloud":
password => "v23zj59an",
roles => "CREATEDB",
diff --git a/recipes/deltacloud_recipe/manifests/deltacloud.pp b/recipes/deltacloud_recipe/manifests/deltacloud.pp
index e892df5..a39d2dc 100644
--- a/recipes/deltacloud_recipe/manifests/deltacloud.pp
+++ b/recipes/deltacloud_recipe/manifests/deltacloud.pp
@@ -6,12 +6,19 @@ import "postgres"
import "rails"
import "selinux"
import "ntp"
+import "openssl"
import "aggregator"
import "core"
import "iwhd"
import "image-factory"
+if $deltacloud_enable_security == "true" or $deltacloud_enable_security == "1" {
+ $enable_security = true
+} else {
+ $enable_security = false
+}
+
# Base deltacloud class
class deltacloud {
# Setup repos which to pull deltacloud components
diff --git a/recipes/openssl/manifests/init.pp b/recipes/openssl/manifests/init.pp
new file mode 100644
index 0000000..8249feb
--- /dev/null
+++ b/recipes/openssl/manifests/init.pp
@@ -0,0 +1,34 @@
+class openssl {
+ package { "openssl":
+ ensure => installed
+ }
+}
+
+define openssl::key($user='root', $group='root'){
+ exec{"create_${name}_key":
+ command => "/usr/bin/openssl genrsa -des3 -passout pass:foobar -out ${name}.key 1024"
+ }
+ exec{"remove_${name}_key_password":
+ command => "/usr/bin/openssl rsa -passin pass:foobar -in ${name}.key -out ${name}.key",
+ require => Exec["create_${name}_key"]
+ }
+ exec{"chmod_${name}.key":
+ command => "/bin/chmod 400 ${name}.key",
+ require => Exec["remove_${name}_key_password"]
+ }
+ exec{"chown_${name}.key":
+ command => "/bin/chown ${user}.${group} ${name}.key",
+ require => Exec["remove_${name}_key_password"]
+ }
+}
+
+define openssl::certificate($user='root', $group='root'){
+ openssl::key{$name:
+ user => $user,
+ group => $group
+ }
+ exec{"create_${name}_certificate":
+ command => "/usr/bin/openssl req -new -key ${name}.key -days 3650 -out ${name}.crt -x509 -subj '/'",
+ require => Exec["remove_${name}_key_password"]
+ }
+}
--
1.7.2.3
Rename of deltacloud aggregator
by Chris Lalancette
All,
A few months ago we transferred the deltacloud API and core over to the
apache foundation. Along with that we have also transferred the name
"deltacloud" to apache (at least in spirit, I don't know what the legal
situation is).
To reduce confusion between the API and what is currently called the
deltacloud aggregator, and to give a new home to the sub-projects that the
aggregator depends on, we are launching the http://www.aeolusproject.org
website. Aeolus is the Greek god of wind, so we think it fits in nicely with
the cloud theme. The aggregator will be formally renamed to
"Aeolus Conductor", and several of the subprojects that the Conductor depends
on will be added to the website. The old deltacloud git repositories will be
marked read-only, and new development will happen in the aeolus repositories.
Additionally, http://deltacloud.org will be modified to redirect to the
apache incubator page.
We plan on making the changes to the mailing lists, git repositories, and
website on Monday, Jan 17. The code inside the git repositories will probably
continue to use the old names for a little while longer, but we will patch them
as quickly as possible.
As usual, any questions, comments, or criticisms are welcome.
--
Chris Lalancette
[PATCH aeolus 1/2] Add Solr/Sunspot search support
by Tomas Sedovic
From: Tomas Sedovic <tsedovic@redhat.com>
This gets us ready for adding the search. The Solr server is bundled with the
sunspot gem, so you don't have to install it by hand (on the dev machine, that
is; production still needs its own Solr install).
Run `rake sunspot:solr:start` to start the Solr server.
When you make changes to the solr/conf files, you must stop and start it again.
Run `rake sunspot:reindex` to rebuild the index -- do this when you change the
search setup in the models.
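
For anyone trying this out, indexing a model and querying it with Sunspot looks
roughly like the sketch below. This is illustrative only -- the `User` fields
shown are assumptions for the example, not part of this patch:

```ruby
# Minimal sketch of indexing and querying with Sunspot.
# Assumes sunspot_rails is loaded and the Solr server started via
# `rake sunspot:solr:start` is running.
class User < ActiveRecord::Base
  searchable do
    # Field names here are illustrative assumptions.
    text :login
    text :first_name
    text :last_name
  end
end

# Reindex any existing rows, then run a keyword search against Solr.
Sunspot.index!(User.all)
search = User.search do
  keywords 'tsedovic'
end
search.results  # matching User records, ranked by relevance
```

After changing a `searchable` block like the one above, remember to run
`rake sunspot:reindex` so the index matches the new setup.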
---
.gitignore | 3 +
src/Rakefile | 1 +
src/config/environment.rb | 1 +
src/config/sunspot.yml | 17 +
src/solr/conf/elevate.xml | 36 ++
src/solr/conf/schema.xml | 252 ++++++++++++
src/solr/conf/solrconfig.xml | 934 ++++++++++++++++++++++++++++++++++++++++++
src/solr/conf/spellings.txt | 2 +
src/solr/conf/stopwords.txt | 57 +++
src/solr/conf/synonyms.txt | 30 ++
10 files changed, 1333 insertions(+), 0 deletions(-)
create mode 100644 src/config/sunspot.yml
create mode 100644 src/solr/conf/elevate.xml
create mode 100644 src/solr/conf/schema.xml
create mode 100644 src/solr/conf/solrconfig.xml
create mode 100644 src/solr/conf/spellings.txt
create mode 100644 src/solr/conf/stopwords.txt
create mode 100644 src/solr/conf/synonyms.txt
diff --git a/.gitignore b/.gitignore
index b70174c..4785d79 100644
--- a/.gitignore
+++ b/.gitignore
@@ -29,3 +29,6 @@ src/public/stylesheets/compiled
development.sqlite3
production.sqlite3
test.sqlite3
+
+# the search index generated by Solr
+src/solr/data
diff --git a/src/Rakefile b/src/Rakefile
index 37c683b..6164883 100644
--- a/src/Rakefile
+++ b/src/Rakefile
@@ -7,5 +7,6 @@ require(File.join(File.dirname(__FILE__), 'config', 'boot'))
require 'rake'
require 'rake/testtask'
require 'rake/rdoctask'
+require 'sunspot/rails/tasks'
require 'tasks/rails'
diff --git a/src/config/environment.rb b/src/config/environment.rb
index 09ec85b..dadcea3 100644
--- a/src/config/environment.rb
+++ b/src/config/environment.rb
@@ -53,6 +53,7 @@ Rails::Initializer.run do |config|
config.gem "typhoeus"
config.gem "rb-inotify"
config.gem 'rack-restful_submit', :version => '1.1.2'
+ config.gem 'sunspot_rails', :lib => 'sunspot/rails'
config.middleware.swap Rack::MethodOverride, 'Rack::RestfulSubmit'
diff --git a/src/config/sunspot.yml b/src/config/sunspot.yml
new file mode 100644
index 0000000..729f5e3
--- /dev/null
+++ b/src/config/sunspot.yml
@@ -0,0 +1,17 @@
+production:
+ solr:
+ hostname: localhost
+ port: 8983
+ log_level: WARNING
+
+development:
+ solr:
+ hostname: localhost
+ port: 8982
+ log_level: INFO
+
+test:
+ solr:
+ hostname: localhost
+ port: 8981
+ log_level: WARNING
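[Editorial note: sunspot_rails reads the sunspot.yml above and picks the block matching the current Rails environment, which is why each environment gets its own Solr instance on its own port. A minimal sketch of that per-environment lookup -- a hypothetical helper using only stdlib YAML; the real gem does the equivalent internally:]

```ruby
require 'yaml'

# Abridged copy of src/config/sunspot.yml from the patch above.
SUNSPOT_YML = <<-YAML
production:
  solr:
    hostname: localhost
    port: 8983
development:
  solr:
    hostname: localhost
    port: 8982
YAML

# Hypothetical stand-in for what sunspot_rails does with RAILS_ENV:
# pick the per-environment block and hand its 'solr' settings to the client.
def solr_config(env, yml = SUNSPOT_YML)
  YAML.load(yml).fetch(env).fetch('solr')
end

puts solr_config('development')['port']  # 8982
```

Because dev, test, and production point at different ports, reindexing in development never touches the production index.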
diff --git a/src/solr/conf/elevate.xml b/src/solr/conf/elevate.xml
new file mode 100644
index 0000000..0472508
--- /dev/null
+++ b/src/solr/conf/elevate.xml
@@ -0,0 +1,36 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!-- If this file is found in the config directory, it will only be
+ loaded once at startup. If it is found in Solr's data
+ directory, it will be re-loaded every commit.
+-->
+
+<elevate>
+ <query text="foo bar">
+ <doc id="1" />
+ <doc id="2" />
+ <doc id="3" />
+ </query>
+
+ <query text="ipod">
+ <doc id="MA147LL/A" /> <!-- put the actual ipod at the top -->
+ <doc id="IW-02" exclude="true" /> <!-- exclude this cable -->
+ </query>
+
+</elevate>
diff --git a/src/solr/conf/schema.xml b/src/solr/conf/schema.xml
new file mode 100644
index 0000000..a36ea0f
--- /dev/null
+++ b/src/solr/conf/schema.xml
@@ -0,0 +1,252 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<!--
+ This is the Solr schema file. This file should be named "schema.xml" and
+ should be in the conf directory under the solr home
+ (i.e. ./solr/conf/schema.xml by default)
+ or located where the classloader for the Solr webapp can find it.
+
+ This example schema is the recommended starting point for users.
+ It should be kept correct and concise, usable out-of-the-box.
+
+ For more information, on how to customize this file, please see
+ http://wiki.apache.org/solr/SchemaXml
+
+ PERFORMANCE NOTE: this schema includes many optional features and should not
+ be used for benchmarking. To improve performance one could
+ - set stored="false" for all fields possible (esp large fields) when you
+ only need to search on the field but don't need to return the original
+ value.
+ - set indexed="false" if you don't need to search on the field, but only
+ return the field as a result of searching on other indexed fields.
+ - remove all unneeded copyField statements
+ - for best index size and searching performance, set "index" to false
+ for all general text fields, use copyField to copy them to the
+ catchall "text" field, and use that for searching.
+ - For maximum indexing performance, use the StreamingUpdateSolrServer
+ java client.
+ - Remember to run the JVM in server mode, and use a higher logging level
+ that avoids logging every request
+-->
+<schema name="sunspot" version="1.0">
+ <types>
+ <!-- field type definitions. The "name" attribute is
+ just a label to be used by field definitions. The "class"
+ attribute and any other attributes determine the real
+ behavior of the fieldType.
+ Class names starting with "solr" refer to java classes in the
+ org.apache.solr.analysis package.
+ -->
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="string" class="solr.StrField" omitNorms="true"/>
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="tdouble" class="solr.TrieDoubleField" omitNorms="true"/>
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="rand" class="solr.RandomSortField" omitNorms="true"/>
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="text" class="solr.TextField" omitNorms="false">
+ <analyzer>
+ <tokenizer class="solr.StandardTokenizerFactory"/>
+ <filter class="solr.StandardFilterFactory"/>
+ <filter class="solr.LowerCaseFilterFactory"/>
+ </analyzer>
+ </fieldType>
+ <!-- fieldType for substring matching -->
+ <fieldType class="solr.TextField" name="text_sub" positionIncrementGap="100">
+ <analyzer type="index">
+ <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+ <filter class="solr.LowerCaseFilterFactory"/>
+ <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="15"/>
+ </analyzer>
+ <analyzer type="query">
+ <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+ <filter class="solr.LowerCaseFilterFactory"/>
+ </analyzer>
+ </fieldType>
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="boolean" class="solr.BoolField" omitNorms="true"/>
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="date" class="solr.DateField" omitNorms="true"/>
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="sdouble" class="solr.SortableDoubleField" omitNorms="true"/>
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="sfloat" class="solr.SortableFloatField" omitNorms="true"/>
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="sint" class="solr.SortableIntField" omitNorms="true"/>
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="slong" class="solr.SortableLongField" omitNorms="true"/>
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="tint" class="solr.TrieIntField" omitNorms="true"/>
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="tfloat" class="solr.TrieFloatField" omitNorms="true"/>
+ <!-- *** This fieldType is used by Sunspot! *** -->
+ <fieldType name="tdate" class="solr.TrieDateField" omitNorms="true"/>
+ </types>
+ <fields>
+ <!-- Valid attributes for fields:
+ name: mandatory - the name for the field
+ type: mandatory - the name of a previously defined type from the
+ <types> section
+ indexed: true if this field should be indexed (searchable or sortable)
+ stored: true if this field should be retrievable
+ compressed: [false] if this field should be stored using gzip compression
+ (this will only apply if the field type is compressible; among
+ the standard field types, only TextField and StrField are)
+ multiValued: true if this field may contain multiple values per document
+ omitNorms: (expert) set to true to omit the norms associated with
+ this field (this disables length normalization and index-time
+ boosting for the field, and saves some memory). Only full-text
+ fields or fields that need an index-time boost need norms.
+ termVectors: [false] set to true to store the term vector for a
+ given field.
+ When using MoreLikeThis, fields used for similarity should be
+ stored for best performance.
+ termPositions: Store position information with the term vector.
+ This will increase storage costs.
+ termOffsets: Store offset information with the term vector. This
+ will increase storage costs.
+ default: a value that should be used if no value is specified
+ when adding a document.
+ -->
+ <!-- *** This field is used by Sunspot! *** -->
+ <field name="id" stored="true" type="string" multiValued="false" indexed="true"/>
+ <!-- *** This field is used by Sunspot! *** -->
+ <field name="type" stored="false" type="string" multiValued="true" indexed="true"/>
+ <!-- *** This field is used by Sunspot! *** -->
+ <field name="class_name" stored="false" type="string" multiValued="false" indexed="true"/>
+ <!-- *** This field is used by Sunspot! *** -->
+ <field name="text" stored="false" type="string" multiValued="true" indexed="true"/>
+ <!-- *** This field is used by Sunspot! *** -->
+ <field name="lat" stored="true" type="tdouble" multiValued="false" indexed="true"/>
+ <!-- *** This field is used by Sunspot! *** -->
+ <field name="lng" stored="true" type="tdouble" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="random_*" stored="false" type="rand" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="_local*" stored="false" type="tdouble" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_text" stored="false" type="text" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_texts" stored="true" type="text" multiValued="true" indexed="true"/>
+ <!-- this is for substring matching -->
+ <dynamicField name="*_substring" stored="false" type="text_sub" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_b" stored="false" type="boolean" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_bm" stored="false" type="boolean" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_bs" stored="true" type="boolean" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_bms" stored="true" type="boolean" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_d" stored="false" type="date" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_dm" stored="false" type="date" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_ds" stored="true" type="date" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_dms" stored="true" type="date" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_e" stored="false" type="sdouble" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_em" stored="false" type="sdouble" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_es" stored="true" type="sdouble" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_ems" stored="true" type="sdouble" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_f" stored="false" type="sfloat" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_fm" stored="false" type="sfloat" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_fs" stored="true" type="sfloat" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_fms" stored="true" type="sfloat" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_i" stored="false" type="sint" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_im" stored="false" type="sint" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_is" stored="true" type="sint" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_ims" stored="true" type="sint" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_l" stored="false" type="slong" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_lm" stored="false" type="slong" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_ls" stored="true" type="slong" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_lms" stored="true" type="slong" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_s" stored="false" type="string" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_sm" stored="false" type="string" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_ss" stored="true" type="string" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_sms" stored="true" type="string" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_it" stored="false" type="tint" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_itm" stored="false" type="tint" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_its" stored="true" type="tint" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_itms" stored="true" type="tint" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_ft" stored="false" type="tfloat" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_ftm" stored="false" type="tfloat" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_fts" stored="true" type="tfloat" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_ftms" stored="true" type="tfloat" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_dt" stored="false" type="tdate" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_dtm" stored="false" type="tdate" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_dts" stored="true" type="tdate" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_dtms" stored="true" type="tdate" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_textv" stored="false" termVectors="true" type="text" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_textsv" stored="true" termVectors="true" type="text" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_et" stored="false" termVectors="true" type="tdouble" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_etm" stored="false" termVectors="true" type="tdouble" multiValued="true" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_ets" stored="true" termVectors="true" type="tdouble" multiValued="false" indexed="true"/>
+ <!-- *** This dynamicField is used by Sunspot! *** -->
+ <dynamicField name="*_etms" stored="true" termVectors="true" type="tdouble" multiValued="true" indexed="true"/>
+ </fields>
+ <!-- Field to use to determine and enforce document uniqueness.
+ Unless this field is marked with required="false", it will be a required field
+ -->
+ <uniqueKey>id</uniqueKey>
+ <!-- field for the QueryParser to use when an explicit fieldname is absent -->
+ <defaultSearchField>text</defaultSearchField>
+ <!-- SolrQueryParser configuration: defaultOperator="AND|OR" -->
+ <solrQueryParser defaultOperator="AND"/>
+ <!-- copyField commands copy one field to another at the time a document
+ is added to the index. It's used either to index the same field differently,
+ or to add multiple fields to the same field for easier/faster searching. -->
+</schema>
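[Editorial note: the `text_sub` field type in the schema above is what enables substring search. At index time each token is lowercased and expanded into all 2-to-15-character n-grams (NGramFilterFactory); at query time the lowercased query token is matched whole, so any substring of an indexed word scores a hit. A runnable sketch of the mechanism in plain Ruby -- illustration only, not Solr code:]

```ruby
# Index-time analysis for the "text_sub" type: lowercase the token,
# then emit every n-gram of length min..max.
def index_grams(token, min = 2, max = 15)
  token = token.downcase
  grams = []
  (min..max).each do |size|
    (0..token.length - size).each { |i| grams << token[i, size] }
  end
  grams.uniq
end

# Query-time analysis only lowercases, so a query matches whenever it
# equals one of the stored n-grams, i.e. is a substring of the token.
def substring_match?(query, token)
  index_grams(token).include?(query.downcase)
end

puts substring_match?("edov", "tsedovic")  # true
puts substring_match?("xyz", "tsedovic")   # false
```

The trade-off is index size: every token fans out into its n-grams, which is why only the dedicated `*_substring` fields use this type. Note also that the minimum gram size of 2 means single-character queries never match.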
diff --git a/src/solr/conf/solrconfig.xml b/src/solr/conf/solrconfig.xml
new file mode 100644
index 0000000..2bca955
--- /dev/null
+++ b/src/solr/conf/solrconfig.xml
@@ -0,0 +1,934 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<!--
+ For more details about configurations options that may appear in this
+ file, see http://wiki.apache.org/solr/SolrConfigXml.
+
+ Specifically, the Solr Config can support XInclude, which may make it easier to manage
+ the configuration. See https://issues.apache.org/jira/browse/SOLR-1167
+-->
+<config>
+ <!-- Set this to 'false' if you want solr to continue working after it has
+ encountered a severe configuration error. In a production environment,
+ you may want solr to keep working even if one handler is mis-configured.
+
+ You may also set this to false by setting the system property:
+ -Dsolr.abortOnConfigurationError=false
+ -->
+ <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>
+ <!-- lib directives can be used to instruct Solr to load any Jars identified
+ and use them to resolve any "plugins" specified in your solrconfig.xml or
+ schema.xml (ie: Analyzers, Request Handlers, etc...).
+
+ All directories and paths are resolved relative to the instanceDir.
+
+ If a "./lib" directory exists in your instanceDir, all files found in it
+ are included as if you had used the following syntax...
+
+ <lib dir="./lib" />
+ -->
+ <!-- A dir option by itself adds any files found in the directory to the
+ classpath; this is useful for including all jars in a directory.
+ -->
+ <lib dir="../../contrib/extraction/lib"/>
+ <!-- When a regex is specified in addition to a directory, only the files in that
+ directory which completely match the regex (anchored on both ends)
+ will be included.
+ -->
+ <lib dir="../../dist/" regex="apache-solr-cell-\d.*\.jar"/>
+ <lib dir="../../dist/" regex="apache-solr-clustering-\d.*\.jar"/>
+ <!-- If a dir option (with or without a regex) is used and nothing is found
+ that matches, it will be ignored
+ -->
+ <lib dir="../../contrib/clustering/lib/downloads/"/>
+ <lib dir="../../contrib/clustering/lib/"/>
+ <lib dir="/total/crap/dir/ignored"/>
+ <!-- an exact path can be used to specify a specific file. This will cause
+ a serious error to be logged if it can't be loaded.
+ <lib path="../a-jar-that-does-not-exist.jar" />
+ -->
+ <!-- Used to specify an alternate directory to hold all index data
+ other than the default ./data under the Solr home.
+ If replication is in use, this should match the replication configuration. -->
+ <dataDir>${solr.data.dir:./solr/data}</dataDir>
+ <!-- WARNING: this <indexDefaults> section only provides defaults for index writers
+ in general. See also the <mainIndex> section after that when changing parameters
+ for Solr's main Lucene index. -->
+ <indexDefaults>
+ <!-- Values here affect all index writers and act as a default unless overridden. -->
+ <useCompoundFile>false</useCompoundFile>
+ <mergeFactor>10</mergeFactor>
+ <!-- If both ramBufferSizeMB and maxBufferedDocs is set, then Lucene will flush
+ based on whichever limit is hit first. -->
+ <!--<maxBufferedDocs>1000</maxBufferedDocs>-->
+ <!-- Sets the amount of RAM that may be used by Lucene indexing
+ for buffering added documents and deletions before they are
+ flushed to the Directory. -->
+ <ramBufferSizeMB>32</ramBufferSizeMB>
+ <!-- <maxMergeDocs>2147483647</maxMergeDocs> -->
+ <maxFieldLength>10000</maxFieldLength>
+ <writeLockTimeout>1000</writeLockTimeout>
+ <commitLockTimeout>10000</commitLockTimeout>
+ <!--
+ Expert: Turn on Lucene's auto commit capability. This causes intermediate
+ segment flushes to write a new lucene index descriptor, enabling it to be
+ opened by an external IndexReader. This can greatly slow down indexing
+ speed. NOTE: Despite the name, this value does not have any relation to
+ Solr's autoCommit functionality
+ -->
+ <!--<luceneAutoCommit>false</luceneAutoCommit>-->
+ <!--
+ Expert: The Merge Policy in Lucene controls how merging is handled by
+ Lucene. The default in 2.3 is the LogByteSizeMergePolicy, previous
+ versions used LogDocMergePolicy.
+
+ LogByteSizeMergePolicy chooses segments to merge based on their size. The
+ Lucene 2.2 default, LogDocMergePolicy, chose when to merge based on the
+ number of documents.
+
+ Other implementations of MergePolicy must have a no-argument constructor
+ -->
+ <!--<mergePolicy class="org.apache.lucene.index.LogByteSizeMergePolicy"/>-->
+ <!--
+ Expert:
+ The Merge Scheduler in Lucene controls how merges are performed. The
+ ConcurrentMergeScheduler (Lucene 2.3 default) can perform merges in the
+ background using separate threads. The SerialMergeScheduler (Lucene 2.2
+ default) does not.
+ -->
+ <!--<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>-->
+ <!--
+ This option specifies which Lucene LockFactory implementation to use.
+
+ single = SingleInstanceLockFactory - suggested for a read-only index
+ or when there is no possibility of another process trying
+ to modify the index.
+ native = NativeFSLockFactory - uses OS native file locking
+ simple = SimpleFSLockFactory - uses a plain file for locking
+
+ (For backwards compatibility with Solr 1.2, 'simple' is the default
+ if not specified.)
+ -->
+ <lockType>native</lockType>
+ <!--
+ Expert:
+ Controls how often Lucene loads terms into memory -->
+ <!--<termIndexInterval>256</termIndexInterval>-->
+ </indexDefaults>
+ <mainIndex>
+ <!-- options specific to the main on-disk lucene index -->
+ <useCompoundFile>false</useCompoundFile>
+ <ramBufferSizeMB>32</ramBufferSizeMB>
+ <mergeFactor>10</mergeFactor>
+ <!-- Deprecated -->
+ <!--<maxBufferedDocs>1000</maxBufferedDocs>-->
+ <!--<maxMergeDocs>2147483647</maxMergeDocs>-->
+ <!-- inherit from indexDefaults <maxFieldLength>10000</maxFieldLength> -->
+ <!-- If true, unlock any held write or commit locks on startup.
+ This defeats the locking mechanism that allows multiple
+ processes to safely access a lucene index, and should be
+ used with care.
+ This is not needed if lock type is 'none' or 'single'
+ -->
+ <unlockOnStartup>false</unlockOnStartup>
+ <!-- If true, IndexReaders will be reopened (often more efficient) instead
+ of closed and then opened. -->
+ <reopenReaders>true</reopenReaders>
+ <!--
+ Expert:
+ Controls how often Lucene loads terms into memory. Default is 128 and is likely good for most everyone. -->
+ <!--<termIndexInterval>256</termIndexInterval>-->
+ <!--
+ Custom deletion policies can be specified here. The class must
+ implement org.apache.lucene.index.IndexDeletionPolicy.
+
+ http://lucene.apache.org/java/2_3_2/api/org/apache/lucene/index/IndexDele...
+
+ The standard Solr IndexDeletionPolicy implementation supports deleting
+ index commit points on number of commits, age of commit point and
+ optimized status.
+
+ The latest commit point should always be preserved regardless
+ of the criteria.
+ -->
+ <deletionPolicy class="solr.SolrDeletionPolicy">
+ <!-- The number of commit points to be kept -->
+ <str name="maxCommitsToKeep">1</str>
+ <!-- The number of optimized commit points to be kept -->
+ <str name="maxOptimizedCommitsToKeep">0</str>
+ <!--
+ Delete all commit points once they have reached the given age.
+ Supports DateMathParser syntax e.g.
+
+ <str name="maxCommitAge">30MINUTES</str>
+ <str name="maxCommitAge">1DAY</str>
+ -->
+ </deletionPolicy>
+ <!-- To aid in advanced debugging, you may turn on IndexWriter debug logging.
+ Setting to true will set the file that the underlying Lucene IndexWriter
+ will write its debug infostream to. -->
+ <infoStream file="INFOSTREAM.txt">false</infoStream>
+ </mainIndex>
+ <!-- Enables JMX if and only if an existing MBeanServer is found; use this
+ if you want to configure JMX through JVM parameters. Remove this to disable
+ exposing Solr configuration and statistics to JMX.
+
+ If you want to connect to a particular server, specify the agentId
+ e.g. <jmx agentId="myAgent" />
+
+ If you want to start a new MBeanServer, specify the serviceUrl
+ e.g <jmx serviceUrl="service:jmx:rmi:///jndi/rmi://localhost:9999/solr"/>
+
+ For more details see http://wiki.apache.org/solr/SolrJmx
+ -->
+ <jmx/>
+ <!-- the default high-performance update handler -->
+ <updateHandler class="solr.DirectUpdateHandler2">
+ <!-- A prefix of "solr." for class names is an alias that
+ causes solr to search appropriate packages, including
+ org.apache.solr.(search|update|request|core|analysis)
+ -->
+ <!-- Perform a <commit/> automatically under certain conditions:
+ maxDocs - number of updates since last commit is greater than this
+ maxTime - oldest uncommitted update (in ms) is this long ago
+ Instead of enabling autoCommit, consider using "commitWithin"
+ when adding documents. http://wiki.apache.org/solr/UpdateXmlMessages
+ <autoCommit>
+ <maxDocs>10000</maxDocs>
+ <maxTime>1000</maxTime>
+ </autoCommit>
+ -->
+ <!-- The RunExecutableListener executes an external command from a
+ hook such as postCommit or postOptimize.
+ exe - the name of the executable to run
+ dir - dir to use as the current working directory. default="."
+ wait - the calling thread waits until the executable returns. default="true"
+ args - the arguments to pass to the program. default=nothing
+ env - environment variables to set. default=nothing
+ -->
+ <!-- A postCommit event is fired after every commit or optimize command
+ <listener event="postCommit" class="solr.RunExecutableListener">
+ <str name="exe">solr/bin/snapshooter</str>
+ <str name="dir">.</str>
+ <bool name="wait">true</bool>
+ <arr name="args"> <str>arg1</str> <str>arg2</str> </arr>
+ <arr name="env"> <str>MYVAR=val1</str> </arr>
+ </listener>
+ -->
+ <!-- A postOptimize event is fired only after every optimize command
+ <listener event="postOptimize" class="solr.RunExecutableListener">
+ <str name="exe">snapshooter</str>
+ <str name="dir">solr/bin</str>
+ <bool name="wait">true</bool>
+ </listener>
+ -->
+ </updateHandler>
+ <!-- Use the following format to specify a custom IndexReaderFactory - allows for alternate
+ IndexReader implementations.
+
+ ** Experimental Feature **
+ Please note - Using a custom IndexReaderFactory may prevent certain other features
+ from working. The API to IndexReaderFactory may change without warning or may even
+ be removed from future releases if the problems cannot be resolved.
+
+ ** Features that may not work with custom IndexReaderFactory **
+ The ReplicationHandler assumes a disk-resident index. Using a custom
+ IndexReader implementation may cause incompatibility with ReplicationHandler and
+ may cause replication to not work correctly. See SOLR-1366 for details.
+
+ <indexReaderFactory name="IndexReaderFactory" class="package.class">
+ Parameters as required by the implementation
+ </indexReaderFactory >
+ -->
+ <!-- To set the termInfosIndexDivisor, do this: -->
+ <!--<indexReaderFactory name="IndexReaderFactory" class="org.apache.solr.core.StandardIndexReaderFactory">
+ <int name="termInfosIndexDivisor">12</int>
+ </indexReaderFactory >-->
+ <query>
+ <!-- Maximum number of clauses in a boolean query... in the past, this affected
+ range or prefix queries that expanded to big boolean queries - built in Solr
+ query parsers no longer create queries with this limitation.
+ An exception is thrown if exceeded. -->
+ <maxBooleanClauses>1024</maxBooleanClauses>
+ <!-- There are two implementations of cache available for Solr,
+ LRUCache, based on a synchronized LinkedHashMap, and
+ FastLRUCache, based on a ConcurrentHashMap. FastLRUCache has faster gets
+ and slower puts in single threaded operation and thus is generally faster
+ than LRUCache when the hit ratio of the cache is high (> 75%), and may be
+ faster under other scenarios on multi-cpu systems. -->
+ <!-- Cache used by SolrIndexSearcher for filters (DocSets),
+ unordered sets of *all* documents that match a query.
+ When a new searcher is opened, its caches may be prepopulated
+ or "autowarmed" using data from caches in the old searcher.
+ autowarmCount is the number of items to prepopulate. For LRUCache,
+ the autowarmed items will be the most recently accessed items.
+ Parameters:
+ class - the SolrCache implementation LRUCache or FastLRUCache
+ size - the maximum number of entries in the cache
+ initialSize - the initial capacity (number of entries) of
+ the cache. (see java.util.HashMap)
+ autowarmCount - the number of entries to prepopulate from
+ an old cache.
+ -->
+ <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
+ <!-- Cache used to hold field values that are quickly accessible
+ by document id. The fieldValueCache is created by default
+ even if not configured here.
+ <fieldValueCache
+ class="solr.FastLRUCache"
+ size="512"
+ autowarmCount="128"
+ showItems="32"
+ />
+ -->
+ <!-- queryResultCache caches results of searches - ordered lists of
+ document ids (DocList) based on a query, a sort, and the range
+ of documents requested. -->
+ <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
+ <!-- documentCache caches Lucene Document objects (the stored fields for each document).
+ Since Lucene internal document ids are transient, this cache will not be autowarmed. -->
+ <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
+ <!-- If true, stored fields that are not requested will be loaded lazily.
+ This can result in a significant speed improvement if the usual case is to
+ not load all stored fields, especially if the skipped fields are large
+ compressed text fields.
+ -->
+ <enableLazyFieldLoading>true</enableLazyFieldLoading>
+ <!-- Example of a generic cache. These caches may be accessed by name
+ through SolrIndexSearcher.getCache(), cacheLookup(), and cacheInsert().
+ The purpose is to enable easy caching of user/application level data.
+ The regenerator argument should be specified as an implementation
+ of solr.search.CacheRegenerator if autowarming is desired. -->
+ <!--
+ <cache name="myUserCache"
+ class="solr.LRUCache"
+ size="4096"
+ initialSize="1024"
+ autowarmCount="1024"
+ regenerator="org.mycompany.mypackage.MyRegenerator"
+ />
+ -->
+ <!-- An optimization that attempts to use a filter to satisfy a search.
+ If the requested sort does not include score, then the filterCache
+ will be checked for a filter matching the query. If found, the filter
+ will be used as the source of document ids, and then the sort will be
+ applied to that.
+ <useFilterForSortedQuery>true</useFilterForSortedQuery>
+ -->
+ <!-- An optimization for use with the queryResultCache. When a search
+ is requested, a superset of the requested number of document ids
+ are collected. For example, if a search for a particular query
+ requests matching documents 10 through 19, and queryResultWindowSize is 50,
+ then documents 0 through 49 will be collected and cached. Any further
+ requests in that range can be satisfied via the cache. -->
+ <queryResultWindowSize>20</queryResultWindowSize>
+ <!-- Maximum number of documents to cache for any entry in the
+ queryResultCache. -->
+ <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
+ <!-- a newSearcher event is fired whenever a new searcher is being prepared
+ and there is a current searcher handling requests (aka registered).
+ It can be used to prime certain caches to prevent long request times for
+ certain requests.
+ -->
+ <!-- QuerySenderListener takes an array of NamedList and executes a
+ local query request for each NamedList in sequence. -->
+ <listener event="newSearcher" class="solr.QuerySenderListener">
+ <arr name="queries">
+ <!--
+ <lst> <str name="q">solr</str> <str name="start">0</str> <str name="rows">10</str> </lst>
+ <lst> <str name="q">rocks</str> <str name="start">0</str> <str name="rows">10</str> </lst>
+ <lst><str name="q">static newSearcher warming query from solrconfig.xml</str></lst>
+ -->
+ </arr>
+ </listener>
+ <!-- a firstSearcher event is fired whenever a new searcher is being
+ prepared but there is no current registered searcher to handle
+ requests or to gain autowarming data from. -->
+ <listener event="firstSearcher" class="solr.QuerySenderListener">
+ <arr name="queries">
+ <lst>
+ <str name="q">solr rocks</str>
+ <str name="start">0</str>
+ <str name="rows">10</str>
+ </lst>
+ <lst>
+ <str name="q">static firstSearcher warming query from solrconfig.xml</str>
+ </lst>
+ </arr>
+ </listener>
+ <!-- If a search request comes in and there is no current registered searcher,
+ then immediately register the still warming searcher and use it. If
+ "false" then all requests will block until the first searcher is done
+ warming. -->
+ <useColdSearcher>false</useColdSearcher>
+ <!-- Maximum number of searchers that may be warming in the background
+ concurrently. An error is returned if this limit is exceeded. Recommend
+ 1-2 for read-only slaves, higher for masters w/o cache warming. -->
+ <maxWarmingSearchers>2</maxWarmingSearchers>
+ </query>
+ <!--
+ Let the dispatch filter handle /select?qt=XXX
+ handleSelect=true will use consistent error handling for /select and /update
+ handleSelect=false will use solr1.1 style error formatting
+ -->
+ <requestDispatcher handleSelect="true">
+ <!--Make sure your system has some authentication before enabling remote streaming! -->
+ <requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="2048000"/>
+ <!-- Set HTTP caching related parameters (for proxy caches and clients).
+
+ To get the behaviour of Solr 1.2 (ie: no caching related headers)
+ use the never304="true" option and do not specify a value for
+ <cacheControl>
+ -->
+ <!-- <httpCaching never304="true"> -->
+ <httpCaching lastModifiedFrom="openTime" etagSeed="Solr">
+ <!-- lastModifiedFrom="openTime" is the default; the Last-Modified value
+ (and validation against If-Modified-Since requests) will all be
+ relative to when the current Searcher was opened.
+ You can change it to lastModifiedFrom="dirLastMod" if you want the
+ value to correspond exactly to when the physical index was last
+ modified.
+
+ etagSeed="..." is an option you can change to force the ETag
+ header (and validation against If-None-Match requests) to be
+ different even if the index has not changed (ie: when making
+ significant changes to your config file)
+
+ lastModifiedFrom and etagSeed are both ignored if you use the
+ never304="true" option.
+ -->
+ <!-- If you include a <cacheControl> directive, it will be used to
+ generate a Cache-Control header, as well as an Expires header
+ if the value contains "max-age="
+
+ By default, no Cache-Control header is generated.
+
+ You can use the <cacheControl> option even if you have set
+ never304="true"
+ -->
+ <!-- <cacheControl>max-age=30, public</cacheControl> -->
+ </httpCaching>
+ </requestDispatcher>
+ <!-- requestHandler plugins... incoming queries will be dispatched to the
+ correct handler based on the path or the qt (query type) param.
+ Names starting with a '/' are accessed with a path equal to the
+ registered name. Names without a leading '/' are accessed with:
+ http://host/app/select?qt=name
+ If no qt is defined, the requestHandler that declares default="true"
+ will be used.
+ -->
+ <requestHandler name="standard" class="solr.SearchHandler" default="true">
+ <!-- default values for query parameters -->
+ <lst name="defaults">
+ <str name="echoParams">explicit</str>
+ <!--
+ <int name="rows">10</int>
+ <str name="fl">*</str>
+ <str name="version">2.1</str>
+ -->
+ </lst>
+ </requestHandler>
+ <!-- Please refer to http://wiki.apache.org/solr/SolrReplication for details on configuring replication -->
+ <!-- remove the <lst name="master"> section if this is just a slave -->
+ <!-- remove the <lst name="slave"> section if this is just a master -->
+ <!--
+<requestHandler name="/replication" class="solr.ReplicationHandler" >
+ <lst name="master">
+ <str name="replicateAfter">commit</str>
+ <str name="replicateAfter">startup</str>
+ <str name="confFiles">schema.xml,stopwords.txt</str>
+ </lst>
+ <lst name="slave">
+ <str name="masterUrl">http://localhost:8983/solr/replication</str>
+ <str name="pollInterval">00:00:60</str>
+ </lst>
+</requestHandler>-->
+ <!-- DisMaxRequestHandler allows easy searching across multiple fields
+ for simple user-entered phrases. Its implementation is now
+ just the standard SearchHandler with a default query type
+ of "dismax".
+ see http://wiki.apache.org/solr/DisMaxRequestHandler
+ -->
+ <requestHandler name="dismax" class="solr.SearchHandler">
+ <lst name="defaults">
+ <str name="defType">dismax</str>
+ <str name="echoParams">explicit</str>
+ <float name="tie">0.01</float>
+ <str name="qf">
+ text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
+ </str>
+ <str name="pf">
+ text^0.2 features^1.1 name^1.5 manu^1.4 manu_exact^1.9
+ </str>
+ <str name="bf">
+ popularity^0.5 recip(price,1,1000,1000)^0.3
+ </str>
+ <str name="fl">
+ id,name,price,score
+ </str>
+ <str name="mm">
+ 2<-1 5<-2 6<90%
+ </str>
+ <int name="ps">100</int>
+ <str name="q.alt">*:*</str>
+ <!-- example highlighter config, enable per-query with hl=true -->
+ <str name="hl.fl">text features name</str>
+ <!-- for this field, we want no fragmenting, just highlighting -->
+ <str name="f.name.hl.fragsize">0</str>
+ <!-- instructs Solr to return the field itself if no query terms are
+ found -->
+ <str name="f.name.hl.alternateField">name</str>
+ <str name="f.text.hl.fragmenter">regex</str>
+ <!-- defined below -->
+ </lst>
+ </requestHandler>
+ <!-- Note how you can register the same handler multiple times with
+ different names (and different init parameters)
+ -->
+ <requestHandler name="partitioned" class="solr.SearchHandler">
+ <lst name="defaults">
+ <str name="defType">dismax</str>
+ <str name="echoParams">explicit</str>
+ <str name="qf">text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0</str>
+ <str name="mm">2<-1 5<-2 6<90%</str>
+ <!-- This is an example of using Date Math to specify a constantly
+ moving date range in a config...
+ -->
+ <str name="bq">incubationdate_dt:[* TO NOW/DAY-1MONTH]^2.2</str>
+ </lst>
+ <!-- In addition to defaults, "appends" params can be specified
+ to identify values which should be appended to the list of
+ multi-val params from the query (or the existing "defaults").
+
+ In this example, the param "fq=instock:true" will be appended to
+ any query time fq params the user may specify, as a mechanism for
+ partitioning the index, independent of any user selected filtering
+ that may also be desired (perhaps as a result of faceted searching).
+
+ NOTE: there is *absolutely* nothing a client can do to prevent these
+ "appends" values from being used, so don't use this mechanism
+ unless you are sure you always want it.
+ -->
+ <lst name="appends">
+ <str name="fq">inStock:true</str>
+ </lst>
+ <!-- "invariants" are a way of letting the Solr maintainer lock down
+ the options available to Solr clients. Any params values
+ specified here are used regardless of what values may be specified
+ in either the query, the "defaults", or the "appends" params.
+
+ In this example, the facet.field and facet.query params are fixed,
+ limiting the facets clients can use. Faceting is not turned on by
+ default - but if the client does specify facet=true in the request,
+ these are the only facets they will be able to see counts for;
+ regardless of what other facet.field or facet.query params they
+ may specify.
+
+ NOTE: there is *absolutely* nothing a client can do to prevent these
+ "invariants" values from being used, so don't use this mechanism
+ unless you are sure you always want it.
+ -->
+ <lst name="invariants">
+ <str name="facet.field">cat</str>
+ <str name="facet.field">manu_exact</str>
+ <str name="facet.query">price:[* TO 500]</str>
+ <str name="facet.query">price:[500 TO *]</str>
+ </lst>
+ </requestHandler>
+ <!--
+ Search components are registered to SolrCore and used by Search Handlers
+
+ By default, the following components are available:
+
+ <searchComponent name="query" class="org.apache.solr.handler.component.QueryComponent" />
+ <searchComponent name="facet" class="org.apache.solr.handler.component.FacetComponent" />
+ <searchComponent name="mlt" class="org.apache.solr.handler.component.MoreLikeThisComponent" />
+ <searchComponent name="highlight" class="org.apache.solr.handler.component.HighlightComponent" />
+ <searchComponent name="stats" class="org.apache.solr.handler.component.StatsComponent" />
+ <searchComponent name="debug" class="org.apache.solr.handler.component.DebugComponent" />
+
+ Default configuration in a requestHandler would look like:
+ <arr name="components">
+ <str>query</str>
+ <str>facet</str>
+ <str>mlt</str>
+ <str>highlight</str>
+ <str>stats</str>
+ <str>debug</str>
+ </arr>
+
+ If you register a searchComponent to one of the standard names, that will be used instead.
+ To insert components before or after the 'standard' components, use:
+
+ <arr name="first-components">
+ <str>myFirstComponentName</str>
+ </arr>
+
+ <arr name="last-components">
+ <str>myLastComponentName</str>
+ </arr>
+ -->
+ <!-- The spell check component can return a list of alternative spelling
+ suggestions. -->
+ <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
+ <str name="queryAnalyzerFieldType">textSpell</str>
+ <lst name="spellchecker">
+ <str name="name">default</str>
+ <str name="field">name</str>
+ <str name="spellcheckIndexDir">./spellchecker</str>
+ </lst>
+ <!-- a spellchecker that uses a different distance measure
+ <lst name="spellchecker">
+ <str name="name">jarowinkler</str>
+ <str name="field">spell</str>
+ <str name="distanceMeasure">org.apache.lucene.search.spell.JaroWinklerDistance</str>
+ <str name="spellcheckIndexDir">./spellchecker2</str>
+ </lst>
+ -->
+ <!-- a file based spell checker
+ <lst name="spellchecker">
+ <str name="classname">solr.FileBasedSpellChecker</str>
+ <str name="name">file</str>
+ <str name="sourceLocation">spellings.txt</str>
+ <str name="characterEncoding">UTF-8</str>
+ <str name="spellcheckIndexDir">./spellcheckerFile</str>
+ </lst>
+ -->
+ </searchComponent>
+ <!-- A request handler utilizing the spellcheck component.
+ #############################################################################
+ NOTE: This is purely an example. The whole purpose of the
+ SpellCheckComponent is to hook it into the request handler that handles
+ queries (i.e. the standard or dismax SearchHandler) so that a separate
+ request is not needed to get suggestions.
+
+ IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS NOT WHAT YOU
+ WANT FOR YOUR PRODUCTION SYSTEM!
+ #############################################################################
+ -->
+ <requestHandler name="/spell" class="solr.SearchHandler" lazy="true">
+ <lst name="defaults">
+ <!-- omp = Only More Popular -->
+ <str name="spellcheck.onlyMorePopular">false</str>
+ <!-- exr = Extended Results -->
+ <str name="spellcheck.extendedResults">false</str>
+ <!-- The number of suggestions to return -->
+ <str name="spellcheck.count">1</str>
+ </lst>
+ <arr name="last-components">
+ <str>spellcheck</str>
+ </arr>
+ </requestHandler>
+ <searchComponent name="tvComponent" class="org.apache.solr.handler.component.TermVectorComponent"/>
+ <!-- A request handler for working with the tvComponent. This is purely an example.
+ You will likely want to add the component to your already specified request handlers. -->
+ <requestHandler name="tvrh" class="org.apache.solr.handler.component.SearchHandler">
+ <lst name="defaults">
+ <bool name="tv">true</bool>
+ </lst>
+ <arr name="last-components">
+ <str>tvComponent</str>
+ </arr>
+ </requestHandler>
+ <!-- Clustering Component
+ http://wiki.apache.org/solr/ClusteringComponent
+ This relies on third party jars which are not included in the release.
+ To use this component (and the "/clustering" handler),
+ those jars will need to be downloaded, and you'll need to set the
+ solr.clustering.enabled system property when running solr...
+ java -Dsolr.clustering.enabled=true -jar start.jar
+ -->
+ <searchComponent name="clusteringComponent" enable="${solr.clustering.enabled:false}" class="org.apache.solr.handler.clustering.ClusteringComponent">
+ <!-- Declare an engine -->
+ <lst name="engine">
+ <!-- The name, only one can be named "default" -->
+ <str name="name">default</str>
+ <!--
+ Class name of Carrot2 clustering algorithm. Currently available algorithms are:
+
+ * org.carrot2.clustering.lingo.LingoClusteringAlgorithm
+ * org.carrot2.clustering.stc.STCClusteringAlgorithm
+
+ See http://project.carrot2.org/algorithms.html for the algorithm's characteristics.
+ -->
+ <str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>
+ <!--
+ Overriding values for Carrot2 default algorithm attributes. For a description
+ of all available attributes, see: http://download.carrot2.org/stable/manual/#chapter.components.
+ Use attribute key as name attribute of str elements below. These can be further
+ overridden for individual requests by specifying attribute key as request
+ parameter name and attribute value as parameter value.
+ -->
+ <str name="LingoClusteringAlgorithm.desiredClusterCountBase">20</str>
+ </lst>
+ <lst name="engine">
+ <str name="name">stc</str>
+ <str name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>
+ </lst>
+ </searchComponent>
+ <requestHandler name="/clustering" enable="${solr.clustering.enabled:false}" class="solr.SearchHandler">
+ <lst name="defaults">
+ <bool name="clustering">true</bool>
+ <str name="clustering.engine">default</str>
+ <bool name="clustering.results">true</bool>
+ <!-- The title field -->
+ <str name="carrot.title">name</str>
+ <str name="carrot.url">id</str>
+ <!-- The field to cluster on -->
+ <str name="carrot.snippet">features</str>
+ <!-- produce summaries -->
+ <bool name="carrot.produceSummary">true</bool>
+ <!-- the maximum number of labels per cluster -->
+ <!--<int name="carrot.numDescriptions">5</int>-->
+ <!-- produce sub clusters -->
+ <bool name="carrot.outputSubClusters">false</bool>
+ </lst>
+ <arr name="last-components">
+ <str>clusteringComponent</str>
+ </arr>
+ </requestHandler>
+ <!-- Solr Cell: http://wiki.apache.org/solr/ExtractingRequestHandler -->
+ <requestHandler name="/update/extract" class="org.apache.solr.handler.extraction.ExtractingRequestHandler" startup="lazy">
+ <lst name="defaults">
+ <!-- All the main content goes into "text"... if you need to return
+ the extracted text or do highlighting, use a stored field. -->
+ <str name="fmap.content">text</str>
+ <str name="lowernames">true</str>
+ <str name="uprefix">ignored_</str>
+ <!-- capture link hrefs but ignore div attributes -->
+ <str name="captureAttr">true</str>
+ <str name="fmap.a">links</str>
+ <str name="fmap.div">ignored_</str>
+ </lst>
+ </requestHandler>
+ <!-- A component to return terms and document frequency of those terms.
+ This component does not yet support distributed search. -->
+ <searchComponent name="termsComponent" class="org.apache.solr.handler.component.TermsComponent"/>
+ <requestHandler name="/terms" class="org.apache.solr.handler.component.SearchHandler">
+ <lst name="defaults">
+ <bool name="terms">true</bool>
+ </lst>
+ <arr name="components">
+ <str>termsComponent</str>
+ </arr>
+ </requestHandler>
+ <!-- a search component that enables you to configure the top results for
+ a given query regardless of the normal lucene scoring.-->
+ <searchComponent name="elevator" class="solr.QueryElevationComponent">
+ <!-- pick a fieldType to analyze queries -->
+ <str name="queryFieldType">string</str>
+ <str name="config-file">elevate.xml</str>
+ </searchComponent>
+ <!-- a request handler utilizing the elevator component -->
+ <requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy">
+ <lst name="defaults">
+ <str name="echoParams">explicit</str>
+ </lst>
+ <arr name="last-components">
+ <str>elevator</str>
+ </arr>
+ </requestHandler>
+ <!-- Update request handler.
+
+ Note: Since solr1.1, request handlers require a valid content type header if posted in
+ the body. For example, curl now requires: -H 'Content-type:text/xml; charset=utf-8'
+ The response format differs from solr1.1 formatting and returns a standard error code.
+ To enable solr1.1 behavior, remove the /update handler or change its path
+ -->
+ <requestHandler name="/update" class="solr.XmlUpdateRequestHandler"/>
+ <requestHandler name="/update/javabin" class="solr.BinaryUpdateRequestHandler"/>
+ <!--
+ Analysis request handler. Since Solr 1.3. Use to return how a document is analyzed. Useful
+ for debugging and as a token server for other types of applications.
+
+ This is deprecated in favor of the improved DocumentAnalysisRequestHandler and FieldAnalysisRequestHandler
+
+ <requestHandler name="/analysis" class="solr.AnalysisRequestHandler" />
+ -->
+ <!--
+ An analysis handler that provides a breakdown of the analysis process of provided documents. This handler expects a
+ (single) content stream with the following format:
+
+ <docs>
+ <doc>
+ <field name="id">1</field>
+ <field name="name">The Name</field>
+ <field name="text">The Text Value</field>
+ </doc>
+ <doc>...</doc>
+ <doc>...</doc>
+ ...
+ </docs>
+
+ Note: Each document must contain a field which serves as the unique key. This key is used in the returned
+ response to associate an analysis breakdown with the analyzed document.
+
+ Like the FieldAnalysisRequestHandler, this handler also supports query analysis by
+ sending either an "analysis.query" or "q" request parameter that holds the query text to be analyzed. It also
+ supports the "analysis.showmatch" parameter which, when set to true, marks all field tokens that match the
+ query tokens as a "match".
+ -->
+ <requestHandler name="/analysis/document" class="solr.DocumentAnalysisRequestHandler"/>
+ <!--
+ RequestHandler that provides much the same functionality as analysis.jsp. Provides the ability
+ to specify multiple field types and field names in the same request and outputs index-time and
+ query-time analysis for each of them.
+
+ Request parameters are:
+ analysis.fieldname - The field name whose analyzers are to be used
+ analysis.fieldtype - The field type whose analyzers are to be used
+ analysis.fieldvalue - The text for index-time analysis
+ q (or analysis.q) - The text for query time analysis
+ analysis.showmatch (true|false) - When set to true and when query analysis is performed, the produced
+ tokens of the field value analysis will be marked as "matched" for every
+ token that is produced by the query analysis
+ -->
+ <requestHandler name="/analysis/field" class="solr.FieldAnalysisRequestHandler"/>
+ <!-- CSV update handler, loaded on demand -->
+ <requestHandler name="/update/csv" class="solr.CSVRequestHandler" startup="lazy"/>
+ <!--
+ Admin Handlers - This will register all the standard admin RequestHandlers. Adding
+ this single handler is equivalent to registering:
+
+ <requestHandler name="/admin/luke" class="org.apache.solr.handler.admin.LukeRequestHandler" />
+ <requestHandler name="/admin/system" class="org.apache.solr.handler.admin.SystemInfoHandler" />
+ <requestHandler name="/admin/plugins" class="org.apache.solr.handler.admin.PluginInfoHandler" />
+ <requestHandler name="/admin/threads" class="org.apache.solr.handler.admin.ThreadDumpHandler" />
+ <requestHandler name="/admin/properties" class="org.apache.solr.handler.admin.PropertiesRequestHandler" />
+ <requestHandler name="/admin/file" class="org.apache.solr.handler.admin.ShowFileRequestHandler" >
+
+ If you wish to hide files under ${solr.home}/conf, explicitly register the ShowFileRequestHandler using:
+ <requestHandler name="/admin/file" class="org.apache.solr.handler.admin.ShowFileRequestHandler" >
+ <lst name="invariants">
+ <str name="hidden">synonyms.txt</str>
+ <str name="hidden">anotherfile.txt</str>
+ </lst>
+ </requestHandler>
+ -->
+ <requestHandler name="/admin/" class="org.apache.solr.handler.admin.AdminHandlers"/>
+ <!-- ping/healthcheck -->
+ <requestHandler name="/admin/ping" class="PingRequestHandler">
+ <lst name="defaults">
+ <str name="qt">standard</str>
+ <str name="q">solrpingquery</str>
+ <str name="echoParams">all</str>
+ </lst>
+ </requestHandler>
+ <!-- Echo the request contents back to the client -->
+ <requestHandler name="/debug/dump" class="solr.DumpRequestHandler">
+ <lst name="defaults">
+ <str name="echoParams">explicit</str>
+ <!-- for all params (including the default etc) use: 'all' -->
+ <str name="echoHandler">true</str>
+ </lst>
+ </requestHandler>
+ <highlighting>
+ <!-- Configure the standard fragmenter -->
+ <!-- This could most likely be commented out in the "default" case -->
+ <fragmenter name="gap" class="org.apache.solr.highlight.GapFragmenter" default="true">
+ <lst name="defaults">
+ <int name="hl.fragsize">100</int>
+ </lst>
+ </fragmenter>
+ <!-- A regular-expression-based fragmenter (f.i., for sentence extraction) -->
+ <fragmenter name="regex" class="org.apache.solr.highlight.RegexFragmenter">
+ <lst name="defaults">
+ <!-- slightly smaller fragsizes work better because of slop -->
+ <int name="hl.fragsize">70</int>
+ <!-- allow 50% slop on fragment sizes -->
+ <float name="hl.regex.slop">0.5</float>
+ <!-- a basic sentence pattern -->
+ <str name="hl.regex.pattern">[-\w ,/\n\"']{20,200}</str>
+ </lst>
+ </fragmenter>
+ <!-- Configure the standard formatter -->
+ <formatter name="html" class="org.apache.solr.highlight.HtmlFormatter" default="true">
+ <lst name="defaults">
+ <str name="hl.simple.pre"><![CDATA[<em>]]></str>
+ <str name="hl.simple.post"><![CDATA[</em>]]></str>
+ </lst>
+ </formatter>
+ </highlighting>
+ <!-- An example dedup update processor that creates the "id" field on the fly
+ based on the hash code of some other fields. This example has overwriteDupes
+ set to false since we are using the id field as the signatureField and Solr
+ will maintain uniqueness based on that anyway.
+
+ You have to link the chain to an update handler above to use it ie:
+ <requestHandler name="/update" class="solr.XmlUpdateRequestHandler">
+ <lst name="defaults">
+ <str name="update.processor">dedupe</str>
+ </lst>
+ </requestHandler>
+ -->
+ <!--
+ <updateRequestProcessorChain name="dedupe">
+ <processor class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory">
+ <bool name="enabled">true</bool>
+ <str name="signatureField">id</str>
+ <bool name="overwriteDupes">false</bool>
+ <str name="fields">name,features,cat</str>
+ <str name="signatureClass">org.apache.solr.update.processor.Lookup3Signature</str>
+ </processor>
+ <processor class="solr.LogUpdateProcessorFactory" />
+ <processor class="solr.RunUpdateProcessorFactory" />
+ </updateRequestProcessorChain>
+ -->
+ <!-- queryResponseWriter plugins... query responses will be written using the
+ writer specified by the 'wt' request parameter matching the name of a registered
+ writer.
+ The "default" writer is the default and will be used if 'wt' is not specified
+ in the request. XMLResponseWriter will be used if nothing is specified here.
+ The json, python, and ruby writers are also available by default.
+
+ <queryResponseWriter name="xml" class="org.apache.solr.request.XMLResponseWriter" default="true"/>
+ <queryResponseWriter name="json" class="org.apache.solr.request.JSONResponseWriter"/>
+ <queryResponseWriter name="python" class="org.apache.solr.request.PythonResponseWriter"/>
+ <queryResponseWriter name="ruby" class="org.apache.solr.request.RubyResponseWriter"/>
+ <queryResponseWriter name="php" class="org.apache.solr.request.PHPResponseWriter"/>
+ <queryResponseWriter name="phps" class="org.apache.solr.request.PHPSerializedResponseWriter"/>
+
+ <queryResponseWriter name="custom" class="com.example.MyResponseWriter"/>
+ -->
+ <!-- XSLT response writer transforms the XML output by any xslt file found
+ in Solr's conf/xslt directory. Changes to xslt files are checked for
+ every xsltCacheLifetimeSeconds.
+ -->
+ <queryResponseWriter name="xslt" class="org.apache.solr.request.XSLTResponseWriter">
+ <int name="xsltCacheLifetimeSeconds">5</int>
+ </queryResponseWriter>
+ <!-- example of registering a query parser
+ <queryParser name="lucene" class="org.apache.solr.search.LuceneQParserPlugin"/>
+ -->
+ <!-- example of registering a custom function parser
+ <valueSourceParser name="myfunc" class="com.mycompany.MyValueSourceParser" />
+ -->
+ <!-- config for the admin interface -->
+ <admin>
+ <defaultQuery>solr</defaultQuery>
+ <!-- configure a healthcheck file for servers behind a loadbalancer
+ <healthcheck type="file">server-enabled</healthcheck>
+ -->
+ </admin>
+ <requestHandler class="solr.MoreLikeThisHandler" name="/mlt">
+ <lst name="defaults">
+ <str name="mlt.mintf">1</str>
+ <str name="mlt.mindf">2</str>
+ </lst>
+ </requestHandler>
+</config>
diff --git a/src/solr/conf/spellings.txt b/src/solr/conf/spellings.txt
new file mode 100644
index 0000000..d7ede6f
--- /dev/null
+++ b/src/solr/conf/spellings.txt
@@ -0,0 +1,2 @@
+pizza
+history
\ No newline at end of file
diff --git a/src/solr/conf/stopwords.txt b/src/solr/conf/stopwords.txt
new file mode 100644
index 0000000..0a23ec2
--- /dev/null
+++ b/src/solr/conf/stopwords.txt
@@ -0,0 +1,57 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#-----------------------------------------------------------------------
+# a couple of test stopwords to test that the words are really being
+# configured from this file:
+stopworda
+stopwordb
+
+#Standard english stop words taken from Lucene's StopAnalyzer
+a
+an
+and
+are
+as
+at
+be
+but
+by
+for
+if
+in
+into
+is
+it
+no
+not
+of
+on
+or
+s
+such
+t
+that
+the
+their
+then
+there
+these
+they
+this
+to
+was
+will
+with
diff --git a/src/solr/conf/synonyms.txt b/src/solr/conf/synonyms.txt
new file mode 100644
index 0000000..fa4755d
--- /dev/null
+++ b/src/solr/conf/synonyms.txt
@@ -0,0 +1,30 @@
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#-----------------------------------------------------------------------
+#some test synonym mappings unlikely to appear in real input text
+aaa => aaaa
+bbb => bbbb1 bbbb2
+ccc => cccc1,cccc2
+a\=>a => b\=>b
+a\,a => b\,b
+fooaaa,baraaa,bazaaa
+
+# Some synonym groups specific to this example
+GB,gib,gigabyte,gigabytes
+MB,mib,megabyte,megabytes
+Television, Televisions, TV, TVs
+#notice we use "gib" instead of "GiB" so any WordDelimiterFilter coming
+#after us won't split it into two words.
+
+# Synonym mappings can be used for spelling correction too
+pixima => pixma
--
1.7.3.4
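A note for reviewers of the solrconfig.xml above: the queryResultWindowSize comment describes how Solr collects a superset of the requested rows so that nearby paging requests are served from the queryResultCache. A minimal pure-Ruby sketch of that window arithmetic (illustrative only; `cached_window` is a made-up name, not a Solr or Sunspot API):

```ruby
# Illustrative sketch of the queryResultWindowSize behaviour described in the
# solrconfig.xml comment above (the method name is made up, not a Solr API).
# Solr rounds the upper bound of a paged request up to the next multiple of
# the window size and collects/caches document ids in [0, bound).
def cached_window(start, rows, window_size)
  requested_upper = start + rows
  # Round up to the nearest multiple of window_size (integer arithmetic).
  cached_upper = ((requested_upper + window_size - 1) / window_size) * window_size
  (0...cached_upper)
end

# The example from the config comment: documents 10..19 requested with a
# window of 50 means documents 0 through 49 are collected and cached.
cached_window(10, 10, 50)  # => 0...50
```

With the shipped default of 20, a request for the first ten rows also warms the cache for the second page of ten.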
[PATCH aeolus] New UI - Admin/Pool_families
by jzigmund@redhat.com
From: Jozef Zigmund <jzigmund@redhat.com>
This patch contains updated controller for basic CRUD operations, views for actions and cucumber tests.
---
.../controllers/admin/pool_families_controller.rb | 67 ++++++++++++++++++++
src/app/controllers/admin/providers_controller.rb | 2 +-
src/app/models/pool_family.rb | 2 +-
src/app/views/admin/pool_families/_form.haml | 6 ++
src/app/views/admin/pool_families/_list.haml | 23 +++++++
src/app/views/admin/pool_families/_properties.haml | 4 +
src/app/views/admin/pool_families/edit.haml | 7 ++
src/app/views/admin/pool_families/index.haml | 3 +-
src/app/views/admin/pool_families/new.haml | 3 +
src/app/views/admin/pool_families/show.haml | 5 ++
src/config/routes.rb | 3 +-
src/features/pool_family.feature | 47 ++++++++++++++
src/features/step_definitions/pool_family_steps.rb | 28 ++++++++
13 files changed, 196 insertions(+), 4 deletions(-)
create mode 100644 src/app/views/admin/pool_families/_form.haml
create mode 100644 src/app/views/admin/pool_families/_list.haml
create mode 100644 src/app/views/admin/pool_families/_properties.haml
create mode 100644 src/app/views/admin/pool_families/edit.haml
create mode 100644 src/app/views/admin/pool_families/new.haml
create mode 100644 src/app/views/admin/pool_families/show.haml
create mode 100644 src/features/pool_family.feature
create mode 100644 src/features/step_definitions/pool_family_steps.rb
diff --git a/src/app/controllers/admin/pool_families_controller.rb b/src/app/controllers/admin/pool_families_controller.rb
index a1a3a9a..cfe7025 100644
--- a/src/app/controllers/admin/pool_families_controller.rb
+++ b/src/app/controllers/admin/pool_families_controller.rb
@@ -1,6 +1,73 @@
class Admin::PoolFamiliesController < ApplicationController
before_filter :require_user
+ before_filter :load_pool_families, :only => [:index, :show]
def index
end
+
+ def new
+ @pool_family = PoolFamily.new
+ end
+
+ def create
+ @pool_family = PoolFamily.new(params[:pool_family])
+ unless @pool_family.save
+ flash.now[:warning] = "Pool family creation failed."
+ render :new and return
+ else
+ flash[:notice] = "Pool family was added."
+ redirect_to admin_pool_families_path
+ end
+ end
+
+ def edit
+ @pool_family = PoolFamily.find(params[:id])
+ end
+
+ def update
+ @pool_family = PoolFamily.find(params[:id])
+ unless @pool_family.update_attributes(params[:pool_family])
+ flash[:error] = "Pool Family wasn't updated!"
+ render :action => 'edit' and return
+ else
+ flash[:notice] = "Pool Family was updated!"
+ redirect_to admin_pool_families_path
+ end
+ end
+
+ def show
+ @pool_family = PoolFamily.find(params[:id])
+ @url_params = params.clone
+ @tab_captions = ['Properties', 'History', 'Permissions', 'Provider Accounts', 'Pools']
+ @details_tab = params[:details_tab].blank? ? 'properties' : params[:details_tab]
+ respond_to do |format|
+ format.js do
+ if @url_params.delete :details_pane
+ render :partial => 'layouts/details_pane' and return
+ end
+ render :partial => @details_tab and return
+ end
+ format.html { render :show }
+ end
+ end
+
+ def multi_destroy
+ PoolFamily.destroy(params[:pool_family_selected])
+ redirect_to admin_pool_families_path
+ end
+
+ protected
+
+ def load_pool_families
+ @header = [{ :name => "Name", :sort_attr => :name},
+ { :name => "Quota limit", :sort_attr => :name},
+ { :name => "Quota currently in use", :sort_attr => :name},
+ ]
+ @pool_families = PoolFamily.paginate(:all,
+ :page => params[:page] || 1,
+ :order => ( params[:order_field] || 'name' ) + ' ' + (params[:order_dir] || 'asc')
+ )
+ @url_params = params.clone
+ end
end
+
diff --git a/src/app/controllers/admin/providers_controller.rb b/src/app/controllers/admin/providers_controller.rb
index 5f2f437..aac2c1d 100644
--- a/src/app/controllers/admin/providers_controller.rb
+++ b/src/app/controllers/admin/providers_controller.rb
@@ -28,7 +28,7 @@ class Admin::ProvidersController < ApplicationController
end
render :partial => @details_tab and return
end
- format.html { render :action => 'show'}
+ format.html { render :action => 'show' }
end
end
diff --git a/src/app/models/pool_family.rb b/src/app/models/pool_family.rb
index c070ee8..6925680 100644
--- a/src/app/models/pool_family.rb
+++ b/src/app/models/pool_family.rb
@@ -20,7 +20,7 @@
# Likewise, all the methods added will be available for all controllers.
class PoolFamily < ActiveRecord::Base
-
+ include PermissionedObject
DEFAULT_POOL_FAMILY_KEY = "default_pool_family"
has_many :pools, :dependent => :destroy
diff --git a/src/app/views/admin/pool_families/_form.haml b/src/app/views/admin/pool_families/_form.haml
new file mode 100644
index 0000000..362b67c
--- /dev/null
+++ b/src/app/views/admin/pool_families/_form.haml
@@ -0,0 +1,6 @@
+= form.error_message_on :name, 'Name'
+%fieldset.clear
= form.label :name, 'Pool Family Name:'
+ = form.text_field :name, :title => 'pool_family_name', :value => @pool_family.name, :class => "clear grid_4 alpha"
+%fieldset.clear
+ = form.submit "Save", :class => "submit formbutton"
diff --git a/src/app/views/admin/pool_families/_list.haml b/src/app/views/admin/pool_families/_list.haml
new file mode 100644
index 0000000..7c31719
--- /dev/null
+++ b/src/app/views/admin/pool_families/_list.haml
@@ -0,0 +1,23 @@
+- form_tag do
+ = link_to "Create", new_admin_pool_family_path, :class => "button"
+ = restful_submit_tag "Delete", 'destroy', multi_destroy_admin_pool_families_path, 'DELETE', :id => 'delete_button'
+
+ %p
+ Select:
+ = link_to "All", @url_params.merge(:select => 'all')
+ %span> ,
+ = link_to "None", @url_params.merge(:select => 'none')
+
+ %table#pool_families_table
+ = sortable_table_header @header
+ - unless @pool_families.blank?
+ - @pool_families.each do |pool_family|
+ %tr
+ %td
+ - selected = @url_params[:select] == 'all'
+ %input{:name => "pool_family_selected[]", :type => "checkbox", :value => pool_family.id, :id => "pool_family_checkbox_#{pool_family.id}", :checked => selected }
+ = link_to pool_family.name, admin_pool_family_path(pool_family)
+ %td
+
+ %td
+ xx %
diff --git a/src/app/views/admin/pool_families/_properties.haml b/src/app/views/admin/pool_families/_properties.haml
new file mode 100644
index 0000000..02d197e
--- /dev/null
+++ b/src/app/views/admin/pool_families/_properties.haml
@@ -0,0 +1,4 @@
+%h3
+ Properties for
+ = @pool_family.name
+= link_to "Edit", edit_admin_pool_family_path(@pool_family), {:class => 'button'}
diff --git a/src/app/views/admin/pool_families/edit.haml b/src/app/views/admin/pool_families/edit.haml
new file mode 100644
index 0000000..1c09438
--- /dev/null
+++ b/src/app/views/admin/pool_families/edit.haml
@@ -0,0 +1,7 @@
+%h3
+ Editing
+ = @pool_family.name
+ Pool Family
+
+-form_for @pool_family, :url => admin_pool_family_path(@pool_family), :html => { :method => :put } do |f|
= render :partial => 'form', :locals => { :form => f, :cancel_path => admin_pool_families_path }
diff --git a/src/app/views/admin/pool_families/index.haml b/src/app/views/admin/pool_families/index.haml
index e50f43b..62ccbc6 100644
--- a/src/app/views/admin/pool_families/index.haml
+++ b/src/app/views/admin/pool_families/index.haml
@@ -1 +1,2 @@
-admin/pool_families/index.haml
+- content_for :list do
+ = render :partial => 'list'
diff --git a/src/app/views/admin/pool_families/new.haml b/src/app/views/admin/pool_families/new.haml
new file mode 100644
index 0000000..7dbac4c
--- /dev/null
+++ b/src/app/views/admin/pool_families/new.haml
@@ -0,0 +1,3 @@
+%h2 Create a new Pool family
+- form_for @pool_family, :url => admin_pool_families_path do |f|
+ = render :partial => "form", :locals => { :form => f, :cancel_path => admin_pool_families_path }
diff --git a/src/app/views/admin/pool_families/show.haml b/src/app/views/admin/pool_families/show.haml
new file mode 100644
index 0000000..0c36221
--- /dev/null
+++ b/src/app/views/admin/pool_families/show.haml
@@ -0,0 +1,5 @@
+- content_for 'list' do
+ = render :partial => 'list'
+
+- content_for 'details' do
+ = render :partial => 'layouts/details_pane'
diff --git a/src/config/routes.rb b/src/config/routes.rb
index 6d9b5cf..b3049f7 100644
--- a/src/config/routes.rb
+++ b/src/config/routes.rb
@@ -47,12 +47,13 @@ ActionController::Routing::Routes.draw do |map|
map.connect '/set_layout', :controller => 'application', :action => 'set_layout'
map.namespace 'admin' do |r|
- r.resources :hardware_profiles, :pool_families, :realms
+ r.resources :hardware_profiles, :realms
r.resources :providers, :collection => { :multi_destroy => :delete }
r.resources :users, :collection => { :multi_destroy => :delete }
r.resources :provider_accounts, :collection => { :multi_destroy => :delete }
r.resources :roles, :collection => { :multi_destroy => :delete }
r.resources :settings, :collection => { :self_service => :get, :general_settings => :get }
+ r.resources :pool_families, :collection => { :multi_destroy => :delete }
end
map.resources :pools
diff --git a/src/features/pool_family.feature b/src/features/pool_family.feature
new file mode 100644
index 0000000..64c5635
--- /dev/null
+++ b/src/features/pool_family.feature
@@ -0,0 +1,47 @@
+Feature: Pool Families
+ In order to manage my cloud infrastructure
+ As a user
+ I want to manage pool families
+
+ Background:
+ Given I am an authorised user
+ And I am logged in
+ And I am using new UI
+
+ Scenario: List pool families
+ Given I am on the homepage
+ And there are these pool families:
+ | name |
+ | pool_family1 |
+ | pool_family2 |
+ | pool_family3 |
+ When I go to the admin pool families page
+ Then I should see the following:
+ | pool_family1 |
+ | pool_family2 |
+ | pool_family3 |
+
+ Scenario: Show pool family details
+ Given there is a pool family named "testpoolfamily"
+ And I am on the admin pool families page
+ When I follow "testpoolfamily"
+ Then I should see "Name"
+
+ Scenario: Create a new Pool family
+ Given I am on the admin pool families page
+ And there is not a pool family named "testpoolfamily"
+ When I follow "Create"
+ Then I should be on the new admin pool family page
+ When I fill in "pool_family[name]" with "testpoolfamily"
+ And I press "Save"
+ Then I should be on the admin pool families page
+ And I should see "Pool family was added."
+ And I should have a pool family named "testpoolfamily"
+
+ Scenario: Delete a pool family
+ Given I am on the homepage
+ And there is a pool family named "poolfamily1"
+ When I go to the admin pool families page
+ And I check "poolfamily1" pool family
+ And I press "Delete"
+ Then there should not exist a pool family named "poolfamily1"
diff --git a/src/features/step_definitions/pool_family_steps.rb b/src/features/step_definitions/pool_family_steps.rb
new file mode 100644
index 0000000..2de705e
--- /dev/null
+++ b/src/features/step_definitions/pool_family_steps.rb
@@ -0,0 +1,28 @@
+Given /^there are these pool families:$/ do |table|
+ table.hashes.each do |hash|
+ Factory(:pool_family, :name => hash['name'])
+ end
+end
+
+Given /^there is a pool family named "([^\"]*)"$/ do |name|
+ @pool_family = Factory(:pool_family, :name => name)
+end
+
+Given /^there is not a pool family named "([^"]*)"$/ do |name|
+ pool_family = PoolFamily.find_by_name(name)
pool_family.destroy if pool_family
+end
+
+Then /^I should have a pool family named "([^\"]*)"$/ do |name|
+ PoolFamily.find_by_name(name).should_not be_nil
+end
+
+When /^(?:|I )check "([^"]*)" pool family$/ do |name|
+ poolfamily = PoolFamily.find_by_name(name)
+ check("pool_family_checkbox_#{poolfamily.id}")
+end
+
+Then /^there should not exist a pool family named "([^\"]*)"$/ do |name|
+ PoolFamily.find_by_name(name).should be_nil
+end
+
--
1.7.3.4
13 years, 4 months
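A note on the step definitions in the patch above: each relies on an anchored regex whose single capture group picks up the quoted pool-family name. That capture behaviour can be checked in plain Ruby, without the Cucumber runtime:

```ruby
# The same anchored pattern used by the "there is a pool family named" step,
# shown standalone to illustrate the capture group.
STEP = /^there is a pool family named "([^"]*)"$/

match = STEP.match('there is a pool family named "testpoolfamily"')
puts match[1]  # => testpoolfamily
```

The `^` and `$` anchors matter here: without them Cucumber could match the pattern against longer step text (such as the "there is not a pool family named" step) and dispatch to the wrong definition.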