Cloud status report and request for feedback on policies

Seth Vidal skvidal at fedoraproject.org
Wed Aug 29 18:03:26 UTC 2012




On Wed, 29 Aug 2012, Kevin Fenzi wrote:

> Greetings.
>
> I thought I would give a quick status update on our private cloud work
> (which skvidal has been doing. Thanks skvidal! )
>
> Our hardware is all in and working.
> Our network is up and working.
> We have a test instance of eucalyptus up and running with a pair of
> machines.
>
> Short term:
>
> I'd like to test out OpenStack on another 3 nodes or so. It's come a
> ways since we evaluated it last.

+1
fed-cloud02 is the other 'head' system. If we take that and 04-05, then 
that should give us a base to test more with.



> We need to test more with the admin/command line tools.

I've been using the command line tools exclusively for all the euca stuff. 
I've only used the web interface to verify that some setting changes have 
occurred.
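
For reference, the day-to-day poking has mostly been standard euca2ools 
calls, roughly like this (with the eucarc credentials sourced; the keypair 
name and image/instance ids below are just placeholders):

  # capacity as the cloud sees it (the eucalyptus 'verbose' zone trick)
  euca-describe-availability-zones verbose

  # what's registered and what's running
  euca-describe-images
  euca-describe-instances

  # fire up a test instance and tear it down again
  euca-run-instances -k mykey -t m1.small emi-XXXXXXXX
  euca-terminate-instances i-XXXXXXXX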


>
> We need to figure out how we want to setup groups/users/etc.


My concept at the moment is to identify groups who will repeatedly need 
to create instances and create an 'account' for them. Then delegate admin 
access on those 'accounts' to specific users.
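
In eucalyptus terms that would look something like the sketch below, using 
the euare account tools from euca2ools as the cloud admin. The account and 
user names are just examples, and the exact flags are from memory, so treat 
them as assumptions to check against the docs:

  # create an 'account' for a group that will spin up instances regularly
  euare-accountcreate -a copr

  # create a user inside that account to hand the admin access to
  # (flags are an assumption; names are examples)
  euare-usercreate -u copr-admin --delegate copr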

For people who just need an instance right now to test with, we create it 
ourselves and flag the instance with its (short) expected lifespan and who 
it is for.



>
> We need to repave everything and re-install it in a controlled and
> documented manner.

+1. Right now my steps have been (see the sketch after the list):

1. new machines
2. setup repos
3. setup network devices (bridging, masquerading, dns, etc)
4. install euca software
5. configure eucalyptus.conf (and for node controllers libvirt.xsl)
6. do the euca initializing/registering and running of 
euca-modify-property
7. reboot and make sure everything is up.
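
Steps 4-6 for a head node, in rough script form, look something like the 
following. Package names, hostnames and euca_conf invocations are from 
memory (and differ a bit between eucalyptus versions), so treat them as 
assumptions rather than the documented procedure:

  # 4. install the eucalyptus bits (node controllers get eucalyptus-nc instead)
  yum install -y eucalyptus-cloud eucalyptus-walrus \
                 eucalyptus-cc eucalyptus-sc

  # 5. configure
  vi /etc/eucalyptus/eucalyptus.conf

  # 6. initialize, register node controllers, tweak cloud properties
  euca_conf --initialize
  euca_conf --register-nodes "node01 node02"
  euca-modify-property -p SOME.PROPERTY=VALUE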


> What expectation do we want on reboots? They can go down at any
> time, or 'we will try and let you know if we want to reboot things' or
> we plan on doing a maint window every X and your instances WILL be
> rebooted?

I'd say users should plan for them to go down, just like with EC2 
instances.



> What timeframe should we tell people they can use instances?

Ask the user, but default to one working week (5 days)?

> Do we want to kill them after some specific time?

   yes

> Note that if we want to use this for dev instances, we may want to at
> least snapshot before taking down.

  agreed
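
For the dev-instance case the reaping could be as simple as this (the 
volume and instance ids are placeholders):

  # snapshot the data volume behind a dev instance, then reap the instance
  euca-create-snapshot vol-00000000
  euca-terminate-instances i-00000000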


> What sort of policy do we want on "Fedora relatedness" for instances?
> I don't think we want to offer general instances for people, but how to
> explain the line? Do we want to specifically forbid any uses?

Not clear on this either. I think for a little while we'll have our hands 
full with just:
  - copr builders
  - random instances
  - fedora qa
  - fedora apps instances


>
> What ports do we want to allow folks to use? Anything? 80/443/22 only?


So if the user has a euca 'account' then they can create their own 
security group, which controls what can reach their instances. By default 
I'd say 22, 80, 443 and ping should be sufficient for remote access.
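
Something roughly like this (the group name is just an example):

  # create a security group and open the usual ports plus ping
  euca-add-group -d "ssh/web for the foo app" foo-app
  euca-authorize -P tcp -p 22 -s 0.0.0.0/0 foo-app
  euca-authorize -P tcp -p 80 -s 0.0.0.0/0 foo-app
  euca-authorize -P tcp -p 443 -s 0.0.0.0/0 foo-app
  # icmp type/code -1:-1 == allow all icmp, i.e. ping
  euca-authorize -P icmp -t -1:-1 -s 0.0.0.0/0 foo-app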


> How about persistent data storage? We promise to keep data for X
> timeframe? We make no promises? We keep as long as we have storage
> available?

and how much in total, I'd think.
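
We can at least keep an eye on the total with something like this (the 
size column position in the describe output is an assumption):

  # rough total of allocated volume storage, in GB
  euca-describe-volumes | awk '/^VOLUME/ {gb += $3} END {print gb, "GB"}'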



> I think we should have a very broad 'catch all' at the end of the
> policy allowing us to refuse service to anyone for any reason, allowing
> us to shutdown instances that cause problems. Or should we word that
> more narrowly?


Don't we have something similar with regard to fedorapeople or 
fedorahosted?



> How often do we want to update images? Say we have a Fedora 17 image
> for folks, would we want to update it daily with updates? weekly? Just
> when we feel like it? When security bugs affect ssh ? When security
> issues affect the kernel?

Updating it daily seems excessive; the user can update it on their own, of 
course. Given the short Fedora release cycle, I'd say maybe a couple of 
times a release, and try to stay relatively on top of new kernels.

Running ami-creator to generate a new image is not very difficult, though.
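
For reference it's basically livecd-creator style, something like the 
following. The kickstart and image file names are placeholders and the 
flags are from memory:

  # build a fresh image from a kickstart, then bundle/upload/register it
  ami-creator -c fedora-17-cloud.ks -n Fedora-17-$(date +%Y%m%d)
  euca-bundle-image -i fedora-17-cloud.img
  euca-upload-bundle -b images -m /tmp/fedora-17-cloud.img.manifest.xml
  euca-register images/fedora-17-cloud.img.manifest.xml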


-sv


