Deploying Red Hat Workstations

Tom 'Needs A Hat' Mitchell mitch48 at sbcglobal.net
Wed Mar 17 06:32:31 UTC 2004


On Tue, Mar 16, 2004 at 12:48:44PM -0500, Chris Purcell wrote:
> 
> We're in the process of deploying about 200 Fedora (or RHEL) workstations
> out to our remote offices scattered across the US.  I need a way that I
> can easily make changes to the workstations all at once.  I've been
> thinking of a hack that I would write (in Perl) that would go something
> like this...
> 
> 1) each workstation would execute a cron job daily that would download a
> script from our central server each day

Consider the reverse direction.

Have the central server push out to each host.  I assume that the
central management box is the best-managed box you have.  Your
strategy as described gives 200+ boxes access to this central
resource; a push does not.  What happens if one of the 201 is
hacked?

It is true that this is the reverse of the RHN update strategy, but
their business and goals sound different.  Either way, think clearly
about what the risks and the access methods are.  Both push and pull
will work....

> 2) that script would be executed by another cron job a few minutes later.
>  This script will contain any changes that I need to make.   If there
> aren't any updates for the day, then the script will be blank that day.

Avoid race conditions.  What if a download is slow, fails, contains a
typo, or is interrupted, and cron triggers the second step anyway?
In other words: will a partial download do bad stuff?

Design the code to validate checksums, even if the check defaults to
true on day one.
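That validation step might be sketched like this; the file names
(update.sh and a detached update.sh.sha256 fetched alongside it) are
illustrative assumptions, not part of your setup:

```shell
# Refuse to execute a downloaded script unless it matches its checksum.
set -eu

verify_and_run() {
    script="$1"       # e.g. /var/tmp/update.sh
    sum_file="$2"     # detached checksum, e.g. /var/tmp/update.sh.sha256
    # A missing or zero-length (partial) download never reaches exec.
    [ -s "$script" ] || { echo "missing or empty: $script" >&2; return 1; }
    # sha256sum -c exits non-zero on any mismatch, so a truncated or
    # corrupted transfer stops here.
    ( cd "$(dirname "$script")" && sha256sum -c "$(basename "$sum_file")" >/dev/null ) \
        || { echo "checksum mismatch: $script" >&2; return 1; }
    sh "$script"
}
```

Until you publish real checksums, verify_and_run could simply skip the
sha256sum call (the "defaults to true on day one" case) without
changing the calling cron job.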

> Will this work?  There has to be something better than this out there.
> What do you guys do in this situation?   For example, we've deployed about
> 25 workstations out so far, but now I need to change this Perl script on
> each one of the machines.  What is the easiest way to push this script out
> besides SCP'ing it to each one individually.

Today, I would automate SCP from the server to a user account (not
root).  That account on each machine can carry the central server's
public key in ~/.ssh/authorized_keys, so a shell loop can do the
transfers without putting passwords in the scripts.  You will want to
manage bandwidth and not trigger all the transfers at once.  Remember
that a cron job on all 200 machines could prove interesting once you
get the clocks synchronized.
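One way to sketch that loop, assuming key-based auth is already set
up, a hosts.txt with one hostname per line, and an unprivileged
"deploy" account (all names here are illustrative):

```shell
# Staggered push of one file to many hosts from an unprivileged account.
set -eu

push_file() {
    hostfile="$1"; src="$2"; dest="$3"; delay="${4:-5}"
    while IFS= read -r host; do
        # BatchMode fails fast instead of prompting for a password
        # if the key is missing; one dead office is logged, not fatal.
        scp -o BatchMode=yes "$src" "deploy@$host:$dest" \
            || echo "push failed: $host" >&2
        # Spread the transfers out so 200 simultaneous copies
        # don't saturate the office links.
        sleep "$delay"
    done < "$hostfile"
}
```

Run it from the central server, e.g. `push_file hosts.txt update.sh
/var/tmp/update.sh 10`; failures are reported rather than aborting the
loop, so you can re-push to the stragglers afterward.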

A push model facilitates testing and verification of changes.  Do 5%
on Monday and verify those 5%.  Do 20% on Tuesday and verify.  Finish
on Wednesday.  Thursday and Friday, get ready for Monday; Saturday and
Sunday, sleep on it.  Monday, check and begin again.
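The Monday/Tuesday/Wednesday split can be driven from a single host
list; a sketch, assuming hosts.txt holds one hostname per line (the
percentage boundaries are the ones from the schedule, but arbitrary):

```shell
# Print the slice of a host list between two percentage boundaries,
# so each host lands in exactly one rollout wave.
set -eu

wave() {
    hostfile="$1"; start_pct="$2"; end_pct="$3"
    total=$(wc -l < "$hostfile")
    first=$(( total * start_pct / 100 + 1 ))
    last=$(( total * end_pct / 100 ))
    sed -n "${first},${last}p" "$hostfile"
}
```

`wave hosts.txt 0 5` is Monday's batch, `wave hosts.txt 5 25` is
Tuesday's, and `wave hosts.txt 25 100` finishes the job on Wednesday.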

With your pull strategy design can you stage the distribution of changes?
Can you push emergency changes on a new schedule?

Use different pseudo-users for different functions when it makes
sense.  Do not have a single user (root) doing all the work (minimize
sudo?).

Inside your Perl script you can also use wget to pull files (or sftp
to push/pull files).  Consider, then reject, having your script update
itself.
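If the Perl script does shell out to wget, fetch to a temporary name
and rename only on success, so the second cron job can never see a
half-written file.  A sketch, with placeholder URL and paths:

```shell
# Download to a scratch name, then rename into place only on success.
# mv within one directory is atomic, so readers see the old file or
# the new one, never a partial download.
set -eu

fetch_atomic() {
    url="$1"; dest="$2"
    part="${dest}.part.$$"
    # --tries/--timeout keep a dead link from hanging the cron job.
    wget -q --tries=3 --timeout=30 -O "$part" "$url" \
        || { rm -f "$part"; return 1; }
    mv "$part" "$dest"
}
```

A failed fetch cleans up its scratch file and returns non-zero, so the
caller can skip that day's run instead of executing stale or partial
content.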

Do a simple bubble chart with arrows that lets you track data flow and
authentication.

One path from the central office should be a "look but don't touch"
account.  I would call it an audit function (account/machine).

A different path from the central office should be a "can fix and
repair stuff" root-equivalent account (perhaps with no standing key in
~/.ssh/authorized_keys, so it cannot be driven automatically).

The goal of the bubble chart and arrows is to ensure that there is no
circle of validation where, if one box is hacked, all the others fall
over (or can be reached by the same attack).

Next, look at the bubble chart and cover up each resource (and each
building full of resources) in turn, then check that some sane backup
plan is still possible.  Today a <$1000 laptop in a fireproof safe in
a different building, holding ssh keys and passwords, could facilitate
recovery.  A fifty-cent CDROM .... in the fire safe?  What if the
laptop is stolen?

Next put dollar signs (or Euros, etc.) on each bubble.  Data has value
as does the hardware.

-- 
	T o m  M i t c h e l l 
	/dev/null the ultimate in secure storage.
