Re: Server Access
We've long used a shared 'radmind' account with su and RCS to manage
our loadsets and, as long as people don't carelessly leave things
checked out/locked, it works really, really well.
I'd love to see something like a second radmind server that gets
updated (via rsync or similar) with a copy of the main radmind server
on a daily basis. This machine should be in another room, on a
different network, etc. It wouldn't be used for load balancing or
anything, but would provide an off-site backup and an easy way to load
a machine even when the standing-water fire suppression system is hit
by the top of a rack and dumps hundreds of gallons of water on all of
the production machines.
On Tuesday, July 22, 2003, at 11:54 PM, gabi wrote:
I'll kick off the discussion.
The simplest thing seems to be to expand access to our current system
rather than setting up something new. Anyone with root on terminator/
equilibrium will be happy to give you the access you need. It just needs
to be done in person for security reasons.
Obviously, everyone will need to exercise caution about current
loadsets as they work on development and testing, but we're all used
to that :)
-- Should we all share a common radmind server?
I think that unless we start running into load problems, it makes sense
to use one radmind server (we can duplicate if we run into load).
-- Should the radmind server be separate for Linux and Solaris?
The server is platform independent, so the only reason for this would
be aesthetic. Even Mac OSX (with a different filesystem) was loaded
from our current server.
-- Is this best done with a shared account (radmind? root?)
We have used a shared account on terminator (radmind). We use
RCS to prevent accidentally stepping on each other's toes. lcreate is
currently anonymous; anything uploaded with it lands in ~radmind/tmp.
Anyone who needs access to move loadsets into production can
get the radmind password (and a terminator acct if necessary) from
anyone who already has the password. We prefer to share passwords
in person since anyone with the radmind password could mess with
the production loadsets for currently radminded machines.
-- How shall we divide up responsibility for loadsets?
Some loadsets (base) will need to be collaborated on, and may be at
different stages for different production needs (accelerated patching
schedules, etc). This is a fundamental question and I think it will take
some time to figure out the best way. A high level of communication
along with RCS has worked well for us. We have had 3-4 sysadmins
at various times all working on software in one loadset, and email and
conscientious checkin/checkout behaviour got us through without incident.
-- Should we share a common development server for building software?
Depends on rebooting needs. Kernel rebuilding probably needs a machine
per sysadmin (at a time) since you can't just reboot a machine with
people working on it. I think this depends on the development needs of a
given project. A lot of software can be built on eq, which is running the
current production (dir, web) loadset. As mentioned in the meeting, anyone
who needs an account can get one from anyone with root. Bill Brehm is
most frequently in his office, but you can ask anyone with root. Since
pre-deploy hardware can be turned into a dev server, this should be easy
to work out in the early stages. As more of the hardware is deployed we
may need to work out a sharing strategy for a rebootable, trashable dev
machine, depending on whether we get hardware-crunched. Silly budget ;)
and many others that I have no doubt missed. Perhaps if folks weigh in on
this via e-mail, we could come to some sort of consensus about what we
should do, and then we can do it.
Thanks, -- Bennet
... "I find your lack of faith