Re: Server Access

I'll kick off the discussion.

The simplest thing seems to be to expand access to our current system
rather than setting up something new. Anyone with root on terminator/
equilibrium will be happy to give you the access you need. It just needs
to be done in person for security reasons.

Obviously, everyone will need to exercise caution about current production
loadsets as they work on development and testing, but we're all used
to that :)

-- Should we all share a common radmind server?

I think it makes sense to use one radmind server unless we start running
into load problems; we can always duplicate the server if load becomes
an issue.

-- Should the radmind server be separate for Linux and Solaris?

The server is platform independent, so the only reason to separate them
would be aesthetic. Even Mac OS X (with a different filesystem) was
loaded from our current server.
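One server can simply carry per-platform command files side by side; each
client only pulls the transcripts its own command file names. As a rough
illustration (all transcript and file names here are invented, and the
exact command-file syntax should be checked against the radmind docs),
the pair might look like:

```
# command.K handed to the Solaris boxes
n negative.T
p base-solaris.T
p web-solaris.T

# command.K handed to the Linux boxes
n negative.T
p base-linux.T
```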

-- Is this best done with a shared account (radmind? root?)

We have used a shared account on terminator (radmind). We use RCS to
prevent accidentally stepping on each other's toes. lcreate is currently
anonymous; uploads land in ~radmind/tmp. Anyone who needs access to move
loadsets into production can get the radmind password (and a terminator
account if necessary) from anyone who already has it. We prefer to share
passwords in person, since anyone with the radmind password could mess
with the production loadsets for currently radminded machines.
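The "lock a transcript before you touch it" discipline we get from RCS can
be sketched in shell. Since RCS may not be installed everywhere, this
runnable stand-in uses a portable mkdir(1) lock instead of co -l / ci -u
to show the same check-out, edit, check-in idea; the transcript name and
contents are invented for illustration:

```shell
#!/bin/sh
# Stand-in for the RCS workflow on terminator: take a lock before editing
# a transcript, release it when done. mkdir is atomic, so two admins
# cannot both "check out" the same transcript. Names are hypothetical.
work=$(mktemp -d)
transcript="$work/base-solaris.T"
lock="$transcript.lock"
printf 'f ./etc/motd\n' > "$transcript"       # stand-in transcript entry

# "co -l": acquire the lock; a second admin would hit the else branch.
if mkdir "$lock" 2>/dev/null; then
    printf 'f ./etc/hosts\n' >> "$transcript" # edit the transcript
    rmdir "$lock"                             # "ci -u": release the lock
    echo "checked in $transcript"
else
    echo "busy: $transcript is checked out by someone else" >&2
    exit 1
fi
```

The real workflow keeps the ,v history files next to the transcripts so
you can see who changed what; this sketch only captures the mutual
exclusion, not the history.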

-- How shall we divide up responsibility for loadsets?

Some loadsets (base) will need to be collaborated on, and may be at
different stages for different production needs (accelerated patching
schedules, etc). This is a fundamental question and I think it will take
some time to figure out the best way. A high level of communication
along with RCS has worked well for us. We have had 3-4 sysadmins at
various times all working on software in one loadset; email plus
conscientious check-in/check-out behaviour got us through without
production issues.

-- Should we share a common development server for building software?

Depends on rebooting needs. Kernel rebuilding probably needs a machine
per sysadmin (at a time) since you can't just reboot a machine with multiple
people working on it. I think this depends on the development needs of a
given project. A lot of software can be built on eq, which is running the
current production (dir, web) loadset. As mentioned in the meeting, anyone
who needs an account can get one from anyone with root. Bill Brehm is
most frequently in his office, but you can ask anyone with root. Since any
pre-deploy hardware can be turned into a dev server, this should be easy
to work out in the early stages. As more of the hardware is deployed we
may need to work out a sharing strategy for a rebootable, trashable dev
machine, depending on whether we get hardware-crunched. Silly budget ;)

Those are the questions I can think of, and no doubt I have missed some.
Perhaps if folks weigh in on this via e-mail, we can come to some sort of
consensus about what we should do, and then we can do it.

Thanks, -- Bennet