[nSLUG] Wither workstations

Dop Ganger nslug at fop.ns.ca
Mon Feb 14 09:31:24 AST 2005


On Mon, 14 Feb 2005, George N. White III wrote:

> On Sun, 13 Feb 2005, Dop Ganger wrote:
>
> > On Sun, 13 Feb 2005, George N. White III wrote:
> > Disk handling is definitely fun. It sounds like you'll need a central
> > storage cluster (or SAN, or NAS, or whatever fits) then have plenty of
> > scratch space on the nodes.
>
> Yes.  The guiding principles should be:
>
> 1. you don't need to take a box apart to swap a disk
>
> 2.  replacement disks don't require any manual partitioning or restores
> (e.g.,  RAID or automated configuration like Rocks for system disks).
>
> 3.  you don't need to install the disk in a PC and run a DOS utility to
> get the failure codes for the return authorization.  Someday we will have
> disks that sense when they are about to fail and submit the return
> authorization request for you.

Well, the cheapest and easiest way to handle this is probably a RAID5
array on a hardware RAID controller with hot swap capabilities (probably
SCSI would be the easiest, SATA is a bit new for my tastes). That way you
can swap drives out pretty much at will (modulo recovery time to rewrite
the data). That takes care of points 1 and 2. For point 3, big
manufacturers will take you at your word when you say a drive is dead and
cross-ship a replacement drive to you, so long as you're under warranty.
For example, Sun will ask you for the serial number of the device it came
out of, and HP just wants the serial number of the drive itself (which is
printed on it, or can be queried from the HP tools, which are available for
Linux). It's only at the cheap (PATA IDE) end of the market that you have
to play about with DOS utilities to scan the drive; if you've paid out
the money for a hot-swap SCSI system and the array reports it's dead,
there's no point doing diags on it!
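
As an aside, if you do want to pull failure data under Linux rather than
booting a DOS floppy, smartmontools will usually get you the serial number
and a SMART health verdict in one go. Very rough sketch below - the
smartctl calls are real, but the device path and the parsing are just my
guesses, so adjust to taste:

#!/usr/bin/env python
# Rough sketch, not an HP/Sun tool: pull the serial number and the SMART
# health verdict off a drive under Linux so there's something to quote on
# the RMA form.  Assumes smartmontools is installed and that the suspect
# drive is /dev/sda (both assumptions on my part); probably needs root.
import re
import subprocess

DEVICE = "/dev/sda"

def run_smartctl(flag):
    # "smartctl -i" prints identity info (model, serial number);
    # "smartctl -H" prints the overall health self-assessment.
    result = subprocess.run(["smartctl", flag, DEVICE],
                            capture_output=True, text=True)
    return result.stdout

info = run_smartctl("-i")
health = run_smartctl("-H")

match = re.search(r"Serial [Nn]umber:\s*(\S+)", info)
print("Drive: ", DEVICE)
print("Serial:", match.group(1) if match else "unknown")
print(health.strip())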

The other option is a SAN, but I suspect it'd be cheaper to go gigabit
ethernet than to deploy fibre across the board, especially if you're going
for a rack full of machines. From the sounds of it, latency isn't going to
be too much of an issue, as most of the jobs seem to be fairly long-lived
batch jobs anyway.

Incidentally, if you want hardware that self-submits RMAs you *can* get
it. The cheapest I'm aware of is HP's NonStop architecture, but I suspect
that's probably a touch excessive for what you need (the type of market
it's aimed at, to give the canonical example, is credit card transaction
processing).

> We use quotas, but they are far from ideal for our workloads since
> many jobs create huge temporary files (e.g., uncompressed images)
> so either you give out quotas that total way more than the total
> disk space and hope no two users run big jobs at the same time
> or you adjust quotas early and often to track workloads.  A system
> that adjusted quotas dynamically based on actual job characteristics
> would be a big help.

Well, what happens is that SGE will allocate a job requiring a lot of
storage space to one node and not let any other job run on that node until
the job is finished; it's not quite dynamic, and you may end up with an
idle CPU in a multi-CPU box, but you shouldn't run out of storage. The
plugin (load sensor) script system will let you monitor storage on the
shared device if that's an issue, too.
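
To give a feel for the plugin (load sensor) side of it: a load sensor is
just a script that the execd keeps running, poking it on stdin whenever it
wants a fresh report and reading back a begin/values/end block. Something
like the sketch below would report free scratch space per node - though
I'm going from memory on the exact protocol, and scratch_free is a name
you'd have to define yourself in the complex configuration, so treat it as
a starting point rather than gospel:

#!/usr/bin/env python
# Rough sketch of an SGE load sensor reporting free scratch space.  The
# begin / host:name:value / end protocol is how I remember load sensors
# working; "scratch_free" is a made-up complex name you would have to add
# to the complex configuration yourself, and /scratch is just an example.
import os
import socket
import sys

HOST = socket.gethostname()
SCRATCH = "/scratch"

def free_megabytes(path):
    st = os.statvfs(path)
    return (st.f_bavail * st.f_frsize) // (1024 * 1024)

while True:
    line = sys.stdin.readline()
    # The execd pokes the sensor on stdin when it wants a report and
    # sends "quit" (or closes the pipe) when it is shutting down.
    if not line or line.strip() == "quit":
        break
    print("begin")
    print("%s:scratch_free:%d" % (HOST, free_megabytes(SCRATCH)))
    print("end")
    sys.stdout.flush()

Once that value is being reported, users can request it at submit time
(something like "qsub -l scratch_free=50000") so the big jobs get steered
to nodes with room; again, the exact name and units are whatever you set
up in the complex.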

> I've been playing with condor (it supports SGI, so people can start using
> it immediately), but on FC2 ver. 6.6.8 couldn't determine the system
> memory or swap space and you don't get sources so you can't investigate.
> Condor tries to be good about cleaning up after failed jobs.

Sun are trying to port SGE to as many platforms as possible; checking
http://gridengine.sunsource.net/project/gridengine/download60.html there's
a port available for IRIX, in case that helps. I know there are also some
unofficial ports (for *BSD, for example).

I'm not trying to beat the drum for SGE, by the way... it's just the setup
I've had most experience with, and no-one else seems to be chiming in with
suggestions :-)

Cheers... Dop.
