[Beowulf] MS Cray

Lux, James P james.p.lux at jpl.nasa.gov
Wed Sep 17 08:51:35 PDT 2008



-----Original Message-----
From: Tim Cutts [mailto:tjrc at sanger.ac.uk]
Sent: Wednesday, September 17, 2008 6:52 AM
To: Lux, James P
Cc: Prentice Bisbal; Beowulf
Subject: Re: [Beowulf] MS Cray


On 17 Sep 2008, at 2:22 pm, Lux, James P wrote:

> But how is that any different than having a PC on your desk?
>
> I see the deskside supercomputer as a revisiting of the "workstation"
> class computer.  Used to be that PCs and Apples were what sat on most
> people's desks, but some had Apollo or Sun or Perq workstations,
> because they had applications that needed the computational horsepower
> (or, more likely, the high-res hardware graphics support... a CGA was
> pretty painful for doing PC board layout).
>
> Same sort of thing for having the old Tektronix 4014 graphics
> terminal, rather than hiking down to the computer center to pick up
> your flatbed plotter output.
>
> Jim

We don't generally allow people here to buy their own PCs and Apples either.  They get a standard build from us, all centrally managed by LanDESK.  They also get a known type of hardware; they can't just buy what the hell they like.  I have more than 800 Windows desktops to support.  If they were all different and purchased ad hoc by individual users, I would be in even worse hell than I am already.

Most people don't build Beowulf clusters out of ad hoc piles of machines from God-knows-where.  Most of us buy consistent hardware, because it's impossible to support anything else.

The Tektronix graphics terminal is slightly different, because it was just that, a terminal, and consequently doesn't present such a headache.

Tim

----

Indeed, and such is the case in most large organizations.  Two that I'm directly familiar with have slightly different models.  One, a Fortune 500 company, had at any given time only three possible hardware configurations for the desktop (with literally tens of thousands deployed), with the actual disk image rolled out every day; essentially, the disk drive in the box served as a local cache.  There were other configurations for software developers, but those were still pretty much locked down.  The server farms were run separately by a centralized organization, as was the mainframe.  A small "departmental server" (e.g., for a software development group to use for testing) would sit in a server room somewhere, managed by the central org.

The other, here at JPL, has about 10,000 computers of various ages and configurations that are managed collectively (as opposed to machines administered locally, e.g., in a lab).  At any given time there are a dozen or so kinds of computers (desktop/laptop, PC/Mac) available, but since the offered configurations keep changing and machines are on a three-year recycle, there are probably 30 or 40 configurations in the field at once.  The software configuration is substantially more consistent: there's a basic "core software" load of OS and tools (Office, mail, calendaring), but people generally have admin access to their own machines and are free to install anything else (as long as it's legal).  OTOH, if something you add causes problems, the support organization isn't on the hook to fix it, and ultimately their response might be to reimage the disk.  They ARE pushing toward a thin-client model, at least for non-specialized desktop users (e.g., if all you do is email, calendaring, documents, and consuming web services).

Interestingly, the monthly cost for both organizations is about the same (a few hundred bucks a month for hardware lease plus service).  We also have "servers for rent" (with sysadmin and 24/7 monitoring done by others), as well as various and sundry supercomputers.  A deskside supercomputer would fit the model here fairly well, as just another flavor of high-performance desktop machine, or as a small server in your lab.


Jim
