disadvantages of a linux cluster
gmpc at sanger.ac.uk
Tue Nov 12 09:38:42 PST 2002
> Can one elect to omit one or both disks? Ethernet interface(s)?
Not as far as I know. The network interfaces are integral to the blade,
though you could physically remove the disks if you wanted to.
> Do the NICs do PXE?
Yup. The blades use PXE as part of their provisioning routine. There is
no reason why you should not be able to run the blades diskless. (In fact
the blades must be running diskless whilst they copy their OS image to
disk after the initial PXE boot.)
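The diskless PXE flow described above can be sketched with a minimal DHCP
and PXELINUX configuration. This is purely illustrative -- the addresses,
paths and filenames are assumptions, not the actual Sanger setup:

```
# /etc/dhcpd.conf -- point the blades' NICs at a TFTP server
# (addresses here are placeholders)
subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.100 192.168.0.200;
  next-server 192.168.0.1;          # TFTP server holding the boot files
  filename "pxelinux.0";            # PXELINUX bootloader fetched by the NIC
}

# /tftpboot/pxelinux.cfg/default -- boot a kernel with an NFS root,
# so the blade runs entirely diskless
DEFAULT diskless
LABEL diskless
  KERNEL vmlinuz
  APPEND root=/dev/nfs nfsroot=192.168.0.1:/export/root ip=dhcp
```

A provisioning image booted this way can then partition the local disk and
copy the OS image down, as described above, or simply keep running from NFS.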
> (allowing 4U of space at the top for patch panels or switches) can hold
> 12 3U boxes, or 10+ KW of power. That is, umm, HOT -- a 4 ton A/C with
> massive airflow can just be attached to the front of the rack, thank you
> very much:-)
We have 16 chassis per cabinet, so that's ~13 kW per cabinet. The blades sit
in the same room as 360 Alphas, which throw out another 90 kW or so. Hot.
> management issues, FLOPS rack densities, and long term reliability. It
> looks like your cluster does quite well on the first ones, and the last
> one remains to be proven in application.
Our users don't seem to be grumbling any more than normal...
Guy Coates, Informatics System Group
The Wellcome Trust Sanger Institute, Hinxton, Cambridge, CB10 1SA, UK
Tel: +44 (0)1223 834244 ex 7199