20 monitors and keyboards

Robert G. Brown rgb at phy.duke.edu
Mon Nov 11 19:10:27 PST 2002


On Sat, 9 Nov 2002, amit vyas wrote:

> We are new to Beowulf, and there is a problem we are facing.

> We have 20 PCs, all Celerons with 64 MB of RAM. Since we are going to
> use the CPUs in the cluster, we would be left with 20 monitors, 20
> keyboards, and 20 mice, all of which would be wasted. Can anyone
> please help us figure out how to use them to build dumb terminals? We
> would be grateful for a little help.

It sounds like you are building a learning cluster.  For learning
purposes, there is no harm in leaving the computers on tables or desks
with monitors and keyboards so people can use them as terminals or
workstations while they are also running parallel programs.  Normal
desktop usage barely warms up a modern CPU -- it twiddles its
metaphorical thumbs a few million times (literally) between keystrokes.

However, you will very much need to add memory to your workstation/nodes
to make this work.  I would recommend adding at least 128 MB to each
node.  Since it may be difficult to find 128 MB DIMMs anymore, you may
have to settle for a 256 MB DIMM, bringing each node to 320 MB, which is
just fine anyway.  Memory is so cheap now that this will cost you only
$20-30 per seat, or $400-600 total for all twenty nodes.

This will give you enough memory to easily run X, a web browser, editors
and xterms, and anything else that isn't horribly CPU intensive, without
significantly impacting CPU performance on many parallel applications.
It will also give your nodes enough memory to run a good-sized
background job (or several jobs) without swapping, which is very
important to good performance.  64 MB is so little memory that you would
very likely be swapping on a good-sized application even if you weren't
running X on the nodes.
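
If you want to see whether a node is being pushed into swap, the Linux
/proc/meminfo pseudo-file tells you directly.  Here is a quick sketch
in C -- nothing cluster-specific, just assuming the usual
"Label:   value kB" line format of /proc/meminfo -- that prints the
total/free memory and swap figures:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *fp = fopen("/proc/meminfo", "r");
        char line[256];
        char label[64];
        long kb;

        if (fp == NULL) {
            perror("/proc/meminfo");
            return 1;
        }
        while (fgets(line, sizeof(line), fp) != NULL) {
            /* Lines look like "MemTotal:   320123 kB" */
            if (sscanf(line, "%63[^:]: %ld", label, &kb) != 2)
                continue;
            if (strcmp(label, "MemTotal") == 0 ||
                strcmp(label, "MemFree") == 0 ||
                strcmp(label, "SwapTotal") == 0 ||
                strcmp(label, "SwapFree") == 0)
                printf("%-10s %8ld kB\n", label, kb);
        }
        fclose(fp);
        return 0;
    }

If SwapFree shrinks noticeably while a job runs, the node is swapping,
and more memory (not a faster CPU) is what will buy you performance.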

This setup greatly simplifies administration on your learning cluster,
as each node is installed as an ordinary workstation, plus whatever
parallel packages you might like.  If you want to experiment with
fine-grained parallel code (which won't run as well on systems with
graphical heads and many interactive users who introduce random delays)
you can install the nodes so that they can be dual-booted from ordinary
workstation mode into a Scyld dedicated cluster (where the nodes will
not function interactively).  That way you can gain the benefits of a
multiuser compute cluster by day (for example), still experiment with
coarse-grained or embarrassingly parallel HPC usage, and remain fully
capable of developing or running MPI or PVM applications from any
workstation/node.  At night you can boot into Scyld and try running the
parallel code you developed during the day at maximum efficiency.
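
To verify that MPI jobs actually launch across all twenty nodes, the
classic first test is a tiny "hello world".  This is only a sketch --
it assumes you have some MPI implementation (MPICH or LAM, say)
installed with its usual mpicc/mpirun wrappers, and nothing here is
Scyld-specific:

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal MPI "hello" to verify that jobs launch on the nodes. */
    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &len);
        printf("Hello from rank %d of %d on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }

Compile it with "mpicc hello.c -o hello" and launch it with something
like "mpirun -np 20 ./hello"; if every node answers with its hostname,
you are set up to develop by day and benchmark under Scyld by night.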

Hope this helps.

   rgb

> Thanks in advance.

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email: rgb at phy.duke.edu
