Beowulf and variable CPUs

Robert G. Brown rgb@phy.duke.edu
Thu Sep 21 10:46:21 PDT 2000


On Thu, 21 Sep 2000 p.grimshaw@virgin.net wrote:

> 
> Hi, I am new to Beowulf and have some questions,
> 
> 1. Does anyone know if I am able to run a beowulf cluster with
> different types of clients, i.e I have a load of pentium 100s
> and some p2 500s which I would like to use together. Is this
> possible?

Sure, lots of ways.  On embarrassingly parallel applications it is just
the total available CPU that counts; see, e.g., the SETI or RC5DES
projects, the Stone Soupercomputer, and other heterogeneous efforts.
For parallel applications that do involve a moderate degree of
synchronicity and communication, such a system can still be useful as
long as the slower machines don't act as a brake on the faster ones.
This generally means that you have to partition the work in proportion
to each system's speed and its capacity to communicate.  For some kinds
of problems this is really not that difficult -- you just measure the
relative speed of a P5@100 to a P6@500 on your parallelized code
chunk (probably around 1:10) and give the PIIs ten times as much to do
between communication barriers (when everybody communicates).  Fine-tune
to correct for differential network speed and the larger communications
from the PII chunks (if any).
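
In code, the partitioning step is just a proportional split.  Here is
a minimal sketch in C with MPI (the speed[] ratings, the four-process
layout, and the work-unit count are all illustrative assumptions --
measure the real ratings by timing your own code chunk on each node
type):

    /* Static proportional work partitioning -- illustrative sketch.
       Assumes it is launched with exactly four processes (e.g.
       mpirun -np 4) matching the speed[] table below. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, i;
        /* relative speeds: 1.0 per P5@100, 10.0 per P6@500 (measured) */
        double speed[] = { 1.0, 1.0, 10.0, 10.0 };
        double total = 0.0, mywork;
        const double WORK = 10000.0;   /* total work units per step */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (i = 0; i < size; i++)
            total += speed[i];
        mywork = WORK * speed[rank] / total;   /* my share this step */

        /* ... compute mywork work units here ... */
        printf("rank %d does %g of %g work units\n", rank, mywork, WORK);

        MPI_Barrier(MPI_COMM_WORLD);   /* the communication barrier */
        MPI_Finalize();
        return 0;
    }

With that split each P6@500 takes a chunk ten times the size of each
P5@100's, so everybody arrives at the barrier at roughly the same time.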

The programming IS more complex if the systems are heterogeneous,
though, as the balancing and tuning will likely have to be done "by
hand".  You also have to have a really BIG load of P5@100s to make it
worth the hassle when one P6@500 (current cost of a 500 MHz Celeron node
is maybe $500-$600) can do the work of ten of the old Pentia.  Just the
cost of the extra electricity (maybe 1-1.3 kilowatts times the time of
operation) can pretty much buy you the Celeron node over a year.  It's
like paying to run a 900-watt space heater in an air-conditioned space
-- you pay twice.  The space requirements, the extra switch/hub ports
required, and the extra human labor for maintenance and installation all
have to be considered as well.  Finally, there is the inevitable drop-off
in parallel efficiency as a job is distributed to more nodes.
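
To put rough numbers on that, take the 1 kW figure above, a year of
continuous operation, and an assumed electricity rate of $0.08/kWh
(purely illustrative -- substitute your own rate):

    /* Back-of-the-envelope node-vs-electricity cost, in C.
       All figures are illustrative assumptions, not measurements. */
    #include <stdio.h>

    int main(void)
    {
        double kw    = 1.0;           /* extra draw of the P5 farm (1-1.3 kW) */
        double hours = 24.0 * 365.0;  /* one year of continuous operation */
        double rate  = 0.08;          /* assumed $/kWh */

        double power = kw * hours * rate;  /* ~$700/year, electricity alone */
        double total = 2.0 * power;        /* "you pay twice": add the AC */

        printf("power ~$%.0f/yr, power+AC ~$%.0f/yr, vs. a $500-600 node\n",
               power, total);
        return 0;
    }

That comes to roughly $700/year for the electricity alone, and about
twice that once the air conditioning pays to pump the same heat back
out -- enough to cover the Celeron node in well under a year.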

Add this all up and, once you factor in all of the real costs, you are
probably better off economically NOT using the Pentia and instead
buying cheap but technologically current nodes of equivalent total
power.  Still, there are lots of circumstances that can make such a
cluster worthwhile: low-budget/hobby beowulfs; beowulfs in schools with
strictly limited computer budgets (but somewhat elastic electricity/AC
budgets :-); or clusters composed of your mix of Pentia and PIIs sitting
on desktops and ALSO being used as workstations (somebody else pays for
the electricity and cooling, and whatever is left of the CPUs is truly
"free computing power").

Hope this helps,

    rgb

> 
> Regards,
> 
> Paul Grimshaw.

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email: rgb@phy.duke.edu