[Beowulf] More cores/More processors/More nodes?

amjad ali amjad11 at gmail.com
Fri Nov 17 12:58:52 PST 2006


Dear Peter W.

I once raised a question similar to yours, with requirements very close
to your own. Please see my email in the Beowulf archive with the
title:

"Slection from processor choices; Requesting Giudence"

posted during the latter half of June this year.

At the end of that discussion, I settled on the following design:

Each node has two dual-core AMD Opteron processors and 4 GB of main
memory on a Tyan Thunder board, with Gigabit Ethernet (GigE) as the
interconnect.
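
With nodes like these, MPI simply sees four slots (cores) per node. As
a rough illustration, a hostfile for two such nodes might look like the
sketch below (hostnames and the binary name are hypothetical; the first
form is Open MPI hostfile syntax, the second an MPICH-style
machinefile):

    # Open MPI hostfile: each 2 x dual-core Opteron node offers 4 slots
    node01 slots=4
    node02 slots=4

    # MPICH-style machinefile equivalent
    node01:4
    node02:4

Launching with something like "mpirun -np 8 --hostfile hosts ./solver"
would then place four MPI ranks on each node.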

regards,
AMJAD ALI.
BZU, Multan, Pakistan.



On 9/28/06, Peter Wainwright <prw at ceiriog.eclipse.co.uk> wrote:
> Please enlighten a baffled newbie:
>
> Now there are motherboards with 8 sockets, quad-core processors, and
> clusters with as many nodes as you can shake a stick at. It seems
> there are at least three dimensions for expansion. What (in your
> opinion) is the right tradeoff between more cores, more processors,
> and more individual compute nodes?
>
> In particular, I am thinking of in-house parallel finite difference /
> finite element codes, parallel BLAS, and maybe some commercial
> Monte Carlo codes (the last being an embarrassingly parallel problem).
>
> I have been set the task of building our first cluster for these
> applications. Our existing in-house codes run on an SGI machine with
> a parallelizing compiler; they would need to be ported to MPI on a
> cluster. However, I do not understand what happens when you have
> multi-processor/multi-core nodes in a cluster. Do you just use MPI
> (with each process using its own non-shared memory), or is there some
> way to do "mixed-mode" programming that takes advantage of shared
> memory within a node (like an MPI/OpenMP hybrid)?
>
> Peter Wainwright
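
Regarding the mixed-mode question above: yes, MPI and OpenMP can be
combined, typically with one MPI rank per node (or per socket) and
OpenMP threads spread over the cores within it. A minimal hybrid sketch
in C, assuming an MPI-2 implementation (e.g. MPICH2 or Open MPI) and an
OpenMP-capable compiler; file names here are illustrative:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* FUNNELED: only the master thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* One MPI rank per node; OpenMP threads fill its cores. */
        #pragma omp parallel
        printf("rank %d of %d, thread %d of %d\n",
               rank, nranks,
               omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }

Built with something like "mpicc -fopenmp hybrid.c -o hybrid" and run
with OMP_NUM_THREADS=4 and one rank per node, each rank would report
four threads. Plain MPI with one rank per core also works and is often
simpler; the hybrid approach mainly helps when per-rank memory use or
intra-node communication becomes the bottleneck.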


