[Beowulf] Themes for a talk on beowulf clustering

Mark Hahn hahn at mcmaster.ca
Sun Mar 3 11:38:42 PST 2013


> I am giving a talk on beowulf clustering to a local lug and was wondering
> if you had some interesting themes that I could talk about.

I think an interesting topic is the relationship between beowulf
and cloud computing.  (which basically boils down to a question of 
build-it or rent-it.)
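the build-it-or-rent-it question can be made concrete with a break-even sketch.  here's a toy model in python; every number in it (node price, opex, hourly rent, utilization) is a made-up placeholder, not a real quote:

```python
# Illustrative build-vs-rent break-even sketch; all prices below are
# assumed placeholders, not real vendor or cloud quotes.
def breakeven_months(node_price, nodes, monthly_ops, hourly_rent, util):
    """Months of renting at fractional utilization 'util' that would
    cost as much as buying the cluster plus running it."""
    capex = node_price * nodes
    rent_per_month = hourly_rent * nodes * 730 * util  # ~730 hours/month
    # renting wins as long as cumulative rent stays below capex + opex
    if rent_per_month <= monthly_ops:
        return float("inf")  # renting is cheaper indefinitely
    return capex / (rent_per_month - monthly_ops)

m = breakeven_months(node_price=3000, nodes=60, monthly_ops=2000,
                     hourly_rent=0.50, util=0.5)
print(round(m, 1))  # -> 20.1
```

the interesting knob is utilization: a cluster that sits busy 24/7 pays for itself quickly, while bursty workloads favor renting.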

if the audience is academics, I think an interesting topic is:
transient versus career-scale HPC tools.  for instance, if you're 
undertaking a project using Cuda, you really must think about 
the timescale of the project.  will Cuda exist in N years?  will
you have access to NVidia GP-GPU resources of appropriate scale 
for M years?  it's really a question of tool choice: how to judge
the longevity of a tool versus its possible payoff in performance.
(MPI is a great counterexample, since it's been around and is not
going away any time soon.)

another interesting topic is how beowulf should respond to web2.0.
what I mean is that web2.0 involves a whole raft of techniques that
are culturally quite different from the usual linux/scheduler/MPI
stack.  things like 0mq, redis, mongodb, even node.js.  of course
hadoop and other, more recent big-data stuff.  I don't think there
is a conflict, exactly, but it's worth pondering, especially when 
parts of the HPC "establishment" are hot to trot about exascale.
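the cultural gap is easiest to see in the programming model.  here's the map-reduce style hadoop popularized, sketched in plain python (no hadoop API involved, just the shape of the computation):

```python
# Toy map-reduce word count in plain Python, purely to illustrate the
# programming model hadoop popularized; no hadoop API is involved.
from collections import Counter
from functools import reduce

docs = ["the quick brown fox", "the lazy dog", "the fox"]

# map phase: each record is processed independently, emitting partial counts
mapped = [Counter(doc.split()) for doc in docs]

# reduce phase: merge the partial counts into a global result
totals = reduce(lambda a, b: a + b, mapped, Counter())
print(totals["the"])  # -> 3
```

note there's no explicit communication, no ranks, no barriers: the framework owns data movement and fault handling, which is exactly what makes it feel foreign to the linux/scheduler/MPI stack.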

you could make an interesting talk just on "scale".  for instance,
if you wanted to run a 60-rank MPI job a few years ago, it would
cost you a rack of nodes.  everyone knows the name "Moore's Law",
but it is sometimes interesting to make it tangible.  now, for 
instance, you could do it with a single 4-socket server or a 
1-socket server and a Phi card.  or rent some EC2 instances!
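making Moore's Law tangible is just back-of-envelope arithmetic.  the core counts below are assumptions for illustration (a mid-2000s commodity node versus a modern 4-socket box), not measurements:

```python
# Back-of-envelope Moore's-law arithmetic: how many doubling periods
# separate "a rack of nodes" from "one box" for a 60-rank MPI job?
# Core counts are assumed, illustrative figures.
import math

ranks = 60
cores_per_node_then = 4    # assumed: commodity dual-socket dual-core node
cores_per_node_now = 64    # assumed: 4-socket server with 16-core CPUs

nodes_then = math.ceil(ranks / cores_per_node_then)  # -> 15, a rack-ish
nodes_now = math.ceil(ranks / cores_per_node_now)    # -> 1
doublings = math.log2(cores_per_node_now / cores_per_node_then)  # -> 4.0
print(nodes_then, nodes_now, doublings)  # -> 15 1 4.0
```

four doublings at 18-24 months each is six to eight years: roughly the gap between "a rack" and "a single server" for the same job.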

maybe just a survey of interconnects.  Infiniband is pretty much 
de rigueur for large-scale/intensive MPI clusters, but obviously 
the volume market is still almost entirely gigabit.  there is some 10G
happening, and it may eventually become cheaper than IB (though it doesn't
really match IB in performance.)  vendors are talking about DCE, 
and Cisco is starting to try to draw attention to their stuff
(including claims of IB-level performance.)
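a useful way to frame interconnect comparisons is the standard two-parameter model: time = latency + size/bandwidth.  the numbers below are rough, assumed ballpark figures for illustration, not benchmark results:

```python
# Simple latency+bandwidth model of one message: t = latency + size/bw.
# The latency/bandwidth figures below are assumed ballpark values,
# not measurements.
def msg_time_us(size_bytes, latency_us, bw_GBps):
    # 1 GB/s moves 1e3 bytes per microsecond
    return latency_us + size_bytes / (bw_GBps * 1e3)

fabrics = [("GbE",    50.0, 0.125),
           ("10GbE",  10.0, 1.25),
           ("IB QDR",  1.5, 4.0)]
for name, lat, bw in fabrics:
    small = msg_time_us(8, lat, bw)        # latency-dominated
    large = msg_time_us(1 << 20, lat, bw)  # bandwidth-dominated
    print(f"{name}: 8B {small:.1f}us, 1MiB {large:.0f}us")
```

the point the model makes: for small messages (most MPI traffic), latency is everything, which is where 10G fails to match IB even when the bandwidth gap narrows.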

topic: concurrency.  examining the concurrency inherent to your 
problem, and deciding how to implement that on a cluster.  for instance, 
you really need to break a workflow down into dataflow: the paths
that particular values take, from setup to first computations, which
create new values that follow further paths through computation or 
communication or storage.  if your dataflow graph has long sequential
sections, those are independent computations and can be handled very 
nicely with serial jobs.  lots of value "fan-out" means that some 
form of communication must happen.  the dataflow graph determines 
whether you can reasonably use master/slave, or more tightly coupled
MPI with inter-node communication.  or whether you can run a flock
of serial jobs with only pre/post-processing to connect the pieces.
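here's a minimal sketch of reading concurrency off a dataflow graph in python: a fan-out of independent paths (a task farm, or equally a flock of serial jobs), followed by a single gather step where the only communication happens.  the simulate() function is a hypothetical stand-in for a real computation:

```python
# Sketch: an embarrassingly parallel fan-out (independent dataflow
# paths -> task farm / serial jobs) followed by one gather step
# (the only point where communication must happen).
from concurrent.futures import ProcessPoolExecutor

def simulate(param):
    # hypothetical stand-in for an independent computation on one value
    return param * param

if __name__ == "__main__":
    params = range(8)  # fan-out: 8 independent paths through the graph
    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(simulate, params))  # master/slave farm
    total = sum(partials)  # gather: the single communication step
    print(total)  # -> 140
```

if the graph had edges between the simulate() calls, you'd be forced toward tightly coupled MPI instead; the graph, not the code, is what decides.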
