[Beowulf] Can one Infiniband net support MPI and a parallel filesystem?
csamuel at vpac.org
Tue Aug 12 20:29:31 PDT 2008
----- "Craig Tierney" <Craig.Tierney at noaa.gov> wrote:
> I am wondering, who shares nodes in cluster systems with
> MPI codes?
People in countries outside of the US where investment
in HPC results in insufficient resources to meet the
growing demand and where not even the peak *national*
HPC facility makes the Top 500.
For ourselves, in a state-based organisation, our new
top-of-the-range cluster has 760 cores, which is more
than all our previous clusters combined.
We have over 600 registered users from 8 universities,
our systems are continually oversubscribed, and we have
to run them to try to get the best utilisation we can.
We do use things like cpusets to try to limit the impact
that jobs can have on other jobs on the same nodes, and
users can request entire nodes for themselves should they
so wish; it's just that their project will be tallied as
having used all the cores on that node, as they're not
available to anyone else.
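For anyone curious what confining a job with cpusets looks like, here is a minimal sketch using the Linux cpuset pseudo-filesystem of that era. The mount point, cpuset name, CPU/memory-node numbers, and PID are all illustrative assumptions, not our actual batch-system configuration, and the commands need root:

```shell
# Sketch only: confine a process to CPUs 0-1 and memory node 0
# using the legacy cpuset filesystem. Paths and values are
# hypothetical; a batch scheduler would normally do this per job.

# Mount the cpuset filesystem (once per boot).
mkdir -p /dev/cpuset
mount -t cpuset none /dev/cpuset

# Create a cpuset for one job and assign it CPUs and a memory node.
mkdir /dev/cpuset/job1234
echo 0-1 > /dev/cpuset/job1234/cpus
echo 0   > /dev/cpuset/job1234/mems

# Move the job's process (PID 4567 here, purely illustrative)
# into the cpuset; it and its children are then confined.
echo 4567 > /dev/cpuset/job1234/tasks
```

On newer kernels the same idea is exposed through the cgroup cpuset controller, but the principle is identical: the job only sees the CPUs and memory nodes it was allocated.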
Hmm, that wasn't meant to be a whinge, just that we
have to cut our cloth to fit.
Christopher Samuel - (03) 9925 4751 - Systems Manager
The Victorian Partnership for Advanced Computing
P.O. Box 201, Carlton South, VIC 3053, Australia
VPAC is a not-for-profit Registered Research Agency