[Beowulf] 4+ cpu benchmark machines wanted
diep at xs4all.nl
Mon May 23 06:16:45 PDT 2005
Benchmarking under full load is very important, as it most closely approximates reality, and I'm happy you are starting to do that type of benchmarking.
For the SGI Altix 3000 series, do you also take into account the complex way
in which routing takes place?
Basically the problem is this: suppose that on one brick (node = dual CPU,
brick = 2 nodes) two processes run on node A.1 and two processes from some
other job run on node A.2. If processes belonging to that other job also run
elsewhere in the machine, the only way for them to reach the processes on
A.2 is through the memory of A.1.
That's quite ugly.
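The extra crossings this causes can be sketched with a toy model. This is not the real Altix routing table, just an illustration of the point above: in each two-node brick only the .1 node owns the router port, so any traffic to or from the .2 node crosses the .1 node's memory interface first. The naming scheme and hop counts are assumptions for illustration only.

```python
# Toy model of the dual-node brick routing described above.
# A node is named (brick, slot); by assumption slot 1 owns the router port.

def hops(src, dst):
    """Count memory-interface/router crossings between two nodes."""
    if src == dst:
        return 0
    h = 0
    if src[1] != 1:       # leave via the source brick's .1 node
        h += 1
    if src[0] != dst[0]:  # cross the inter-brick router fabric
        h += 1
    if dst[1] != 1:       # enter through the destination brick's .1 node
        h += 1
    return h

print(hops(("A", 1), ("B", 1)))  # router fabric only
print(hops(("A", 2), ("B", 2)))  # worst case: both .2 nodes involved
```

In this model a .2-to-.2 message between bricks pays three crossings where a .1-to-.1 message pays one, which is why job placement on such a machine matters so much.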
We all know how hard it is, on big single-partition machines, to schedule
jobs so that memory isn't allocated on one side of the machine while the
jobs run on processors at the other side. At the same time we want to keep
all CPUs busy, so there is no way to avoid all these problems.
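One common mitigation for the placement half of the problem, assuming a Linux system with the numactl tool installed, is to pin a job's CPUs and its memory to the same node so allocations are never satisfied from the far side of the machine. The binary name and node number below are placeholders:

```shell
# Inspect the machine's NUMA layout first to pick sensible node numbers:
numactl --hardware

# Run the job with both its processors and its memory on node 0
# ("./my_job" and node 0 are hypothetical examples):
numactl --cpunodebind=0 --membind=0 ./my_job
```

This trades some scheduling flexibility for locality, which is exactly the tension described above: strict binding keeps memory local, but it can leave CPUs idle that a looser scheduler would have used.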
At 07:40 PM 5/22/2005 -0700, Greg Lindahl wrote:
>I'm interested in doing some microbenchmarks of various interconnects
>where all of the cpus on a node are in use. This is an unusual way of
>microbenchmarking; the HPC Challenge benchmarks do it this way, which
>is why the Random Ring latency and bandwidth numbers are not what
>you'd expect. They only have 2-cpu node results, though, and I'd love
>to collect some data for 4+ cpu nodes. So, if you have such a cluster,
>minimum 2 machines, with any kind of interconnect, please email me
>privately and I'll send along the benchmark.
>Beowulf mailing list, Beowulf at beowulf.org