[Beowulf] Cluster Benchmarks?
bill at cse.ucdavis.edu
Tue Jun 15 21:25:21 PDT 2004
On Mon, Jun 14, 2004 at 04:42:58PM -0700, Greg Lindahl wrote:
> Writing good microbenchmarks is hard. Does the world really need
> another one?
I'd argue that yes, the world needs more good microbenchmarks, especially
in the area of performance scaling with array size, number of threads, and
number of nodes.
I find a suite of microbenchmarks useful not only for predicting
performance but also for identifying performance bottlenecks. I've often
used microbenchmarks to explain why application performance differs
between clusters.
I don't know of a good microbenchmark suite that addresses:
* Memory hierarchy bandwidth, latency, and parallelism for a range of
array sizes and numbers of threads.
* File system bandwidth, latency, and parallelism for a wide range of file
sizes and numbers of nodes.
* Network bandwidth, latency, and parallelism for a wide range of message
sizes and numbers of nodes.
Sure, real-world application performance is king, but that doesn't mean
microbenchmarks aren't justified. I've learned quite a bit from
correlating application performance with microbenchmark performance; I
find the two kinds of benchmarking quite complementary.
> Seriously: why not work on good whole-application
> benchmarks? That's what the world needs more of.
I'd love a replacement for SPEC's CPU2000 suite, but alas the requirements
for each application would be something like:
* Application size and run length targeted at current machines, say
1 hour on a 3.0 GHz Pentium 4 with 2 GB of RAM.
* Capable of running on 32- and 64-bit machines
* Exceedingly portable
* Trivial to build
* Manageable input file sizes.
* Reflect current algorithms used for the research area
* Open source.
* Verifiable output (to confirm correctness)
* Ideally scales to an arbitrary number of nodes
* Is an application that people actually buy clusters to run.
In my experience such benchmarks are very hard to find, so everyone I
know either uses SPEC's CPU2000 or ends up spending many hours talking
to sales types, getting benchmark accounts, and porting their code to
the new environments.
So while I dream of a new open source application benchmark suite, and ask
researchers for possible candidates, I often end up using microbenchmarks
to understand the differences between various cluster technologies.
Computational Science and Engineering