[Beowulf] What kind of I/O benchmark ?
Robert G. Brown
rgb at phy.duke.edu
Wed Mar 23 08:45:27 PST 2005
On Wed, 23 Mar 2005, Brian D. Ropers-Huilman wrote:
> I am currently engaged in a "bake-off" of sorts between two different
> vendors. We used IOzone as an initial test, but picked three of our most
> representative codes to do the remainder of the tests. With these codes we
> scaled up the number of concurrent processors, the number of files, and the
> file size, including parallel and single node writes and reads.
That reminds me (for Joe) that this is also a design goal of xmlbenchd's
tag set (in a possible BBS merge). It needs to be able to run and display
benchmarks that DO demonstrate the scaling properties of a cluster, as
this is by far the biggest weakness in e.g. linpack and the top500
today. The top500 listings are utterly useless for determining scaling
properties for the architectures being listed, as scaling is minimally a
CURVE of speed vs number of nodes and ideally a SURFACE of speed vs
number of nodes vs "size". As indicated, IIRC, in Don et al.'s very own
book on how to build a beowulf;-).
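To make the curve-vs-surface point concrete, here's a toy sketch (my own
illustration, not anything from the thread or from xmlbenchd) using an
Amdahl-style model, where the "parallel fraction" values standing in for
problem "size" are assumptions chosen purely for demonstration:

```python
# Illustrative sketch only: an Amdahl's-law model showing why a single
# benchmark number hides scaling behaviour. Real data would come from
# timing runs, not a formula.
def speedup(n_nodes, parallel_fraction):
    """Amdahl's-law speedup on n_nodes for a given parallel fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_nodes)

if __name__ == "__main__":
    # A crude "surface": speedup vs number of nodes vs a size proxy
    # (assumption: larger problems have a larger parallel fraction).
    for frac in (0.50, 0.90, 0.99):
        row = [round(speedup(n, frac), 1) for n in (1, 2, 4, 8, 16, 32)]
        print(f"p={frac}: {row}")
```

The point of the toy model: at p=0.50 the machine tops out near 2x no
matter how many nodes you buy, while at p=0.99 it keeps scaling, and no
single-number ranking can tell those two cases apart.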
How much beyond 2d/3d to go I'm not certain. For example, in a given
stream-like RDMA memory test, one could conceivably vary vector size (a
first ordinal variable), number of nodes (a second), number of
simultaneous test threads (a third), stride (a fourth), and size of the
data object being accessed (a fifth), as well as any number of discrete
descriptive (non-ordinal) variables (e.g. a random number seed for a
shuffled/random non-streaming test, or a flag to force the test to run
backwards). It is possible that the performance surface decomposes into
distinct projections onto representative planes at e.g. the boundaries
where the task is split onto nodes or runs over L1 and L2 cache
boundaries, so that running a relatively small set of 2d/3d views
suffices to determine the 5d/6d surfaces, but this is really a computer
science research question.
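As a rough illustration of what such a sweep looks like (again my own
sketch, not part of xmlbenchd or any tool mentioned here; in Python the
cache boundaries are mostly drowned out by interpreter overhead, so this
shows the shape of the parameter sweep rather than trustworthy numbers):

```python
import time

def strided_rate(buf, stride):
    """Touch buf at the given stride; return touched elements per second."""
    t0 = time.perf_counter()
    total = 0
    for i in range(0, len(buf), stride):
        total += buf[i]  # force the access so it isn't optimized away
    dt = time.perf_counter() - t0
    touched = len(range(0, len(buf), stride))
    return touched / dt

if __name__ == "__main__":
    # Sweep two of the ordinal variables: vector size and stride.
    for size_pow in (16, 18, 20):
        buf = list(range(1 << size_pow))
        for stride in (1, 2, 4, 8, 16):
            rate = strided_rate(buf, stride)
            print(f"size 2^{size_pow}, stride {stride:2d}: "
                  f"{rate / 1e6:.1f} M elements/s")
```

In C the same sweep over a byte array would show the performance surface
"folding" at the L1/L2 boundaries; each additional variable (threads,
nodes, backwards flag) multiplies the number of points to sample, which
is exactly the combinatorial problem above.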
I'd LIKE for the tool to be usable to ANSWER this question, as that
would give the world's Real Computer Scientists the opportunity to focus
on the relevant metrics when e.g. optimizing BLAST. In particular, it
seems like BLAST's optimal design might depend nontrivially on just the
parameters varied in this example.
> As always, the best test is your own code.
So let's make it EASY to test your own code and "publish" the result in
a consistent way...
> Velu Erwan said the following on 2005.03.23 06:16:
> > Hi folks,
> > I'm searching the best way to stress a storage system but using some
> > real applications. I mean, using some bonnie++, Iozone, b_eff_io
> > benchmarks could give some raw performances about what your storage
> > infrastructure is able to provide.
> > Some benchmarks like mpi-tile-io seems starting another way by trying to
> > match what could be the "real" performance of your storage regarding a
> > kind of application (visualization for mpi-tile-io).
> > Does other people are working on such kind of application oriented
> > benchmark ?
> > I was heard that some BLAST code could be used for such use, does some
> > of you follow this way of validating/benchmarking clusters ?
> - --
> Brian D. Ropers-Huilman .:. Asst. Director .:. HPC and Computation
> Center for Computation & Technology (CCT) bropers at cct.lsu.edu
> Johnston Hall, Rm. 350 +1 225.578.3272 (V)
> Louisiana State University +1 225.578.5362 (F)
> Baton Rouge, LA 70803-1900 USA http://www.cct.lsu.edu/
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525  email: rgb at phy.duke.edu