Athlon SDR/DDR stats for *specific* gaussian98 jobs

Martin Siegert siegert at sfu.ca
Fri May 4 14:22:27 PDT 2001


On Fri, May 04, 2001 at 11:13:54AM -0400, Robert G. Brown wrote:
<snip>
> BTW, a related and not irrelevant question.
> 
> You have said that G98 is your dominant application -- are you doing
> e.g. Quantum Chemistry?  There is a faculty person here (Weitao Yang)
> who is very interested in building a cluster to do quantum chemistry
> codes that use Gaussian 98 and FFT's and that sort of thing, and he's
> getting mediocre results with straight P3's on 100BT.  I'm not familiar
> enough with the problem to know if his results are poor because they are
> IPC bound (and he should get a better network) or memory bound (get
> alphas) or whatever.  But I'd like to.  Any general list-wisdom for
> quantum chemistry applications?  Is this an application likely to need
> a high end architecture (e.g. Myrinet and e.g. Alpha or P4) or would a
> tuned combination of something cheaper do as well?

I cannot tell you anything about Quantum Chemistry (which theoretical
physicist does? sounds like density functional theory - arrgh), but I do
know quite a bit about parallel FFTs.

Parallel FFTs don't work very well over 100baseT: the global transpose
step requires an all-to-all exchange between nodes, so the computation is
communication bound rather than compute bound. Hence upgrading your
processor speed (or even going to Alphas) will not help very much.
Getting a better network is the way to go (channel bonding or Myrinet).
Even switching from tulip cards to 905B's will help.

You can also optimize the parallel FFT itself: the best algorithm
depends on the system size, the network speed, the MPI implementation, etc.
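To see why the network dominates, here is a back-of-the-envelope sketch
(my own illustration, not from the original post): in a distributed FFT
each of P nodes must exchange roughly (P-1)/P of its share of the grid
with the other nodes during the transpose. The grid size, node count, and
bandwidth figures below are hypothetical example values.

```python
def transpose_comm_time(n_points, p_nodes, bandwidth_bytes_s,
                        bytes_per_point=16):
    """Seconds spent per all-to-all transpose (bandwidth term only,
    ignoring latency and contention -- a deliberately crude model)."""
    total_bytes = n_points * bytes_per_point       # complex doubles
    per_node = total_bytes / p_nodes               # data each node holds
    # each node ships the fraction (P-1)/P of its data to other nodes
    return per_node * (p_nodes - 1) / p_nodes / bandwidth_bytes_s

fast_eth = 100e6 / 8     # 100baseT: ~12.5 MB/s peak
myrinet = 1e9 / 8        # Myrinet-class link: ~125 MB/s, illustrative
n = 256**3               # hypothetical 256^3 grid of complex doubles

for bw, name in [(fast_eth, "100baseT"), (myrinet, "Myrinet-class")]:
    t = transpose_comm_time(n, 8, bw)
    print(f"{name}: {t:.2f} s per transpose on 8 nodes")
```

With these numbers the transpose alone costs a couple of seconds per FFT
on Fast Ethernet, versus a fraction of a second on a faster interconnect,
while the per-node compute time shrinks with faster CPUs either way.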

I summarized my experience here:

http://www.sfu.ca/acs/cluster/fft-performance.html

Martin

========================================================================
Martin Siegert
Academic Computing Services                        phone: (604) 291-4691
Simon Fraser University                            fax:   (604) 291-4242
Burnaby, British Columbia                          email: siegert at sfu.ca
Canada  V5A 1S6
========================================================================
