[Beowulf] Re: Opteron 275 performance
kus at free.net
Thu Jul 14 08:55:19 PDT 2005
In message from "S.I.Gorelsky" <gorelsky at stanford.edu> (Thu, 14 Jul
2005 07:18:22 -0700 (PDT)):
>> It depends. DFT, for example, (and not only DFT) shows good
>> scaling under Linda also (about 1.8-1.9 speedup at every doubling of the CPU count).
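(For reference, a per-doubling speedup factor compounds as a power law; a small sketch of the scaling implied by the 1.8-1.9 figure quoted above, taking 1.85 as an assumed midpoint:)

```python
import math

# Implied speedup if each doubling of the CPU count multiplies throughput
# by ~1.85 (an assumed midpoint of the 1.8-1.9 range quoted above).
def speedup(n_cpus, per_doubling=1.85):
    return per_doubling ** math.log2(n_cpus)

for n in (2, 4, 8):
    print(f"{n} CPUs: speedup {speedup(n):.2f}, efficiency {speedup(n)/n:.0%}")
```

So 4 CPUs would give roughly 3.4x (about 86% parallel efficiency), and efficiency slowly drops as nodes are added.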
>What Gaussian version did you test? Gaussian 98 had a good Linda
>parallelization (at least for DFT).
Of course we ran a lot of jobs on G98 also, but ...
>With Gaussian 03, I do not think this is the case.
... I have no data showing that G03 scales worse with Linda than G98 did.
We have G03 on a cluster with only 3 nodes (dual-Opteron), and at least
on this small cluster the G03/Linda scalability is adequate.
Did you use the NoFMM G03 keyword for your cluster runs?
In any case, for G03 on a cluster there is no alternative to Linda.
*I forgot to say*: the best way in this case is to use SMP *inside* the
nodes plus Linda between the nodes; G03 allows this combination.
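As a minimal sketch, the G03 Link 0 header for such an SMP-inside-nodes plus Linda-between-nodes run might look like the following (the worker counts, memory request, and route line are hypothetical examples; %NProcShared, %NProcLinda, and NoFMM are the relevant G03 directives/keywords):

```
%NProcShared=2      ! 2 shared-memory threads inside each dual-Opteron node (hypothetical)
%NProcLinda=3       ! 3 Linda workers, one per node (hypothetical)
%Mem=512MB          ! hypothetical memory request
# B3LYP/6-31G(d) NoFMM
```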
>> And I don't remember: what besides HF and DFT is parallelized in
>> G03 for shared memory?
>Most jobs, not just HF and DFT, can be run in SMP. This is not a
Ehh, it looks like my information about Gaussian SMP parallelization is
out of date: in the old days, AFAIK, things like MP2 were not
SMP-parallelized. Now HF, DFT, and CIS (maybe MP2 also) are
SMP-parallelized. What else?