[Beowulf] Infiniband Configuration Options
llw7 at ix.netcom.com
Thu Jul 28 14:31:54 PDT 2005
We're working on a 64-node dual-core Opteron cluster running a wide variety of
MPI-heavy applications, and have run into the typical funding issues that
are forcing some configuration restrictions (downsizing our wish list).
The original concept called for 64 dual-processor, dual-core nodes connected
through a single InfiniBand 9288 or 9096 switch.
To reduce cost, the suggestion was raised to cascade four 9024
switches instead. Just wondering how severe the hop penalty might be?
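For anyone wanting a rough feel for the numbers, here is a back-of-envelope sketch. The figures are assumptions, not vendor specs: InfiniBand switch silicon of this generation is commonly quoted at roughly 200 ns of forwarding latency per hop, against a small-message MPI latency of around 5 us through a single switch. Under those assumptions the extra hops add only a few percent:

```python
# Back-of-envelope estimate of the latency cost of cascading switches.
# Both constants below are assumptions for illustration, not measured
# or vendor-published figures for any specific switch model.

PER_HOP_NS = 200    # assumed per-switch forwarding latency (ns)
BASE_MPI_US = 5.0   # assumed small-message MPI latency with one switch hop (us)

def mpi_latency_us(extra_hops: int) -> float:
    """Estimated MPI latency when a message crosses extra_hops additional switches."""
    return BASE_MPI_US + extra_hops * PER_HOP_NS / 1000.0

for hops in range(3):
    print(f"{hops} extra hop(s): ~{mpi_latency_us(hops):.2f} us")
```

Under these assumptions the per-hop latency penalty is small; the bigger concern with cascading small switches is usually bandwidth oversubscription, since nodes on one switch must share the uplink ports to reach nodes on the others.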
My best to you