[Beowulf] Recommendations For Beowulf Cluster SMP Motherboards?
atp at piskorski.com
Tue Jun 13 12:15:52 PDT 2006
On Fri, Jun 09, 2006 at 02:09:25AM -0700, EKC wrote:
> I am going to build a 32-node Beowulf cluster interconnected by
> tri-bonded gigabit Ethernet on a stack of SMC Layer-3 gigabit switches
Tri-bonded? Have you tried this? What MPI stack and/or other
interconnect software do you plan to use?
I recall that MP_Lite got bandwidth improvements up through 3 bonded
gigabit interfaces, but the 3rd NIC didn't add that much benefit. And
last I heard, Linux kernel gigabit bonding was broken: 2 bonded NICs
got much worse performance than 1 NIC. The graphs I saw showed both
effects.
I see no dates on that page though, and I don't know when those
results were measured. Perhaps things have changed since then?
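For reference, kernel bonding in round-robin (striping) mode is set up
roughly like the sketch below on a 2006-era kernel. This is only a
hypothetical example; the interface names, IP address, and netmask are
placeholders for whatever your nodes actually use.

```shell
# Load the bonding driver in round-robin mode (stripes packets
# across slaves; miimon enables link monitoring every 100 ms).
modprobe bonding mode=balance-rr miimon=100

# Bring up the bond interface (address/netmask are examples).
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up

# Enslave the three gigabit NICs to the bond.
ifenslave bond0 eth0 eth1 eth2

# Verify which slaves are active and their link state.
cat /proc/net/bonding/bond0
```

Note that balance-rr can reorder TCP segments across the slaves, which
is one plausible cause of the poor 2-NIC results mentioned above, so
it's worth benchmarking your MPI stack over the bond before committing
to the design.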
Btw, what do you plan to run on this cluster?
> Does anyone have experience with the K8N Master2-Far or the Asus
> K8N-DL? http://www.newegg.com/Product/Product.asp?Item=N82E16813131059
The few ATX-sized dual Opteron boards available are certainly nice if
you want to fit them into an ATX-size mid-tower workstation case. And
they tend to be cheaper (half the price?) than high-end server-
oriented boards from SuperMicro or the like.
However, the older non-PCI-Express versions of that MSI K8N
Master2-Far board all connected only ONE of the two Opterons to the
DIMMs. Thus the 2nd Opteron has to do all its memory access via the
1st Opteron's HT link, so the 2nd Opteron sees more memory latency, and,
probably more important for you, the total aggregate memory bandwidth
is only 1/2 what you'd get with a real server-grade dual Opteron
board.
I've no idea whether that is still the case with the current
MSI K8N Master2-Far, but it's something you'll want to check
carefully when considering those sorts of motherboards...
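One way to check this once you have a board in hand is to look at the
NUMA topology the kernel reports and compare memory bandwidth from each
CPU. A rough sketch (assuming numactl is installed and `./stream` is a
locally built STREAM benchmark binary; both are assumptions, not part
of any vendor's tooling for these boards):

```shell
# Show NUMA nodes and how much memory is attached to each.
# If only node 0 reports memory, all DIMMs hang off one Opteron
# and the second CPU pays a HyperTransport hop for every access.
numactl --hardware

# Compare bandwidth with the benchmark pinned to each CPU but
# always using node 0's memory; a large gap confirms the asymmetry.
numactl --cpunodebind=0 --membind=0 ./stream
numactl --cpunodebind=1 --membind=0 ./stream
```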
Andrew Piskorski <atp at piskorski.com>