[Beowulf] Infiniband: MPI and I/O?

Gilad Shainer Shainer at Mellanox.com
Thu May 26 09:50:17 PDT 2011


> Wondering if anyone out there is doing both I/O to storage as well as
> MPI over the same IB fabric.  Following along in the Mellanox User's
> Guide, I see a section on how to implement the QoS for both MPI and my
> Lustre storage.  I am curious though as to what might happen to the
> performance of the MPI traffic when high I/O loads are placed on the
> storage.

I am doing it in my lab - I have built my own Lustre solution and am
running it on the same network as the MPI jobs. In the end it all
depends on how much bandwidth you need for MPI and for storage; if the
fabric can cover both, you can do it. The QoS solution for IB is out
there today, and you can set maximum bandwidth and minimum latency as
parameters for the different traffic classes.
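
As a rough illustration (this is not lifted from the Mellanox User's
Guide; the service id, SLs and arbitration weights below are
assumptions you would adapt to your own fabric), OpenSM's QoS policy
file can put Lustre traffic on its own SL while MPI stays on the
default SL, and the VL arbitration weights then decide how each link
is shared:

    # /etc/opensm/qos-policy.conf (simplified "ulps" form)
    qos-ulps
        default                            : 0  # MPI and anything unmatched -> SL 0
        any, service-id 0x00000000010603DC : 1  # Lustre o2iblnd via RDMA CM
                                                # (TCP port space, port 988) -> SL 1
    end-qos-ulps

    # /etc/opensm/opensm.conf (illustrative values only)
    qos TRUE
    # map SL 0 -> VL 0 (MPI), SL 1 -> VL 1 (Lustre), all other SLs -> VL 0
    qos_sl2vl 0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0
    # weighted round-robin in the low-priority table:
    # roughly 2/3 of the credits to MPI, 1/3 to Lustre
    qos_vlarb_high 0:0,1:0
    qos_vlarb_low 0:128,1:64
    qos_high_limit 0

On the MPI side you can pin the job to SL 0 explicitly, e.g. with Open
MPI: mpirun --mca btl_openib_ib_service_level 0 ... Lustre is matched
by its service id, so the clients need no changes.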


> In our current implementation, we are using blades which are 50% 
> blocking (2:1 oversubscribed) when moving from a 16 blade chassis to 
> other nodes.  Would trying to do storage on top dictate moving to a 
> totally non-blocking fabric?

IB congestion control is finally being released now, and it can help
here: instead of forcing you to a fully non-blocking fabric, it
throttles the flows that are creating hot spots on the oversubscribed
uplinks.
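
To put a number on the 2:1 oversubscription (a back-of-the-envelope
sketch, assuming QDR 4x links at ~32 Gb/s of data rate and 8 uplinks
serving the 16 blades):

    16 blades  x 32 Gb/s = 512 Gb/s of injection bandwidth
     8 uplinks x 32 Gb/s = 256 Gb/s out of the chassis
    => ~16 Gb/s per blade if all 16 stream off-chassis at once

So the real question is whether your MPI plus Lustre peaks fit inside
that per-blade budget; if they do, QoS plus congestion control is a
cheaper answer than rewiring for a non-blocking fabric.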

Gilad
