[Beowulf] Connecting two 24-port IB edge switches to core switch: extra switch hop overhead

Ivan Oleynik iioleynik at gmail.com
Mon Feb 9 21:01:26 PST 2009


It would be nice to have non-blocking communication within the entire system,
but the critical part is connecting the 36-node complex to the main cluster.
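
A minimal sketch of the blocking arithmetic behind that trade-off (the host and
uplink port counts below are illustrative assumptions, not the actual design):

    # Rough oversubscription estimate for one 24-port edge switch
    # uplinked to a core switch. Port counts are assumed for illustration.
    def oversubscription(host_ports, uplink_ports):
        """Worst-case blocking factor for traffic leaving the edge switch.

        1.0 means fully non-blocking toward the core; larger values mean
        that many hosts contend for each uplink in the worst case.
        """
        return host_ports / uplink_ports

    # e.g. 18 hosts on a 24-port switch leaves 6 uplinks -> 3:1 oversubscription
    print(oversubscription(host_ports=18, uplink_ports=6))   # prints 3.0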

On Mon, Feb 9, 2009 at 1:33 AM, Gilad Shainer <Shainer at mellanox.com> wrote:

>  Do you plan to have full non-blocking communication between the new
> systems and the core switch?
>
> ------------------------------
> From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Ivan Oleynik
> Sent: Sunday, February 08, 2009 8:20 PM
> To: beowulf at beowulf.org
> Subject: [Beowulf] Connecting two 24-port IB edge switches to core switch: extra switch hop overhead
>
> I am purchasing a 36-node cluster that will be integrated into an already
> existing system. I am exploring the possibility of using two 24-port 4X IB
> edge switches in a core/leaf design, each with a maximum aggregate capacity
> of 960 Gb/s (DDR) or 480 Gb/s (SDR). They would be connected to the main
> Qlogic Silverstorm switch.
>
> I would appreciate some information on the communication overhead incurred
> by this setup (a rough per-hop estimate is sketched after this quoted
> message). I am trying to minimize the cost of the IB communication
> hardware, and buying a single 48-port switch looks like a really expensive
> option.
>
> Thanks,
>
> Ivan
>
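
To put a number on the extra hop in the quoted setup, here is a rough sketch;
the per-hop switch latency and usable DDR bandwidth are assumed typical values,
not measured figures for the Silverstorm core switch or the edge switches in
question:

    # Back-of-the-envelope cost of one extra switch hop vs. message size.
    HOP_LATENCY_US = 0.2      # assumed ~200 ns per DDR IB switch hop
    DDR_BW_BYTES_S = 2.0e9    # assumed ~2 GB/s usable 4X DDR bandwidth per direction

    def transfer_time_us(msg_bytes, extra_hops):
        serialization_us = msg_bytes / DDR_BW_BYTES_S * 1e6   # time on the wire
        return serialization_us + extra_hops * HOP_LATENCY_US

    for size in (1024, 65536, 1048576):
        base = transfer_time_us(size, extra_hops=0)
        extra = transfer_time_us(size, extra_hops=1) - base
        print("%9d B: +%.2f us (%.1f%% overhead)" % (size, extra, extra / base * 100))

For small messages the fixed per-hop latency dominates, while for large messages
the extra hop is a sub-percent effect, which is usually the crux of this kind of
cost/performance decision.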