[Beowulf] InfiniBand Support?

John Hearns hearnsj at googlemail.com
Mon May 26 05:41:22 PDT 2014


Looking on Colfax Direct, there is an 18-port QDR Mellanox switch at $3350,
a single-port QDR card at $522,
and 3-metre cables at $48.

Total of $12470 for your 16 nodes.
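
A rough Python sketch of that sum, assuming one card and one cable per node
(list prices as of May 2014; check current pricing yourself):

    # Cost of a QDR InfiniBand fabric for a 16-node cluster
    nodes  = 16
    switch = 3350  # 18-port Mellanox QDR switch
    hca    = 522   # single-port QDR card, one per node
    cable  = 48    # 3-metre cable, one per node

    print(switch + nodes * (hca + cable))  # -> 12470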

But PAY ATTENTION to the PCIe bus type on your motherboard (I learned this
the hard way...):

   - PCIe Base 3.0 compliant, 1.1 and 2.0 compatible
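
On a Linux node you can verify what link each device actually trained at.
A minimal Python sketch reading the kernel's sysfs PCIe attributes (the
same information appears in lspci -vv under LnkSta):

    # Print the negotiated PCIe link speed and width for every PCI device.
    # Assumes Linux sysfs; devices without a PCIe link are skipped.
    import glob
    import os

    for dev in sorted(glob.glob('/sys/bus/pci/devices/*')):
        try:
            with open(os.path.join(dev, 'current_link_speed')) as f:
                speed = f.read().strip()
            with open(os.path.join(dev, 'current_link_width')) as f:
                width = f.read().strip()
        except IOError:
            continue  # no PCIe link attributes for this device
        print('%s: %s x%s' % (os.path.basename(dev), speed, width))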


I would say, though, that the above bundle could easily be reused when you
get higher-specification motherboards.
Or, as Andrew says, spend that amount on a powerful single-motherboard
system.

On 26 May 2014 13:23, John Hearns <hearnsj at googlemail.com> wrote:

> Amjad,
> I agree with what Andrew says - you can buy some very powerful
> single-motherboard systems these days.
>
>
> I would suggest two things though:
>
> a) If you are planning to use this cluster for learning about MPI and
> cluster interconnects, why not get some Gigabit Ethernet cards plus a
> Gigabit switch? (There is a minimal MPI sketch further down.)
>
> b) Look for second-hand InfiniBand cards and a switch on eBay,
>    or look at what Colfax Direct sell: http://www.colfaxdirect.com
>
> I saw one of these rather nifty eight-port switches from Mellanox at
> Cloud Expo in London:
>
> http://www.colfaxdirect.com/store/pc/viewPrd.asp?idproduct=601&idcategory=7
>
> Sweet!
> Or, even easier, the twelve-port version:
>
> http://www.colfaxdirect.com/store/pc/viewPrd.asp?idproduct=1819&idcategory=7
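>
> If the goal is learning MPI, any of these fabrics will do; the code is
> the same either way. A minimal mpi4py sketch (assumes mpi4py plus an MPI
> implementation such as Open MPI is installed on the nodes):
>
>     # hello.py -- run with: mpirun -np 4 python hello.py
>     from mpi4py import MPI
>
>     comm = MPI.COMM_WORLD
>     print('rank %d of %d' % (comm.Get_rank(), comm.Get_size()))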
>
> On 26 May 2014 12:58, Andrew Holway <andrew.holway at gmail.com> wrote:
>
>> Hi,
>>
>> This cluster is now a little bit ancient. I have a feeling that, for the
>> price of upgrading your network to InfiniBand (around $10000 for QDR), you
>> could buy a single dual-socket server that would be more powerful. The PCIe
>> bus on those systems is x8 Gen1, which would halve QDR's usable bandwidth
>> anyway. You could *maybe* find old DDR IB on eBay or similar.
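>>
>> Back-of-the-envelope on that halving, in Python (both QDR and PCIe Gen1
>> use 8b/10b encoding, so the usable rate is 80% of the signalling rate):
>>
>>     # QDR InfiniBand: 4 lanes at 10 Gbit/s signalling, 8b/10b encoded
>>     qdr_data = 4 * 10.0 * 8 / 10    # -> 32 Gbit/s usable
>>     # PCIe Gen1 x8: 8 lanes at 2.5 GT/s, also 8b/10b encoded
>>     pcie_data = 8 * 2.5 * 8 / 10    # -> 16 Gbit/s usable
>>     print(pcie_data / qdr_data)     # -> 0.5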
>>
>> What kind of application are you running on the cluster? Is it connected
>> to any kind of storage system?
>>
>> Thanks,
>>
>> Andrew
>>
>>
>>
>> On 26 May 2014 11:50, amjad ali <amjad11 at gmail.com> wrote:
>>
>>>  Dear All,
>>>
>>> We have a small cluster of 16 nodes (single socket) with Intel S3210SH
>>> motherboards. Do these boards fully support connecting the nodes through
>>> an InfiniBand switch, with the relevant InfiniBand host adapter/interface
>>> cards installed?
>>>
>>> Is it worth adding such a high-speed interconnect to such a
>>> general-purpose cluster?
>>>
>>> If InfiniBand is not supported, can we plan for another high-speed
>>> interconnect technology such as Myrinet or Quadrics?
>>>
>>> Regards,
>>> Amjad Ali
>>>
>>>
>>>
>