[Beowulf] Mellanox Multi-host
e.scott.atchley at gmail.com
Wed Mar 11 07:44:16 PDT 2015
Looking at this and the above link:
It seems that the OCP Yosemite is a motherboard that allows four compute
cards to be plugged into it. The compute cards can even have different CPUs
(x86, ARM, Power). The Yosemite board has the NIC and connection to the
switch. It is not clear if the "multi-host connection" is tunneled over the
PCIe connection between the compute card and the Yosemite board or if
network communication is handled over the compute card's NIC to the
aggregator on the Yosemite board. I expect it is tunneled over PCIe, but
more details would be nice.
It seems the whole OCP Yosemite project is geared towards avoiding NUMA and
using cheaper, simpler CPUs.
On Wed, Mar 11, 2015 at 8:51 AM, John Hearns <hearnsj at googlemail.com> wrote:
> Talking about 10Gbps networking... and above:
> "In the configuration Mellanox demonstrated, a 648-node cluster would only
> need 162 each of NICs, ports and cables."
> So it looks like one switch port can fan out to four hosts,
> and they talk about mixing FPGA and GPU
> Might make for a very interesting cluster.
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
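
The quoted NIC/port/cable count follows from the 4:1 multi-host fan-out:
each Mellanox multi-host NIC (with its one switch port and cable) is shared
by four compute hosts, so 648 nodes need 648 / 4 = 162 of each. A quick
sketch of that arithmetic:

```python
# Sanity check of the fan-out numbers from the quoted message:
# one multi-host NIC, switch port, and cable is shared by four hosts.
nodes = 648
hosts_per_nic = 4

nics = nodes // hosts_per_nic  # also the number of ports and cables
print(nics)  # 162 each of NICs, ports, and cables
```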