[Beowulf] Great Lakes cluster

Jonathan Engwall engwalljonathanthereal at gmail.com
Mon Oct 22 11:37:32 PDT 2018


https://krebsonsecurity.com/2014/02/the-new-normal-200-400-gbps-ddos-attacks/

Here is a story, almost comical: a 200 Gbps link certainly handles high request volume.

On October 22, 2018, at 1:10 AM, John Hearns via Beowulf <beowulf at beowulf.org> wrote:

>
>
>I will slightly blow my own trumpet here. I think a design which has high bandwidth uplinks and half speed links to the compute nodes is a good idea.
>
>I would love some pointers to studies on bandwidth utilisation in large-scale codes.
>
>Are there really any codes which will use 200Gbps across many nodes simultaneously?
>
>On Sun, 21 Oct 2018 at 18:57, John Hearns <hearnsj at googlemail.com> wrote:
>
>A comment from Brock Palen, please?
>
>https://www.nextplatform.com/2018/10/18/great-lakes-super-to-remove-islands-of-compute/
>
>I did a bid for a new HPC cluster at UCL in the UK, using FDR adapters and 100Gbps switches, making the same arguments about cutting down on switch counts while still having a non-blocking network (at the time Mellanox were promoting FDR by selling it at 40Gbps prices).
>
>But in this article if you have 1x switch in a rack and use all 80 ports (with splitters) - there are not many ports left for uplinks!
>
>I imagine this is 2x 200Gbps switches, with 20 ports of each switch equipped with port splitters and the other 20 ports as uplinks.
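The port arithmetic behind that guess can be sketched quickly. This is a minimal, hypothetical calculation, assuming a 40-port 200 Gbps (HDR-class) leaf switch whose ports can be broken out into 2x 100 Gbps node links with splitter cables; the port counts and splitter ratio are illustrative, not from the article.

```python
# Hypothetical leaf-switch port budget, assuming a 40-port 200 Gbps switch
# and 2-way breakout (splitter) cables to 100 Gbps compute nodes.
def leaf_oversubscription(total_ports=40, uplink_ports=20,
                          port_gbps=200, splitter_ways=2):
    """Return (node_links, downlink_gbps, uplink_gbps, oversub_ratio)."""
    split_ports = total_ports - uplink_ports
    node_links = split_ports * splitter_ways            # 100 Gbps node links
    downlink_gbps = node_links * (port_gbps // splitter_ways)
    uplink_gbps = uplink_ports * port_gbps
    return node_links, downlink_gbps, uplink_gbps, downlink_gbps / uplink_gbps

# 20 split ports -> 40 nodes at 100 Gbps; 20 uplinks at 200 Gbps:
# a 1:1 (non-blocking) leaf, consistent with the 2x-switch guess above.
print(leaf_oversubscription())
```

With 20 ports split and 20 left as uplinks, downlink and uplink bandwidth both come to 4 Tbps per switch, i.e. non-blocking; using all 80 split ports for nodes, as the article's single-switch reading implies, would leave no uplink capacity at all.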
>


More information about the Beowulf mailing list