[Beowulf] Register article on Epyc

Kilian Cavalotti kilian.cavalotti.work at gmail.com
Wed Jun 21 09:39:51 PDT 2017


On Wed, Jun 21, 2017 at 5:39 AM, John Hearns <hearnsj at googlemail.com> wrote:
> For a long time the 'sweet spot' for HPC has been the dual socket Xeons.

True, but why? I guess because there weren't many other options: in
the early days of multicore CPUs, dual sockets were the only way to
get decent local parallelism, even with QPI (and its ancestors) being
a bottleneck, and the only way to get enough PCIe lanes (40 lanes
ought to be enough for anyone, right?)

But now, with 20+ core CPUs, does it still really make sense to have
dual socket systems everywhere, with NUMA effects all over the place
that typical users are blissfully unaware of?
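For the curious, those NUMA effects are easy to see on any Linux dual-socket node. A quick sketch (node numbers, distances, and the `./my_app` binary are illustrative, not from any specific system):

```shell
# Show NUMA topology: node count, per-node memory, and inter-node
# distances (remote access typically costs noticeably more than local)
numactl --hardware

# lscpu also reports the NUMA node count and CPU-to-node mapping
lscpu | grep -i numa

# Pin a process and its memory allocations to node 0 to avoid
# remote-memory penalties (./my_app is a hypothetical placeholder)
numactl --cpunodebind=0 --membind=0 ./my_app
```

On a single-socket part, all of that simply goes away, which is part of the appeal.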

Seems to me like this is a smart design move from AMD: single-socket
systems with 20+ core CPUs and 128 PCIe lanes could make a very cool
base for many HPC systems. Of course, that's just on paper for now;
proper benchmarking will be required.

Cheers,
-- 
Kilian
