[Beowulf] Register article on Epyc

John Hearns hearnsj at googlemail.com
Thu Jun 22 07:06:27 PDT 2017


Echoing what Joe says, "The Network is the Computer" - now who said that
(hmmmm....)
We know this anyway - more attention is being paid to memory bandwidth and
memory access patterns rather than to hero core counts.
Perhaps I'm replaying a long-running LP record, but going back to working
on CFD we had an SGI Itanium server with single-core processors.
That server ran from day one till the knackers turned up to take it away to
the farm (aka SGI engineers with a tail-lift truck).
I literally had to stop user jobs to give the engineers the chance to switch it off.
Mind you, that thing ate power....
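
The access-pattern point is easy to demonstrate even from Python. A rough
sketch (the function name, working-set sizes, and iteration count below are
my own illustration, nothing from any particular system): time random reads
over a working set that fits in cache versus one that spills to DRAM, and
the per-access cost grows even though the instruction stream is identical.
Python overheads blunt the effect - a pointer-chasing loop in C is the usual
tool - but the shape of the result is the same.

```python
import time
import random

def ns_per_access(n_elems, iters=200_000):
    """Average nanoseconds per random indexed read over n_elems ints."""
    data = list(range(n_elems))
    # Precompute the random index stream so shuffling isn't timed.
    idx = [random.randrange(n_elems) for _ in range(iters)]
    t0 = time.perf_counter()
    s = 0
    for i in idx:
        s += data[i]
    t1 = time.perf_counter()
    return (t1 - t0) / iters * 1e9

small = ns_per_access(2_000)       # a few KB of pointers: cache-resident
large = ns_per_access(5_000_000)   # tens of MB: well past the LLC
print(f"small set: {small:.0f} ns/access, large set: {large:.0f} ns/access")
```

Same loop, same work per iteration; only the working-set size changes.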

Perhaps it's a good time to be in HPC.  Do we see two camps forming - Intel,
with Omni-Path and Xeon Phi integrated onto the package, in one corner,
and AMD / Mellanox in the other?  Of course the other two corners
of the ring have ARM and Power.
And who said rings have to be four-sided?  Being in HPC we would of course
have a hyper-ring....


On 22 June 2017 at 15:31, Scott Atchley <e.scott.atchley at gmail.com> wrote:

> Hi Mark,
>
> I agree that these are slightly noticeable but they are far less than
> accessing a NIC on the "wrong" socket, etc.
>
> Scott
>
> On Thu, Jun 22, 2017 at 9:26 AM, Mark Hahn <hahn at mcmaster.ca> wrote:
>
>> But now, with 20+ core CPUs, does it still really make sense to have
>>> dual socket systems everywhere, with NUMA effects all over the place
>>> that typical users are blissfully unaware of?
>>>
>>
>> I claim single-socket systems already have NUMA effects, since multiple
>> layers (differently-shared) of cache have the same effect as memory at
>> different hop-distances.
>>
>> regards, mark hahn.
>>
>> _______________________________________________
>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
>> To change your subscription (digest mode or unsubscribe) visit
>> http://www.beowulf.org/mailman/listinfo/beowulf
>>

