[Beowulf] NUMA zone weirdness

Peter St. John peter.st.john at gmail.com
Fri Dec 16 14:35:10 PST 2016


I noticed an odd thing; maybe someone more hardware-clueful could explain it?
Node 1 has 32 GB (that is, 32 * 1024 = 32768 MB), but node 0 reports an odd
number (very odd, to me): 32673 MB, which is 95 MB short. It doesn't make
sense to me that a bank of bad memory would be short by such a funny number.
Peter
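As a quick sanity check (a hypothetical sketch, not part of the original thread), the 95 MB shortfall can be recomputed from the node sizes quoted in the `numactl --hardware` output below:

```python
# Sketch: parse the "node N size: M MB" lines from the quoted
# `numactl --hardware` output and compute each node's shortfall
# from a full 32 GiB (32 * 1024 = 32768 MB) bank.
import re

numactl_output = """\
node 0 size: 32673 MB
node 1 size: 32768 MB
"""

EXPECTED_MB = 32 * 1024  # a fully populated 32 GB socket

deficits = {}
for line in numactl_output.splitlines():
    m = re.match(r"node (\d+) size: (\d+) MB", line)
    if m:
        node, size = int(m.group(1)), int(m.group(2))
        deficits[node] = EXPECTED_MB - size

print(deficits)  # node 0 is 95 MB short; node 1 is exactly full
```

The 95 MB on node 0 is consistent with firmware/BIOS reservations (e.g. for the memory map) being carved out of the first node's memory rather than with a failed DIMM, which would be short by a power-of-two amount.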

On Fri, Dec 16, 2016 at 4:59 PM, Greg Lindahl <lindahl at pbm.com> wrote:

> Wow, that's pretty obscure!
>
> I'd recommend reporting it to Intel so that they can add it to the
> descendants of ipath_checkout / ipath_debug. It's exactly the kind of
> hidden gotcha that leads to unhappy systems!
>
> -- greg
>
> On Fri, Dec 16, 2016 at 03:52:34PM +0000, John Hearns wrote:
> > Problem solved.
> > I have changed the QPI Snoop Mode on these servers from Cluster-on-Die
> > Enabled to Disabled, and they display what I take to be correct
> > behaviour - i.e.
> >
> > [root at comp006 ~]# numactl --hardware
> > available: 2 nodes (0-1)
> > node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11
> > node 0 size: 32673 MB
> > node 0 free: 31541 MB
> > node 1 cpus: 12 13 14 15 16 17 18 19 20 21 22 23
> > node 1 size: 32768 MB
> > node 1 free: 31860 MB
> > node distances:
> > node   0   1
> >   0:  10  21
> >   1:  21  10
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>

