[Beowulf] NUMA zone weirdness
tom.elken at intel.com
Fri Dec 16 14:26:33 PST 2016
Hi John and Greg,
You showed Nodes 0 & 2 (no node 1) and a strange CPU assignment to nodes!
Even though you had Cluster On Die (CoD) enabled in your BIOS, I have never seen that arrangement of NUMA nodes and CPUs. You may have a bug in your BIOS or OS.
With CoD enabled, I would have expected 4 NUMA nodes, 0-3, and 6 cores assigned to each one.
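For reference, on a 2-socket node with 12 cores per socket (matching the output quoted below), a CoD-enabled topology would normally look roughly like this. This is only a sketch: memory sizes and the exact CPU-to-node enumeration will differ on your systems.

  $ numactl --hardware
  available: 4 nodes (0-3)
  node 0 cpus: 0 1 2 3 4 5
  node 1 cpus: 6 7 8 9 10 11
  node 2 cpus: 12 13 14 15 16 17
  node 3 cpus: 18 19 20 21 22 23

The node distance table would then have four rows, with smaller distances between the two halves of the same socket than across the QPI link.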
The Omni-Path Performance Tuning User Guide
does recommend disabling CoD in Xeon BIOSes (Table 2 on p. 12), but that is not a hard prohibition.
Disabling it improves some fabric performance benchmarks, while enabling it helps the performance of some single-node applications, which could outweigh the fabric performance aspects.
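After changing the snoop mode in the BIOS, a quick way to confirm from the OS that the topology came up as expected (standard commands, nothing Omni-Path-specific):

  $ lscpu | grep -i numa      # NUMA node count and per-node CPU lists
  $ numactl --hardware        # per-node memory sizes and the node distance table

If the node count or the CPU lists still look wrong after the change, that would point at a BIOS or kernel enumeration bug rather than anything on the fabric side.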
> -----Original Message-----
> From: Beowulf [mailto:beowulf-bounces at beowulf.org] On Behalf Of Greg
> Sent: Friday, December 16, 2016 2:00 PM
> To: John Hearns
> Cc: Beowulf Mailing List
> Subject: Re: [Beowulf] NUMA zone weirdness
> Wow, that's pretty obscure!
> I'd recommend reporting it to Intel so that they can add it to the
> descendants of ipath_checkout / ipath_debug. It's exactly the kind of
> hidden gotcha that leads to unhappy systems!
> -- greg
> On Fri, Dec 16, 2016 at 03:52:34PM +0000, John Hearns wrote:
> > Problem solved.
> > I have changed the QPI Snoop Mode on these servers from
> > Cluster on Die Enabled to Disabled, and they now display what I take to be correct
> > behaviour, i.e.:
> > [root at comp006 ~]# numactl --hardware
> > available: 2 nodes (0-1)
> > node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11
> > node 0 size: 32673 MB
> > node 0 free: 31541 MB
> > node 1 cpus: 12 13 14 15 16 17 18 19 20 21 22 23
> > node 1 size: 32768 MB
> > node 1 free: 31860 MB
> > node distances:
> > node 0 1
> > 0: 10 21
> > 1: 21 10