[Beowulf] Q: IB message rate & large core counts (per node)?
Shainer at mellanox.com
Mon Mar 15 16:09:53 PDT 2010
I don’t appreciate that kind of response, and it is not appropriate for this mailing list. Please fix this in future emails. I stand behind any info I put out, and I definitely don’t low-ball estimates the way you do. It was nice to see that you fixed your 20+20 numbers to 24+23 (was that the marketing you did?), but I suggest you do a better search and look at numbers from recent systems with decent BIOS settings. A Gen2 system can provide 3300 MB/s unidirectional or >6500 MB/s bidirectional. Of course you can find setups that give lower performance, and I can send you some instructions to get the PCIe BW even lower than 20 for your own performance testing if you want. It will still be much higher than what you can do with Myri10G...
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Patrick Geoffray
Sent: Monday, March 15, 2010 3:48 PM
To: beowulf at beowulf.org
Subject: Re: [Beowulf] Q: IB message rate & large core counts (per node)?
On 3/15/2010 5:33 PM, Gilad Shainer wrote:
> To make it more accurate, most PCIe chipsets support 256B reads, and
> the data bandwidth is 26 Gb/s, which makes it 26+26, not 20+20.
I know marketers live in their own universe, but here are a few nuts
for you to crack:
* If most PCIe chipsets effectively did 256B Completions, why is
the max unidirectional bandwidth for QDR/Nehalem 3026 MB/s (24.2
Gb/s), as reported in the latest MVAPICH announcement?
3026 MB/s is 73.4% efficiency compared to the raw bandwidth of 4 GB/s
for Gen2 x8. With 256B Completions, the PCIe efficiency would be 92.7%,
so someone would be losing 19.3%. Would that be your silicon?
* For 64B Completions: 64/84 is 0.7619, and 0.7619 * 32 = 24.38 Gb/s.
How do you get 26 Gb/s again?
* PCIe is a reliable protocol, so there are Acks flowing in the other
direction. If you claim that one way is 26 Gb/s and two-way is 26+26
Gb/s, does that mean you have invented a reliable protocol that does
not need Acks?
* If bidirectional is 26+26, why is the max bidirectional bandwidth
reported by MVAPICH 5858 MB/s, i.e. 46.8 Gb/s or 23.4+23.4 Gb/s?
Granted, it's more than 20+20, but it depends a lot on the
chipset-dependent pipeline depth.
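For what it's worth, the arithmetic in the points above can be sanity-checked with a short script. This is a sketch, not anyone's official methodology: the ~20-byte per-TLP overhead is the figure implied by the 64/84 ratio in the post, and decimal MB/Gb units are assumed throughout.

```python
# Sanity checks for the PCIe/IB numbers discussed above.
# Assumption: ~20 bytes of overhead (header + framing) per PCIe
# completion TLP, matching the 64/(64+20) arithmetic in the post.

TLP_OVERHEAD = 20  # bytes per completion TLP (assumed)

def completion_efficiency(payload_bytes, overhead=TLP_OVERHEAD):
    """Fraction of PCIe link bandwidth that carries payload."""
    return payload_bytes / (payload_bytes + overhead)

# 256B completions: ~92.7% efficiency, as stated above.
print(f"256B completions: {completion_efficiency(256):.2%}")      # 92.75%

# 64B completions applied to the 32 Gb/s QDR data rate.
eff64 = completion_efficiency(64)                                 # 64/84 = 0.7619
print(f"64B completions:  {eff64:.4f} -> {eff64 * 32:.2f} Gb/s")  # 0.7619 -> 24.38

# Measured MVAPICH numbers converted to Gb/s (decimal units).
uni = 3026 * 8 / 1000    # ~24.2 Gb/s unidirectional
bidir = 5858 * 8 / 1000  # ~46.9 Gb/s aggregate, ~23.4 per direction
print(f"uni:   {uni:.2f} Gb/s")
print(f"bidir: {bidir:.2f} Gb/s ({bidir / 2:.2f}+{bidir / 2:.2f})")
```

The 19.3% gap Patrick points at is just the difference between the 256B and measured efficiencies under this overhead model.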
BTW, Greg's offer is still pending...
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing