[Beowulf] Intel buys QLogic InfiniBand business
Shainer at Mellanox.com
Sat Jan 28 10:21:59 PST 2012
> > So I wonder why multiple OEMs decided to use Mellanox for on-board
> > solutions and no one used the QLogic silicon...
> That's a strange argument.
It is not an argument, it is stating a fact. If someone claims that a product provides 10x better performance, a better fit, etc., and yet it attracts very few buyers, something does not make sense.
> What does Intel want? Something to make them more money.
Intel explained their move in their PR. They see lots of growth in HPC, particularly toward Exascale, and they see InfiniBand as key to delivering the right solution. They also mention InfiniBand adoption in other markets, which is good validation of InfiniBand as a leading solution for any server and storage connectivity.
> >> Also, keep in mind that Intel's benchmarking group in Moscow has a
> >> lot of experience with benchmarking real apps for bids using
> >> TrueScale
> >> against other HCAs, and I wouldn't be surprised if it was the case
> that TrueScale
> >> QDR is faster than that other company's FDR on many real codes,
> > Surprise surprise... this is no more than FUD. If you have real
> > numbers to back it up please send. If it was so great, how come more
> > people decided to use the Mellanox solutions? If QLogic was doing so
> > great with their solution, I would guess they would not be selling the
> > IB business...
> FUD = Fear, Uncertainty, and Doubt. Doesn't sound like FUD to me.
> More like a cheap attack on Greg; I think we (the mailing list) can do better.
I have never seen any genuine testing from PathScale and then QLogic comparing their stuff to Mellanox, and you are more than welcome to try and prove me wrong. The argument in this email thread is no more than a recap of QLogic's latest marketing campaign, and yes, it is no more than FUD. Cheap attacks are not my game, so please....
> I've personally compared several generations of Myrinet and InfiniPath to
> allegedly faster Mellanox adapters. Mellanox hasn't won yet, but I've not
> compared QDR or FDR yet. With that said, the reason I run the benchmarks is to
> find the best solution, and it might well be Mellanox next time. It would be
> irresponsible for a cluster provider to just pick Mellanox FDR
> over QLogic QDR because of the spec sheet.
> Of course recommending Qlogic over Mellanox without quantifying real world
> performance would be just as irresponsible.
Going into a bit more of a technical discussion... QLogic's approach to networking is to do everything in the CPU, while Mellanox's approach is to implement it all in the hardware (we all know that). The second option is a superset of the first, so the worst case is performance parity. I encourage you to contact me directly for any application benchmarking you do, and I will be happy to provide feedback on what you need in order to get the best out of the Mellanox products. That can be QDR vs QDR as well, no need to go to FDR - I am open to the competition any time...
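Since the disagreement above really comes down to measured latency rather than spec sheets, it is worth noting how such numbers are usually produced: a ping-pong microbenchmark (as in the OSU osu_latency test) times many round trips of a small message and reports half the average round trip. As an illustration of the methodology only - this sketch runs over plain TCP on localhost, not InfiniBand, and all names in it are mine:

```python
import socket
import threading
import time

def echo_server(listener):
    """Accept one connection and echo every message back until the peer disconnects."""
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(65536)
            if not data:
                break
            conn.sendall(data)

def pingpong_latency_us(msg_size=8, iters=1000, warmup=100):
    """Average one-way latency (half the round trip) in microseconds."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))  # pick a free port
    listener.listen(1)
    port = listener.getsockname()[1]
    threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

    client = socket.socket()
    client.connect(("127.0.0.1", port))
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching
    msg = b"x" * msg_size

    for _ in range(warmup):  # warm up the connection and caches
        client.sendall(msg)
        client.recv(65536)

    start = time.perf_counter()
    for _ in range(iters):
        client.sendall(msg)
        client.recv(65536)
    elapsed = time.perf_counter() - start

    client.close()
    listener.close()
    # one-way latency = round trip / 2, reported in microseconds
    return elapsed / iters / 2 * 1e6

if __name__ == "__main__":
    print(f"avg one-way latency: {pingpong_latency_us():.1f} us")
```

On a real fabric one would use MPI point-to-point calls over the vendor's interconnect stack, but the timing structure - warmup, many iterations, half the round trip - is the same.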
> Maybe we could have a few less attacks, complaining and hand waving and
> more useful information? IMO Greg never came across as a commercial
> (which beowulf list isn't an appropriate place for), but does regularly contribute
> useful info. Arguing market share as proof of performance superiority is just
I am not sure about that... a quick search through past emails can show amazing things...
I believe most of us are in agreement here. Less FUD, more facts.
> Speaking of which, you said:
> There is some added latency due to the new 64b/66b encoding, but overall
> latency is lower than QDR. MPI is below 1us.
> I googled for additional information, looked around the Mellanox website, and
> couldn't find anything. Is that above number relevant to
> HPC folks running clusters? Does it involve a switch? If not
It is with a switch.
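The encoding figures behind that exchange are easy to check: QDR signals at 10 Gbit/s per lane with 8b/10b encoding (20% overhead), while FDR signals at 14.0625 Gbit/s per lane with 64b/66b encoding (about 3% overhead). A small sketch of the arithmetic for a 4x link (the function name is mine; the rates are the published per-lane signaling rates):

```python
def effective_gbps(signal_rate_per_lane, payload_bits, coded_bits, lanes=4):
    """Usable data rate of a multi-lane link after line-encoding overhead."""
    return signal_rate_per_lane * lanes * payload_bits / coded_bits

# QDR: 10 Gbit/s per lane, 8b/10b encoding (8 payload bits per 10 on the wire)
qdr = effective_gbps(10.0, 8, 10)
# FDR: 14.0625 Gbit/s per lane, 64b/66b encoding (64 payload bits per 66)
fdr = effective_gbps(14.0625, 64, 66)

print(f"QDR 4x effective: {qdr:.1f} Gbit/s")  # 32.0
print(f"FDR 4x effective: {fdr:.1f} Gbit/s")  # 54.5
```

The much lighter 64b/66b code is what lets FDR gain bandwidth, at the cost of the small extra encode/decode latency mentioned above.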