[Beowulf] Intel buys QLogic InfiniBand business

Joe Landman landman at scalableinformatics.com
Fri Jan 27 12:19:31 PST 2012


On 01/27/2012 03:06 PM, Vincent Diepeveen wrote:
>
> On Jan 27, 2012, at 8:29 PM, Håkon Bugge wrote:
>
>> Greg,
>>
>>
>> On 23. jan. 2012, at 20.55, Greg Lindahl wrote:
>>
>>> On Mon, Jan 23, 2012 at 11:28:26AM -0800, Greg Lindahl wrote:
>>>
>>>> http://www.hpcwire.com/hpcwire/2012-01-23/intel_to_buy_qlogic_s_infiniband_business.html
>>>
>>> I figured out the main why:
>>>
>>> http://seekingalpha.com/news-article/2082171-qlogic-gains-market-share-in-both-fibre-channel-and-10gb-ethernet-adapter-markets
>>>
>>>> Server-class 10Gb Ethernet Adapter and LOM revenues have recently
>>>> surpassed $100 million per quarter, and are on track for about fifty
>>>> percent annual growth, according to Crehan Research.
>>>
>>> That's the whole market, and QLogic says they are #1 in the FCoE
>>> adapter segment of this market, and #2 in the overall 10 gig adapter
>>> market (see
>>> http://seekingalpha.com/article/303061-qlogic-s-ceo-discusses-f2q12-results-earnings-call-transcript)

I found that statement interesting.  I actually didn't know anything 
about their 10GbE products.  My bad.

>>
>> That can explain why QLogic is selling, but not why Intel is buying.
>>
>> 10 years ago, Intel went _out_ of the InfiniBand market; see
>> http://www.networkworld.com/newsletters/servers/2002/01383318.html
>>
>> So has the IB business evolved so incredibly well compared to what
>> Intel expected back in 2002? I do not think so.
>>
>> I would guess that we will see message passing/RDMA over
>> Thunderbolt or similar.

Intel buying makes quite a bit of sense IMO.  They are in 10GbE silicon 
and NICs, and being in IB silicon and HCAs gives them a hedge (10GbE, 
while growing rapidly, is not the only high performance network market, 
and Intel is very good at getting economies of scale going with its 
silicon ... well ... most of its silicon ... ignoring Itanium here 
...).  It's quite likely that Intel would need IB for its PetaScale 
plans.  Someone here postulated putting the silicon on the CPU.  Not 
sure if that will happen, but I could easily see it on an IOH.  That 
would make sense (at least in terms of the Westmere designs ... for 
Romley et al. I am not sure where it would make the most sense).

But Intel sees the HPC market growth, and I think they realize that 
there are interesting opportunities for them there with tighter high 
performance networking interconnects (Thunderbolt, USB3, IB, 10GbE 
native on all these systems).

> QLogic offers QDR.
> Mellanox is a generation newer there with FDR.
>
> There is a huge difference in both latency and bandwidth.

Haven't looked much at FDR or EDR latency.  Was it a huge delta (more 
than 30%) better than QDR?  I've been hearing numbers like 0.8-0.9 us 
for a while, and switches are still ~150-300 ns port to port.  At some 
point I think you start hitting a latency floor, bounded in part by "c", 
but also by an optimal technology path length that you can't shorten 
without significant investment and new technology.  Not sure how close 
we are to that point (maybe someone from QLogic/Mellanox could comment 
on the headroom we have).
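To make the "latency floor" argument concrete, here is a rough sketch of 
the propagation-plus-switch lower bound.  The ~5 ns/m figure (signal at 
roughly 2/3 of c in copper or fiber) and the one-switch topology are my 
illustrative assumptions, not measurements:

```python
# Back-of-envelope latency floor: cable propagation plus switch hops.
# Assumes ~5 ns/m signal propagation (about 2/3 c); illustrative only.

C_FACTOR_NS_PER_M = 5.0   # propagation delay per meter of cable (assumed)

def min_wire_latency_ns(cable_m, hops, per_hop_ns):
    """Lower bound on one-way latency: propagation delay plus switch
    port-to-port time, ignoring HCA and software overhead entirely."""
    return cable_m * C_FACTOR_NS_PER_M + hops * per_hop_ns

# A small fabric: two 5 m cables through a single switch,
# using the ~150-300 ns port-to-port range mentioned above.
best = min_wire_latency_ns(cable_m=10, hops=1, per_hop_ns=150)
worst = min_wire_latency_ns(cable_m=10, hops=1, per_hop_ns=300)
print(f"wire+switch floor: {best:.0f}-{worst:.0f} ns")
```

Against an end-to-end number of ~800-900 ns, the wire and switch alone 
already account for 200-350 ns here, so the remaining headroom is mostly 
in the adapters and the software stack.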

Bandwidth-wise, you need an E5 with PCIe 3 to really take advantage of 
FDR.  So again, it's a natural fit, especially if it's LOM ....
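The arithmetic behind the PCIe 3 point can be sketched from the 
published link parameters (FDR at 14.0625 Gb/s per lane with 64/66 
encoding, PCIe 2.0 at 5 GT/s with 8b/10b, PCIe 3.0 at 8 GT/s with 
128/130).  Exact deliverable bandwidth varies with overheads, so treat 
these as ballpark figures:

```python
# Effective link bandwidth = lanes * raw rate * encoding efficiency.
# Link parameters from the published specs; real throughput is lower.

def effective_gbps(lanes, gt_per_s, enc_num, enc_den):
    """Raw lane rate times encoding efficiency, in Gb/s."""
    return lanes * gt_per_s * enc_num / enc_den

fdr_4x   = effective_gbps(4, 14.0625, 64, 66)   # FDR 4x IB link
pcie2_x8 = effective_gbps(8, 5.0,      8, 10)   # PCIe 2.0 x8 slot
pcie3_x8 = effective_gbps(8, 8.0,    128, 130)  # PCIe 3.0 x8 slot

print(f"FDR 4x     : {fdr_4x:5.1f} Gb/s")
print(f"PCIe 2.0 x8: {pcie2_x8:5.1f} Gb/s")
print(f"PCIe 3.0 x8: {pcie3_x8:5.1f} Gb/s")
```

A Gen2 x8 slot (32 Gb/s) cannot feed an FDR link (~54.5 Gb/s), while a 
Gen3 x8 slot (~63 Gb/s) can, which is why FDR only really pays off on 
the E5/Romley platforms.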

Curiously, I think this suggests that ScaleMP could be in play on the 
software side ... imagine stringing together bunches of the LOM FDR/QDR 
motherboards with E5s and lots of RAM into huge vSMPs (another thread). 
  Shai may tell me I'm full of it (hope he doesn't), but I think this is 
a real possibility.  The QLogic purchase likely makes this even more 
interesting for Intel (or Cisco, or others as a defensive acquisition).

We sure do live in interesting times!

-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
        http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
