[Beowulf] Intel’s 800Gbps cables headed to cloud data centers and supercomputers

Mark Hahn hahn at mcmaster.ca
Wed Mar 12 09:22:21 PDT 2014


> "Intel and several of its partners said they will make 800Gbps cables 
> available in the second half of this year, bringing big speed increases to 
> supercomputers and data centers."

this seems very niche to me.  here's a very practical question: 
how many of your cluster's links are saturated, and which of them
are both saturated and currently composed of trunks or wide FDR?
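
(if you want a number rather than a gut feeling, something like the sketch
below is all it takes: sample the byte counters over an interval and compare
against line rate.  the interface name and line rate are placeholders for
whatever your fabric exposes; IB ports keep similar counters under
/sys/class/infiniband.)

#!/usr/bin/env python
# rough link-utilization check: sample sysfs byte counters over an interval.
# IFACE and LINE_RATE_GBPS are placeholders, adjust for your own fabric.
import time

IFACE = "eth0"            # placeholder interface name
LINE_RATE_GBPS = 10.0     # nominal line rate of that link
INTERVAL = 10.0           # seconds to sample

def read_bytes(iface, direction):
    with open("/sys/class/net/%s/statistics/%s_bytes" % (iface, direction)) as f:
        return int(f.read())

before = (read_bytes(IFACE, "rx"), read_bytes(IFACE, "tx"))
time.sleep(INTERVAL)
after = (read_bytes(IFACE, "rx"), read_bytes(IFACE, "tx"))

for name, b, a in zip(("rx", "tx"), before, after):
    gbps = (a - b) * 8 / INTERVAL / 1e9
    print("%s: %.2f Gb/s (%.0f%% of line rate)" % (name, gbps, 100 * gbps / LINE_RATE_GBPS))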

I speculate that although it would be natural for larger clusters
to need higher-bandwidth backbones, the majority of large clusters
are, in fact, not used for giant runs of a single job, and therefore
mostly experience more locality-friendly BW patterns.  (a counter
to this would be workloads that spend much time doing all-to-all,
but I've never been clear on whether that's laziness or necessity.)
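
(a back-of-the-envelope of why that distinction matters, with purely
illustrative numbers: compare the bytes that must cross a bisection for an
all-to-all versus a 3D halo exchange.)

# toy comparison of cross-bisection traffic: all-to-all vs 3D halo exchange.
# N and MSG are made-up numbers, purely illustrative.
N = 4096                  # ranks in the job
MSG = 1e6                 # bytes each rank sends per peer/neighbor per step

# all-to-all: half the ranks talk to the other half, so ~N^2/4 messages cross
alltoall = (N / 2.0) * (N / 2.0) * MSG

# 3D nearest-neighbor halo: only ranks on the cut face cross it,
# roughly N^(2/3) of them, one message each
halo = N ** (2.0 / 3.0) * MSG

print("all-to-all crosses the bisection: %8.1f GB" % (alltoall / 1e9))
print("3D halo crosses the bisection:    %8.3f GB" % (halo / 1e9))
print("ratio: ~%.0fx" % (alltoall / halo))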

but basically, this is just a very dense cable.  what would make more
of a difference is a breakthrough in the electro-optics to make a 
25 Gb bi-directional link cheap.  today, even 10G optics run about 
$1k/link, which is a big problem if the device at the endpoint is 
likely only worth $1-2k.  (ie, optics are a non-starter for nodes 
that are not pretty fat, and most of the action seems to be either 
at the traditional 2s level of plumpness or lower.)
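
(the arithmetic on that is brutal.  a trivial sketch with those round
numbers, nothing here is a real quote:)

# fraction of a node's total cost eaten by the optics, using the round
# numbers above; purely illustrative.
optic_per_link = 1000.0            # ~$1k per 10G optical link today
for node_cost in (1000.0, 2000.0):
    frac = optic_per_link / (node_cost + optic_per_link)
    print("$%.0f node: optics are %.0f%% of the total" % (node_cost, 100 * frac))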

> "US Conec established an MXC certification program to help other companies 
> sell the cables. Tyco Electronics and Molex are the first besides Corning to 
> announce that they will build and sell MXC cable assemblies."

which is, of course, the least interesting thing :(

> So it sounds like there will be competition for the cables, but what about 
> the NICs and switches? Will Intel have a monopoly on that, or will this be a 
> standardized technology that will allow other manufacturers to make their own 
> silicon/complete products?

IMO, there's a strong smell of "build it and they will come" to this.

OTOH, optical usually gets away without dramatic serialization/error-correction overheads.

> Years ago (the late 90s?) I read an interesting magazine article about Intel 
> and why they started making their own NICs, graphics processors, etc. 
> According to the article, Intel was content to let 3Com and others make 
> networking gear, but when network speeds weren't increasing fast enough,

usually, I think of most Intel moves as promoting a lower-friction industry.
standards they create/sponsor/push, like power supplies, MB specs, and IPMI,
tend to be good for everyone, and pull vendors away from the kind of
customer-hostile lock-in they (vendors) love so much.  it's hard to tell
how much is self-interest, of course, since Intel manages to take a pretty
big bite of everything.

my memory, though, is that Intel didn't make much of a difference to the 
Gb transition.  most of the hardware I experienced from that era was from
Broadcom, with a few trivial vendors (some of which are still around,
though still insignificant).

> Intel got into the game because without increasing network speeds, there 
> wasn't much of a need for faster processors. We all know that Intel has 
> bought QLogic and is spending a lot of money on high-speed interconnects.

are they?  I see the occasional little-ish product-ish thing pop up,
but not much vision.  thunderbolt appears to be a cul-de-sac to make Apple
happy (they get off on being "special"...)  IB doesn't appear to be going
anywhere, really (it'll always be a weird little non-IP universe.)
this new connector (just a connector) looks exactly like "big version of 
the original optical thunderbolt".  how about pcie-over-whatever - is 
that going anywhere?  then there's all that blather about disaggregated 
servers (what about latency?!?).

> Following the logic of that article, I guess Intel realized you can't sell 
> truckloads of processors if you don't have an interconnect that makes it 
> worthwhile.

I think this connector is a solution in search of a problem.

if they can make an 800Gb connector that adds $100 to a motherboard, and 
plugs into switch ports that cost $100/port, they could move some product.
even 80Gb would be epic.  even 10Gb would shake up the market big time...
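
(in per-Gb terms, using those hypothetical price points:)

# cost per Gb of bandwidth at some hypothetical price points, versus
# today's ~$1k 10G optics.  $100 NIC + $100 switch port is an assumption.
today = 1000.0 / 10                      # ~$100 per Gb for 10G optics today
for gbits in (10, 80, 800):
    hypothetical = (100.0 + 100.0) / gbits
    print("%4d Gb at $200/endpoint: $%6.2f per Gb (vs ~$%.0f/Gb today)"
          % (gbits, hypothetical, today))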

regards, mark hahn.


