[Beowulf] Bolts of Thunder and Upgraded desktop interconnect silicon....

Jeff Johnson jeff.johnson at aeoncomputing.com
Mon Jun 10 16:27:24 PDT 2013


Thunderbolt is packetized PCI-Express. It also interleaves encapsulated 
DisplayPort packets on the same chain. If there are no DisplayPort 
devices on the chain, the entire bandwidth is available for data.
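
To put rough numbers on that sharing, here is a toy calculation. The 
20 Gbit/s matches Intel's stated aggregate rate for Thunderbolt 2, but 
the per-display cost below is an assumed figure for illustration, not 
a published or measured number:

/* Toy bandwidth-sharing arithmetic for a Thunderbolt 2 chain. */
#include <stdio.h>

int main(void)
{
    const double link_gbps = 20.0;  /* TB2 aggregated channel pair */
    const double dp_gbps   = 8.0;   /* assumed cost per display stream */
    int displays;

    for (displays = 0; displays <= 2; displays++) {
        double data = link_gbps - displays * dp_gbps;
        printf("%d display(s): ~%.1f Gbit/s left for PCIe data\n",
               displays, data > 0 ? data : 0.0);
    }
    return 0;
}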

All of the Thunderbolt data devices on the market have an internal board 
that contains a Thunderbolt chip that converts the packetized 
PCI-Express back to standard form, which is then fed into a 
PCI-Express-to-whatever bridge chip (SATA, SAS, USB3, etc.).
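
The upshot is that the OS just sees ordinary PCI-Express devices behind 
the controller. A minimal sketch that lists them via standard Linux 
sysfs (nothing here is Thunderbolt-specific; TB-attached functions 
should simply appear alongside everything else on the bus):

/* Walk /sys/bus/pci/devices and print vendor:device IDs. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

static void read_id(const char *dev, const char *attr, char *buf, size_t n)
{
    char path[512];
    FILE *f;

    buf[0] = '\0';
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/%s", dev, attr);
    f = fopen(path, "r");
    if (!f)
        return;
    if (fgets(buf, n, f))
        buf[strcspn(buf, "\n")] = '\0';   /* strip trailing newline */
    fclose(f);
}

int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    struct dirent *e;
    char vendor[16], device[16];

    if (!d) {
        perror("/sys/bus/pci/devices");
        return 1;
    }
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        read_id(e->d_name, "vendor", vendor, sizeof(vendor));
        read_id(e->d_name, "device", device, sizeof(device));
        printf("%s  %s:%s\n", e->d_name, vendor, device);
    }
    closedir(d);
    return 0;
}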

My guess is that Thunderbolt's progress will follow Intel's PCI-Express 
roadmap. When PCI-Express gets faster, Intel will roll a faster TB chip. 
Again, I am guessing. I am not reading off of any NDA material.

I don't know what the interface latencies are. For interconnect use, I 
am guessing, you would start with the same construct used with 
PCI-Express host connections. I don't know if TB will recognize another 
host on a device chain or if it is single host/multi slave.
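
If two hosts could be made to talk over the link, the first thing I 
would measure is a one-byte ping-pong. A crude sketch over plain TCP 
sockets, assuming (purely an assumption on my part) the link were 
exposed as an IP interface; error handling elided:

/* Run "./pingpong server" on one node, "./pingpong client <ip>" on
 * the other; reports approximate half-round-trip latency. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <sys/time.h>

#define PORT  5555
#define ITERS 10000

static double now_us(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(int argc, char **argv)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1, i;
    char byte = 'x';
    struct sockaddr_in a = { .sin_family = AF_INET,
                             .sin_port   = htons(PORT) };

    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        a.sin_addr.s_addr = INADDR_ANY;
        bind(s, (struct sockaddr *)&a, sizeof(a));
        listen(s, 1);
        int fd = accept(s, NULL, NULL);
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
        for (i = 0; i < ITERS; i++) {   /* echo every byte straight back */
            read(fd, &byte, 1);
            write(fd, &byte, 1);
        }
    } else if (argc > 2) {
        setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
        inet_pton(AF_INET, argv[2], &a.sin_addr);
        connect(s, (struct sockaddr *)&a, sizeof(a));
        double t0 = now_us();
        for (i = 0; i < ITERS; i++) {   /* one round trip per iteration */
            write(s, &byte, 1);
            read(s, &byte, 1);
        }
        printf("~%.2f us half round trip\n",
               (now_us() - t0) / ITERS / 2.0);
    } else {
        fprintf(stderr, "usage: %s server | client <ip>\n", argv[0]);
        return 1;
    }
    return 0;
}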

--Jeff

/* disclaimer: the above was written under the influence of severe jet 
lag. */



On 6/10/13 3:57 PM, James Cuff wrote:
> Hi all!
>
> So a company based out of Cupertino mentioned using this silicon in a
> revamp of their MacPro line today...
>
> http://blogs.intel.com/technology/2013/06/video-creation-bolts-ahead-%E2%80%93-intel%E2%80%99s-thunderbolt%E2%84%A2-2-doubles-bandwidth-enabling-4k-video-transfer-display-2/
>
> we appear to have a second version of a 20 Gbit/s consumer connection
> (latency unknown), and yet this search:
>
> https://www.google.com/search?q=linux+thunderbolt+interconnect
>
> does not really go anywhere cool like a github or kernel.org repo....
>
> Any qualified folks know where this thunderbolt stuff is all heading
> and are able to talk in public?
>
> Best,
>
> j.
>
> p.s.
>
> yes I did move back to .edu just in case folks were doing a double
> take. And yes, (like Dr. Layton) I do still think that cloud infrastructure
> as a service and HPC/HTC are a really good idea for the right
> algorithms and workloads! :-)
>
> --
> dr. james cuff, director of research computing & chief technology architect
> harvard university | faculty of arts and sciences | division of science rm
> 210, thirty eight oxford street, cambridge. ma. 02138 tel: +1 617 384 7647 |
> http://about.me/jcuff
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


-- 
------------------------------
Jeff Johnson
Co-Founder
Aeon Computing

jeff.johnson at aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x101   f: 858-412-3845
m: 619-204-9061

/* New Address */
4170 Morena Boulevard, Suite D - San Diego, CA 92117



