[Beowulf] Multicore Is Bad News For Supercomputers
Lux, James P
james.p.lux at jpl.nasa.gov
Sat Dec 6 07:26:20 PST 2008
On 12/6/08 6:13 AM, "Franz Marini" <franz.marini at mi.infn.it> wrote:
> On Sat, 2008-12-06 at 23:03 +1100, Chris Samuel wrote:
>> I'm wondering though if we're starting to see a
>> subtle shift in direction with more and more
>> emphasis getting placed on accelerators (mainly
>> GPGPU, but including Cell, FPGA's, etc) ?
> Starting ? Am I the only one remembering accelerators boards (based on
> FPGA, Transputers, Motorola 88k, Intel i960, various DSPs and other
> processors) being produced and advertised in, e.g., Byte magazine back
> in the 80s and early 90s ?
> The problems with those solutions have always been the extremely
> proprietary nature of the products, and therefore the lack of libraries
> and (community) support, and last but not least, cost.
I don't think "proprietary" is quite the right word here, at least in the
sense of a closed architecture. A lot of those coprocessor boards had
complete documentation and anyone who knew how to program, say, a TMS320,
could use them.
I think the real problem was that they were always sort of niche products
(often, a commercial product derived from a specific custom device meeting a
specific custom need) and unless you had just the right problem to solve,
they didn't buy you very much in performance.
The other problem was toolchains. Back then, there was no GNU toolchain.
The FPGA folks (like Xilinx and Altera) were using the ASIC design model for
their tools (i.e., charge a huge amount, because they save enough engineer
time over graph paper and rubylith that you can charge an FTE's wages as an
annual license fee and still come out ahead).
The boards themselves weren't particularly expensive compared to other
add-on boards for your PC or (dare I say it) S-100 chassis.
(I note that some of these things are really still available, at least in
functionally similar form. A lot of FPGA development is done on various
cards that plug into a PCI bus. See the offerings from, e.g., Nallatech.)
> Things are better now with, say, CUDA, mainly because of the huge
> installed base and the low cost.
That's exactly it. The special purpose hardware has become commodity.
> OpenCL may shape to be an interesting solution. Should someone develop
> a, e.g., FPGA-based accelerator board, he would (only) need to support
> OpenCL to overcome all, except maybe cost, the problems that plagued the
> older solutions I mentioned before...
My general impression is that it is an order of magnitude more difficult to
build an FPGA solution for a given computational problem than to target a
general-purpose CPU/von Neumann-style machine. So you're not going to see
compilers that take a high-level algorithm description and crank out
optimized FPGA bitstreams any time soon. After all, we've had 50 years to
work on compilers for conventional architectures. (I'm not talking here
about generating code for a CPU instantiated on an FPGA; I'm talking about
purpose-specific gate designs.)
There are high level design tools for FPGAs (Signal Processing Workbench,
etc.) but they're hardly common or cheap. For all intents and purposes,
doing FPGA designs today is basically like coding in assembler on a bare
machine with no operating system, etc. There are libraries of standard
components available under GPL (e.g. Gaisler's GRLIB), but it's still pretty
low level. (In software terms: Oh, we've got MACROS in our assembler! And
include files! And a linker!)