[Beowulf] Overcoming processor/accelerator multiplicity and heterogeneity with virtual machines ... ??

Richard Walsh rbw at ahpcrc.org
Wed Mar 28 07:37:22 PDT 2007


All,

The niceties of commodity, ILP- and clock-driven performance improvements
have given way to the complexity of multi-core processors (in two flavors,
hetero and homo) and attached accelerators/processors (GPUs, FPGAs, and
HPC-specific accelerators).  And while the hardware mentioned is varied, a
large plurality of HPC applications are still data intensive (not all, as
the Berkeley "Dwarfs" remind us) and often suited to DLP and data-flow
techniques.  A fundamental HPC programming problem is programming to this
common algorithmic theme while the underlying processor environment shifts
more rapidly than ever.  This is a productivity, performance, and
portability issue.

When things get too complicated on one side of an interface, computing
types have a habit of creating a virtual layer that appeals to the side
that remains consistent and hides the complexities of the other.  A
virtual machine, and an interface to it, is a theme that is creeping into
the HPC environment.  There is the PeakStream model, in which HPC
(predominantly data-parallel) code is compiled to a virtual machine that
is a proxy for any resource that can process the data-parallel kernels
its API can represent (multi-/many-core, accelerator, etc.).  It has the
added "marketed" bonus of dynamically creating "right-sized", partially
blocked chunks of said kernel (just in time) to match the fixed
computational-intensity features (flop/mop ratio) of the particular
destination hardware and so manage its particular bandwidth
insufficiencies.  Then there is the virtual machine of Mitrion-C, which
serves as a proxy for whatever FPGA (mostly Xilinx at the moment) they
support and you have attached to your processor.  And there is Cray's
stated intent to provide an adaptable computing environment, and the huge
problem of actually doing so ... which suggests virtual machines again
(at least to me).
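
To make the "right-sizing" idea concrete, here is a back-of-the-envelope
sketch (Python, purely illustrative, and emphatically not the PeakStream
API): a roofline-style calculation that picks a tile size for a blocked
matrix multiply so that the tile's arithmetic intensity covers the machine
balance (flop/mop ratio) of whatever device sits behind the VM.  All
numbers and function names below are my own assumptions.

```python
# Hypothetical sketch of "right-sizing" a kernel chunk for a target device.
# Machine balance = flops the device can retire per byte of memory traffic;
# a bandwidth-starved accelerator needs fatter tiles than a balanced CPU.

def machine_balance(peak_gflops, peak_gbytes_per_s):
    """Flop/mop ratio of the destination hardware (flops per byte)."""
    return peak_gflops / peak_gbytes_per_s

def matmul_tile_intensity(b):
    """Arithmetic intensity of a b x b x b matmul tile: it performs
    2*b**3 flops while moving roughly 3*b**2 eight-byte operands."""
    return (2.0 * b**3) / (3.0 * b**2 * 8.0)

def right_size_tile(peak_gflops, peak_gbytes_per_s, max_b=4096):
    """Smallest tile edge whose intensity matches the device balance,
    i.e. the smallest chunk that is not bandwidth-bound on this target."""
    balance = machine_balance(peak_gflops, peak_gbytes_per_s)
    for b in range(1, max_b + 1):
        if matmul_tile_intensity(b) >= balance:
            return b
    return max_b

# Two invented device profiles: an accelerator and a commodity CPU core.
print(right_size_tile(peak_gflops=350.0, peak_gbytes_per_s=80.0))  # -> 53
print(right_size_tile(peak_gflops=10.0, peak_gbytes_per_s=6.4))    # -> 19
```

The point is only that the "right" chunk size is a simple function of the
target's balance, so a VM that knows the destination hardware can compute
it just in time, per device, without the programmer's involvement.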

I am interested in discussing the benefits (and drawbacks) of this virtual
machine concept as it relates to HPC and cluster computing in our
increasingly complex processing environment.  Does it make sense at all?
Is a DLP, data-flow oriented VM the right first choice for such a machine?

What does the collective wisdom of the Beowulf oracle have to say on the matter?

Looking forward to your always interesting thoughts ...

rbw

--

Richard B. Walsh

Project Manager
Network Computing Services, Inc.
Army High Performance Computing Research Center (AHPCRC)
rbw at ahpcrc.org  |  612.337.3467

>
>  "The world is given to me only once, not one existing and one
>   perceived. The subject and object are but one."
>
>   Erwin Schroedinger

-----------------------------------------------------------------------
This message (including any attachments) may contain proprietary or
privileged information, the use and disclosure of which is legally
restricted.  If you have received this message in error please notify
the sender by reply message, do not otherwise distribute it, and delete
this message, with all of its contents, from your files.
----------------------------------------------------------------------- 
