[Beowulf] Is there really a need for Exascale?
prentice.bisbal at rutgers.edu
Wed Nov 28 07:53:25 PST 2012
On 11/28/2012 02:17 AM, Mark Hahn wrote:
> as heretical as it sounds, I have to ask: where is the need for exaflop?
> I'm a bit skeptical about the import of the extreme high end of HPC -
> or to but it another way, I think much of the real action is in jobs
> that are only a few teraflops in size. that's O(1000) cores, but you'd
> size a cluster in the 10-100 Tf range...
I don't think this is heretical. I think it's a perfectly legitimate
question we should be asking ourselves. Such a discussion can open a big
can of worms on this list, and should probably be its own thread. I've
got too much work to do today, so I can't weigh in as much as I should,
but I will change the subject of this reply to start a new thread on
this topic.
I frequently make analogies between HPC and car racing, usually F1 (John
Hearns's ears just pricked up!). In this case, Exascale is auto racing,
and the rest of the HPC world is everyday driving. Manufacturers say
that competing in auto racing allows them to develop and test new
technology that will eventually trickle down to their passenger
vehicles. You could argue that Exascale is the same thing. Sure, it's
impractical and expensive, but it creates R&D opportunities and allows
this new technology to be proven in real use before it trickles into
"consumer" products. And then there's the "Win on Sunday, sell on
Monday" effect, which I don't think needs any explanation.
A more cynical view would say that it's just a huge pissing contest
between different vendors, countries, or national labs.
An even more cynical view says that the HPC vendors lobby the government
into believing exascale is important, so the government invests in it
and subsidizes their R&D.
In my opinion, the new technology driven by the move to petascale,
exascale, etc., will ultimately be valuable to us consumers, but to your
average researcher, having a decent-sized cluster that they have a lot
of access to is more valuable than a large, shared system like Blue
Waters or something similar, which must be shared with hundreds or
thousands of other researchers. It all comes down to the FLOPS/year that
they can actually use. Yes, this ignores capability computing situations
where you MUST have a super large cluster in order to run a really large
job.
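To make the "FLOPS/year they can actually use" point concrete, here is a
back-of-the-envelope sketch. The numbers are made up for illustration (a
20 Tflop/s dedicated cluster, and a 0.1% allocation on a 10 Pflop/s
shared machine), not real systems or allocations:

```python
# Back-of-the-envelope comparison: sustained FLOPs one researcher can
# actually use per year. All machine sizes and shares are hypothetical.

SECONDS_PER_YEAR = 365 * 24 * 3600

def usable_flops_per_year(peak_flops, share):
    """FLOPs available to one researcher over a year, given their
    fraction ('share') of the machine's time."""
    return peak_flops * share * SECONDS_PER_YEAR

# A dedicated departmental cluster: 20 Tflop/s, ~100% access.
dedicated = usable_flops_per_year(20e12, 1.0)

# A national petascale system: 10 Pflop/s, split among ~1000 groups.
shared = usable_flops_per_year(10e15, 0.001)

print(f"dedicated cluster:     {dedicated:.2e} FLOPs/year")
print(f"share of big machine:  {shared:.2e} FLOPs/year")
print(f"ratio (dedicated/shared): {dedicated / shared:.1f}")
```

With these (invented) numbers, the modest dedicated cluster delivers
twice the usable FLOPs/year of a machine 500 times its size, which is
the capacity-vs-capability trade-off in a nutshell.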