NDAs Re: [Beowulf] Nvidia, cuda, tesla and... where's my double floating point?
James.P.Lux at jpl.nasa.gov
Tue Jun 17 10:43:07 PDT 2008
At 10:07 AM 6/17/2008, Vincent Diepeveen wrote:
>I feel you've noticed a very important thing here: that a GPU is mainly
>interesting to program for hobbyists like me, or for companies with a
>budget to buy fewer than a dozen of them in total.
And, as you say, that's a hobby/R&D market where you're willing to
spend more in labor than in hardware.
>For ISPs the only thing that matters is power consumption, and for
>encryption at a low TCP/IP layer it's easy to equip all those
>cheap CPUs with encryption coprocessors, which draw about 1 watt
>and deliver enough throughput to keep the 100 Mbit / 1 Gbit NICs
>fully busy; for public key it's in fact a speed you won't reach
>on a GPU even after managing to parallelize it and get it working
>well. The ISPs of course look for fully scalable machines,
>quite the opposite of having 1 card @ 250 watts.
I don't know much about the economics of running an ISP. While
electrical power (and cooling, etc.) might be a big chunk of their
budget, I suspect that mundane business costs like advertising,
billing, account management, etc. might actually be a bigger
slice. For instance, do co-lo facilities charge you for power, or is
it like office space, where you rent by the square foot and an
assumed amount of power and HVAC comes with the price?
>Yet those hobbyists, who are the people interested in GPU programming,
>have limited time to get software running and have a budget far
>smaller than $10k. They're not even going to buy as many Teslas as
>NASA will. Not a dozen.
There, I think you're wrong. There are lots of hobbyists and
tinkerers of one sort or another out there. I'd bet they sell at
least thousands of them.
>The state of GPU programming now is that some big companies can
>have a person toy full-time with one GPU, as of course the idea of
>a CPU with hundreds of cores is very attractive and looks like a
>realistic future, so companies must explore that future.
The various flavors of multi-core in a field of RAM have been around
for decades, because it's (superficially?) attractive from a
scalability standpoint. The problem, as everyone on this list is
aware, is effectively using such an architecture: parallelizing
isn't trivial. There's a reason they still sell mainframe computers;
but hope does spring eternal.
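The scalability objection can be made quantitative with Amdahl's law, which caps the speedup from adding cores by the fraction of work that stays serial. The 5% serial fraction below is an illustrative assumption, not a measurement:

```python
# Amdahl's law: why "hundreds of cores" is hard to exploit.
# serial_fraction is the share of the job that cannot be parallelized.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Ideal speedup on `cores` cores given a fixed serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (8, 64, 512):
    print(f"{cores:4d} cores, 5% serial: {amdahl_speedup(0.05, cores):5.1f}x")
# With 5% serial work the speedup can never exceed 20x,
# no matter how many cores you add.
```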
>Of course every GPU/CPU company is serious in their aim to produce
>products that perform well, we all do not doubt it.
Not necessarily, unless your performance metric is shareholder
return. It is the rare company that can make a business of selling
solely on top-end performance (e.g. Cray). There are also several
strategies and target markets. If you have good manufacturing
capability for large quantities, you adjust your performance to what
consumers will buy at a price you can make money on. If you're in a
more "fee for service" model, then you're likely doing smaller unit
volumes at a much higher cost per unit (I suspect that most of the
cluster vendors on this list fall into this category), but still, in
the long run the cost to do the job MUST be less than what the
customer is willing to pay (unless the owner is some sort of
philanthropist, naive, or a fool).
>Yet it is only attractive to hobbyists, and those hobbyists are not
>going to get the interesting technical data needed to get the maximum
>out of the GPUs from Nvidia. This is a big problem. Those hobbyists
>have very limited time to get their spare-time number-crunching
>products done,
So it's basically an investment decision. How much value do you want
to get out of your investment of time or money? If you're only
willing to spend a few hours, then you must not value the end state
of the work very highly (or, more correctly, you value something else
more).
> so being busy full-time writing test programs to
>learn everything about one specific GPU is not something they
>all like to do for a hobby. Just having that information would attract
>the hobbyists, as they are willing to take the risk to buy one Tesla and
>spend time there. That produces software. That software will have a
>certain performance; based upon that performance perhaps some
>companies might get interested.
To a certain extent, this is the "build it and they will come" model.
It's not one that is going to make any real difference to Nvidia's
bottom line, so they're unlikely to invest more than a token amount in it.
>So in the end I guess some stupid extension of SSE will give a bigger
>increase in crunching power than the in-itself-attractive GPGPU,
>the biggest limitation being development time for hobbyists.
And HPC hobbyists are a very tiny market, not worth very much commercially.