[Beowulf] [EXTERNAL] Re: Frontier Announcement
Lux, Jim (337K)
james.p.lux at jpl.nasa.gov
Sun May 12 10:57:46 PDT 2019
ML has been around forever (way back to perceptrons; I built a 2-neuron, 3-input classifier using op-amps in 1974). It was the computational horsepower that made empirical experiments with more than 3 layers possible, along with some datasets to train and test with.
Without huge datasets of labeled images, for instance, it is hard to try new approaches.
Fast computers, sitting on a desk, without needing to ask a “high performance computing resource allocation committee” for runtime, let someone say “hey, what if I stack up 10 layers of neurons,” grind it overnight, and find “hey, it works. I don’t have any clue why, but it’s cool nonetheless.”
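For readers who haven't seen one, the kind of unit Jim describes can be sketched in a few lines: a hard-threshold neuron (the op-amp-era building block) chained into a small stack of layers. This is a minimal illustrative sketch, not the original circuit; the weights and layer sizes here are arbitrary made-up values, and a real experiment would train them.

```python
# Sketch of a threshold-neuron classifier stacked into layers.
# All weights/biases below are arbitrary illustrative values.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs followed by a hard threshold (step activation)."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if s > 0 else 0.0

def layer(inputs, weight_rows, biases):
    """One layer: each row of weights drives one neuron on the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

def forward(x, layers):
    """Push an input vector through a stack of layers, output to output."""
    for weight_rows, biases in layers:
        x = layer(x, weight_rows, biases)
    return x

# Three inputs -> a 2-neuron layer -> a single output neuron.
layers = [
    ([[0.5, -1.0, 0.8], [-0.7, 0.4, 0.3]], [-0.2, 0.1]),
    ([[1.0, 1.0]], [-0.5]),
]
print(forward([1.0, 0.0, 1.0], layers))  # prints [1.0]
```

Stacking more layers is just a longer `layers` list; the point of the anecdote is that trying 10 layers only became a casual overnight experiment once the horsepower was cheap.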
Then, some commercial applications (things like sorting images, or processing video streams) and, all of a sudden, there you go.
This, to me, is the true value of Beowulf – commodity stuff (meaning cheap), ganged up in multiples, gives you a “personal” HPC capability: something a bit more than a fast desktop machine, but one whose use (or non-use) I don’t have to justify. A speedup of 10-50X is the difference between “one try overnight” and “one try in a few minutes when I have some spare time in between the other 13 things I have to do”.
From: Beowulf <beowulf-bounces at beowulf.org> on behalf of "beowulf at beowulf.org" <beowulf at beowulf.org>
Reply-To: John Hearns <hearnsj at googlemail.com>
Date: Thursday, May 9, 2019 at 9:54 AM
To: "beowulf at beowulf.org" <beowulf at beowulf.org>
Subject: [EXTERNAL] Re: [Beowulf] Frontier Announcement
Gerald that is an excellent history.
One small thing though: "Of course the ML came along"
What came first - the chicken or the egg? Perhaps the Nvidia ecosystem made the ML revolution possible.
You could run ML models on a cheap workstation or a laptop with an Nvidia GPU.
Indeed I am sitting next to my Nvidia Jetson Nano - 90 dollars for a GPU which can do deep learning.
Prior to CUDA etc. you could of course do machine learning, but it was being done in universities.
I stand to be corrected.
On Thu, 9 May 2019 at 17:40, Gerald Henriksen <ghenriks at gmail.com<mailto:ghenriks at gmail.com>> wrote:
On Wed, 8 May 2019 14:13:51 -0400, you wrote:
>On Wed, May 8, 2019 at 1:47 PM Jörg Saßmannshausen <
>sassy-work at sassy.formativ.net<mailto:sassy-work at sassy.formativ.net>> wrote:
>Once upon a time portability, interoperability, and standardization were
>considered good software and hardware attributes.
>Whatever happened to them?
I suspect in a lot of cases they were more ideals and goals than reality.
Just look at the struggles the various BSDs have in getting a lot of
software running, given the inherent Linuxisms that seem to creep in.
In the case of what is relevant to this discussion, CUDA, Nvidia saw
an opportunity (and perhaps also reacted to the threat of not having
their own CPU to counter the integrated GPU market) and invested
heavily into making their GPUs more than simply a 3D graphics device.
As Nvidia built up the libraries and other software to make it
easier for programmers to get the most out of Nvidia hardware, AMD and
Intel ignored the threat until it was too late, and partial attempts
at open standards struggled.
And programmers, faced with the choice of struggling with OpenCL or
other options versus going with CUDA and its tools and libraries, went
for what gave them the best performance and the easiest implementation
(a win/win).
Of course, then ML came along and suddenly AMD and Intel couldn't
ignore the market anymore, but they are both struggling from a distant
second place to try and replicate the CUDA ecosystem...
Beowulf mailing list, Beowulf at beowulf.org<mailto:Beowulf at beowulf.org> sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf