[Beowulf] Re: [EXTERNAL] Spark, Julia, OpenMPI etc. - all in one place

Douglas Eadline deadline at eadline.org
Thu Oct 15 08:00:30 PDT 2020


Jim,

The new term (buzzword) for local computing is "edge computing,"
and I continue to build Limulus appliance systems for this reason.

  https://www.limulus-computing.com

I can pack a reasonable amount of horsepower into a turn-key
local power/noise/heat envelope (i.e., next to your desk,
with no data center needed).

There are a variety of use cases, such as the ones you describe, and
situations where a data center (local or remote) is not
a convenient possibility, yet computationally heavy problems
are important (as are some Hadoop/Spark-type analytics).

There is a good deal of publicly available technical background
in the online manual (not quite complete); the functional diagram
can be very helpful.

 https://www.limulus-computing.com/Limulus-Manual

Also, one of the enabling technologies has been 3D printing.
We can put almost anything in a box (given power/heat constraints)
by designing and printing parts to fit commodity cases.
Our micro-ATX blades are a good example.


--
Doug





> What I find fascinating about the poster from NASA is the comment about
> "man in the loop data product generation" - this is what has always
> interested me: being able to get interactive supercomputing. My desire
> has always been to run moderately large models (run time >30 seconds)
> with large parameter spaces, in an interactive sense, so I can "turn the
> knobs" on the design.
>
> As a result, my particular interests have run more towards the
> embarrassingly parallel: running lots of cases that a single node can
> handle, with the associated scatter/gather. This is as opposed to running
> "big models" (although I've had reasons to do that).
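A scatter/gather sweep of independent cases like this can be sketched with Python's multiprocessing; everything here (the `model` function and the parameter grid) is an illustrative placeholder, not any actual antenna code:

```python
from itertools import product
from multiprocessing import Pool

def model(params):
    """Placeholder for one independent case (NOT a real antenna solver)."""
    spacing_m, tolerance = params
    return spacing_m * tolerance  # stand-in for a computed figure of merit

if __name__ == "__main__":
    # Scatter: every parameter combination becomes one independent job.
    grid = list(product([1.0, 2.0, 3.0], [0.9, 1.0, 1.1]))
    with Pool() as pool:
        # Gather: results come back in the same order as the grid.
        results = pool.map(model, grid)
    print(len(results))  # one result per case
```

Since the cases never talk to each other, the same pattern scales from the cores of one node up to many nodes with an MPI or batch wrapper.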
>
> For example, in the last year, I've been running a lot of models of an
> antenna in the Owens Valley Radio Observatory Long Wavelength Array
> (OVRO-LWA). This is a large array of hundreds of antennas scattered across
> a few sq km near Big Pine, CA, that observes the cosmos in the 30-80 MHz
> band. The properties of a single antenna are easy and quick to model. But
> there are a bunch of questions that require more time: What's the
> interaction between the antennas? How close can they be and not interact?
> What's the effect of manufacturing tolerances? When it rains, and the dirt
> under the antenna is wet, how much does that change the response?
>
> Similarly, I've been doing models of wire antennas on the surface of the
> Moon, for 100kHz to 50 MHz.  Any one antenna is trivial and quick to model
> (and, for that matter, there are analytic models that are pretty good).
> But we've got the same questions.  What if the rover laying the wire out
> doesn't do it in a perfectly straight line? What's the interaction between
> 2 antennas that are 300 meters apart (given that the wavelength at 100kHz
> is 3km, the antennas are "close" in electromagnetic terms)?
>
> These are really sort of Monte Carlo type analyses (much like running
> multiple runs of weather models with slightly different starting
> parameters).
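Such a Monte Carlo-style run boils down to: draw random perturbations of the nominal geometry, evaluate each case independently, and summarize the spread. The `response` function and all numbers below are toy stand-ins, not a real electromagnetic model:

```python
import random
import statistics

def response(length_m, kink_m):
    """Toy stand-in for an antenna response; NOT a real EM solver."""
    return length_m - 0.5 * abs(kink_m)

random.seed(1)  # reproducible draws
nominal_length = 100.0  # illustrative wire length, meters

# Each case perturbs the nominal geometry, e.g. a rover-laid wire
# with a small random kink drawn from a Gaussian.
cases = [(nominal_length, random.gauss(0.0, 0.2)) for _ in range(1000)]
results = [response(length, kink) for length, kink in cases]

# Summarize the spread over the ensemble of runs.
print(round(statistics.mean(results), 2), round(statistics.stdev(results), 3))
```

In practice each `response` call would be a full model run, which is exactly what makes the ensemble embarrassingly parallel.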
>
> Some time in the past (>10 years ago), I was really interested in "field
> supercomputing" - there are problems (subsurface imaging by
> ground-penetrating radar) that require a lot of computation, and you don't
> have a fat pipe to an HPC facility to send the data (a satphone at tens of
> kbps is the notional scenario). But here, you want something that will
> survive field use - no computer room, preferably a sealed box, etc.
>
> Interestingly, all of these are "personal HPC" - that is, the goal is to
> have an HPC capability that is controlled locally - you don't have to
> compete in a queue for resources, etc., and because it's "local", there's
> no shoveling data to the HPC center and getting results back. Further,
> it's interactive - you want to tweak, run it again, and get the answers
> back quickly - single-digit minutes at most. This has been described as
> high-throughput computing, but I'm not sure that's right - to me that
> implies sustained bandwidth. It's kind of like how a "station wagon full
> of tapes" has high data throughput but long latency. Latency is important
> for human-scale interaction.
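The station-wagon point is easy to make concrete with back-of-the-envelope arithmetic (the tape count, capacity, and drive time below are made-up illustrative numbers):

```python
# All numbers are illustrative assumptions, not measurements:
# 1000 tapes at 1 TB each, driven cross-country in 48 hours.
tapes = 1000
tb_per_tape = 1.0
hours = 48

payload_bits = tapes * tb_per_tape * 1e12 * 8          # total bits moved
throughput_gbps = payload_bits / (hours * 3600) / 1e9  # effective Gbit/s

# Huge sustained throughput (tens of Gbit/s), but two days of latency -
# useless for an interactive tweak-and-rerun loop.
print(round(throughput_gbps, 1))
```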
>
> These days, data pipes are easier to come by - A field scientist or
> engineer could send megabits to a remote HPC center.  I send my data and
> computation to TACC, in Texas, and half of JPL's latest cluster Gattaca is
> hosted at SuperNAP, hundreds of miles from my office. But neither of
> those is truly interactive - there are batch queues, and while my jobs
> are small enough to get done quickly (minutes), it *is* redolent of when I
> was a youth, submitting my deck over the counter and coming back a few
> hours later to pick up the greenbar paper.  I want that immediacy -
> glowing digits and characters on the screen appearing instantaneously,
> even if it's "divide by zero error in Line 310", as opposed to sitting at
> the keypunch and standing in line.  (I do have to chuckle, though, at my
> CS lecturers in the late 1970s making a big deal about desk checking your
> code before submitting, because time is money, and computer time is more
> expensive than your time - how life has changed in 40 years).
>
>
>
>
>
> On 10/15/20, 7:15 AM, "Beowulf on behalf of Michael Di Domenico"
> <beowulf-bounces at beowulf.org on behalf of mdidomenico4 at gmail.com> wrote:
>
>     ah, interesting.
>
>     this is what i was referring to, which is what i believe is codified
>     in the "beowulf" book i recall reading.
>
>     https://spinoff.nasa.gov/Spinoff2020/it_1.html
>
>     but it seems this also exists.  i can't recall offhand whether that
>     was mentioned in the book or not
>
>     https://www.hq.nasa.gov/hpcc/reports/annrpt97/accomps/ess/WW49.html
>
>
>     On Thu, Oct 15, 2020 at 9:44 AM Douglas Eadline <deadline at eadline.org>
> wrote:
>     >
>     >
>     > > On Thu, Oct 15, 2020 at 12:10 AM Lux, Jim (US 7140) via Beowulf
>     > > <beowulf at beowulf.org> wrote:
>     > >>
>     > >> Well, maybe a Beowulf cluster of yugos…
>     > >
>     > > not really that far of a stretch. From what I can recall, wasn't
>     > > the first Beowulf cluster a smattering of random desktops laid out
>     > > on the floor in an office?
>     >
>     > Actually, it was a single small cabinet with 486-processor
>     > motherboards and 10 Mbit Ethernet with a hub. There is
>     > a small picture of it on the SC14 Beowulf Bash invite
>     > (in the middle). As I recall, that was the only old,
>     > small picture of it we could find.
>     >
>     > https://www.clustermonkey.net/Supercomputing/beowulf-bash-invitations-2008-to-present.html
>     >
>     > From there, all kinds of configurations appeared,
>     > including mostly "workstations" on wire shelves
>     > (the differentiation between "desktop" and "server"
>     > was just starting with the introduction of the Pentium Pro).
>     >
>     > For those interested in Beowulf history, you can watch
>     > this short video (fully shareable, BTW; sponsored by AMD):
>     >
>     >   https://www.youtube.com/watch?v=P-epcSlAFvI
>     >
>     >
>     > --
>     > Doug
>     >
>     > > _______________________________________________
>     > > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
>     > > To change your subscription (digest mode or unsubscribe) visit
>     > > https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
>     > >
>     >
>     >
>     >
>
>




