[Beowulf] [OT] MPI-haters

Douglas Eadline deadline at eadline.org
Fri Mar 11 11:41:01 PST 2016


To the rugged individualists who do it the wrong way:

I think Brian is essentially correct. In the old days, this list
was about getting the most out of commodity hardware for
HPC (remember, it is called the Beowulf list). Many discussions
revolved around re-purposing things (hardware and software)
that were not intended for HPC into clusters. We were accused of
doing HPC "the wrong way."

A Pentium Pro for HPC? That is crazy. Use a video card for HPC?
That will never work. IDE instead of SCSI? That is blasphemy.

The problem was that the "wrong way" was faster, better, and cheaper
than the established methods for many types of problems (not all).
Success followed.

Eventually, Tom Sterling wrote a "Beowulf is Dead" editorial in
the December 2003 issue of Clusterworld where he basically proposed
that Beowulf, or commodity-based HPC, had become mainstream as
commercial companies entered the market. We used to talk about
creating things from various "off-the-shelf" sources; now it
often comes off the shelf from one source.

In my experience, we became legit (for the most part).
We now host the biggest event at SC, uniquely supported by
multiple vendors who find that cooperation works. And I do
miss the days when we sat around and talked about
making things work. I should note, however, that I do enjoy
creating the snarky invitations.

In any case, there is still a lot to figure out. I have my issues,
but I would like to invite list readers to suggest areas that
need attention in this space. To borrow a question from Sterling:

  Why does Beowulf suck?

Let it fly. If there are enough responses, I'll even write a summary
and post it on ClusterMonkey or The Next Platform.


--
Doug





> I like to think that RGB can be 'summoned' by mentioning his name a few
> times in a thread... and then magically he appears, waxing poetic about
> some interesting area of Beowulfry / HPC, and then vanishes in a puff of
> equations.
>
> So that I'm actually contributing something meaningful and not wistfully
> remembering the past, I'll add that I think the low traffic is simply
> because *building* systems has become much easier - there are plenty of
> open-source or proprietary tools if you're inclined to do it yourself, and
> plenty of vendors who'll ensure you don't need to.  Clearly there's been a
> large increase in HPC usage over the years, but the vast majority of those
> systems (>98%?) are ones that operate at a scale where not *much* needs to
> be 'figured out' - e.g., a flat network topology so you don't need to ensure
> hop-aware node selection for jobs, parallel file systems that 'work' and
> give improvement without requiring you to recompile a kernel, rip your
> hair out, etc.
>
> As a corollary to this, years ago most places were still 'experimenting'
> with clusters - at universities, they were often run by a research group or
> a department, tasked to a narrow area, and serving a small handful of
> users.  That meant that tinkering with them was very doable - you want to
> take the 12-node cluster down for two hours to try a new network driver
> that might help your QCD code via better latency?  Go for it!  Now,
> clusters are no longer an 'engineering project' run by a handful of grad
> students or Linux geeks; they're a fundamental, central resource for
> research communities, and they're larger, serving many more users, and
> often managed by dedicated teams of IT staff.  When you tried to tinker
> with that network driver six years ago, it wasn't a problem.  But now you
> want the IT department that's running a production cluster 'appliance' to
> give you root access to try some beta driver to get a few percent faster
> results on their 500-node cluster?  Well, I'm going to go out on a limb
> and label that as 'unlikely'.  ;)
>
> In short, I think the environment we operate in has changed
> considerably, leading to less traffic about the nuts and bolts of clusters -
> if you no longer need to wrestle with your PXE boot configuration files
> because some distribution or tool handles all that for you, you no longer
> need to post your frustrations and questions to the list for help, right?
> (I say that because I think I did it once...)  At the same time, the
> *usage* landscape has diversified quite a bit - so fewer people know as
> much about the whole field, and thus certain topics garner fewer comments.
>
> All in all, though, it's a list with some incredibly experienced people --
> maybe it's worth thinking about a better way to use this list as a
> resource?  For example, instead of it just being a 'How do I do <X>?'
> thing, perhaps once a month someone (*cough*Chris Samuel*cough*) gets a
> volunteer to write a post about their recent challenges/experiences/etc.?
> Just an idea; I know I rarely post questions here, yet when I hear a talk
> about something, I always have a bunch of thoughts about it.  Thoughts?
>
> Cheers,
>   - Brian
>
>
> On Thu, Mar 10, 2016 at 11:48 AM, Prentice Bisbal
> <prentice.bisbal at rutgers.edu> wrote:
>
>> On 03/10/2016 01:34 PM, Jeff Becker wrote:
>>
>>> On 03/10/2016 10:32 AM, Prentice Bisbal wrote:
>>>
>>>> This list used to get A LOT more traffic. Not sure what happened over
>>>> the past few years. I miss the witty banter and information I used to
>>>> get from all that traffic, but I definitely don't miss Vincent.
>>>>
>>>
>>> :-)
>>>
>>
>> It just occurred to me that if you know who Vincent or RGB is, you're
>> probably an old-timer on this list now.
>>




