[Beowulf] Good demo applications for small, slow cluster

Lux, Jim (337C) james.p.lux at jpl.nasa.gov
Wed Aug 21 12:47:01 PDT 2013


Yeah, there was a parallelized POV-Ray that I've used, and a variety of "video wall" kinds of things.

What I was looking for was something that you could give as an assignment to a student: "go code this in parallel, using this MPI-lite style library, and measure the performance."  Rendering is almost EP (embarrassingly parallel), and is more about shuffling files (or pieces thereof) than about computing.

That's why the sorting/searching kinds of problems are interesting.
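
To make it concrete, here's a minimal sketch of the sort of exercise I mean, in C with MPI (the one-key-per-rank setup and the random keys are purely illustrative): an odd-even transposition sort, where every compare/swap is a full message exchange with a neighbor, so on a ~1 Mbit/s link the latency dominates completely.

/* Toy odd-even transposition sort: each MPI rank holds ONE key.
 * After nprocs phases the keys end up in rank order.
 * Every phase is one neighbor exchange, so the run time is
 * dominated by link latency -- which is exactly the point. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs, key;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    srand(rank + 1);
    key = rand() % 1000;                     /* my single random key */

    for (int phase = 0; phase < nprocs; phase++) {
        /* even phases pair (0,1)(2,3)...; odd phases pair (1,2)(3,4)... */
        int partner = (phase % 2 == rank % 2) ? rank + 1 : rank - 1;
        if (partner < 0 || partner >= nprocs)
            continue;                        /* sit this phase out */

        int other;
        MPI_Sendrecv(&key,   1, MPI_INT, partner, 0,
                     &other, 1, MPI_INT, partner, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* lower-numbered rank keeps the smaller key */
        if (rank < partner) { if (other < key) key = other; }
        else                { if (other > key) key = other; }
    }

    printf("rank %d: key %d\n", rank, key);
    MPI_Finalize();
    return 0;
}

Timing that against a plain qsort() of the same handful of keys on one node makes the serial-vs-distributed point with no further commentary needed.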

I was also thinking of some sort of simple finite element code, where you have to tile it.  Say you're doing a 1000x1000 grid and spreading it across 8 processors.

The computation could be simply solving Laplace's equation for heat in an iterative fashion.  It doesn't have to be multidimensional Navier-Stokes with compressibility and inhomogeneous media.
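
Something like this is what I have in mind -- a minimal sketch in C with MPI, assuming a 1-D strip decomposition, a rank count that divides the grid evenly, and made-up values for the grid size, sweep count, and boundary temperature:

/* Jacobi sweeps for Laplace's equation on an N x N grid,
 * split into horizontal strips, one strip per MPI rank.
 * Assumes nprocs divides N evenly.  Illustrative values only. */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define N     1000      /* global grid is N x N          */
#define STEPS 500       /* fixed number of Jacobi sweeps */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int rows = N / nprocs;                   /* rows owned by this rank */
    /* local strip plus one ghost row above and one below */
    double *u    = calloc((size_t)(rows + 2) * N, sizeof(double));
    double *unew = calloc((size_t)(rows + 2) * N, sizeof(double));

    /* hot edge along the top of the global grid, cold everywhere else */
    if (rank == 0)
        for (int j = 0; j < N; j++) u[j] = 100.0;

    int up   = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    int down = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

    for (int step = 0; step < STEPS; step++) {
        /* swap ghost rows with the neighbors above and below */
        MPI_Sendrecv(&u[1 * N],          N, MPI_DOUBLE, up,   0,
                     &u[(rows + 1) * N], N, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[rows * N],       N, MPI_DOUBLE, down, 1,
                     &u[0],              N, MPI_DOUBLE, up,   1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Jacobi update on the interior points of my strip */
        for (int i = 1; i <= rows; i++)
            for (int j = 1; j < N - 1; j++)
                unew[i*N + j] = 0.25 * (u[(i-1)*N + j] + u[(i+1)*N + j] +
                                        u[i*N + j - 1] + u[i*N + j + 1]);

        /* copy back interior rows; ghost rows and fixed edges stay put */
        memcpy(&u[N], &unew[N], (size_t)rows * N * sizeof(double));
    }

    MPI_Finalize();
    return 0;
}

Each sweep exchanges two rows of about 8 kbytes each with the neighbors, which at 1 Mbit/s is already on the order of 100 ms, so the compute/communicate ratio is easy to play with by changing the grid size or the data type.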

Jim Lux


-----Original Message-----
From: Douglas Eadline [mailto:deadline at eadline.org] 
Sent: Wednesday, August 21, 2013 9:45 AM
To: Lux, Jim (337C)
Cc: Max R. Dechantsreiter; beowulf at beowulf.org
Subject: Re: [Beowulf] Good demo applications for small, slow cluster


> Sorts in general.. Good idea.
>
> Yes, we'll do a distributed computing bubble sort.
>
> Interesting, though.. There are probably simple algorithms which are 
> efficient in a single processor environment, but become egregiously 
> inefficient when distributed.

e.g., the NAS Parallel Benchmark suite has an integer sort (IS) that is very latency-sensitive.

For demo purposes, nothing beats parallel rendering.
There used to be PVM and MPI POV-Ray packages that demonstrated faster completion times as more nodes were added.

--
Doug


>
> Jim
>
>
>
> On 8/20/13 12:11 PM, "Max R. Dechantsreiter" 
> <max at performancejones.com>
> wrote:
>
>>Hi Jim,
>>
>>How about bucket sort?
>>
>>Make N as small as need be for cluster capability.
>>
>>Regards,
>>
>>Max
>>---
>>
>>
>>
>>On Tue, 20 Aug 2013 maxd at performancejones.com wrote:
>>
>>> Date: Tue, 20 Aug 2013 00:23:53 +0000
>>> From: "Lux, Jim (337C)" <james.p.lux at jpl.nasa.gov>
>>> Subject: [Beowulf] Good demo applications for small, slow cluster
>>> To: "beowulf at beowulf.org" <beowulf at beowulf.org>
>>> Message-ID:
>>> 	<F7B6AE13222F1B43B210AA4991836895236966E9 at ap-embx-sp40.RES.AD.JPL>
>>> Content-Type: text/plain; charset="us-ascii"
>>>
>>> I'm looking for some simple demo applications for a small, very slow 
>>>cluster that would provide a good introduction to using message 
>>>passing to implement parallelism.
>>>
>>> The processors are quite limited in performance (maybe a few
>>>MFLOPS), and they can be arranged in a variety of topologies (shared
>>>bus, rings, hypercube) with 3 network interfaces on each node.  The
>>>processor-to-processor link probably runs at about 1 Mbit/second, so
>>>sending 1 kByte takes about 8 milliseconds.
>>>
>>>
>>> So I'd like some computational problems that can be given as 
>>>assignments on this toy cluster, and someone can thrash through 
>>>getting it to work, and in the course of things, understand about 
>>>things like bus contention, multihop vs single hop paths, 
>>>distributing data and collecting results, etc.
>>>
>>> There are things like N-body gravity simulations, parallelized FFTs,
>>>and so forth.  All of these would run faster in parallel than
>>>serially on one node, and the performance should be strongly affected
>>>by the interconnect topology.  They also have real-world uses (so,
>>>while toys, they are representative of what people really do with
>>>clusters).
>>>
>>> Since sending data takes milliseconds, it seems that computational
>>>chunks which also take milliseconds are of the right scale.  And, of
>>>course, we could always slow down the communication to look at the
>>>effect.
>>>
>>> There's no I/O on the nodes other than some LEDs, which could blink
>>>in different colors to indicate what's going on in that node (e.g.
>>>communicating, computing, waiting).
>>>
>>> Yes, this could all be done in simulation with virtual machines (and
>>>probably cheaper), but it's more visceral and tactile if you're
>>>physically connecting and disconnecting cables between nodes, and
>>>it's the learning about error behaviors and such that I'm getting at.
>>>
>>> Kind of like doing a biology dissection, physics lab, or chem lab for
>>>real, as opposed to in simulation.  You want the experience of "oops, I
>>>connected the cables in the wrong order."
>>>
>>> Jim Lux
>>>





