[Beowulf] mem consumption strategy for HPC apps?

Toon Knapen toon.knapen at fft.be
Mon Apr 18 06:16:22 PDT 2005


Robert G. Brown wrote:
> On Sun, 17 Apr 2005, Toon Knapen wrote:

<snip interesting responses of Mark and Greg>

> 
> These responses are not inconsistent with Mark's or Greg's remarks.  Let
> me reiterate:
> 
>   a) The "best" thing to do is to match your job to your node's
> capabilities or vice versa so it runs in core when split up. 

<snip>
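
Point (a) is worth quantifying before buying or partitioning anything.
Here's a toy sizing check in C (a sketch of my own; all numbers are
made-up placeholders, not figures from this thread):

  /* Does a problem split over N nodes fit in core on each node? */
  #include <stdio.h>

  int main(void)
  {
      double problem_gb  = 12.0; /* assumed total working-set size */
      double node_ram_gb = 2.0;  /* assumed usable RAM per node    */
      int    nodes       = 8;

      double per_node_gb = problem_gb / nodes;
      printf("%.2f GB per node -> %s in core\n", per_node_gb,
             per_node_gb <= node_ram_gb ? "fits" : "does NOT fit");
      return 0;
  }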

> 
>   b) IF your job is TOO BIG to fit in core on the nodes, then there IS
> NO "BEST PRACTICE".  There is only a choice.  Either:

<snip>

> 
>    Bite the bullet and do all the very hard work required to make your
> job run efficiently with whatever hardware you have at the scale you
> desire.  As you obviously recognize, this is not easy and involves
> knowing lots of things about your system.  Ultimately it comes down to
> partitioning your job so that it runs IN core again, whether it does so
> by using the disk directly, by letting the VM subsystem manage the
> in-core/out-of-core swapping/paging for you, or by splitting the job up
> across N nodes (a common enough solution) so that its partitioned
> pieces fit in core on each node, relying on IPCs instead of memory
> reads from disk.
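
To make that last option concrete, here is a minimal MPI sketch of
splitting a big array across N nodes so that each piece fits in core
(my own illustration, not code from the post; the array size and the
ring-style boundary exchange are just placeholders):

  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      int rank, nprocs;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

      /* 10^8 doubles ~ 800 MB: too big for one node, fine when split */
      const long total = 100000000L;
      long chunk = total / nprocs;   /* remainder ignored for brevity */
      double *slice = malloc(chunk * sizeof *slice);
      if (slice == NULL) {
          fprintf(stderr, "rank %d: slice does not fit in core\n", rank);
          MPI_Abort(MPI_COMM_WORLD, 1);
      }

      for (long i = 0; i < chunk; i++)   /* work on the in-core slice */
          slice[i] = (double)(rank * chunk + i);

      /* neighbours exchange boundary values over the interconnect
         (IPCs) instead of every node reading the whole set from disk */
      double send = slice[chunk - 1], recv = 0.0;
      int right = (rank + 1) % nprocs;
      int left  = (rank + nprocs - 1) % nprocs;
      MPI_Sendrecv(&send, 1, MPI_DOUBLE, right, 0,
                   &recv, 1, MPI_DOUBLE, left, 0,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);

      free(slice);
      MPI_Finalize();
      return 0;
  }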
> 
> Really that's it.  And in the second of these cases, although people
> may be able to help you with SPECIFIC issues you encounter, it is
> pointless to discuss the "general case" because there ain't no such
> animal.
> Solutions for your problem are likely very different from a solution for
> somebody partitioning a lattice, which might be different from somebody
> partitioning a large volume filled with varying numbers and positions of
> particles.  An efficient solution is likely going to be expensive and
> require learning a lot about the operating system, the various parallel
> programming paradigms and available support libraries, the compute,
> memory and network hardware, 


It's true that there is no 'general case', but OTOH all developers of
HPC applications can learn a lot from each other. It's a pity there is
so little discussion of HPC application and/or library design, apart
from a few hot topics (such as beowulf itself). Such discussion is of
course OT for this list; it's just a general remark.

This is also the reason, IIUC, that a minisymposium on 'computational
infrastructures'
(https://compmech.ices.utexas.edu/mslist/mslist.pl?recid=ha328630.119)
is being organised at the 8th USNCCM conference
(http://compmech.ices.utexas.edu/usnccm8.html), for instance, and that
workshops on library design (like
http://www.cs.chalmers.se/Cs/Research/Software/dagstuhl/) are being
organised as well.



> and may require some nonlinear thinking, as
> there exist projects such as trapeze to consider (using other nodes' RAM
> as a virtual extension of your own, paying the ~5 us latency hit for a
> modern network plus a memory access hit, but saving BIG time over a ~ms
> scale disk latency hit).
> 
> Anyway, good luck.
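
The arithmetic behind that trapeze-style trade-off is simple enough to
write down. A toy back-of-envelope in C (the latencies are assumptions
in line with your figures, not measurements):

  #include <stdio.h>

  int main(void)
  {
      double net_rtt_us   = 5.0;    /* assumed network round trip     */
      double mem_us       = 0.1;    /* assumed remote DRAM access     */
      double disk_seek_us = 5000.0; /* assumed ~5 ms disk seek/rotate */

      double remote_ram_us = net_rtt_us + mem_us;
      printf("remote RAM %.1f us vs disk %.0f us: ~%.0fx faster\n",
             remote_ram_us, disk_seek_us, disk_seek_us / remote_ram_us);
      return 0;
  }

So each random out-of-core access serviced from another node's RAM is
roughly three orders of magnitude cheaper than a disk seek, which is
why such nonlinear approaches can pay off.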

Thanks, and thanks for your interesting response.

-- 
Check out our training program on acoustics
and register on-line at http://www.fft.be/?id=35


