[Beowulf] automount on high ports

Prentice Bisbal prentice at ias.edu
Thu Jul 3 06:19:31 PDT 2008


Henning Fehrmann wrote:
> On Wed, Jul 02, 2008 at 09:19:50AM +0100, Tim Cutts wrote:
>> On 2 Jul 2008, at 8:26 am, Carsten Aulbert wrote:
>>
>>> OK, we have 1342 nodes which act as servers as well as clients. Every
>>> node exports a single local directory and all other nodes can mount this.
>>>
>>> What we do now to optimize the available bandwidth and IOs is spread
>>> millions of files according to a hash algorithm to all nodes (multiple
>>> copies as well) and then run a few 1000 jobs opening one file from one
>>> box then one file from the other box and so on. With a short autofs
>>> timeout that ought to work. Typically it is possible that a single
>>> process opens about 10-15 files per second, i.e. making 10-15 mounts per
>>> second. With 4 parallel process per node that's 40-60 mounts/second.
>>> With a timeout of 5 seconds we should roughly have 200-300 concurrent
>>> mounts (on average, no idea about the variance).
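The hash-based spreading described above can be sketched roughly as follows. This is a minimal illustration, not the actual scheme: the hash function, node count, and replica count are assumptions chosen for the example.

```python
import hashlib

def replica_nodes(filename, n_nodes=1342, n_copies=2):
    """Map a filename to a fixed set of nodes via a hash of its name.

    Illustrative sketch of hash-based placement: any client can
    recompute which nodes hold a file without a central catalog.
    The node count and copy count here are assumptions.
    """
    digest = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return [(digest + i) % n_nodes for i in range(n_copies)]

# A job that needs a file hashes its name, picks one of the replica
# nodes, and automounts that node's exported directory:
print(replica_nodes("frame-000123.dat"))
```

With a scheme like this, the open-file-then-unmount pattern in the quoted message follows directly: each file access resolves to one of a few candidate nodes, and autofs tears the mount down again after the timeout.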
>> Please tell me you're not serious!  The overheads of just performing the NFS mounts are going to kill you, never mind all the network traffic going 
>> all over the place.
>>
>> Since you've distributed the files to the local disks of the nodes, surely the right way to perform this work is to schedule the computations so that 
>> each node works on the data on its own local disk, and doesn't have to talk to networked storage at all?  Or don't you know in advance which files a 
>> particular job is going to need?
> 
> Yes, this is the problem. The set of files is too big to store
> everywhere (a few TByte and 50 million files). Mounting a few NFS servers does not provide
> the bandwidth. 
> On the other hand, the core switch should be able to handle the flows
> without blocking. We think that NFS mounts are the fastest way to
> distribute the requested files to the nodes. 
> 
> Henning

Sounds like you need a parallel filesystem of some sort. Have you looked
at that option? I know, they cost $$$$.

--
Prentice
