[Beowulf] automount on high ports

Gerry Creager gerry.creager at tamu.edu
Wed Jul 2 07:09:34 PDT 2008


Scott Atchley wrote:
> On Jul 2, 2008, at 7:22 AM, Carsten Aulbert wrote:
> 
>> Bogdan Costescu wrote:
>>>
>>> Have you considered using a parallel file system?
>>
>> We looked a bit into a few, but would love to get input from anyone
>> on that. What we found so far was not really convincing, e.g. GlusterFS
>> at that time was not really stable, and Lustre was too easy to crash -
>> at least at that time, ...
> 
> Hi Carsten,
> 
> I have not looked at GlusterFS at all. I have worked with Lustre and 
> PVFS2 (I wrote the shims to allow them to run on MX).
> 
> Although I believe Lustre's robustness is very good these days, I do not 
> believe that it will work in your setting. I think that they 
> currently do not recommend mounting a client on a node that is also 
> working as a server as you are doing with NFS. I believe it is due to 
> memory contention leading to deadlock.

Lustre is good enough that it's the parallel FS at TACC for the Ranger 
cluster, and I've had no real problems as a user thereof.  We're 
bringing up Gluster on our new cluster here ( <flamebait> CentOS/RHEL5, 
not Debian </flamebait>).  We looked at ZFS but didn't have sufficient 
experience to go that path.
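
For anyone following along, the client side of a Lustre mount is a 
one-liner; the hostname and fsname below are placeholders, and the 
exact syntax can vary by Lustre version:

  # Mount a Lustre filesystem from its management server (MGS).
  # "mgs01" and "scratch" are placeholder names.
  mount -t lustre mgs01@tcp0:/scratch /mnt/scratch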

> PVFS2 does, however, support your scenario where each node is a server 
> and can be mounted locally as well. PVFS2 servers run in userspace and 
> can be easily debugged. If you are using MPI-IO, it integrates nicely as 
> well. Even so, keep in mind that using each node as a server will 
> consume network resources and will compete with MPI communications.

Someone at NCAR recently suggested we review PVFS2.  I'm gonna do it as 
soon as I get a free moment on vacation.
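
From the PVFS2 quick-start, the shape of the setup Scott describes (a 
userspace server on every node, each mounting the filesystem locally) 
seems to be roughly the following; the hostname, config path, and mount 
point are placeholders, and 3334 is PVFS2's default port:

  # Start the userspace PVFS2 server daemon on each node
  # (config file path is an assumption).
  pvfs2-server /etc/pvfs2-fs.conf
  # Mount the filesystem locally via the PVFS2 kernel module.
  mount -t pvfs2 tcp://node01:3334/pvfs2-fs /mnt/pvfs2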
-- 
Gerry Creager -- gerry.creager at tamu.edu
Texas Mesonet -- AATLT, Texas A&M University	
Cell: 979.229.5301 Office: 979.458.4020 FAX: 979.862.3983
Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843


