diskless 72 node

Putchong Uthayopas pu at smile.cpe.ku.ac.th
Sat Aug 5 18:59:58 PDT 2000


Hi,

I think this is the reference that you are looking for. Our PIRUN
Beowulf cluster at KU in Bangkok, Thailand is a 72-node diskless PIII-500
cluster.
The basic configuration is:

- 72 diskless nodes
- 3 file servers with Mylex RAID hardware.

We divide the cluster into 3 banks of 24 nodes, and each file server is
responsible for one bank. (This trick is similar to what Chiba City at ANL
uses for its shared disk space.)
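
To make the layout concrete, here is a minimal Python sketch of the
node-to-server mapping; the hostnames and export paths below are made-up
examples, not our real configuration:

    # Partition 72 diskless nodes into 3 banks of 24, with one NFS file
    # server per bank.  All names here are hypothetical examples.
    NODES_PER_BANK = 24
    SERVERS = ["fs1", "fs2", "fs3"]   # one RAID file server per bank

    for bank, server in enumerate(SERVERS):
        first = bank * NODES_PER_BANK + 1
        nodes = [f"node{n:02d}" for n in range(first, first + NODES_PER_BANK)]
        # Lines such a server might carry in its /etc/exports, giving each
        # node in its bank a private NFS root:
        for node in nodes:
            print(f"{server}: /tftpboot/{node}  {node}(rw,no_root_squash)")

The point is simply that each node's NFS root lives on exactly one
server, so boot and file traffic from one bank never loads the other two
servers.
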
We have also developed a useful tool that allows you to install this type
of cluster easily. It is open source and available on our web site.

The system has been running fine for 6 months now, but we have currently
brought it down because we need to rearrange the power lines and change
some of the configuration. Please refer to this URL for more information:

http://pirun.ku.ac.th./newhtml/index.html

I hope that this is useful to you.
You can also contact me directly for more information. 

Putchong Uthayopas
Kasetsart University, Thailand.

PS: I suspect that this is the largest system in the Southeast Asia
region. Can anybody confirm? Are there any bigger functional Beowulf
clusters in Southeast Asia?

Thanks.


On Fri, 4 Aug 2000, Stephan Mertens wrote:

> Hi Beowulfers,
> 
> We are planning a 72-node Beowulf cluster with diskless nodes.
> We managed to get the grant for it, but the corresponding
> referee doubts that a diskless cluster of this size can work.
> He wants us to find a working "reference installation"
> before we can actually spend the money.
> 
> Here is our setup:
> 
> 72 dual PIII boards, each with 512 MB, a floppy, and 2 NICs
> (100 Mbps). One NIC is for interprocess communication,
> the other for NFS.
> 
> 1 dedicated NFS server (Linux, PIII, 19 GB RAID)
> 
> Everything is connected via 100 Mbps switches.
> 
> Do you know of anything comparable that is running smoothly?
> Or of any serious pitfalls?
> 
> We could afford local disks, actually, but we don't like the extra heat,
> power consumption, noise, sources of failure, etc.
> 
> Thanks for your help,
> 
> Stephan
> 
> _______________________________________________
> Beowulf mailing list
> Beowulf at beowulf.org
> http://www.beowulf.org/mailman/listinfo/beowulf
> 




