[Beowulf] High Performance for Large Database

Laurence Liew laurence at scalablesystems.com
Tue Nov 16 01:14:20 PST 2004


Hi,

Sorry - I had meant data being distributed across nodes (not 
necessarily compute nodes) - in Lustre's case, on dedicated data 
servers called OSTs - rather than on a central SAN.

I believe Lustre can actually run compute + IO on the same nodes 
today, but it is not a recommended configuration as it may lead to 
data corruption - I saw an FAQ entry on this, and CFS points to a new 
version that will resolve the problem.
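
As a rough illustration - this is not Lustre code, just a toy Python
sketch with made-up stripe sizes and OST counts - the basic idea is
that a file is cut into fixed-size stripes which are laid out
round-robin across the OSTs, so clients see one logical file while
the data lives on the dedicated data servers:

STRIPE_SIZE = 1 << 20   # 1 MiB stripes (made-up default)
NUM_OSTS = 4            # hypothetical number of data servers (OSTs)

def stripe_layout(file_size, stripe_size=STRIPE_SIZE, num_osts=NUM_OSTS):
    """Return (ost_index, offset_in_file, length) for each stripe,
    assigning stripes to OSTs round-robin."""
    layout, offset, stripe_no = [], 0, 0
    while offset < file_size:
        length = min(stripe_size, file_size - offset)
        layout.append((stripe_no % num_osts, offset, length))
        offset += length
        stripe_no += 1
    return layout

# A 3.5 MiB file ends up spread across all four OSTs:
for ost, off, length in stripe_layout(int(3.5 * (1 << 20))):
    print("OST %d: bytes %d..%d" % (ost, off, off + length - 1))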

Cheers!
Laurence

Guy Coates wrote:
>>I would still prefer the model of PVFS1/2 and Lustre where the data is
>>distributed amongst the compute nodes
>>
> 
> Lustre data isn't distributed on compute nodes; the data sits on dedicated
> nodes called OSTs. You can't mount the Lustre filesystem back onto nodes
> which are OSTs, as you hit all sorts of race conditions in the VFS layer.
> 
> One filesystem to look at is GPFS from IBM. You can run it in a direct
> SAN-attached mode, or in a mode where the storage is distributed across
> local disks on the compute nodes. We run both configurations on our cluster.
> 
> GPFS will also do "behind-the-scenes replication", so you can tolerate up
> to two node failures per node group and still have a complete filesystem.
> 
> Cheers,
> 
> Guy Coates
> 

-- 
Laurence Liew, CTO		Email: laurence at scalablesystems.com
Scalable Systems Pte Ltd	Web  : http://www.scalablesystems.com
(Reg. No: 200310328D)
7 Bedok South Road		Tel  : 65 6827 3953
Singapore 469272		Fax  : 65 6827 3922



