PVFS

Kumaran Rajaram kums at CS.MsState.EDU
Fri Nov 15 12:15:02 PST 2002


   I used the Bonnie disk benchmark (http://www.acnc.com/benchmarks.html)
to assess how much CPU load the I/O traffic imposes. The results for two
PVFS configurations, PVFS over TCP/IP on Fast Ethernet and PVFS over
TCP/IP on Myrinet (through Ethernet emulation), are tabulated below. You
could run the same benchmark on your testbed for a better analysis of
the different compute + I/O node configurations. For my experiments, I
configured 4 nodes as dedicated I/O nodes and the other 4 as dedicated
compute nodes.
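
If you want to script the comparison, here is a minimal sketch of the
two runs back to back. The second mount point (/mnt/pvfs_myri) and the
loop layout are assumptions, not what I actually ran (I reused the same
mount point and reconfigured PVFS between runs), so adjust for your
testbed:

    #!/bin/sh
    # Run Bonnie against each PVFS mount with a 100 MB test file,
    # matching the runs shown below. Mount points/labels are examples.
    SIZE=100
    for cfg in dell1:/mnt/pvfs dell1_m:/mnt/pvfs_myri; do
        label=${cfg%%:*}              # machine label for the report
        dir=${cfg#*:}                 # PVFS mount point to test
        ./Bonnie -d "$dir" -m "$label" -s "$SIZE"
    done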

--------------------

PVFS over Fast Ethernet
-----------------------

[root@dell1 root]# ./Bonnie -d /mnt/pvfs -m dell1 -s 100
File './Bonnie.16262', size: 104857600
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
dell1     100  9835 99.7 111140 97.7 134982 94.9  9049 99.2 407565 99.5 20860.1 99.1

PVFS over Myrinet
-----------------

[root@dell1 root]# ./Bonnie -d /mnt/pvfs -m dell1_m -s 100
File '/mnt/pvfs/Bonnie.10062', size: 104857600
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 3...Seeker 2...start 'em...done...done...done...
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
dell1_m   100  5768 99.9 37526 61.6 79017 99.5  5353 100.0 204009 99.6 18299.5 196.7
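
To compare the two runs side by side, the block-transfer columns can be
pulled out of saved Bonnie output with a short awk filter. This is just
a sketch; fast.out and myri.out are assumed file names holding the
output of the two runs above:

    # Print block write (field 5) and block read (field 11) in K/sec.
    # The result row has 14 fields with a numeric size in field 2,
    # which distinguishes it from the header lines.
    for f in fast.out myri.out; do
        awk 'NF == 14 && $2+0 > 0 {
            printf "%-10s block write %7d K/sec, block read %7d K/sec\n",
                   $1, $5, $11
        }' "$f"
    done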

-----------

-Kums

-- Kumaran Rajaram, Mississippi State University --
kums at cs.msstate.edu  <http://www.cs.msstate.edu/~kums>




On Fri, 15 Nov 2002, Jeffery A. White wrote:

> The ongoing discussion about PVFS and other file systems caused me
> to want to revisit the issue for our application. We are running a
> 48-node P4-based cluster that has two networks. Net boot and MPI are
> run across a Fast Ethernet network with a separate boot server, and
> the user file space, which resides on a hardware RAID5 file server,
> is run across a second, separate Fast Ethernet network. We have
> multiple users who pretty much utilize all the compute nodes all the
> time, so I don't want to give up compute nodes and turn them into
> PVFS I/O nodes. However, my compute nodes usually carry only a 50% to
> 60% load. My question is: would using my existing nodes as both
> compute and PVFS I/O nodes severely degrade their performance as
> compute nodes? What about using dual-CPU nodes where one CPU is for
> compute and one is for PVFS I/O? Would that be a better approach, or
> would having the MPI traffic and PVFS traffic share the same system
> bus and PCI bus wind up being a bottleneck?
>
> Jeff White



