[Beowulf] NFS cache vs. local reading
Daniel Navas-Parejo Alonso
danapa2000 at gmail.com
Sat Sep 30 04:50:21 PDT 2006
To make NFS do that, you would have to configure something similar to CacheFS.
I've used it on Solaris; I don't know whether an equivalent has been released
for Linux or other OSs. If you use Lustre or similar, you get the cache as well.
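On Linux, the closest analogue I know of is FS-Cache with the cachefilesd daemon, where NFS mounts opt in with the `fsc` mount option. A rough sketch of what such a setup would look like, assuming FS-Cache is available on your kernel (server and mount-point names are made up):

```shell
# Sketch: local disk caching for NFS reads via Linux FS-Cache.
# Assumes the cachefilesd package is installed; by default it keeps
# cached pages under /var/cache/fscache on the node's local disk.

# Start the local cache daemon on each compute node.
service cachefilesd start

# Mount the shared home directory with the 'fsc' option so file data
# read over NFS is also cached on local disk between jobs.
mount -t nfs -o fsc nfsserver:/export/home /home
```

Note that this cache, like the normal NFS client cache, is still per node.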
That said, I think you are right to just let NFS do it, in case the code's
behaviour changes in the future and you would otherwise have to modify the
script again and again each time the number 100 decreases or increases.
Anyway, take into account that the NFS cache is local to each node of the
cluster, so if the next time you run the code the jobs are scheduled onto
other nodes that have never accessed the data file, you have to fetch it
over the network again.
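For reference, the job-array staging idea could be sketched like this (the file names, the wait loop, and the wrapper itself are hypothetical illustrations, not the user's actual script):

```shell
#!/bin/sh
# Hypothetical staging wrapper for an LSF job array submitted as, e.g.:
#   bsub -J "myjob[1-100]" ./stage_and_run.sh
# LSB_JOBINDEX is set by LSF for each element of the array.

DATAFILE="$HOME/shared/input.dat"      # shared copy on NFS (made-up path)
LOCALFILE="/tmp/$USER-input.dat"       # per-node copy on local disk

if [ "$LSB_JOBINDEX" -eq 1 ]; then
    # The first job of every 100 copies the data to local disk once.
    cp "$DATAFILE" "$LOCALFILE"
else
    # The other 99 jobs wait until the copy appears, then read locally.
    while [ ! -f "$LOCALFILE" ]; do sleep 5; done
fi

# ... run the real computation against "$LOCALFILE" here.
```

The obvious caveat, per the point above, is that this only pays off when the jobs land on a node that already holds the local copy; jobs scheduled elsewhere still need the NFS read.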
2006/9/15, Xu, Jerry <YXU11 at partners.org>:
> Hi, Guys,
> I am maintaining a cluster that is using NFS and LSF. There is one user
> who runs a large amount of jobs on only a few nodes. In each of his jobs,
> he needs to read a great deal of data from the home directory, which is
> shared and mounted on every computing node. Many times the data files are
> the same, but they change after every 100 jobs finish. If every job on the
> computing node(s) just goes to read the data from the home directory, it
> (they) will go through NFS and the network to get the file. That seems
> like a lot of wasted effort. So I suggested a script (using a job array
> and LSB_JOBINDEX) to decide whether to copy the data to local disk: the
> first of every 100 jobs copies it, and the remaining 99 jobs just read
> from the local disk.
> My question is: since NFS also has a cache, how much would this approach
> improve performance? Because if I were NFS and I were smart enough, I
> would be able to know whether I am reading the same file over and over
> again. Does the NFS cache size matter?
> Can somebody give a comment?
> Beowulf mailing list, Beowulf at beowulf.org