<div>To make NFS do that, you would have to configure something like CacheFS. I've used it on Solaris; I don't know whether it has also been released for Linux or other OSes. If you use LustreFS or something similar, you get the caching mechanism built in.</div>
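For reference, this is roughly what the CacheFS setup looks like on Solaris (a sketch only; the cache directory, server, and export paths are illustrative, not from the thread):

```shell
# Create a local on-disk cache, then mount the NFS export through it.
cfsadmin -c /local/cache
mount -F cachefs -o backfstype=nfs,cachedir=/local/cache \
      server:/export/home /home
```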
<div>I think the idea of letting NFS do that is right; otherwise, if the code's behaviour changes in the future, you'll have to change the script again and again every time the number 100 increases or decreases.</div>
<div>Anyway, take into account that the NFS cache is local to each node of the cluster, so if on subsequent runs the jobs are scheduled to nodes that have never accessed the data file, they will have to go through NFS again.</div>
<div><span class="gmail_quote">2006/9/15, Xu, Jerry <<a href="mailto:YXU11@partners.org">YXU11@partners.org</a>>:</span></div>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid"><br>Hi, guys,<br><br>I am maintaining a cluster that uses NFS and LSF. One user needs to run a large number of jobs on only a few nodes. Each of his jobs needs to
<br>read a great deal of data from the home directory, which is shared and mounted<br>on every computing node. Most of the time the data files are the same, but they change<br>after every 100 jobs finish. If every job on the computing node(s) goes straight
<br>to the home directory to read the data, it will go through NFS and the<br>network to get the file, which seems like a lot of wasted effort. So I suggested using a<br>script (with a job array and LSB_JOBINDEX) to decide whether to copy the
<br>data to local disk in the first job of every 100; the remaining 99 jobs would<br>then just read from the local disk.<br>My question is: since NFS also has a cache, how much will this approach<br>improve performance? Because if I were NFS, and I were smart enough, I should
<br>be able to know that I am reading the same file over and over again. In that case,<br>does the NFS cache size matter?<br><br>Can somebody give a comment?<br><br>Jerry<br><br>_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org">Beowulf@beowulf.org</a><br>To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf">http://www.beowulf.org/mailman/listinfo/beowulf</a></blockquote>
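The staging idea Jerry describes (first job of each batch of 100 copies the file, the other 99 read the local copy) could be sketched like this. This is a minimal sketch, not tested against a real LSF cluster: the source and destination paths are made up, and it assumes all jobs of a batch land on the same node, as in Jerry's few-nodes scenario; LSF does set LSB_JOBINDEX for job-array members.

```shell
#!/bin/sh
# is_stager INDEX - succeeds (exit 0) when the job with this array index
# should copy the shared file to local disk, i.e. on indices 1, 101, 201, ...
is_stager() {
    [ $(( ($1 - 1) % 100 )) -eq 0 ]
}

# In the real job script, LSF supplies the index via LSB_JOBINDEX
# (hypothetical paths; a marker file signals that the copy is complete):
#
#   SRC=/home/user/data/input.dat   # NFS-mounted home directory
#   DST=/tmp/input.dat              # node-local scratch copy
#   if is_stager "$LSB_JOBINDEX"; then
#       cp "$SRC" "$DST" && touch "$DST.done"
#   fi
#   while [ ! -f "$DST.done" ]; do sleep 5; done   # other jobs wait
#   ... run the computation against "$DST" ...
```

Note this only pays off if local-disk reads really beat the NFS client cache; as noted above, the NFS cache is per node, so both approaches lose when a batch is spread across fresh nodes.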