[Beowulf] NFS over IPoIB

John McCulloch johnm at pcpcdirect.com
Fri Jun 12 14:36:26 PDT 2020


Thanks for the quick response, Alex. There's no problem yet. It's a new install with 100 Gb Mellanox InfiniBand, currently at defaults except that I bumped the number of NFS daemons to 128. I'm just researching ways to optimize performance for 36 nodes concurrently accessing the storage server. My understanding is that setting the MTU to 9000 is recommended, but that advice seems to apply to 10GbE rather than IPoIB.
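For reference, the two knobs mentioned above can be set along the following lines on a typical Linux NFS server. This is a sketch, not a recommendation: the interface name ib0 is an assumption, and the exact config file depends on the distro and nfs-utils version. Note that IPoIB is not limited to Ethernet-style jumbo frames; in datagram mode the MTU caps at roughly 2044 (4092 with 4K-MTU-capable fabrics), while connected mode allows up to 65520.

```shell
# Server thread count: with nfs-utils >= 2.x this lives in /etc/nfs.conf:
#   [nfsd]
#   threads=128
# On older RHEL/CentOS the equivalent knob is in /etc/sysconfig/nfs:
#   RPCNFSDCOUNT=128

# IPoIB MTU (ib0 is an assumed interface name; requires root):
echo connected > /sys/class/net/ib0/mode   # switch from datagram to connected mode
ip link set dev ib0 mtu 65520              # connected-mode maximum
```

These are configuration fragments; whether a larger IPoIB MTU actually helps NFS throughput is workload-dependent and worth benchmarking before and after.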


Regards,

John McCulloch | PCPC Direct, Ltd.
________________________________
From: Alex Chekholko <alex at calicolabs.com>
Sent: Friday, June 12, 2020 3:53 PM
To: John McCulloch
Cc: beowulf at beowulf.org
Subject: Re: [Beowulf] NFS over IPoIB

I think you should start with all defaults and then describe the problem you're having with those settings. IIRC, the last time I ran NFS over IPoIB I didn't tune anything and it was fine.

On Fri, Jun 12, 2020 at 12:10 PM John McCulloch <johnm at pcpcdirect.com> wrote:

Can anyone comment on their experience with compute nodes mounting NFS v4.1 shares over IPoIB, i.e., which tuning parameters are likely to be most effective? We looked at NFS over RDMA, but that would require a kernel upgrade.
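For anyone picking this thread up later, a common client-side starting point looks like the fragment below. The server name and export path are placeholders, and nconnect requires a 5.3+ kernel, so it may be out of reach on the same systems that can't do NFS over RDMA without an upgrade:

```shell
# Hypothetical NFSv4.1 mount over IPoIB. rsize/wsize of 1 MiB is the usual
# starting point for large sequential I/O; nconnect=8 (kernel >= 5.3) opens
# multiple TCP connections per mount to spread load across them.
mount -t nfs -o vers=4.1,proto=tcp,rsize=1048576,wsize=1048576,nconnect=8 \
      storage-ib:/export/data /mnt/data
```

The actual rsize/wsize negotiated can be checked after mounting with `nfsstat -m`; the server may cap them below what the client requests.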


https://www.admin-magazine.com/HPC/Articles/Useful-NFS-Options-for-Tuning-and-Management


Cheers,

John McCulloch | PCPC Direct, Ltd.
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf