[Beowulf] SATA II - PXE+NFS - diskless compute nodes

Eric Shook eric-shook at uiowa.edu
Wed Dec 13 18:44:04 PST 2006


Thank you for commenting on this, Greg.  I may look deeper into Perceus 
as an option if RHEL (and particularly variants such as Scientific 
Linux) works well.  Our infrastructure will most likely include 
nfs-root, and possibly hybrid and full-install as well.  So if Perceus 
can support these with a few simple VNFS capsules, that should simplify 
administration greatly.
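
(For reference, by nfs-root I mean the standard kernel-level NFS root; 
a minimal pxelinux.cfg entry for such a node might look like the 
following sketch, with the server address and export path as 
placeholders for our setup:)

    # requires a kernel built with NFS-root support (CONFIG_ROOT_NFS)
    DEFAULT compute
    LABEL compute
        KERNEL vmlinuz
        APPEND root=/dev/nfs nfsroot=10.0.0.1:/export/nfsroot ip=dhcp ro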

Would you call Perceus production quality?  Or would our production 
infrastructure be a large-scale test?  (I'm not sure I'm comfortable 
being a test case with our production clusters ;o)

Thanks,
Eric

Greg Kurtzer wrote:
> 
> On Dec 9, 2006, at 11:27 AM, Eric Shook wrote:
> 
>> Not to diverge this conversation, but has anyone had any experience 
>> using this PXE boot / NFS model with a RHEL variant?  I have been 
>> wanting to do an NFS-root or ramdisk model for some time, but our 
>> software stack requires a RHEL base, so Scyld and Perceus most likely 
>> will not work (although I am still looking into both to make sure).
> 
> I haven't made any announcements on this list about Perceus yet, so just 
> to clarify:
> 
> Perceus (http://www.perceus.org) works very well with RHEL, and we will 
> soon have VNFS capsules for the commercial distributions with the 
> high-performance hardware, library, and application stacks 
> pre-integrated into the capsule. We will offer, support, and certify 
> these for various solutions via Infiscale (http://www.infiscale.com).
> 
> Note: Perceus capsules bundle the kernel, drivers, provisioning scripts, 
> and utilities needed to provision the VNFS into a single file that can 
> be imported into Perceus with one command. The released capsules 
> support stateless provisioning, and work is already underway on 
> capsules that can do stateful, (almost) NFS-root, and hybrid systems.
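> 
> As a rough sketch of that workflow (the capsule and node names here 
> are made up, and the exact syntax is documented with Perceus itself):
> 
>     # import a downloaded capsule into Perceus
>     perceus vnfs import rhel4-stateless.vnfs
> 
>     # point a range of nodes at the imported VNFS
>     perceus node set vnfs rhel4-stateless n0000-n0127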
> 
> We already have one user running Perceus with RHEL capsules in HPC and 
> another prototyping it for a web cluster solution.
> 
> Also, Warewulf has been known to scale to well over 2000 nodes. 
> Perceus' limits have yet to be reached, but it can natively handle 
> load balancing and failover across multiple Perceus masters, so in 
> theory its limits should be well beyond Warewulf's.
> 
> Version 1.0 of Perceus has been released (GPL), and we are now in bug 
> fixing and tuning mode. We need testers and documentation help, so if 
> anyone is interested, please let me know.
> 
> -- 
> Greg Kurtzer
> gmk at runlevelzero.net
> 
> 
> 

-- 
Eric Shook (319) 335-6714
Technical Lead, Systems and Operations - GROW
http://grow.uiowa.edu


