NIS?

Kian_Chang_Low at vdgc.com.sg Kian_Chang_Low at vdgc.com.sg
Fri Oct 5 07:14:13 PDT 2001


How about getting each node to rsync the files from the master every time it comes up?

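A minimal sketch of that pull-at-boot idea (e.g. called from rc.local on each node). The master hostname and the file list are assumptions, not from the original post; setting DRY_RUN=echo prints the commands instead of running them.

```shell
#!/bin/sh
# Pull the account files from the master at boot time.
# "master" is an assumed hostname; adjust for your cluster.
MASTER="${MASTER:-master}"

pull_files() {
    for f in /etc/passwd /etc/shadow /etc/group; do
        # ${DRY_RUN:-} expands to nothing for a real sync,
        # or to "echo" for a dry run that just prints the command
        ${DRY_RUN:-} rsync -ae 'ssh -x' "root@$MASTER:$f" "$f"
    done
}

DRY_RUN=echo   # print instead of run; remove this line for real use
pull_files
```

Because a node that was down during a push catches up on its next boot, this complements a server-side push rather than replacing it.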
Kian Chang.


                                                                                          
Steven Timm <timm at fnal.gov>
Sent by: beowulf-admin at beowulf.org
10/05/01 09:28 PM
To: David Bussenschutt <d.bussenschutt at mailbox.gu.edu.au>
cc: beolist <beowulf at beowulf.org>
Subject: Re: NIS?




The rsync script is a good idea and something we are thinking
of implementing -- the only problem is: how do you handle the
situation where a node happens to be down during a push?

Steve


------------------------------------------------------------------
Steven C. Timm (630) 840-8525  timm at fnal.gov  http://home.fnal.gov/~timm/
Fermilab Computing Division/Operating Systems Support
Scientific Computing Support Group--Computing Farms Operations

On Fri, 5 Oct 2001, David Bussenschutt wrote:

> Slight side-bar here, but I think it relates:
>
> My chain of thought:
>
> 1) everyone agrees NIS works (even if it is arguable about the speed,
> reliability, security etc)
> 2) everyone agrees that it can cause problems in some situations -
> especially Beowulf speed-related ones.
> 3) the speed has to do with the synchronisation delays inherent in a
> bidirectional on-the-fly network daemon approach like NIS
> 4) many people prefer the files approach for speed/simplicity (ie to avoid
> problems in 3).
> 5) In a beowulf cluster, passwords shouldn't be changed on nodes, so a
> server push password system is all that's required -hence the files
> approach in 4).
> 6) why not have the best of both worlds?   What we need is a little daemon
> on the server that pushes the passwd/shadow/group/etc files to the clients
> over a ssh link whenever the respective file is modified on the server.
> 7) How I suggest implementing this:
>
> The naive/simple approach:
> set up the clients so that root can ssh to them without a password (I
> suggest a ~/.ssh/authorized_keys2 file and a ~/.ssh/known_hosts2 file)
> root crontab entries that run the following commands periodically (as
> often as you require - depending on how much password latency you can live
> with)
> # first client
> /usr/bin/rsync -ae 'ssh -x' --rsync-path=/usr/bin/rsync /etc/passwd root@client1:/etc/passwd
> /usr/bin/rsync -ae 'ssh -x' --rsync-path=/usr/bin/rsync /etc/shadow root@client1:/etc/shadow
> /usr/bin/rsync -ae 'ssh -x' --rsync-path=/usr/bin/rsync /etc/group root@client1:/etc/group
> # second client
> /usr/bin/rsync -ae 'ssh -x' --rsync-path=/usr/bin/rsync /etc/passwd root@client2:/etc/passwd
> /usr/bin/rsync -ae 'ssh -x' --rsync-path=/usr/bin/rsync /etc/shadow root@client2:/etc/shadow
> /usr/bin/rsync -ae 'ssh -x' --rsync-path=/usr/bin/rsync /etc/group root@client2:/etc/group
> # etc
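The per-file, per-client commands above can be consolidated into one loop. A sketch, assuming hypothetical client names in CLIENTS; it prints each rsync invocation instead of running it, so the full push list can be audited (or piped to sh).

```shell
#!/bin/sh
# Generate one rsync-over-ssh push command per (client, file) pair.
# CLIENTS is an assumption -- substitute your own node names.
CLIENTS="${CLIENTS:-client1 client2}"
FILES="/etc/passwd /etc/shadow /etc/group"

gen_cmds() {
    for c in $CLIENTS; do
        for f in $FILES; do
            # echo the command rather than executing it
            echo /usr/bin/rsync -ae 'ssh -x' "$f" "root@$c:$f"
        done
    done
}

gen_cmds        # pipe this to sh to actually run the pushes
```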
>
>
> The improved approach (a perl program I just wrote - tell me what you
> think?):
>
>
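The Perl program mentioned above did not survive the archive. A hypothetical shell sketch of the same idea from point 6 -- push a file only when its modification time has changed since the last push, using touch -r timestamp markers -- might look like the following (run from cron every minute or so; CLIENTS, FILES, and STATE are all assumptions):

```shell
#!/bin/sh
# Push each file to every client only when its mtime has changed since
# the last push. One stamp file per pushed file records that mtime.
FILES="${FILES:-/etc/passwd /etc/shadow /etc/group}"
CLIENTS="${CLIENTS:-client1 client2}"
STATE="${STATE:-/tmp/pushsync}"   # directory holding the stamp files

push_if_changed() {
    mkdir -p "$STATE"
    for f in $FILES; do
        mark="$STATE/$(echo "$f" | tr / _)"    # stamp file for $f
        # push if never pushed, or if $f is newer than its stamp
        if [ ! -e "$mark" ] || [ "$f" -nt "$mark" ]; then
            for c in $CLIENTS; do
                echo rsync -ae 'ssh -x' "$f" "root@$c:$f"  # echo = dry run
            done
            touch -r "$f" "$mark"   # remember the pushed version's mtime
        fi
    done
}

push_if_changed
```

Dropping the leading echo turns the dry run into a real push; the same loop could equally live in a small daemon that sleeps between passes.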
> --------------------------------------------------------------------
> David Bussenschutt          Email: D.Bussenschutt at mailbox.gu.edu.au
> Senior Computing Support Officer & Systems Administrator/Programmer
> Location: Griffith University. Information Technology Services
>            Brisbane Qld. Aust.  (TEN bldg. rm 1.33) Ph: (07)38757079
> --------------------------------------------------------------------
>
>
>
>
> Donald Becker <becker at scyld.com>
> Sent by: beowulf-admin at beowulf.org
> 10/05/01 10:32 AM
>
>         To:     Tim Carlson <tim.carlson at pnl.gov>
>         cc:     Greg Lindahl <lindahl at conservativecomputer.com>,
>                 beolist <beowulf at beowulf.org>
>         Subject:        Re: NIS?
>
> On Thu, 4 Oct 2001, Tim Carlson wrote:
> > On Thu, 4 Oct 2001, Greg Lindahl wrote:
> >
> > > BTW, by slaves, do you mean "slave servers" or "clients"? There's a
> > > big difference. Having lots of slave servers means a push takes a
> > > while, but queries are uniformly fast.
> >
> > I meant clients.
> > 1 master, 50 clients.
> > The environment on the Sun side wasn't a cluster. 50 desktops.
>
> Completely different cases.
>  Workstation clients send a few requests to the NIS server at random
> times.
>  Cluster nodes will send a bunch of queries simultaneously.
>
> > Never had complaints about authentication delays. I just haven't seen
> > these huge NIS problems that everybody complains about.
>
> The problems are not failures, just dropped and delayed responses.  A
> user might not notice an occasional ten-second delay.  When even trivial
> cluster jobs take ten seconds, you'll notice.
>
> > If you were running
> > 1000 small jobs in a couple of minutes I could imagine having problems
> > authenticating against any non-local mechanism.
>
> Hmmm, a reasonable goal is running a small cluster-wide job every
> second.  I suspect the NIS delays alone take longer than one second with
> just a few nodes.
>
> > Our current cluster builds use http://rocks.npaci.edu/ for clustering
> > software. This system uses NIS.  I know it is odd to hear of any other
> > system than Scyld on this list,  but we have had good luck with NPACI
> > Rocks.
>
> We don't discourage discussions about other _Beowulf_ systems on this
> list.  We have thought extensively about the technical challenges of
> building and running clusters, and are more than willing to share our
> experiences and solutions.
>
> Donald Becker                             becker at scyld.com
> Scyld Computing Corporation               http://www.scyld.com
> 410 Severn Ave. Suite 210                 Second Generation Beowulf Clusters
> Annapolis MD 21403                        410-990-9993
>
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
>

