Compaq DP K6-350 Scyld Cluster
Eric T. Miller
emiller at techskills.com
Wed Aug 8 17:13:51 PDT 2001
I have abandoned the cluster project on the 500 MHz Celerons and shifted to a
9-node cluster consisting of old Compaq DeskPros with AMD K6-2/350
processors and the supported D-Link NICs. Incredibly, this cluster
functioned THE FIRST TIME, right out of the gate. I have some questions
regarding the current status of this setup.
1. The slave nodes appear as up even though I have done nothing with
the partition tables (written partition tables to the slaves, etc.). The
nodes just do everything from the boot floppy, and after Phase 3 they appear
as up. Am I right in assuming that writing the partitions is only necessary
to enable swap space or to install the image to the hard drive?
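For reference, here is roughly what I gather the partitioning step would look like under Scyld, assuming the `beofdisk` utility shipped with this release behaves as its man page describes (the flags below are my reading of the docs, not something I have run yet):

```shell
# Run on the master node while the slaves are up and running diskless.
beofdisk -q   # query the current partition tables on the slave disks
beofdisk -d   # generate a default partitioning (including swap) for each disk
beofdisk -w   # write the new partition tables out to the slave nodes
```

This requires a live cluster, so treat it as a sketch to be checked against the Scyld documentation rather than a recipe.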
2. Speaking of swap space, how will not having swap on the slaves affect
the performance of the cluster?
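As a sanity check, assuming this Scyld release includes `bpsh` (run a command on a slave from the master), one way to confirm whether a slave actually has swap is:

```shell
# Run on the master; node numbers come from bpstat.
bpsh 0 cat /proc/meminfo   # SwapTotal should read 0 kB if no swap was written
bpsh 0 cat /proc/swaps     # lists active swap areas on slave node 0, if any
```

Again, only a sketch; it needs a live slave node to run against.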
3. What programs are recommended to graphically display the abilities of
the cluster? I have used the Mandelbrot set fractal renderer that comes
packaged with Scyld; are there other good, widely used programs for
demonstrating a cluster? This cluster will be on display, museum style, to
demonstrate the capabilities of Linux, so a graphical program would be nice,
but command-line programs would be good too, as long as the output is easy
to follow.
4. Currently, my cluster is only 3 nodes (I need to buy more NICs). When
I run the Mandelbrot renderer, only one of the nodes seems to be processing
the job, judging by the output of BeoStatus: the processor % and network
bandwidth % on the other two slaves remain unchanged during the run. If I
run the job with only one node attached, it will register on that node,
regardless of which one it is. When I connect two or more slaves, it still
only registers resource usage on one node (usually the last node I brought
up).
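In case it helps anyone answering: my understanding is that the process count has to be requested at launch, e.g. with the MPICH-style `mpirun` that Scyld ships (the program name below is just a placeholder for the demo binary):

```shell
# Ask for 3 processes; Scyld's mpirun should spread them
# across the slave nodes that bpstat reports as up.
mpirun -np 3 ./mandelbrot
```

If the demo is launched without asking for multiple processes, I would expect exactly the behavior I am seeing, with everything landing on one node.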
5. Is it common for slave nodes to bounce their status "up" and "down" for
no reason? I can get all nodes into the up status at once, but after several
minutes one will go down, then several minutes later, another. There
doesn't SEEM (to my newbie eye, anyway) to be an obvious reason for this: no
kill messages, IP conflicts, etc. It also doesn't seem to happen at regular
intervals or for any apparent reason. A simple reboot brings the node right
back up.
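When a node flaps, this is what I have been checking on the master (assuming the standard Scyld tools and the usual syslog location; I may be looking in the wrong place):

```shell
bpstat                        # current up/down state of each slave node
tail -n 50 /var/log/messages  # any bproc or network errors around the time a node dropped
```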
I finally got a Beowulf cluster to function! What a great feeling. Thanks
for all your support. Now if I can just get more than one node to work at
once...
Eric T. Miller
MCP, CCNA, CCDA, N+, A+, i-Net+
614.891.3200 ext. 105
More information about the Beowulf mailing list