The FNN (Flat Neighborhood Network) paradox
Horatio B. Bogbindero
wyy at cersa.admu.edu.ph
Wed Feb 21 07:36:47 PST 2001
> I will try to prepare first 4 node beowulf tonight :)
> I will put to every node 3 nics and I connect this without any switch.
> After this I will put in every node only one nic and I compare the
> benchmark results.
well, doing it with only 4 nodes would not really be a good test. it would
be better to channel bond a 4 node beowulf. frankly, FNN is practical for
large clusters, say 64 nodes and above with fast ethernet hardware. for
anything smaller, channel bonding would answer your bandwidth requirements
(it is quite expensive though compared to FNN).
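to make that cost comparison concrete, here is a toy cost model in python. every price in it is a made-up placeholder, not a real quote, so substitute your own numbers before drawing any conclusion:

```python
# toy cost model: channel bonding vs FNN wiring. all prices below are
# hypothetical placeholders -- plug in real vendor quotes before deciding.

def bonding_cost(nodes, nics_per_node, nic_price, big_port_price):
    # channel bonding: every NIC needs a port on one big (pricier) switch
    return nodes * nics_per_node * (nic_price + big_port_price)

def fnn_cost(nodes, nics_per_node, nic_price, small_port_price):
    # FNN: the same NIC count, but ports live on cheap commodity switches
    return nodes * nics_per_node * (nic_price + small_port_price)

print(bonding_cost(64, 4, 30, 40))  # 17920
print(fnn_cost(64, 4, 30, 10))      # 10240
```

the only structural difference the model captures is the per-port price of one large bonding-capable switch versus many small commodity switches, which is where the FNN savings come from.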
> After this we can say if this giving some performance. I know so 4 nodes
> is not enough but I don't have enought
> free hardware now.
scaling properties...hmmm...you can try it just for the sake of
experimentation, and i would really appreciate it if you could email me
the results of your runs so that we can make modifications and
optimizations to the topology if necessary.
btw, this topology works not only for clusters. it is also good for
building vector parallel machines. it may not be as good as a complete
graph, but it is cheaper and may yield a better price/performance ratio.
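the defining property of a flat neighborhood network is that every pair of nodes shares at least one switch, so any two nodes are a single switch hop apart. a minimal sketch for checking that property on a candidate NIC-to-switch layout (the 4-node/2-NIC layout below is a made-up example, not one from the paper):

```python
# check the FNN "flat neighborhood" property: every pair of nodes must
# have at least one switch in common.
from itertools import combinations

def is_flat_neighborhood(nics):
    """nics maps node -> set of switches its NICs plug into."""
    return all(nics[a] & nics[b] for a, b in combinations(nics, 2))

# hypothetical 4-node layout, 2 NICs per node, 3 small switches
layout = {
    "n0": {"sw0", "sw1"},
    "n1": {"sw0", "sw2"},
    "n2": {"sw1", "sw2"},
    "n3": {"sw0", "sw1"},
}
print(is_flat_neighborhood(layout))  # True: every pair shares a switch
```

a brute-force check like this is enough for small layouts; finding a good layout for a big cluster is the hard optimization problem the FNN paper talks about.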
> The reason for this test is the potential problem when some nics access
> memory and PCI bus - I think so this can't give any reasonable speedup
> because 3 nics can't (probably) transmit and receive data in the same
> however I hear about chanell bonding in beowulf - so mayby I am in wrong
well, that is a potential bottleneck. but with PCI 66 hardware out and 64
bit network interface cards, things may change. besides, PCI 33 with 4 NICs
is workable, since people report good benchmarking results with channel
bonding on such setups.
> Does anybody have some results about this type of comaprision ? How many
> nics is ideal for clusters ?
there is some discussion in the paper that may help you decide. but one
of the major factors for choosing 4 NICs instead of 5 is the number of
PCI slots available in a typical motherboard (not a superduper
special mobo). 4 was more of a practical choice. btw, there is an
Intel EtherExpress Pro/100 4-port NIC available; you can contact intel
about it.
> The teoretical PCI bandwith say : 132Mbytes so if the nic drivers is
> good than 5 100Mbits nics will work well but the theory is only theory
5 NICs X 100Mbits / 8 = 62.5Mbytes, which is way below the PCI max of
132Mbytes. however, everything else lives on that bus too, such as IDE,
ISA bridges and other devices.
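the arithmetic is easy to sanity check in a couple of lines of python:

```python
# aggregate one-way bandwidth of the NICs versus the theoretical
# 32-bit/33MHz PCI peak of 132 MB/s.
PCI_33_PEAK_MBYTES = 132

nics = 5
per_nic_mbits = 100
aggregate_mbytes = nics * per_nic_mbits / 8   # bits -> bytes

print(aggregate_mbytes)                        # 62.5
print(aggregate_mbytes < PCI_33_PEAK_MBYTES)   # True
```

of course that 132 MB/s is a theoretical peak shared with everything else on the bus, so real headroom is smaller.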
> btw. I know so tonight is not enough too :))
cool. good luck with your tests. we are looking forward to seeing some
results from you.
> "Horatio B. Bogbindero" wrote:
> > the beauty of these types of network neighborhoods is that they have
> > greater aggregate bandwidth that just a single 100mbps line. if you use
> > the single switch solution your top bandwidth per line will be 100mbps.
> > if you use an network neighborhood method like the FNN then you aggregate
> > and bisection bandwidth per node is greater than a single 100mbps.
> > ...................[cutted].......................
> > hope this clears things up a bit.
> > >
> > > I just plan to make beowulf with 100M nic with network architecture
> > > described on the http://aggregate.org/FNN.
> > > But I am in trouble because I found so this is without sense for me :(
> > > I was count the prices for architectures prepared from 8/16/24/and
> > > greater switches.
> > > I was count the total network cost ie : cost of switches, nics,
> > > patchcoords and patchpanels.
> > > My results say : ONLY 1 switch giving good price !
> > >...........[cutted].........................
William Emmanuel S. Yu
Ateneo Cervini-Eliazo Networks (ACENT)
email : william.s.yu at ieee.org
web : http://cersa.admu.edu.ph/
phone : 63(2)4266001-5925/5904
Man is the best computer we can put aboard a spacecraft ... and the
only one that can be mass produced with unskilled labor.
-- Wernher von Braun