Mathematics of gigabit question

Jared Hodge jared_hodge at iat.utexas.edu
Fri Dec 7 13:17:20 PST 2001


Mark and fellow Beowulfers,
        A friend of mine reminded me about a couple of other things that I
thought I'd mention.  When we did testing on our Myrinet equipment, we
got 256 MB/sec through the PCI bus, which was set at 33 MHz (I forgot to
remove the jumper for 66 MHz).  This limited our Myrinet link to
160 MB/sec.  After removing the jumper and getting 66 MHz, we got the
full speed of the Myrinet.  This shows me that, at least when using
Myrinet, I lost about 35% of my PCI bandwidth in the PCI-to-Myrinet
link.  I would assume that ethernet and TCP/IP would be worse.
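        For the curious, the arithmetic behind that 35% figure is below as a
small Python sketch (the 256 and 160 MB/sec values are the measurements
mentioned above; the exact loss works out to about 37.5%):

    # PCI-to-Myrinet efficiency, from the measurements above
    pci_bw     = 256e6   # throughput we measured through the PCI bus, bytes/sec
    myrinet_bw = 160e6   # what the Myrinet link actually sustained, bytes/sec
    loss = 1.0 - myrinet_bw / pci_bw
    print("PCI bandwidth lost in the link: %.1f%%" % (loss * 100))  # 37.5%
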
        I decided to do a little more checking at Intel's website.  Looking
at the spec page for their Intel Pro/1000 T Desktop Adapter, I realized
immediately that it wasn't a 64-bit card.  I guess they assume most
people can't tell these things by looking at the picture, because I
really had to dig to find out that it is in fact a 32-bit PCI card that
can run at 33 or 66 MHz (I don't think 32/66 is a standard, but I'm
sure it will be soon if Intel is making cards like this).  Even if you
got it running at 66 MHz, you still wouldn't be able to get full duplex
at full speed like they advertise.  Apparently they have a line that
uses PCI-X, which must be where they are getting their numbers.  Of
course I've never even seen a computer with a PCI-X bus, except maybe on
the alien spaceship that has the CPU goo (before people reply thinking
I've been kidnapped by aliens, see the previous message).  I think the
fact that they could sell a NIC that is incapable of running at its
advertised specs is disgraceful, and I hope this encourages people to do
their homework before buying hardware.  If anyone wants to check the
spec sheet for the Intel Pro/1000, the links are below:
http://www.intel.com/network/connectivity/resources/doc_library/data_sheets/gigabit_over_copper_adapters.pdf
http://www.intel.com/network/connectivity/products/pro1000t_desktop_adapter.htm

Jared

"Mills, Mark" wrote:
> 
> Thank you for your reply; I really enjoyed seeing an opinion that had some
> mathematical basis to it.  You gave me some mathematical perspectives I had
> not considered. I fully agree with you - marketing people should be shot for
> their twisting of facts!
> 
> -----Original Message-----
> From: Jared Hodge [mailto:jared_hodge at iat.utexas.edu]
> Sent: Friday, December 07, 2001 11:13 AM
> To: Mills Mark
> Cc: beowulf at beowulf.org
> Subject: Re: Mathematics of gigabit question
> 
> Mark,
>         These are actually very good questions that a lot of people have,
> which is why I decided to CC the beowulf mailing list.  Perhaps someone else
> on the list could do better, since I'm not an expert on PCI or GigE, but
> I'll give answering your questions my best shot.
> 
> First, I'm afraid Intel over-idealizes their PCI numbers just a bit (plus I
> think they are going with a strange definition of MB).  OK, 1 MB is
> 1024*1024 bytes = 1,048,576 bytes.  For some reason manufacturers
> (especially hard drive builders) tend to go with an even 1,000,000 bytes and
> pretend that it's 1 MB (actually I know the reason, and it's not because the
> math is easier, it's because it makes their products look bigger).  Now,
> where does Intel get their numbers?  Here's what I think they did:
> 
> 33,000,000 cycles/sec (that's 33 million cycles per second, or 33 MHz) * 4
> bytes/cycle (32 bits = 4 bytes) = 132,000,000 bytes/sec (132 small MB/sec,
> or 125.9 real MB/sec).
> 
> Similarly, you get 264 MB/sec (really 251.8 MB/sec) and 528 MB/sec
> (really 503.5 MB/sec) for 64/33 and 64/66 PCI respectively.  OK, that's
> their bad math; now the over-idealized part is that they may not be telling
> you that this is shared between all PCI slots on the same bus and that it is
> for both directions (total bandwidth for the PCI **BUS**, meaning shared).
> Also, no matter what specification is given, you'll never get full
> connection speed over any link because of various overhead costs.  Measuring
> actual communication speed (using a tool I got from Myricom and motherboards
> we actually have) I get:
> 
> 32/33 = 128 MB/sec (really 122 MB/sec)
> 64/33 = 250 MB/sec (really 238 MB/sec)
> 64/66 = 512 MB/sec (really 488 MB/sec)
> 
> Note: just multiply the 1,000,000-byte megabyte figures by
> 1,000,000/1,048,576 = 0.95367431640625 to get the actual values.
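> 
> If you want to check these numbers yourself, here's a quick Python sketch
> of the width-times-clock formula with the conversion factor above (nothing
> here beyond the arithmetic already shown):
> 
>     # Theoretical PCI peak: bus width in bytes * clock, printed in
>     # marketing MB (10^6 bytes) and real MB (2^20 bytes).
>     REAL_MB = 2**20                          # 1,048,576 bytes
>     for width, clock in ((32, 33), (64, 33), (64, 66)):
>         peak = (width // 8) * clock * 10**6  # bytes/sec
>         print("%d/%d PCI: %d MB/sec (really %.1f MB/sec)"
>               % (width, clock, peak // 10**6, peak / float(REAL_MB)))
>     # prints 132 (125.9), 264 (251.8), and 528 (503.5)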
> 
> OK, now before I launch into GigE cards, there is one caveat.  When you are
> going from a PCI connection to any network connection, you are talking about
> a totally different type of communication protocol.  I haven't studied the
> intricacies of the PCI protocol, but knowing all of the overhead TCP/IP has,
> the conversion takes time.  This means buffering is required while
> processing occurs on the NIC, and extra processing is required on the
> system processor.  My point is that this is not a one-to-one conversion, so
> we are glossing over quite a few unknowns.  Maybe someone else on the
> mailing list could give you a few more details.  My guess is that the PCI
> bus has less overhead than the NIC, but to be honest I don't know for sure.
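> 
> To put a rough number on that TCP/IP overhead (a back-of-the-envelope
> sketch; it assumes standard 1500-byte frames with no IP or TCP options,
> and it ignores ACK traffic and CPU/interrupt costs entirely):
> 
>     # Best-case one-way TCP payload efficiency on gigabit ethernet
>     preamble, eth_hdr, fcs, gap = 8, 14, 4, 12  # per-frame framing, bytes
>     ip_hdr, tcp_hdr, mtu = 20, 20, 1500         # headers without options
>     payload = mtu - ip_hdr - tcp_hdr            # 1460 bytes of user data
>     wire    = preamble + eth_hdr + mtu + fcs + gap  # 1538 bytes on the wire
>     eff = payload / float(wire)                 # about 0.949
>     print("max one-way TCP goodput: %.1f MB/sec"
>           % (eff * 125))                        # ~118.7 (decimal MB)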
> 
> I believe most ethernet devices can work in either full duplex or half
> duplex mode (I think these terms are a little weird, especially half
> duplex.  Seems like it should be just "duplex" and "not duplex at all", but
> that wouldn't sell NICs).  That's 1000 Mbps in each direction, or 125 MB/sec
> (really 119 MB/sec) one way, and 250 MB/sec (really 238 MB/sec) total in
> full duplex mode.  So to try to compare apples to apples, for a half duplex
> link you've got (I'll use real MB, since I refuse to conform to marketing
> ploys): 32/33 PCI = 122 MB/sec vs. half duplex GigE = 119 MB/sec.
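> 
> In code, the apples-to-apples comparison looks like this (same real-MB
> conversion as before):
> 
>     # GigE wire rates vs. the 32/33 PCI bus, in real MB (2^20 bytes)
>     REAL_MB = 2**20
>     gige_one_way = 1e9 / 8           # 125,000,000 bytes/sec per direction
>     gige_duplex  = 2 * gige_one_way  # both directions at once
>     pci_32_33    = 4 * 33e6          # 132,000,000 bytes/sec, shared, both ways
>     for name, bw in (("half duplex GigE", gige_one_way),
>                      ("full duplex GigE", gige_duplex),
>                      ("32/33 PCI bus   ", pci_32_33)):
>         print("%s: %.0f MB/sec" % (name, bw / REAL_MB))  # 119, 238, 126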
> 
> Seems like it should work, right?  Well, the problem is that aside from the
> unknown overhead costs that I mentioned above (which could already mean the
> GigE NIC is getting starved a little), we have to figure out where that data
> is coming from.  If you want to sustain a full-speed link for any length
> of time with real data, you've got to get lots of data from somewhere, which
> means it's probably not all in physical RAM (it very well could be, but we
> don't want to depend on this when designing a system).  That means the hard
> drive is probably working some.  Well, with most Intel PC chipset designs,
> this goes through the south bridge (or whatever they are calling it now).
> You would have to look at some board-specific diagrams of your motherboard
> to know for sure, but this often means that any traffic from the hard drive
> goes through the PCI bus, since the PCI bus connects the north and south
> bridges.  On newer motherboards this isn't a problem, since there is a
> separate connection from the north bridge to the south bridge (I think AMD
> calls it HyperTransport; I don't remember what Intel calls it).  Again, I
> know north bridge and south bridge aren't the latest terms, but it's gotten
> to the point where even the names for motherboard components are just
> marketing.  Anyway, you're bound to be doing something over the PCI bus
> besides just communicating with the NIC, so I imagine the NIC isn't going
> to be fully fed, but I don't have any hard numbers to give you for this.
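> 
> To make that concrete (purely illustrative; the 40 MB/sec disk figure is
> an assumption I made up for the example, not a measurement):
> 
>     # What's left for the NIC if disk traffic shares the same 32/33 bus
>     REAL_MB = 2**20
>     bus  = 128e6   # measured 32/33 PCI throughput from above, bytes/sec
>     disk = 40e6    # hypothetical sustained disk traffic crossing the bus
>     nic  = bus - disk
>     print("left for the NIC: %.0f MB/sec" % (nic / REAL_MB))  # ~84
>     # ...well short of the 119 MB/sec a half duplex GigE link can carry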
> 
> Obviously, a NIC operating at full speed in full duplex on a 32/33 PCI bus
> doesn't have much of a chance of staying completely fed.  I imagine you'd do
> fine with 64/33, since the chances of actually needing full speed in full
> duplex for any length of time are very slim, and the PCI bus could do a
> pretty good job of feeding it anyway.  If you're going with a really
> high-end NIC, though, it makes sense to keep it fed as well as possible,
> which may even mean 64/66 PCI.  I imagine a lot of GigE NICs that are half
> duplex are only 32/33, and those that are full duplex are only 64/33.  Why
> make a more expensive NIC when 99% of your market wouldn't know the
> difference and you can already say it's GigE?
>         Did I mention the marketing ploys involved in all this?  Actually, I
> think it's funny how dumb the big manufacturers think the public is.
> Whether it's in designing NICs (not dumb in this case, just maybe
> uninformed), or (very dumb in this case) in showing us aliens flying in
> space who are mystified by the power of a little chip that they can drop in
> strange goo (wish I had some, might be useful) and then do all sorts of
> wonderful things with, like get stereo sound (wow, my speakers just got
> better) and edit pictures (wow, my software just got better).  Those
> earthlings down there sure have advanced technology...
> At least they don't have people flying all over the place for no reason.
> 
> 
> > "Mills, Mark" wrote:
> >
> > I read some of your posting on "64-bit PCI - 66 Mhz vs 33 Mhz
> > networking performance?" at http://www.beowulf.org  If you could clear
> > up 2 questions that have been bothering me, I would appreciate it.
> >
> > 1.      I have often heard that a 32-bit/33 MHz PCI bus cannot fully
> > utilize a gigabit ethernet card.  But if a 32-bit/33 MHz PCI bus can
> > handle an aggregated ideal peak of 132 MB/s (32 bits x 33 MHz =
> > 1056 Mbits/s, or 132 MBytes/s), then why not?  A gigabit card only
> > passes 125 MB/second (1000 Mbits/s / 8 = 125 MB/s), right?  Or do they
> > mean it can only work in half duplex mode and not full duplex, for a
> > total of 250 MB/s?
> >
> > 2.      If a 32-bit/33 MHz PCI bus can handle 132 MB/s (32 bits x
> > 33 MHz = 1056 Mbits/s, or 132 MBytes/s), does that mean that when the
> > NIC is running in full duplex mode, the data being sent runs at
> > 66 MB/sec and the simultaneous data being received at 66 MB/sec, for a
> > total transfer rate of 132 MB per second?
> >
> > Thanks for any help you can give.
> > Mark Mills
> >
> > Voice 281-370-3861
> > Fax     281-370-3801
> > Email  Mark.Mills at Desktop-Assistance.com
> >
> > Intel's site gives this info on gigabit ethernet and today's PCI
> > slots: "You need to keep in mind the (theoretical) peak of the PCI bus:
> > 32 bits/33 MHz : aggregated ideal peak at 132 MB/s
> > 64 bits/33 MHz : aggregated ideal peak at 264 MB/s
> > 64 bits/66 MHz : aggregated ideal peak at 528 MB/s"
> > Found at
> > http://support.intel.com/support/network/adapter/1000/sb/1010453072955946.htm
> 
> --
> Jared Hodge
> Institute for Advanced Technology
> The University of Texas at Austin
> 3925 W. Braker Lane, Suite 400
> Austin, Texas 78759
> 
> Phone: 512-232-4460
> Fax: 512-471-9096
> Email: Jared_Hodge at iat.utexas.edu

-- 
Jared Hodge
Institute for Advanced Technology
The University of Texas at Austin
3925 W. Braker Lane, Suite 400
Austin, Texas 78759

Phone: 512-232-4460
Fax: 512-471-9096
Email: Jared_Hodge at iat.utexas.edu


