SMC 8624T 24-port 10/100/1000 switch
bob at drzyzgula.org
Tue Nov 12 12:02:09 PST 2002
On Tue, Nov 12, 2002 at 11:34:09AM -0500, Mark Hahn wrote:
> > From what I can tell this is an unmanaged switch,
> > that doesn't support port aggregation or
> > mirroring, jumbo frames, etc?
> does standard port aggregation work with the kernel?
> would anyone care about managable switches in a cluster environment?
> shame about jumbo frames, but doesn't the use of interrupt mitigation
> by the kernel ameliorate that lack?
Personally, I was more interested in port aggregation
for switch-to-switch links.
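To Mark's kernel question: Linux does this with the bonding driver. A rough sketch of the host-side setup (the interface names and address are made up; mode=4 is 802.3ad/LACP, which needs a bonding driver recent enough to offer it -- older drivers only have mode=0 round-robin and friends):

```shell
# Load the bonding driver in 802.3ad (LACP) mode; miimon polls
# link state every 100ms so dead slaves get dropped.
modprobe bonding mode=4 miimon=100

# Bring up the virtual bond interface, then enslave the physical
# NICs to it (eth0/eth1 and the address are just examples).
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```

Of course the switch end has to speak 802.3ad too, which is exactly where an unmanaged box leaves you stuck.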
Also, I'm guessing that one of the biggest reasons
that the HP doesn't support jumbo frames is that they've
provided you with no way to turn them off. They've
also provided you with no way to turn autonegotiation
off, so I suspect you'd have trouble with an uplink
on these in many Cisco shops :-(
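Both of those are switch-side knobs the HP doesn't give you; for what it's worth, the corresponding host-side settings under Linux look roughly like this (a sketch with 2002-era net-tools -- eth0 and the chosen media type are assumptions about your setup):

```shell
# Jumbo frames: raise the interface MTU. Only useful if every
# device in the broadcast domain -- switch included -- accepts it.
ifconfig eth0 mtu 9000

# Autonegotiation: see what was negotiated, then hard-set the
# media type to match a hard-set switch port.
mii-tool eth0
mii-tool -F 100baseTx-FD eth0
```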
> I suspect the lack of jumbo frames just reflects the size of
> buffers attached to each port. I also guess that HP/d-link/smc/etc
> are all using the same chipset, since they do seem to offer
> pretty much exactly the same performance specs:
> actually, the latency figures are interesting (and I don't remember
> seeing them on the other specs). I'm guessing there's a modular,
> 8pt chip that supports glueless connection to two peers. the HP specs
> show linear scaling for throughput, fabric speed and MAC table size;
> it's sort of interesting that the max latency goes from 2.5 to
> 12us in the 3-way config. I expect that means that the throughput
> rating only holds if you don't cross chip boundaries as well...
> the dlink sheet says "8 Mbits of buffer per device" - I wonder
> if that's per 8-port chip? the SMC datasheet is almost identical
> except that it says "2 Mb per system" (it's also managed, does
> link aggregation)
I don't know for sure, but it seems likely that most of these
are using either the Broadcom StrataXGS chips or the Marvell
Prestera-EX/FX chips, and at that I'd guess it was more likely
the Broadcom.
The Broadcom chipset includes an 80Gbps fabric with four
10Gbps ports, and a 12-port 1Gbps switch with a 10Gbps uplink.
A vendor can easily use these to build a 48-port standalone
Gigabit switch. They also make a 160Gbps fabric with eight
10Gbps ports, so this can be expanded to 96 ports. The
12-port 1Gbps switch chips can be connected back-to-back to
form a standalone 24-port switch; it's possible that the
HP uses this configuration while the SMC uses a more complex
configuration that allows the connection of the 4 GBIC ports.
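The port arithmetic for those building blocks works out like this (just a back-of-the-envelope sketch using the chip counts above):

```shell
# 80Gbps fabric: four 10Gbps fabric ports, each feeding a
# 12-port 1Gbps switch chip.
echo $((4 * 12))   # 48-port standalone switch

# 160Gbps fabric: eight 10Gbps fabric ports.
echo $((8 * 12))   # 96 ports

# Two 12-port chips back-to-back, no fabric chip at all.
echo $((2 * 12))   # 24-port switch
```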
The Marvell chipset includes pretty much the same kind of
functionality, although I can't find as much information on
Marvell's website. They quote their fabric as doing 50Gbps
rather than Broadcom's 80Gbps, but it appears that Broadcom
is counting both directions in a full duplex connection, whereas
Marvell presumably is counting only one.
These are both designed to hit the $100/port price point
for Gigabit switches, which is regarded as the point at which
wholesale adoption of Gigabit will start to occur.
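The discrepancy between the two fabric numbers is mostly bookkeeping; counting each 10Gbps full-duplex fabric port in both directions doubles the headline figure:

```shell
# Four 10Gbps full-duplex fabric ports, counted both ways:
echo $((4 * 10 * 2))   # 80 Gbps -- Broadcom's headline number

# The same fabric counted one way:
echo $((4 * 10))       # 40 Gbps, in the same ballpark as
                       # Marvell's quoted 50 Gbps
```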
National has a 16-port Gigabit chip in development, and Intel
as well as several start-ups are also working on this stuff,
but Broadcom and Marvell appear to be the market leaders at
the moment.