[Beowulf] Win64 Clusters!!!!!!!!!!!!
landman at scalableinformatics.com
Sun Apr 8 20:59:40 PDT 2007
Jon Forrest wrote:
> Joe Landman wrote:
>> No. The extra registers make compiler optimization work better (lower
>> register pressure). The flat memory model (doing away with segmented
>> registers) simplifies addressing, and reduces the work of the processor
>> for every physical address calculation (no more segment + offset
>> operations to perform *before* pushing out the address onto the wires).
> Right. I was talking about the difference in running an application
> that fits in 32 bits in a 64-bit environment. There's a flat memory
> model when it runs this way, just as when it runs in a 32-bit
> environment.
??? Flat memory is non-segmented by definition. Would you care to
point out the flat memory addressing mode on x86 which can access all
4 GB of RAM? I am sure I missed it.
> The tests I made myself were non-HPC, e.g. building large software
> packages. But, I'm a reasonable person and I'll be glad to
> modify my statement to say that 64-bit computing is oversold
> in non-HPC markets. For example, when you look at pretty much
Ok, here is where I guess I don't understand your point in posting this
to an (obviously) HPC list then. Is it oversold in the gamer market?
In the DB market?
I have found very few exceptions to 64-bit being faster. There are
some, but not many. The arguments leveled by those claiming that you
lose half your cache data density have been demonstrated not to be a
significant concern for quite a number of real-world apps.
> any AMD-based computer these days, and compare it to what
> was available ~2 years ago (I'm not sure of the exact date), what
> difference do you see on the front panel? You'll see "AMD Athlon"
> in both cases, but now you also see "64". On the majority
> of computers being sold, this makes no difference. (HPC users
... and your point is .... that the 64 appendage makes no difference
when you are running the chip in 32-bit mode (e.g. Windows)?
> are different). I bet most people think that since 64 is bigger
> than 32 then a 64-bit computer is "better". Yet, this isn't the
OK. I might suggest avoiding conflating marketing with technology.
Also note that Athlon64 units do run noticeably faster than Athlon
units of similar clock speed in 32-bit mode. The desktop I am typing
this on now is a 1.6 GHz Athlon 64 currently running a Windows XP
install, and is noticeably (significantly) faster than the system it
replaced (a 2 GHz Athlon). The minimal benchmarks I have performed
indicate a 30% ballpark on most everything, with some operations
tending towards 2x faster.
> case for them, especially if they're using a modern version of
> Windows, which is what the original posting was about. These days you
> also see "X2" which is a different kettle of fish and is, if anything,
> being undermarketed.
Undermarketed? Not the way I see it (see the Intel ads on TV).
>> This is a bold assertion. Sort of like the "no program will ever use
>> more than 640k of memory" made by a computing luminary many moons ago.
> Bill Gates says he never said that. In any case, most of that was
> due to the architectural inferiority of the x86 at the time.
> What I'm talking about is a real limit in the complexity of
> what a human, or group of humans, can create. Please name a
> piece of software, free or commercial, that needs more
> than a 32-bit address space for its instruction space.
> As far as I know, there isn't any such thing. Not even close.
>>> about the data segment of a program. Also, people tell
>>> me that there are programs that were generated by other
>>> programs that are larger than 32 bits. I've never seen
>>> one, but maybe they exist, and that's what I'm talking
>>> about human written programs.
>> I am sorry, but I think this may be an artificial strawman.
> If so, I don't see it. If my statement is true, that is that
You set up an argument for the sole purpose of knocking it down: "no
32-bit address space needed for instruction text" ... "a real limit
in the complexity of what a human, or group of humans, can create".
Of course, no program I have seen is ever *just* instruction text.
There are symbols, data sections, and other sections. Whether or not
ld can link anything that large is not something I am directly aware
of, not having done it myself.
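Whether any given binary's instruction text approaches a 32-bit limit
is easy to check: `size` reports the text, data, and bss section
sizes. A quick sketch (using /bin/ls as a stand-in; substitute
whatever large application binary you have handy):

```shell
# Report section sizes in bytes; the "text" column is the
# instruction text the claim above is about.
size /bin/ls

# Per-section detail, showing that text is only one of several
# sections contributing to the binary's total size (assumes an
# ELF binary and GNU binutils).
readelf -S /bin/ls | grep -E '\.text|\.data|\.bss'
```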
My point with Fortran is that the size of the binary (which is more
relevant than the size of the simple instruction text) may in fact be
quite large.
And I still claim that your assertion is bold and difficult to support
or contradict. Way back in the good old days of DOS, we regularly
generated binaries that were 700+ kB in size. We had to use overlays
and linker tricks to deal with it. This was around the time of DOS 6.
OS/2 was a breath of fresh air: my codes could hit 2-3 MB (binary
size) without too much pain. I had 16 MB on the machine, so it was
"plenty of room". I was not, however, of the opinion that 16 MB was
all I would ever need.
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax : +1 734 786 8452 or +1 866 888 3112
cell : +1 734 612 4615