Reliability of Beowulf (memory limits)
reid_huntsinger at merck.com
Mon Sep 16 15:01:15 PDT 2002
A PC's memory limit is a function of how many memory slots are on the mainboard and
what "chipset" is used. Higher-end ServerWorks chipsets, for example, can
support very large amounts of RAM.
To access more memory than a 32-bit address can reach (4 GB), Linux
(like any other OS on this hardware) uses an Intel hardware provision called
Physical Address Extension (PAE), which does some sleight-of-hand with 4 extra
physical address bits (36 bits total) to make use of the available RAM, up to 64 GB.
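As a quick arithmetic check (plain Python, purely illustrative, not Linux-specific code), the address-width numbers above, and the array-size figure from the original question, work out like this:

```python
# Address-space arithmetic for 32-bit x86, with and without PAE.
GiB = 2**30

plain_32bit = 2**32    # 32 address bits
pae_36bit = 2**36      # PAE adds 4 bits -> 36-bit physical addresses

print(plain_32bit // GiB)   # 4  (GB addressable without PAE)
print(pae_36bit // GiB)     # 64 (GB addressable with PAE)

# Largest real*8 (8-byte) array indexable with a signed 32-bit offset,
# matching the A(268435455) figure in the original question:
print((2**31 - 1) // 8)     # 268435455 elements
```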
Regardless of the amount of physical RAM, Linux allows a single process to
address 3 GB of memory (maybe up to 0.5 GB more if kernel parameters are
modified). So the extra RAM is only helpful if there are several memory-hungry
processes.
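Where the 3 GB figure comes from, sketched in plain arithmetic (the exact user/kernel split is a kernel build option; the default i386 split is assumed here):

```python
# Default i386 Linux virtual address space split: 3 GB user / 1 GB kernel.
GiB = 2**30
virtual_space = 4 * GiB     # everything a 32-bit pointer can name
kernel_share = 1 * GiB      # reserved for the kernel mapping (default split)
user_share = virtual_space - kernel_share

print(user_share // GiB)    # 3 GB usable by each process
```

This is why PAE does not help a single process: each process still sees only a 32-bit virtual address space, however much physical RAM the kernel can reach.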
If you want a single process to address more memory than this, you really
need a 64-bit machine (not necessarily a supercomputer). But you may be able
to rewrite your application as several processes, each of which needs a
smaller (say < 3 GB) amount of RAM. These can then be distributed across a
Beowulf cluster, for example. Thankfully, the tools you need are all free.
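A sketch of that split-into-processes idea, with hypothetical numbers (a real distributed run would use something like MPI for the communication; this only shows the decomposition arithmetic):

```python
# Sketch: partition a 12 GB array of 8-byte reals across nodes so that
# each node's slab fits under a 3 GB per-process budget. Sizes are
# illustrative, not measured from any real application.
GiB = 2**30
total_bytes = 12 * GiB      # whole problem
limit_bytes = 3 * GiB       # per-process address-space budget

nodes = -(-total_bytes // limit_bytes)        # ceiling division
elems_per_node = (total_bytes // nodes) // 8  # real*8 elements per slab

print(nodes)            # 4 nodes needed
print(elems_per_node)   # 402653184 elements (3 GiB) per node
```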
If you're thinking of making a network of PCs into a machine with a unified
> 4 GB virtual address space, that is, as far as I know, a research topic.
Date: Mon, 16 Sep 2002 05:48:36 -0700 (PDT)
From: Toof Chanon <tooff2 at yahoo.com>
Subject: Reliability of Beowulf
To: beowulf at beowulf.org
I am a newbie to Beowulf and computer architecture. I don't understand how a
Beowulf PC cluster can manage huge memory usage. Please correct me if I am
wrong. Each PC, i.e. each Beowulf node, can only provide a 32-bit address
space, so an array A(n) is limited to n < (2**31-1)/nb, where nb is the
number of bytes per element; e.g. for a real*8 array, A(268435455), which
depends on the (Fortran) compiler. Currently, 2-3 GB of physical RAM is the
PC limit. Suppose I have 2 nodes; then 4 GB is the total shared physical
RAM. OK, I know there is a Linux kernel which supports > 2 GB of RAM. Do I
need a special compiler which supports > 2 GB addressing as well? If so, I
guess such a compiler may be too expensive, and there would be no advantage
to going Beowulf. I believe a Beowulf is meant to be cheap and robust. My
points are:
How can a Beowulf cluster manage huge memory (> 2 GB) using common 32-bit
hardware? Is it true that Beowulf provides only parallel processing, but does
not tackle a huge problem such as a big array requirement/management, so that
we have to turn to a supercomputer again?
These may be stupid questions, but I am a student looking for a
high-performance machine for doing research, and a Beowulf is the only one
within my reach. I would like your suggestions so that I can convince my
supervisor to go with Beowulf for a big problem.
Any suggestion will be appreciated.