Max common block size, global array size on ia32

Joe Griffin joe.griffin at mscsoftware.com
Wed Jul 24 07:14:03 PDT 2002


Hi Craig,

I can get your code to run by adding the following
to the "append" entry of /etc/lilo.conf:

task_unmapped_base=0xB0000000

then rerunning lilo.
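
For reference, the stanza would end up looking roughly like the
sketch below; the image path, label, and root are just placeholders
for whatever is already in your lilo.conf, and this assumes your
kernel accepts task_unmapped_base on the command line (the MSC.Linux
kernel does; a stock kernel may need a patch):

  image=/boot/vmlinuz
      label=linux
      read-only
      root=/dev/hda1
      append="task_unmapped_base=0xB0000000"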

task_unmapped_base moves the base address at which the kernel
places mmap()ed regions (the shared libraries and other system
information your large array is stomping on), so the data segment
has more room to grow before it collides with them.
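
If you want to see the collision for yourself, here is a rough
check (a sketch only; ia32 and a 2.4-era kernel assumed).  It
compares where the end of a large bss array lands with where a
fresh mmap() is placed.  The mmap area starts at task_unmapped_base,
which defaults to 0x40000000 (1 GB) on ia32, and since ia32 binaries
are loaded at 0x08048000 (roughly the 128 MB mark), an array of
about 896 MB ends right at that boundary, which matches the 895/896
MB cutoff you are seeing:

#include <stdio.h>
#include <sys/mman.h>

/* 512 MB: big, but small enough that the program still starts */
char ar[512*1024*1024];

int main(void)
{
    /* a fresh anonymous mapping goes into the mmap area, which
       begins at task_unmapped_base */
    void *m = mmap(NULL, 4096, PROT_READ|PROT_WRITE,
                   MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);

    printf("end of array: %p\n", (void *)(ar + sizeof(ar)));
    printf("fresh mmap:   %p\n", m);
    return 0;
}

Raising the base to 0xB0000000 leaves room for a much larger data
segment before it runs into the mapped libraries.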

This change worked with my MSC.Linux system.  I
do not have a Red Hat system to mess with.

Regards,
Joe

Craig Tierney wrote:
> Sorry if this is a bit off topic.  I am not sure
> where to ask this question.  The following
> two codes fail on my system (dual Xeon, 2 GB RAM,
> Linux 2.4.18, Red Hat 7.2).
> 
> program memtest
> 
> integer*8 size
> parameter(size=896*1024*1024)
> character a(size)
> common /block/ a
> 
> write(*,*) "hello"
> 
> stop
> end
> 
> OR:
> 
> #include <stdio.h>
> #include <memory.h>
> 
> char ar[896*1024*1024];
> 
> int main() { printf("Hello\n"); return 0; }
> 
> I get a segmentation fault before the codes
> start.  I can use ifc, icc, pgf77, and gcc and
> get the same results.  If I change the array size to 895 MB,
> the codes run.  If I change the C code to
> define the array as 'static char ar[blah]' I can
> allocate more than 895 MB.
> 
> I have bumped up the max stack size with:
> 
> ulimit -Hs 2048000
> ulimit -s 2048000
> 
> But this does not help.
> 
> I cannot find anywhere in the Linux source where
> the max stack size might be set.  It seems that
> it might be tied to 1 GB, but I cannot find it.
> 
> Does anyone know how I can get around this
> issue?
> 
> Thanks,
> Craig
> 