newbie requests advice.
larry at spack.org
Sat Jun 16 15:30:35 PDT 2001
Hi. I've read through the FAQ and the HOWTO and browsed the list archives,
but there's a lot of information there and I'm having a hard time turning
it into concrete answers :-)
I've just started work at a fabless semiconductor company. Up until now
we've run all of our simulation/regression tests on single Solaris boxes,
or directly on the developers' workstations. Recently we've done some tests
and it appears that high-end Intel CPUs are not only much cheaper than
high-end SPARC CPUs but also significantly outperform them. So I've been
asked to build and evaluate a Linux cluster of some sort to try to take
advantage of this. Unfortunately, the applications we have to run are
all commercial, so we don't have the ability to tune their source code.
So, questions ...
* Some of our jobs can use upwards of 4 GB of RAM. From my understanding,
3 GB is the maximum that a single process can address with a 2.4 kernel.
Is this limitation something that Network Virtual Memory can help with?
If so, how much of a performance hit does it impose? I assume it's
still better than swapping to disk?
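(For what it's worth, one rough way to see how much address space a single
process can actually reserve on a given kernel is to keep mapping anonymous
chunks until mmap fails. A minimal Python sketch of that probe follows; the
256 MiB chunk size and the 4 GiB cap are arbitrary choices, and the mappings
are never touched, so no real RAM is committed.)

```python
import mmap

# Probe the per-process virtual address space limit by reserving
# anonymous 256 MiB mappings until mmap fails. Capped at 4 GiB so the
# probe terminates quickly on machines with a large address space.
CHUNK = 256 * 1024 * 1024
CAP = 4 * 1024 ** 3

maps = []
try:
    while len(maps) * CHUNK < CAP:
        maps.append(mmap.mmap(-1, CHUNK))  # anonymous mapping, never written
except (OSError, MemoryError, ValueError):
    pass  # ran out of address space before reaching the cap

reserved_gib = len(maps) * CHUNK / 1024 ** 3
for m in maps:
    m.close()
print(f"reserved about {reserved_gib:.2f} GiB of address space")
```

On a 32-bit 2.4 kernel this stops around the 3 GB mark; on a 64-bit box it
runs straight to the cap.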
* Without the ability to optimize the code of the apps we run, is it even
worth pursuing a Beowulf cluster?
* And, off-topic: if it's not, can you suggest any other open-source
solutions that might help? Currently all of our designers have dual
1.7 GHz boxes as their desktops. Perhaps some form of scheduling
software to harness all of this to run jobs overnight would be useful?
* Any other suggestions of what to read, buy, look into?
Thanks for your time,