phasing out Solaris/Oracle/Netscape with Linux/PostgreSQL/Apache
RSchilling at affiliatedhealth.org
Wed Feb 14 11:30:54 PST 2001
> -----Original Message-----
> From: Mark Hahn [mailto:hahn at coffee.psychology.mcmaster.ca]
> Sent: Saturday, February 10, 2001 5:28 PM
> To: Schilling, Richard
> Cc: linux-elitists at zgp.org; pigdog-l at bearfountain.com;
> beowulf at beowulf.org
> Subject: RE: phasing out Solaris/Oracle/Netscape with
> Linux/PostgreSQL/Apache
> > SCSI is GREAT, and you should set up redundant hot swaps so if you
> > crash, you insert a new disk, type "boot", and you're back online
> > with a node. I
> uh, that misses the whole point of raid, which is to survive hard disk
> failures. "survive" as in "not crash, keep functioning". raid1 or 5
> built on IDE disks do this *just*fine*.
Actually, I'm not just thinking of a single disk crash, but of total
recovery from a previously known, stable backup. Hard to beat a
five-minute restart time.
RAID is a great way to go if you anticipate no catastrophic failures.
But sometimes you need a complete rebuild: for example, if your cluster
is connected to the outside and someone manages to hack it, or if an
innocent scientist works in the wrong directory, or if your software
vendor dials in and screws something up. A known-good restore at least
gets you up and running and buys time to deal with the major problem
later. That's really important in health care, because we can't have
downtime.
Hey, it happens. . . .
> > think Sun stations outperform the Intel boards on disk
> > throughput, but you could check.
> disk performance is basically no longer an issue: even the cheapest
> modern disks sustain 20-30 MB/s. if you're concerned with seek times,
> you simply use lots of spindles. I'm sure there _are_ people who need
> more than the 90 MB/s that a simple raid of ide disks can sustain,
> but that's a pretty exotic market...
There are actually quite a few of them/us: health insurance companies,
banks, etc. Every second counts when you do $2M per day in transactions.
> > You'll take a big performance hit running perl too much -
> > it's interpreted.
> nonsense. perl is compiled, and also not a major performance hit.
> I just tested a trivial CGI-type perl script, and on my cheesy,
> so-low-end-you-can't-even-buy-it-anymore machine (duron/600),
> it takes 6 ms to run. golly gee, I could only do 1.5e6 hits/day...
I'll double-check the mechanics of the Perl VM. As far as I know, Perl
scripts are not usually compiled to an object file and then linked.
Not that it isn't fast, but with interpretation there's always overhead.
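That's easy enough to poke at from the command line. A quick sketch
(assumes perl is installed; filenames are my own): perl's -c flag runs
the compile phase without executing, and you can see that no object
file ever lands on disk - the compiled form (an op tree) lives only in
memory.

```shell
# Sketch: perl compiles every run, but never emits an object file.
echo 'print "hello\n";' > hello.pl
perl -c hello.pl        # compile only: prints "hello.pl syntax OK"
perl hello.pl           # compiles again, then runs: prints "hello"
ls hello.o 2>/dev/null || echo "no object file produced"
```

So "compiled" here means compiled to an in-memory op tree on every
invocation, which is where the per-run overhead comes from.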
> you can also embed a perl VM in your webserver (mod_perl in apache).
When mod_perl runs the Perl scripts, it's interpreting them.
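For what it's worth, the usual mod_perl setup looks something like
this (a hypothetical httpd.conf fragment; paths are made up). With
Apache::Registry, each script is compiled once per child process and
the op tree is cached, so only the first hit pays the compile cost -
the interpretation of the cached op tree is what remains per request.

```apache
# Hypothetical Apache 1.3 + mod_perl 1.x configuration sketch
Alias /perl/ /var/www/perl/
<Location /perl/>
    SetHandler perl-script
    PerlHandler Apache::Registry
    PerlSendHeader On
    Options +ExecCGI
</Location>
```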
> hell, most of that 6ms is actually the (libc) dynamic linker:
> simply statically linking perl would result in a huge speedup,
> if you really need thousands of cgi's per second. (for proof,
> on the aforementioned cheesy desktop, "hello world" takes 8ms
> dynamically linked, but .4ms statically linked.)
There you go!