hahn at mcmaster.ca
Tue Jul 17 15:04:43 PDT 2007
> It seems to me that the current trend is going towards this kind of
> setup for people needing server space. Pay per hour, per processor.
it's an idea popular with certain vendors. and marketing aside,
it makes some sense in some cases. but computers are incredibly cheap!
there is some point where computers are expensive enough to justify
virtualization, and below which it makes more sense to just use
disposable (or at least "re-provisionable") hardware. (lots of PHB
types do not understand that computers are cheap because they're in
the habit of buying gold-plated "enterprise" servers, which are on
a very steep part of the price/performance curve.)
v12n (virtualization) is also often driven by people whose workloads are sparse -
that is, they don't keep their resources 100% busy. so a hosting
company might put many virtualized customers on a single server.
obviously, if one's workload is sparse, the inefficiency of v12n
may be entirely irrelevant.
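to put rough numbers on the sparseness argument, here's a toy estimate.
all the figures (utilization, headroom, overhead) are made-up assumptions
for illustration, not measurements:

```python
def customers_per_host(host_capacity=1.0, avg_util=0.10,
                       headroom=0.25, overhead=0.05):
    """Crude consolidation estimate: usable capacity divided by the
    mean per-customer load. Keeps some headroom for bursts and
    deducts a guessed-at hypervisor overhead."""
    usable = host_capacity * (1.0 - headroom) * (1.0 - overhead)
    return int(usable / avg_util)

# customers keeping their slice ~10% busy, on average:
print(customers_per_host())  # -> 7 customers per host, with these guesses
```

the exact numbers don't matter - the point is that at 10% average
utilization the v12n overhead is swamped by the consolidation win.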
> If it is to be offered as an alternative to dedicated hosting I think
> a container is very important. Allowing users an environment they
> understand is essential.
the question is how much isolation you want and are willing to pay for.
> I'm interested in utilising the hardware to create something akin to
> the sun grid or the amazon elastic computing cloud whereby the
> resources available to the environment are automatically expanded and
> contracted. Maybe I have the wrong end of the stick on how these
> services operate.
no, I think you're right on, and there's not much to it. why do you
think Sun or Amazon have any special magic? beowulf clusters running
multi-user queueing systems are precisely such an "elastic", "compute-
on-demand" thingy, just without paying for the isolation, because such
clusters are mainly motivated by performance.
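the "no special magic" claim is easy to sketch: a multi-user queue just
hands free nodes to waiting jobs, so each user's share of the cluster
expands when it's idle and contracts under contention - no hypervisor
involved. a toy dispatcher (function name and job format invented for
illustration, nothing to do with any real scheduler):

```python
from collections import deque

def run_queue(jobs, nodes):
    """FIFO batch queue over a fixed node pool: each waiting job grabs
    the next free node; finished jobs release their node back."""
    queue = deque(sorted(jobs, key=lambda j: j["submit"]))
    free = list(range(nodes))
    running = []    # (finish_time, node, name)
    starts = []     # (name, node, start_time)
    t = 0
    while queue or running:
        # release nodes whose jobs have finished by time t
        for entry in [r for r in running if r[0] <= t]:
            running.remove(entry)
            free.append(entry[1])
        # hand free nodes to whoever is waiting
        while queue and free and queue[0]["submit"] <= t:
            job = queue.popleft()
            node = free.pop()
            running.append((t + job["runtime"], node, job["name"]))
            starts.append((job["name"], node, t))
        t += 1
    return starts

# three equal jobs on two nodes: two start immediately, the third
# starts as soon as a node frees up
jobs = [{"name": n, "submit": 0, "runtime": 2} for n in "abc"]
print(run_queue(jobs, nodes=2))
```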
> would be reliability. From the very little I understand about beowulf
> clusters a node dying isn't really a problem. I need to work out the
again, how much is increased reliability worth to you? it's not even hard
to buy machines which have arbitrarily close to 100% uptime - just a matter
of price and performance. buy a triplet (for voting) of Tandem
lockstep-Itanium banking servers, geographically separated, etc.
> economics of such a setup but I think, in the interests of the
> hallowed 100% uptime a fair chunk of the performance could be
"nine-ism" - mine is 5 nines, yours is only 4 is a kind of pissing contest
for PHB's in my opinion. sure, you might well have business-critical
services. but the main point is to use redundancy to let you achieve a
_system_ reliability which is higher than any of the component reliabilities.
doing this doesn't require virtualization, though v12n may indeed help -
what it mainly demands is careful design and rational evaluation of risks.
take RAID, for instance.
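the redundancy arithmetic behind that point is simple: for independent
components of reliability r, a mirrored pair (RAID-1-style) fails only
if both halves fail, and a 2-of-3 voting triplet survives any single
failure. a quick sketch:

```python
def any_of_n(r, n):
    """System that survives as long as any one of n independent
    components (each with reliability r) survives: 1 - (1-r)^n."""
    return 1 - (1 - r) ** n

def two_of_three(r):
    """Majority-voting triplet: works if at least 2 of 3 work,
    i.e. all three up, or exactly one down."""
    return r**3 + 3 * r**2 * (1 - r)

# two 99%-reliable disks in a mirror beat either disk alone:
print(any_of_n(0.99, 2))   # ~0.9999
print(two_of_three(0.99))  # ~0.9997
```

so components that are individually only "two nines" combine into a
system that is well past "three nines" - which is the whole design game,
with or without v12n.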