[Beowulf] VMC - Virtual Machine Console

Peter Skomoroch peter.skomoroch at gmail.com
Fri Mar 7 08:21:00 PST 2008


I used Doug's BPS package to benchmark a virtual cluster on Amazon EC2, and
was hoping the beowulf list could give their feedback on the results and
feasibility of using this for on-demand clusters.  This approach is
currently being used to run MPI codes that are tolerant of poor latency,
e.g. mpiBLAST, Monte Carlo runs, etc.

You get gigabit ethernet on EC2, but the latency reported by NetPIPE is
roughly an order of magnitude higher than in Doug's Kronos example on the
Cluster Monkey page:

Amazon EC2 latency: 0.000492 seconds (492 microseconds)
Kronos latency: 0.000029 seconds (29 microseconds)
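For anyone double-checking the units: NetPIPE reports latency in seconds, so
the comparison works out like this (plain arithmetic, nothing else assumed):

```python
# NetPIPE reports one-way latency in seconds.
ec2_latency_s = 0.000492     # Amazon EC2 gigabit ethernet (under Xen)
kronos_latency_s = 0.000029  # Doug's Kronos cluster

# Convert to microseconds for readability.
ec2_latency_us = ec2_latency_s * 1e6     # 492 us
kronos_latency_us = kronos_latency_s * 1e6  # 29 us

# EC2 comes out roughly 17x worse -- about an order of magnitude.
ratio = ec2_latency_s / kronos_latency_s
print(ec2_latency_us, kronos_latency_us, round(ratio, 1))
```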

Full Results/Charts for a "small" cluster of two extra-large nodes here (I
just used the default BPS config with MPICH2):

http://www.datawrangling.com/media/BPS-AmazonEC2-xlarge-run-1/index.html
http://www.datawrangling.com/media/BPS-AmazonEC2-xlarge-run-2/index.html

The unixbench results are misleading on VMs, so I left those out.  Others
have verified the performance mentioned in the EC2 documentation: "One EC2
Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007
Opteron or 2007 Xeon processor."

Some bonnie results are here:
http://blog.dbadojo.com/2007/10/bonnie-io-benchmark-vs-ec2.html

The cluster is launched and configured using some python scripts and a
custom beowulf Amazon Machine Image (AMI), which is basically a Xen image
configured to run on EC2.  You end up paying 80 cents/hour for 8 cores
with 15 GB RAM, and can scale that up to 100 or more nodes if you need to.
I'm cleaning up the code, and will post it on my blog if anyone wants to try
it out.  I think this could be a cost-effective path for people who, for
whatever reason, can't build/use a dedicated cluster.
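Until I post the actual scripts, here is a rough sketch of the kind of
plumbing involved on the configuration side: generating an MPICH2-style
machinefile from the instance hostnames.  The hostnames below are made up
for illustration; the real ones come from the EC2 API calls in the launch
scripts.

```python
def write_machinefile(hostnames, slots_per_host, path="machinefile"):
    """Write an MPICH2-style machinefile: one 'host:slots' line per node."""
    lines = ["%s:%d" % (h, slots_per_host) for h in hostnames]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines

# Hypothetical internal EC2 hostnames for illustration only.
hosts = ["domU-12-31-35-00-19-A1.compute-1.internal",
         "domU-12-31-35-00-19-A2.compute-1.internal"]
print(write_machinefile(hosts, 4))  # 4 virtual cores per extra-large node
```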

Here are the specifications for each instance:

Extra Large Instance:

      15 GB memory
      8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each)
      1,690 GB instance storage (4 x 420 GB plus 10 GB root partition)
      64-bit platform
      I/O Performance: High
      Price: $0.80 per instance hour
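To put the pricing in perspective, the back-of-the-envelope math for the
100-node case mentioned above (100 is just the example figure, not a limit
I've verified with Amazon):

```python
price_per_instance_hour = 0.80  # USD, extra-large instance
compute_units_per_instance = 8  # 4 virtual cores x 2 EC2 Compute Units
ram_gb_per_instance = 15

nodes = 100                     # the "scale up to 100 or more" case
hourly_cost = nodes * price_per_instance_hour            # $80/hour
total_compute_units = nodes * compute_units_per_instance  # 800 ECUs
total_ram_gb = nodes * ram_gb_per_instance               # 1,500 GB RAM
print(hourly_cost, total_compute_units, total_ram_gb)
```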

-Pete



> There are plenty of parallel chores that are tolerant of poor latency --
> the whole world of embarrassingly parallel computations plus some
> extension up into merely coarse grained, not terribly synchronous real
> parallel computations.
>
> VMs can also be wonderful for TEACHING clustering and for managing
> "political" problems. ... Having any sort of access to a high-latency Linux VM
> node running on a Windows box beats the hell out of having no node at
> all or having to port one's code to work under Windows.
>
> We can therefore see that there are clearly environments where the bulk
> of the work being done is latency tolerant and where VMs may well have
> benefits in administration and security and fault tolerance and local
> politics that make them a great boon in clustering, just as there are
> without question computations for which latency is the devil and any
> suggestion of adding a layer of VM latency on top of what is already
> inherent to the device and minimal OS will bring out the peasants with
> pitchforks and torches.  Multiboot systems, via grub and local
> provisioning or PXE and remote (e.g. NFS) provisioning, are also useful but
> are not always politically possible or easy to set up.
>
> It is my hope that folks working on both sorts of multienvironment
> provisioning and sysadmin environments work hard and produce spectacular
> tools.  I've done way more work than I care to setting up both of these
> sorts of things.  It is not easy, and requires a lot of expertise.
> Hiding this detail and expertise from the user would be a wonderful
> contribution to practical clustering (and of course useful in the HA
> world as well).

-- 
Peter N. Skomoroch
peter.skomoroch at gmail.com
http://www.datawrangling.com