[Beowulf] clustering using xen virtualized machines

Gavin Burris bug at sas.upenn.edu
Tue Jan 26 08:30:51 PST 2010


Is it just me, or do HPC clustering and virtualization fall on
opposite ends of the spectrum?

With virtualization, you are pooling many virtual OS/server instances on
high-availability hardware, sharing memory and CPU on demand and
oversubscribing.  What would be idle time on one server is utilized by
another, loaded server.
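As a rough sketch of what that sharing looks like in practice (the guest
name, sizes and paths below are invented, not from any real setup), a Xen
domU config can hand out more virtual CPUs than the host has cores, and the
hypervisor's credit scheduler time-slices them so one guest's idle cycles go
to its busy neighbours:

    # /etc/xen/web01.cfg  -- hypothetical guest definition
    name   = "web01"
    memory = 1024                                  # MB of RAM for this guest
    vcpus  = 4                                     # several guests like this on a
                                                   # 4-core host oversubscribe the CPUs
    disk   = [ "file:/srv/xen/web01.img,xvda,w" ]
    vif    = [ "bridge=xenbr0" ]

Define a handful of these and the box is oversubscribed on CPU, while each
guest still sees four processors of its own.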

With HPC clustering, you are running many physical OS/server instances
that usually do not need to be highly available, but instead need
direct access to, and total utilization of, memory, CPU and storage.  If
queuing is done well, all servers are maxed out for performance under load.
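By contrast, a batch scheduler hands whole machines to jobs rather than
time-slicing them.  A minimal Torque/PBS script (the queue settings, core
counts and binary name are made up for illustration) simply asks for
complete nodes and is expected to drive every core it gets:

    #!/bin/bash
    #PBS -N solver_run              # job name (hypothetical)
    #PBS -l nodes=4:ppn=8           # four whole nodes, eight cores each
    #PBS -l walltime=12:00:00
    cd $PBS_O_WORKDIR
    mpirun -np 32 ./my_solver       # hypothetical MPI binary, one rank per core

When the queue is kept full of jobs like this, every node stays pegged for
the length of its allocation.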

With Xen/VMware/Amazon clusters, it seems that you would be adding the
complexity and cost of a virtualization infrastructure, while gaining few of
the benefits that virtualization is designed to provide.

Cheers.


On 01/26/2010 10:24 AM, Hearns, John wrote:
> For starters, to save on resources, why not cut out the GUI and go command-line to free up some more of the shared resources? And secondly, wouldn't offloading data storage to a SAN or NFS storage server mitigate the disk I/O issues?
> 
> I honestly don't know much about Xen, as I just got my hands dirty with it.  Wouldn't it be better than using software virtualization, since Xen takes advantage of the hardware virtualization that most modern processors come with?
> 
> 
> Jonathan, in a private reply I've already said that you should not be put off from having bright ideas!
> 
> In no way wishing to rain on your parade - and indeed wishing you to experiment and keep asking questions,
> which you are very welcome to do - this has been thought of before.
> 
> Cluster nodes are commonly run without a GUI - command-line only, as you say.
> The debate comes around on this list every so often about running diskless! The answer is yes, you can run diskless compute
> nodes, and I do. You boot them over the network, and have an NFS-root filesystem.
> On many clusters the application software is NFS mounted also.
> 
> Your point about a SAN is very relevant - I would say that direct, physical Fibre Channel SAN connections in a cluster are
> not common, simply due to the expense of installing the cards and a separate infrastructure. However, iSCSI is used, and
> InfiniBand is common in clusters.
> 
> 
> Apologies - I really don't want to come across as knowing better than you (which I don't). If we don't have people asking "what if" and "hey - here's a good idea" then we won't make anything new.
> 
> 
> 
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
> 


