[Beowulf] HPC in the cloud question

Gavin W. Burris bug at wharton.upenn.edu
Tue May 19 05:32:33 PDT 2015


Hi, O-P.

Take a look at consolidated billing to simplify things.  Each researcher
then uses their own account and budget, which is joined under the
umbrella departmental account.  We launch compute node instances with an
IAM API key from the joined account.  This effectively makes the local
cluster resource a springboard to as much capacity as required.
Pro tip:  Don't forget to set a CloudWatch billing alert that will email
you when the projected cost exceeds the monthly budget!
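
For reference, a minimal sketch of that kind of billing alarm using
boto3 (assuming billing alerts are enabled for the account; the SNS
topic ARN and the 500 USD threshold are placeholders, and billing
metrics only live in us-east-1):

    import boto3

    # Alarm on the account's estimated charges; CloudWatch publishes the
    # AWS/Billing EstimatedCharges metric roughly every six hours.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="monthly-budget-exceeded",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,
        EvaluationPeriods=1,
        Threshold=500.0,  # monthly budget in USD (placeholder)
        ComparisonOperator="GreaterThanThreshold",
        # Hypothetical SNS topic that emails the budget owner.
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
    )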

Cheers.


On 02:40PM Tue 05/19/15 +0300, Olli-Pekka Lehto wrote:
> I agree that billing is a non-trivial issue. In the last couple of years we’ve put quite a lot of effort into developing cost models and capacity planning, as well as planning for this kind of possible surplus. This enables us to track the costs at much finer granularity than just annual checkpoints and adjust things if necessary.
> 
> The customers are also starting out fairly small so it’s possible to evolve this as the service matures.
> 
> It’s also possible to add capacity fairly flexibly in response to demand, initially from excess unused capacity of our regular cluster, and later by buying or leasing nodes in moderate batches. In the future I could imagine that HPC cluster nodes could also be “retired” to the cloud for a while before EOLing them.
> 
> The above things are quite crucial to develop and stabilize the chargeback model without having those hugely deep pockets. 
> 
> I see a major challenge in the friction between the funding models and the new way that services are produced:
> 
> For the end user, the “remotely accessible shared resource” offers the potential for growing their services organically and efficiently, setting up parallel test and dev environments on demand, etc. For the more forward-looking contractors that are doing (pardon the buzzword) DevOps, these are already crucial features for developing services. However, for many public and research funding entities the focus is still on infrastructure investments and fairly rigid, waterfall-style long-term capacity plans.
> 
> Going a bit off-topic here, I guess. Sorry. :)
> 
> O-P
> -- 
> Olli-Pekka Lehto
> Development Manager, Computing Platforms 
> CSC - IT Center for Science Ltd.
> E-Mail: olli-pekka.lehto at csc.fi // Tel: +358 50 381 8604 // skype: oplehto // twitter: @ople
> 
> On 14 May 2015, at 02:05, Lux, Jim (337C) <james.p.lux at jpl.nasa.gov> wrote:
> 
> > Without getting into the semantics of clouds, smoke, fog, or mirrors.
> >  
> > Isn’t this basically a “remotely accessible shared resource” which happens to be a cluster (defined as something more than a bunch of PCs on the same network: typically with a high-performance interconnect and some “cluster management” software of one sort or another)?
> >  
> > In all, a useful concept.  The tricky part is how the “chargeback” system works.  What we have here at JPL are called “service centers” for things like antenna ranges, equipment loan pools, clean room services, etc.  They charge a “per unit” rate (where the unit depends on the service, be it day of use, month of rental, etc.).  That per-unit charge is determined at the beginning of the fiscal year by the manager of the service, based on past history, and is designed to cover all the operating costs of the service.
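
A minimal sketch of that kind of per-unit chargeback calculation, with
entirely made-up numbers:

    # Hypothetical service-center chargeback: the per-unit rate is set at the
    # start of the fiscal year so that projected revenue covers projected cost.
    projected_annual_cost = 250000.0  # staff, maintenance, power, etc. (made up)
    projected_units = 40000           # e.g. node-days expected this year (made up)

    rate_per_unit = projected_annual_cost / projected_units  # 6.25 per node-day

    # At year end, reconcile: any gap between actual cost and collected charges
    # has to be absorbed somewhere (or handled via the "retro rate changes"
    # Jim mentions below).
    actual_cost = 262000.0
    units_delivered = 41500
    shortfall = actual_cost - rate_per_unit * units_delivered  # 2625.0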
> >  
> > Of course, at the end of the year, if the total cost of operation is different than the total unit charges received, there’s a problem.  And, strangely, I’ve never gotten a rebate from the service center because the TCO was less than they collected.
> >  
> > In any case, this kind of strategy is pretty common for “big iron” computers (when you used to lease a machine from IBM, you’d pay by the CPU-second, by the kilocore-second, etc.)…
> >  
> > It also will pass muster for government contracting, which requires that costs be allowable, accountable, and allocable (i.e. you can’t artificially reduce your profit by charging yourself exorbitant rates for computing services)
> >  
> > But it depends on having someone with deep enough pockets to absorb the instantaneous differences between revenue and expense (and the political expertise to handle the problem of “retro rate changes” when the original user has spent all their money)
> >  
> >  
> >  
> > Jim Lux
> >  
> > From: Beowulf [mailto:beowulf-bounces at beowulf.org] On Behalf Of Olli-Pekka Lehto
> > Sent: Monday, May 11, 2015 11:48 AM
> > To: John Hearns
> > Cc: Beowulf Mailing List
> > Subject: Re: [Beowulf] HPC in the cloud question
> >  
> > We have a similar service intended especially for colocating the datacenters of Polytechnics and Universities in our datacenter in the north of Finland. 
> > http://www.slideshare.net/PeterJenkins1/csc-modular-datacenter
> >  
> > In addition we have been operating an HPC-oriented IaaS-cloud, carved off our production cluster for over a year now (https://research.csc.fi/cloud-computing). One thing that’s under active development is a virtual cluster toolchain and front-end which could fairly easily be utilized by other sites as well: https://github.com/CSC-IT-Center-for-Science/pouta-blueprints
> >  
> > Recently there’s been a growing demand for private cloud from internal projects and even from other public institutions. This presents the possibility that the service may evolve into a more general-purpose cloud platform that also supports HPC workloads. The marginal cost of this is fairly reasonable, as much of the heavy lifting is in the cloud middleware development/integration that needs to be done anyway, and adding different types of nodes/flavours is pretty trivial.
> >  
> > This trend presents an interesting prospect for HPC centers in general: I’m willing to bet that in many places around the globe there is a niche for a vendor-independent, non-profit, regional, government-backed cloud service for critical public-sector workloads. HPC centers are a good fit for providing this, as many are already developing their own cloud services, procure and manage large quantities of scale-out hardware, and typically have a very trustworthy reputation (and possibly certifications).
> >  
> > Perhaps in the future the circle will close and we'll see some HPC centers once again become providers of mission-critical, general-purpose centralized computing resources in addition to HPC. :)
> >  
> > O-P
> > -- 
> > Olli-Pekka Lehto
> > Development Manager, Computing Platforms 
> > CSC - IT Center for Science Ltd.
> > E-Mail: olli-pekka.lehto at csc.fi // Tel: +358 50 381 8604 // skype: oplehto // twitter: @ople
> >  
> > On 10 May 2015, at 21:47, John Hearns <hearnsj at googlemail.com> wrote:
> > 
> > 
> > This article might be interesting:
> >  
> > http://www.information-age.com/technology/data-centre-and-it-infrastructure/123459441/inside-uks-first-collaborative-data-centre
> >  
> > As it says 'Data-centre-as-a-service'
> > A shared data centre, outside the centre of the city, used by several research institutes and universities.
> > I have been involved in preparing bids for equipment there, including the innovative eMedlab project.
> >  
> > Central London has its own problems in getting enough space and power for large computing setups, and this makes a lot of sense.
> >  
> >  
> >  
> >  
> >  
> > On 8 May 2015 at 20:58, Dimitris Zilaskos <dimitrisz at gmail.com> wrote:
> > Hi,
> > 
> > IBM Platform does provide IB for HPC with bare metal and cloudbursting, among other HPC services on the cloud. Detailed information including benchmarks can be found at http://www-03.ibm.com/systems/platformcomputing/products/cloudservice/ . Note that I work for IBM so I am obviously biased.
> > 
> > Best regards,
> > 
> > Dimitris
> >  
> > On Fri, May 8, 2015 at 2:40 PM, Prentice Bisbal <prentice.bisbal at rutgers.edu> wrote:
> > Mike,
> > 
> > What are the characteristics of your cluster workloads? Are they tightly coupled jobs, or are they embarrassingly parallel or serial jobs? I find it hard to believe that a virtualized, shared Ethernet network infrastructure can compete with FDR IB for performance on tightly coupled jobs. AWS HPC representatives came to my school to give a presentation on their offerings, and even they admitted as much.
> > 
> > If your workloads are communication intensive, I'd think harder about using the cloud, or find a cloud provider that provides IB for HPC (there are a few that do, but I can't remember their names).  If your workloads are loosely coupled jobs or many serial jobs, AWS or similar might be fine. AWS does not provide IB, and in fact shares very little information about their network architecture, making it hard to compare to other offerings without actually running benchmarks.
> > 
> > If your users primarily interact with the cluster through command-line logins, using the cloud shouldn't be noticeably different: the hostname(s) they have to SSH to will be different, and moving data in and out might be different, but compiling and submitting jobs should be the same if you make the same tools available in the cloud that you have on your local clusters.
> > 
> > Prentice
> > 
> > 
> > 
> > 
> > On 05/07/2015 06:28 PM, Hutcheson, Mike wrote:
> > Hi.  We are working on refreshing the centralized HPC cluster resources
> > that our university researchers use.  I have been asked by our
> > administration to look into HPC-in-the-cloud offerings as an alternative to
> > purchasing and running a cluster on-site.
> > 
> > We currently run a 173-node, CentOS-based cluster with ~120TB (soon to
> > increase to 300+TB) in our datacenter.  It’s a standard cluster
> > configuration:  IB network, distributed file system (BeeGFS.  I really
> > like it), Torque/Maui batch.  Our users run a varied workload, from
> > fine-grained, MPI-based parallel apps scaling to 100s of cores to
> > coarse-grained, high-throughput jobs (we’re a CMS Tier-3 site) with high
> > I/O requirements.
> > 
> > Whatever we transition to, whether it be a new in-house cluster or
> > something “out there”, I want to minimize the amount of change or learning
> > curve our users would have to experience.  They should be able to focus on
> > their research and not have to spend a lot of their time learning a new
> > system or trying to spin one up each time they have a job to run.
> > 
> > If you have worked with HPC in the cloud, either as an admin and/or
> > someone who has used cloud resources for research computing purposes, I
> > would appreciate learning your experience.
> > 
> > Even if you haven’t used the cloud for HPC computing, please feel free to
> > share your thoughts or concerns on the matter.
> > 
> > Sort of along those same lines, what are your thoughts about leasing a
> > cluster and running it on-site?
> > 
> > Thanks for your time,
> > 
> > Mike Hutcheson
> > Assistant Director of Academic and Research Computing Services
> > Baylor University
> > 
> > 
> 

> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


-- 
Gavin W. Burris
Senior Project Leader for Research Computing
The Wharton School
University of Pennsylvania

