[Beowulf] SGI to offer Windows on clusters

Ryan Waite ryanw at windows.microsoft.com
Sun Jan 21 18:23:25 PST 2007


Hi Geoff, comments below.
 
 -----Original Message-----
From: Geoff Jacobs [mailto:gdjacobs at gmail.com] 
Sent: Friday, January 19, 2007 1:39 PM
To: Ryan Waite
Cc: Joe Landman; Jim Lux; Beowulf at beowulf.org; Mikhail Kuzminsky;
mike at etek.chalmers.se
Subject: Re: [Beowulf] SGI to offer Windows on clusters
> 
> Ryan Waite wrote:
>> I know some of you aren't, um, tolerant of Microsoft for various
>> reasons but I thought I'd clear up a couple of errors in some of the
>> posts. If you hate Microsoft, at least you now have an email address
>> for when you're feeling grumpy.
>> 
>> 
>> Pricing
>> 
>> Retail pricing for Windows Server is about $750. Retail pricing for
>> Compute Cluster Server (CCS) is around $470. Most users will get the
>> product through either an OEM or a volume licensing agreement. In
>> both cases they pay less than retail. Academic users can purchase
>> CCS for less than $100.
>> 
>> CCS consists of two CDs. The first is Windows Server. The second CD
>> contains the clustering tools and has three major features: 1) a job
>> scheduler, 2) systems management tools, and 3) Microsoft's MPI stack.
>> The majority of HPC systems sold are small (less than 256 nodes) and
>> we've designed for those customers. So, users get an OS, job
>> scheduler, management package, and MPI stack for < $500.
> What about compilers?

Compilers are available from PGI (Fortran and C++), Intel (Fortran and
C++), Lahey (Fortran; I don't remember whether it has 64-bit support),
and Microsoft (C++, etc.). Visual Studio 2005 includes a parallel
debugger and OpenMP support.
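
For a sense of what that OpenMP support looks like, here's a minimal
sketch in C (illustrative only; it assumes a compiler with OpenMP
enabled, e.g. cl /openmp in Visual Studio 2005):

/* Minimal OpenMP sketch: parallel sum of an array. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    enum { N = 1000000 };
    static double a[N];
    double sum = 0.0;
    int i;

    for (i = 0; i < N; i++)
        a[i] = 1.0;

    /* Each thread sums a chunk; the reduction combines partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.0f, max threads = %d\n", sum, omp_get_max_threads());
    return 0;
}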

> 
>> Our MPI stack is based on MPICH2 but we've made performance and
>> security enhancements. The folks at ANL are very talented UNIX
>> developers but Windows is more efficient using async overlapped I/O.
>> We've made other, similar changes to our stack and we're providing
>> those changes back to ANL for incorporation in future MPICH stacks.
>> We're also the first group at Microsoft making these kinds of sizable
>> contributions back to the open source community.
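
For context, "async overlapped I/O" refers to the standard Win32
pattern sketched below (a minimal illustration, not code from the
actual MPI stack; the file name data.bin is just a placeholder):

/* Win32 overlapped (asynchronous) read -- error handling abbreviated. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char buf[4096];
    DWORD nread = 0;
    OVERLAPPED ov = {0};

    /* FILE_FLAG_OVERLAPPED asks the kernel to run I/O asynchronously. */
    HANDLE h = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED,
                           NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);

    /* ReadFile returns at once; ERROR_IO_PENDING means it's in flight. */
    if (!ReadFile(h, buf, sizeof buf, NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING)
        return 1;

    /* ... overlap useful work here while the read proceeds ... */

    /* Block only when the data is actually needed. */
    if (GetOverlappedResult(h, &ov, &nread, TRUE))
        printf("read %lu bytes\n", (unsigned long)nread);

    CloseHandle(ov.hEvent);
    CloseHandle(h);
    return 0;
}
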
> As much as many of us might have issues with, err, the more aggressive
> marketing strategies Microsoft has used in the past, I can certainly
> appreciate people such as yourself, people who want to succeed by
> creating good software, no matter where they work.
> 
>> 
>> SGI
>> 
>> These folks are great and I'm sure they have a lot to teach from
>> their years in HPC. Also, we've hired people onto our HPC team from
>> places like Platform Computing, Cray, Silverstorm and other related
>> companies. While we may be new, and while v1 products may be a little
>> rough, I think we're going to help the community bring HPC into
>> mainstream computing.
> I'm not sure that HPC will ever be mainstream. By definition, HPC
> involves making trade-offs and pushing the envelope of what is
> possible with modern computer technology. It is also somewhat limited
> in the class of problem which it tackles. Mainstream (in my view) is
> synonymous with general purpose.

Yep, I think you're right. I'm oversimplifying, but I think HPC will
have two divisions in the future. The first is the disciplined (read:
hard-core) HPC users, people who require the fruits that come from
careful and sometimes laborious optimization of their HPC environments.
These are also the people who have the skills to deploy large (>512
node) clusters. These users are sophisticated with complex software
packages, development tools, middleware (schedulers, MPI stacks), and/or
hardware.

The second division will be people who aren't sophisticated with
programming or systems management. Instead of using C++ and Fortran they
use very high-level environments like R, Matlab, Mathematica, and Excel.
While they aren't classic HPC users, they do have a lot of computational
work, work that could be completed more quickly on a cluster. In this
case you're right, it's much more general purpose.

> 
> -- 
> Geoffrey D. Jacobs



