Wired News: Linux Plus Itanium Equals Whoosh

eugene.leitl at lrz.uni-muenchen.de eugene.leitl at lrz.uni-muenchen.de
Thu Jan 18 10:37:09 PST 2001


 From Wired News, available online at:
http://www.wired.com/news/print/0,1294,41213,00.html

Linux Plus Itanium Equals Whoosh  
by Michelle Delio  

12:15 p.m. Jan. 16, 2001 PST 
Exploring big scientific mysteries requires big computing power. 

Ed Seidel, researcher at the National Center for Supercomputing
Applications and astrophysicist at the Max Planck Institute for
Gravitational Physics, studies black holes and neutron stars in an
attempt to understand the basic properties of space and time. 

Seidel and his research team use Linux and Cactus, an open source
program they developed to aid their work. They are eagerly
anticipating running those programs on one of the first
supercomputers with Intel's Itanium processor.

The National Center for Supercomputing Applications (NCSA) at the
University of Illinois at Urbana-Champaign announced on Tuesday that
it will install two IBM Linux clusters, creating the world's fastest
Linux supercomputer in academia and the fourth fastest supercomputer
overall.

NCSA's new clusters will have 2 teraflops of computing power and will
be used by researchers to study some of the most fundamental questions
of science, such as the nature of gravitational waves first predicted
by Albert Einstein in his Theory of Relativity.  

The fastest computer in the world is IBM's ASCI White, with a
computational capability of 12.3 teraflops (12.3 trillion
calculations per second). The ASCI White covers an area the size of
two basketball courts and is used by the U.S. Department of Energy
to help ensure the safety and reliability of the nation's
nuclear weapons stockpile without real-world testing. NCSA's new
supercomputer takes fourth place on the list of the world's fastest
computers, replacing the IBM ASCI Blue-Pacific, which runs at 1.2
teraflops.

"We believe that Linux clusters will soon be the most widely used
architecture for parallel computing, and that these two clusters from
IBM are the best way to deliver terascale performance," said Dan Reed,
director of NCSA and the National Computational Science Alliance.  

In June, Seidel, astrophysicist Wai-Mo Suen of Washington University
in St. Louis, and their international research teams were given access
to NCSA's 256-processor Origin2000.  

By the time Suen and Seidel had finished their simulations, they had
output nearly a terabyte of data and logged an astonishing 140,000
CPU-hours on the Origin2000. That's equivalent to more than 16 years
on a single-processor machine.  
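
The conversion behind that figure is simple division; a quick sketch
in Python (the 140,000 CPU-hour number is from the article, the rest
is plain arithmetic):

```python
# Convert the reported 140,000 CPU-hours into the equivalent
# wall-clock time on a single-processor machine.
cpu_hours = 140_000
hours_per_year = 24 * 365  # 8,760 hours in a non-leap year

years = cpu_hours / hours_per_year
print(f"{years:.1f} years")  # roughly 16 years, as the article states
```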

Suen credited the success of the June runs to "more than a dozen of us
on both sides of the Atlantic cutting our daily allowance of sleep to
less than five hours for two weeks."  

The runs were monitored nonstop in both Germany and the United States.
Whoever first noticed a problem took the initiative to kill, correct
and restart the job.  

"The explosion of the open source community, the maturity of
clustering software, and the enthusiasm of the scientific community
all tell us that Linux clusters are the future of high-performance
computing," Seidel said.  

"It was very exciting (but) we are limited by how much time we can get
on such large machines. I hope very much that NCSA will be able to
continue to make such extraordinary high-end resources available in
order to make this kind of science more routine," Seidel added.  

Harnessing the firepower of 600 IBM eServer xSeries servers, the new
NCSA machine will process 2 trillion calculations per second. It would
take one person with a calculator more than 1.5 million years to
perform the calculations that the new NCSA machine will handle in a
single second.

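
The comparison checks out if we assume a human pace of roughly one
calculation every 24 seconds; that assumption is ours, not the
article's, but it makes the numbers line up:

```python
# Sanity-check the "1.5 million years with a calculator" comparison.
# Assumed (not stated in the article): one human calculation takes
# about 24 seconds on a pocket calculator.
machine_calcs = 2e12            # 2 teraflops = 2 trillion calcs/second
secs_per_human_calc = 24        # assumed human pace
seconds_per_year = 365 * 24 * 3600

human_years = machine_calcs * secs_per_human_calc / seconds_per_year
print(f"{human_years / 1e6:.1f} million years")  # about 1.5 million
```
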
That kind of raw power and speed is necessary to tackle the kind of
jobs that Seidel and his team have in store for the new supercomputer
-- such as simulations of the violent collision of black holes and the
resulting gravitational waves.  

"Because of Linux clusters -- which are cost-effective, scalable and
integrate open source software with vendor solutions --
commodity-based supercomputing is now a reality. This allows us to
explore nature's most exotic phenomena, such as black holes, with much
more detail and at much lower cost than ever before," Seidel said.  

Seidel's team will also be using Cactus, an open source
problem-solving application that originated in the academic research
community. Cactus has been widely used by a large international
association of physicists and computational scientists.

The name Cactus is a visual reference to the application's basic
structures -- a central core that connects to application modules (the
"thorns") through an extendable interface. The thorns can implement
custom scientific or engineering applications, or standard
computational chores.  
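
The core-and-thorns design described above is a classic plugin
architecture. A minimal sketch in Python (purely illustrative; the
real Cactus framework is not written this way, and these class and
method names are invented for the example):

```python
# Illustrative core-plus-"thorns" design: a central core that
# dispatches work to pluggable modules through one shared interface.

class Core:
    """Central core: holds registered thorns and runs them in order."""
    def __init__(self):
        self.thorns = []

    def register(self, thorn):
        self.thorns.append(thorn)

    def evolve(self, state):
        # Each thorn transforms the state and passes it along.
        for thorn in self.thorns:
            state = thorn.step(state)
        return state

class DoublingThorn:
    """A stand-in thorn: any object with a step(state) method plugs in."""
    def step(self, state):
        return state * 2

core = Core()
core.register(DoublingThorn())
print(core.evolve(21))  # -> 42
```

A real thorn would implement a physics solver or an I/O routine, but
the extension point is the same: the core never needs to know what any
individual thorn does.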

Cactus also allows applications developed on standard workstations
or laptops to run seamlessly on clusters or supercomputers.

"The latest release of Cactus provides a middleware layer for enabling
many applications to scale from their desktop to the teraflop," said
Seidel.  

The first of the new NCSA clusters, to be installed in February by IBM
Global Services, will be based on IBM eServer x330 thin servers, each
with two 1 GHz Intel Pentium III processors, running Red Hat Linux.
The second cluster, to be installed this summer, will be one of the
first to use Intel's next generation 64-bit Itanium processor and will
run TurboLinux.  

Research scientists traditionally have had to build their own
computing systems and write their own code for each research project. 


Being tinkerers by trade, they want to have their hands deep in the
research from beginning to end. Ready-to-go but tinker-friendly Linux
and open source applications allow researchers to keep their hands
inside their projects while cutting down development and rollout time,
explained Dave Turek, vice president of Deep Computing at IBM.

"These IBM Linux clusters will enable scientists to focus more on the
results of their research initiatives, freeing them from the
additional burden of building their own clusters and writing code to
support their heavy computational demands," said Turek.   

Related Wired Links:  

Seti: Is Anybody Out There?  
Dec. 22, 2000 

Everybody Into the Research Pool  
Oct. 11, 2000 

Bio Gets Big Blue's Big Bucks  
Aug. 18, 2000 

Linux Books for You and Mom  
Aug. 1, 2000 

Copyright © 1994-2001 Wired Digital Inc. All rights reserved. 
