WEST LAFAYETTE, Ind. – For the third year in a row, Purdue University has confirmed its lead in the rarefied realm of supercomputing by unveiling Conte, the nation’s fastest university-owned supercomputer, developed in collaboration with HP, Intel, and Mellanox.
Conte is the highest-ranking campus supercomputer on the June 2013 Top500.org list of international supercomputers.
Purdue’s latest supercomputer surpasses the nation’s previous fastest university-owned machine, Carter, which was built in 2011 and is still in operation at Purdue.
“We don’t do this only to be at the top of a list, although it’s nice to have an external measure of our success in delivering the most effective computational tools to our researchers,” says Gerry McCartney, vice president for information technology, CIO, and Oesterle Professor of IT. “The reason we do this is because our faculty have a constantly growing need for more and faster computational resources.”
Conte clocked in with a sustained, measured maximum speed of 943.38 teraflops and a peak performance of 1.342 petaflops.
To give an idea of how fast that is, Conte would process a problem 15,000 times faster than a 15-inch Apple MacBook Pro, a high-end consumer laptop.
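As a back-of-the-envelope check, the 15,000-times figure can be inverted to see what laptop speed it implies. A minimal sketch, noting that the resulting gigaflops number for the laptop is an inference from the article's figures, not a figure the article states:

```python
# Sanity check on the "15,000 times faster" comparison.
# Conte's peak performance, from the article:
conte_peak_flops = 1.342e15  # 1.342 petaflops

# The claimed speedup over a high-end consumer laptop:
speedup = 15_000

# Implied laptop performance (an inference, not stated in the article):
laptop_flops = conte_peak_flops / speedup
print(f"Implied laptop speed: {laptop_flops / 1e9:.1f} gigaflops")
# → Implied laptop speed: 89.5 gigaflops
```

Roughly 90 gigaflops is in the right neighborhood for a high-end consumer laptop of that era, so the comparison holds together arithmetically.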
Conte was built with 580 HP ProLiant SL250 Generation 8 (Gen8) servers, each incorporating two Intel Xeon processors and two Intel Xeon Phi coprocessors, integrated with Mellanox 56Gb/s FDR InfiniBand.
Purdue names each of its supercomputers after a faculty member, staff member, or alumnus who made a significant contribution to computing at the university. The 2013 supercomputer is named for Samuel Conte, who helped establish the nation’s first computer science program at Purdue in 1962 and served as department head for 17 years.
Supercomputers, like college football teams, have national rankings. But unlike the disputable college rankings, supercomputers are measured with a standardized benchmark test and ranked by the nonprofit organization Top500.org. The rankings include supercomputers owned by governments, corporations, research centers and universities.
The results of the test are published each June and November on the organization’s website, http://www.Top500.org.
In the most recent rankings, Conte placed No. 28 overall.
The top 10 university-owned supercomputers in the United States and their rankings on the June 2013 list are:
- Conte, Purdue University, 28
- Big Red II, Indiana University, 46
- HPCC, University of Southern California, 53
- BlueGene/Q, Rensselaer Polytechnic Institute, 76
- Palmetto2, Clemson University, 115
- BlueStreak, University of Rochester, 170
- Carter, Purdue University, 175
- Janus, University of Colorado, 239
- HPC, University of Southern California, 242
- Midway, University of Chicago, 301
“I’m afraid our campus has become a bit blasé about having the nation’s largest campus supercomputer,” McCartney says. “I think some people on campus have come to expect it.”
One person who is excited about Purdue’s computing resources is Joseph Francisco, the university’s William E. Moore Professor of Physical Chemistry and former national president of the American Chemical Society.
Francisco conducts research on atmospheric gases that contribute to global warming, and he attributes at least part of his success to the computing resources at Purdue.
“The more of those interactions you account for in your calculations, the closer it gets you to the real interactions in the atmosphere,” Francisco says. “What we’re able to do with Purdue’s community clusters is really push the envelope.”
Francisco’s ability to push the envelope of research contributed to his being elected to the National Academy of Sciences, one of the highest honors given to a scientist or engineer in the United States.
“I wouldn’t have been elected to the National Academy of Sciences without these clusters,” Francisco says. “Having the clusters, we were able to set a very high standard that led a lot of people around the world to use our work as a benchmark, which is the kind of thing that gets the attention of the National Academy.”
In addition to Francisco’s work on atmospheric chemistry, other research that will be done on Conte includes:
- Jeff Greeley, associate professor of chemical engineering, is working to make batteries smaller, lighter and longer-lasting through atom-scale models.
- Greg Blaisdell, professor of aeronautics and astronautics, is working to make jets quieter by modeling the exhaust flow using as many as one billion data points.
- Gerhard Klimeck, professor of electrical and computer engineering, is creating atom-scale models of future computer processor components and their quantum interactions.
- Michael Baldwin, associate professor of earth, atmospheric and planetary sciences, is developing high-resolution weather forecast models.
- Wen Jiang, associate professor of biology, is determining the structure of viruses at the atomic level and is developing high-resolution images of the actual viruses.
“The high performance computing is key,” Jiang says. “I can’t imagine what we would do without these large clusters.”
Conte is different from previous Purdue cluster-type supercomputers because of its heavy use of parallel processing on the Intel Xeon Phi coprocessors. Conte has 580 servers (570 at the time of testing) with 9,120 standard cores and 68,400 Phi cores, for a total of 77,520 cores.
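The core totals are consistent with the 570 servers that were in place at benchmark time. A quick arithmetic check; the per-processor core counts (8 per Xeon, 60 per Phi) are inferred from the totals, since the article states only that each server holds two Xeons and two Phis:

```python
# Core arithmetic for Conte, using figures from the article.
servers_tested = 570             # servers in place at benchmark time

# Per-server makeup (inferred from the totals; the article says only
# "two Intel Xeon processors and two Intel Xeon Phi coprocessors"):
xeon_cores_per_server = 2 * 8    # two Xeons, 8 cores each (inferred)
phi_cores_per_server = 2 * 60    # two Phis, 60 cores each (inferred)

standard_cores = servers_tested * xeon_cores_per_server
phi_cores = servers_tested * phi_cores_per_server
total_cores = standard_cores + phi_cores

print(standard_cores, phi_cores, total_cores)  # → 9120 68400 77520
```

The Phi coprocessors thus account for almost 90 percent of Conte’s cores, which is what the article means by its “heavy use of parallel processing.”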
By comparison, the four other supercomputers also in operation at Purdue have a combined 46,152 cores.
McCartney says Purdue staff worked closely with two of the university’s “Foundational IT Partners” – Intel and Hewlett-Packard – to develop Conte.
“The work that these companies have done has been exceptional, but what really struck me was the commitment by both companies to help us develop top resources for our science and engineering faculty,” McCartney says.
Conte was built at a cost of $4.6 million, of which $2 million will come from faculty members contributing a portion of their federal research funding through the university’s “community cluster” program.
Purdue community-cluster supercomputer performance measures since 2008 include:
- May 2008: Steele, 26.8 teraflops sustained measured maximum, 67.33 teraflops peak
- July 2009: Coates, 52.2 teraflops sustained measured maximum, 79.44 teraflops peak
- Sept. 2010: Rossmann, 75.4 teraflops sustained measured maximum, 92.94 teraflops peak
- Sept. 2011: Hansen, not tested, 87.88 teraflops peak
- Nov. 2011: Carter, 186.9 teraflops sustained measured maximum, 215.65 teraflops peak
- June 2013: Conte, 943.38 teraflops sustained measured maximum, 1,342.10 teraflops peak
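The sustained and peak figures above also show how much of each cluster’s theoretical speed the benchmark actually achieved. A short sketch computing the sustained-to-peak ratio from the listed numbers:

```python
# Sustained-to-peak ratios for Purdue's community clusters,
# computed from the figures in the list above (teraflops).
clusters = {
    "Steele":   (26.8, 67.33),
    "Coates":   (52.2, 79.44),
    "Rossmann": (75.4, 92.94),
    "Carter":   (186.9, 215.65),
    "Conte":    (943.38, 1342.10),
}

for name, (measured, peak) in clusters.items():
    print(f"{name}: {measured / peak:.0%} of peak")
```

By this measure Conte sustains about 70 percent of its theoretical peak, a drop from Carter’s roughly 87 percent that reflects how much harder it is to keep the many-core Phi coprocessors fully busy.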