Bill Dally, who became NVIDIA’s Chief Scientist and Senior Vice President of Research after leading the computer science department at Stanford University, will deliver the conference keynote address, “Future Challenges of Large-Scale Computing,” at the 2013 International Supercomputing Conference (ISC’13).
In his talk on Monday, June 17, Dally will discuss how high performance computing and data analytics must overcome shared challenges of power, programmability, and scalability to realize their potential, with energy efficiency playing an ever greater role in determining system performance. At the same time, the large-scale parallelism and storage hierarchy of future machines pose programming challenges. He will examine both these challenges and some of the technologies being developed to address them.
The ISC’13 Communications Team recently caught up with him.
Q1. To what do you attribute your interest in computer science?
I’ve been interested in computers as long as I can remember. While in junior high school, I got the opportunity to program a computer and was hooked. When microcomputers first appeared in the mid-1970s, they made computing much more accessible and made it easy to experiment with hardware. I’ve been playing with computer hardware and software ever since.
Q2. If you had not pursued a computer science career, what do you think you would have done? Was there ever a Plan B?
There has never been a Plan B. I do hold a commercial pilot’s license and have at times considered a career in aviation. I have also been tinkering with cars for as long as I can remember, and so could have found something in the automotive space.
Q3. As a professor at MIT and Stanford, a founder of multiple commercial startups and now the Chief Scientist at NVIDIA, you’ve straddled the academia-commercial divide for some time. What do you like about maintaining these dual roles?
Keeping a foot in the academic space gives me a long-term, big-picture perspective on key issues in HPC architecture and software. It’s much easier to develop out-of-the-box new ideas in the academic space. Having the other foot firmly planted in industry keeps me grounded in reality and gives me a practical perspective. It’s also easier to mobilize the resources needed for large projects in industry.
Q4. Government funding for basic research and development (R&D) for science and engineering in the United States seems to have few champions these days. We could be looking at a prolonged period of reduced public support. How will this affect the computer industry?
It will slow the rate of innovation and result in more innovation occurring elsewhere — where research investments are still being made. Industry focuses its R&D dollars on the near term. Government funding is critical to long-term research progress.
Q5. Your specialty, computer chip architecture, is certainly a big focus for the industry now. How will the microprocessor of the future look different from the ones we have today? And how will applications adapt?
Future microprocessors will be heterogeneous multi-core chips, combining a few cores optimized for single-thread performance (CPU cores) with many cores optimized for throughput and energy per operation (pJ/op), that is, GPU cores. These chips will also have deep memory hierarchies. Applications will become more parallel, with more explicit identification of locality.
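The division of labor Dally describes is already visible in today’s GPU programming model. Below is a minimal CUDA sketch, not taken from his talk, of that pattern: a latency-optimized CPU core handles serial setup and orchestration, throughput-optimized GPU cores run the data-parallel work, and explicit host/device allocations and copies make data locality visible to the programmer. The kernel and array names are illustrative.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Data-parallel kernel: each GPU thread scales one array element.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float *h = new float[n];                   // host (CPU) memory
    for (int i = 0; i < n; ++i) h[i] = 1.0f;   // serial setup on the CPU core

    float *d;
    cudaMalloc(&d, n * sizeof(float));         // device (GPU) memory: locality is explicit
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

    // Throughput work runs across many GPU cores, one thread per element.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);

    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("h[0] = %f\n", h[0]);               // expect 2.0

    cudaFree(d);
    delete[] h;
    return 0;
}
```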
Q6. What is the biggest impediment to computer innovation today — hardware challenges, software challenges, dearth of talent, lack of R&D funding, or something else?
The biggest impediment to innovation is legacy software. Many innovations are held back by the need for backward compatibility, or by an excessive focus on yesterday’s software at the expense of tomorrow’s. To address this challenge, as we develop new architectures and software techniques, we also work to provide a path for legacy software to migrate to the new architectures.