Software Tool Taps Power of Graphics Processing
Today’s computers rely on powerful graphics processing units (GPUs) to create the spectacular graphics in video games. In fact, these GPUs are now more powerful than the traditional central processing units (CPUs), the “brains” of the computer. As a result, developers are trying to tap into this power, and a research team has developed software that could make it easier for traditional software programs to take advantage of a GPU’s processing ability, effectively adding computing brainpower for complex tasks.
This is a big deal. The CPU in an average computer has about 10 gigaflops of computing power, or 10 billion operations per second. That sounds like a lot until you consider that the GPU in an average modern computer has about 1 teraflop of computing power, or 1 trillion operations per second — roughly a hundred times more.
However, using a GPU for general computing isn’t easy. The GPU’s architecture is designed to process graphics, not other applications. Because GPUs focus on turning data into millions of pixels on a screen, the architecture is designed to have many operations take place in isolation from one another: the operation telling one pixel what to do is separate from the operations telling other pixels what to do. This hardware design makes graphics processing more efficient, but presents a stumbling block for those who want to use GPUs for other complex computing tasks.
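The per-pixel independence described above can be illustrated with a toy sketch (not from the paper): a brightness adjustment in which each output pixel depends only on its own input pixel, so every pixel could in principle be computed at the same time.

```python
# Illustrative sketch only: a per-pixel brightness adjustment where each
# output value depends solely on the matching input value. This is the
# independence property that makes GPUs so efficient at graphics work.

def brighten(pixels, amount):
    # No pixel's result uses any other pixel's value, so all of these
    # operations could run in parallel on separate GPU threads.
    return [min(255, p + amount) for p in pixels]

image = [10, 120, 250, 0]       # a tiny 1-D "image" of gray values
print(brighten(image, 20))      # [30, 140, 255, 20]
```

A real graphics pipeline runs an operation like this across millions of pixels at once; an algorithm whose steps depend on one another cannot be split up so easily.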
A research team from North Carolina State University has developed a compiler that could make it easier for traditional software programs to take advantage of GPUs.
“We have developed a software tool that takes computer program A and translates it into computer program B — which ultimately does the same thing program A does, but does it more efficiently on a GPU,” says Huiyang Zhou, an associate professor of electrical and computer engineering at NC State and co-author of a paper describing the research.
Program A, the user-provided input, is called a “naïve” version: it doesn’t account for GPU optimization, but focuses on providing a clear series of commands that tell the computer what to do. Zhou’s compiler takes the naïve version and translates it into a program that uses the GPU’s hardware effectively, so the program runs much faster.
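To convey the idea in miniature (this is a hypothetical sketch, not the NC State compiler, whose memory and parallelism transformations are far more sophisticated): a “naïve” program states its intent as one sequential loop, while a GPU-friendly version partitions the same work into independent chunks that could each be handed to a separate group of GPU threads.

```python
# Toy sketch (hypothetical): the same vector addition written two ways.

def add_naive(a, b):
    # "Program A": a clear, sequential statement of what to compute.
    out = []
    for i in range(len(a)):
        out.append(a[i] + b[i])
    return out

def add_chunked(a, b, chunk=4):
    # "Program B": the index space is split into independent chunks;
    # on a GPU, each chunk could map to its own block of threads and
    # run concurrently, since no chunk reads another chunk's results.
    out = [0] * len(a)
    for start in range(0, len(a), chunk):
        for i in range(start, min(start + chunk, len(a))):
            out[i] = a[i] + b[i]
    return out

a = list(range(10))
b = list(range(10, 20))
assert add_naive(a, b) == add_chunked(a, b)
```

Both versions compute the same result, which is the compiler’s contract: program B does what program A does, only in a shape the GPU can execute efficiently.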
Zhou’s research team tested a series of standard programs to determine whether programs translated by their compiler software actually operated more efficiently than code that had been manually optimized for GPU use by leading GPU developers. Their results showed that programs translated by their compiler software ran approximately 30 percent more quickly than those optimized by the GPU developers.
“Tapping into your GPU can turn your personal computer into a supercomputer,” Zhou says.
The paper, “A GPGPU Compiler for Memory Optimization and Parallelism Management,” was co-authored by Zhou, NC State Ph.D. student Yi Yang, and University of Central Florida Ph.D. students Ping Xiang and Jingfei Kong. It will be presented June 7, 2010, at the Programming Language Design and Implementation conference in Toronto.
The research was funded by the National Science Foundation.