Power Hungry Performance

Researcher Qinru Qiu aims to control computing's voracious appetite for energy

By Katherine Karlson '74

While our computers, cell phones and cameras become faster, smaller and better, the sophisticated system-on-chip (SoC) technology that powers them has a dirty little secret: it’s an energy hog.

The multiprocessor SoC is loaded with multiple processing units that make it the workhorse pulling the computing load within a device. But with its increased functionality comes increased energy consumption, and thus heat, which impairs the chip’s ability to perform its necessary tasks.

“Energy for computing has become a critical problem, both in terms of conservation and creating new sources of it,” explains Qinru Qiu, an associate professor of electrical and computer engineering who’s been with the Watson School since 2003. Her interest in this relatively new field of power management stems in part from the integral role energy sustainability plays in technological advances such as game systems and electronic gadgets.

In 2009, the National Science Foundation (NSF) held a workshop on the science of power management as a way to promote research in the field. And that same year, Qiu received one of the NSF’s highly sought-after Faculty Early Career Development (CAREER) Program grants to research adaptive power management. She will use the award — $409,000 over five years — to support her work on software that manages the ebb and flow of multiprocessor energy demands.

“Qinru takes a comprehensive and well-integrated approach to the power management of a multiprocessor on a chip that includes theoretical analysis, simulation and actual implementation,” says Krishna Kant, NSF grant officer.

Because the SoC requires the maximum performance of its multiprocessors only during full workload, Qiu is developing a power management paradigm that will adapt to the multiprocessor’s predicted workload, slowing it down or switching it to low-power mode as needed.

“You can better manage the power if you know what will happen within that system in the future,” Qiu says. So, she examines previous behavior to predict future actions much like her colleagues in the social sciences. “A hard disk takes time and power to wake up after it’s been turned off. But, if you know you will have a request for it to perform work in the next 30 seconds, then you won’t turn it off, but put it into a low-power mode.”
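The disk example above boils down to comparing a predicted idle interval against each power state's break-even point, the idle length at which the energy saved equals the energy spent waking back up. The sketch below illustrates that idea only; the states and numbers are hypothetical, not Qiu's actual algorithm or measurements.

```python
# Illustrative power states: (name, power in watts, wake-up energy in joules).
# These values are made up for the example.
STATES = [
    ("active",    2.0,  0.0),
    ("low_power", 0.5,  0.3),
    ("off",       0.0, 20.0),
]

def choose_state(predicted_idle_s):
    """Pick the state that minimizes energy over the predicted idle period."""
    best_name, best_energy = None, float("inf")
    for name, power, wakeup in STATES:
        # Energy = power drawn while idle + cost of waking back up afterward.
        energy = power * predicted_idle_s + wakeup
        if energy < best_energy:
            best_name, best_energy = name, energy
    return best_name
```

With these numbers, a request predicted 30 seconds out keeps the device in low-power mode rather than off, exactly the trade-off Qiu describes, while a much longer predicted idle period makes powering off worthwhile.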

“My work is to detect and model workload patterns of the multiprocessor and find the most energy-efficient management policy based on the mode,” Qiu says. “It’s a software issue, but I have to consider the hardware as well.”

Within the past year she has successfully developed an algorithm that predicts this future behavior. It’s actually an extension of reinforcement learning in which the machine receives a virtual reward if it makes the right decision to switch to a low-power mode.
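The reward idea can be sketched in a toy form: the manager earns a virtual reward when the mode it picks turns out to match the workload that follows. Everything below — the states, actions, and reward values — is an illustrative simplification, not the formulation in Qiu's research.

```python
import random

ACTIONS = ["active", "low_power"]
WORKLOADS = ["busy", "idle"]

def reward(workload, action):
    # Right decision: stay active when busy, sleep when idle.
    if workload == "busy":
        return 1.0 if action == "active" else -1.0   # missed work is penalized
    return 1.0 if action == "low_power" else -0.5    # wasted power is penalized

def train(episodes=2000, alpha=0.1, seed=0):
    rng = random.Random(seed)
    # Q-values: expected reward for each (workload, action) pair.
    q = {(w, a): 0.0 for w in WORKLOADS for a in ACTIONS}
    for _ in range(episodes):
        w = rng.choice(WORKLOADS)
        a = rng.choice(ACTIONS)                           # explore uniformly
        q[(w, a)] += alpha * (reward(w, a) - q[(w, a)])   # one-step update
    return q

q = train()
# Greedy policy: for each workload, take the action with the highest Q-value.
policy = {w: max(ACTIONS, key=lambda a: q[(w, a)]) for w in WORKLOADS}
```

After training, the learned policy runs the processor when busy and sleeps it when idle — the "right decision" earning the virtual reward the article mentions.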

Her research team consists of five graduate students, two of whom graduated in May 2010.

Yang Ge, a third-year graduate student, works on the algorithm that allows the processor to run as fast and perform as many tasks as possible within safe temperature parameters. He appreciates that the project exposes him to both hardware and software issues.

“My software lets the processor slow down or take a rest at the most appropriate times, when that rest will not hurt the performance very much. It also lets the processor speed up to the maximum when it is necessary,” Ge says.
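A governor of the kind Ge describes picks the highest clock speed the thermal budget allows and backs off as the chip nears its limit. This sketch uses invented frequency levels, temperatures, and a crude headroom rule purely to show the shape of such a policy.

```python
FREQS_GHZ = [0.8, 1.6, 2.4, 3.2]   # hypothetical DVFS frequency levels
T_LIMIT_C = 85.0                   # illustrative safe-temperature ceiling
T_MARGIN_C = 5.0                   # degrees of headroom per frequency step

def pick_frequency(temp_c):
    """Highest frequency allowed by the current temperature headroom."""
    if temp_c >= T_LIMIT_C:
        return FREQS_GHZ[0]                       # emergency throttle
    headroom = (T_LIMIT_C - temp_c) / T_MARGIN_C  # how far from the limit
    index = min(int(headroom), len(FREQS_GHZ) - 1)
    return FREQS_GHZ[index]
```

A cool chip runs at the top speed; one within a few degrees of the ceiling drops to the lowest level — slowing down "at the most appropriate times" without a hard shutdown.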

Hao Shen is a first-year graduate student who focuses on software simulations. But he sees a broader horizon for power management in computing.

“It’s still in the early stage, and I can devote my career to it,” Shen says.

Qiu also notes that the same management framework that adjusts the SoC’s energy demand can allocate work among its processors more efficiently.

Her research will also improve the decision-making ability of the power management process by distributing tasks to individual processors in the system rather than relying on centralized control. If each processor can make its own decisions using only local information, without communicating with other remote processors, it reduces the complexity of the decision-making as well as the power required for communication.
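The decentralized idea can be shown schematically: each core decides its own mode from local information alone — here, an invented per-core queue length — with no messages to a central controller. The rule and the data are illustrative assumptions.

```python
def local_policy(queue_len):
    # Decision uses only this core's own queue; no global state is consulted.
    return "active" if queue_len > 0 else "low_power"

core_queues = [3, 0, 1, 0]                       # hypothetical pending work per core
modes = [local_policy(q) for q in core_queues]   # each core decides independently
```

Because no core waits on, or transmits to, the others, both the coordination logic and the communication power it would cost disappear.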

“Power demands have risen exponentially in the last couple of decades,” says Qiu. “Computers in the United States use about 100 gigawatts of power. Reducing even 1 percent of it is equivalent to saving a power plant.”

Watson Review