Energy Efficient Computing - by Prudence W.H. Wong


The energy and power requirements of computing devices have been increasing exponentially. While the power density in microprocessors has doubled every three years, battery capacities have been growing only at a linear rate, creating serious problems for battery-operated mobile devices. Furthermore, a large part of the energy consumed by processors is converted into heat, so even systems that do not rely on batteries must manage power consumption. The heat generated by modern processors is becoming harder to dissipate and is particularly problematic when large numbers of them operate in close proximity, as in a supercomputer or a server farm. The exponential rise in heat density, together with exponentially rising cooling costs, threatens the industry's ability to deploy new systems.

Energy efficient scheduling

Energy reduction and performance are two conflicting goals: in general, the more energy is available, the better the performance that can be achieved. To reduce energy consumption without significantly sacrificing performance, "energy awareness" becomes a crucial concept: a system should deliver only the required service so as to avoid superfluous energy consumption, i.e., it should use the right amount of energy at the right time and in the right place. Based on this concept, there are two aspects to consider in the design: the technique that enables the awareness and the power management strategy that exploits it.

A straightforward power management technique is to shut down the processor when it is idle. A more advanced and now dominant technique is dynamic voltage/speed scaling, which is more effective than simply shutting the processor down. Modern processor technologies enable processors to operate at a range of speeds. These technologies exploit the property that the power consumed by a processor is usually a convex, increasing function of its speed: by convexity, the more slowly a task is run, the less energy is used to complete it. To take full advantage of this technology, a power management strategy is required to determine the voltage/speed at which to run, so as to ensure that energy is used efficiently.
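The convexity argument can be illustrated with a toy calculation, assuming the common cubic power model p(s) = s³ (the function name and numbers here are illustrative, not from any particular processor):

```python
# Power drawn at speed s, under the assumed cubic model p(s) = s^3.
p = lambda s: s ** 3

# Complete 6 units of work in 2 time units, two ways:
# one time unit at speed 4 plus one at speed 2, versus
# two time units at the constant average speed 3.
alternating = p(4) * 1 + p(2) * 1   # 64 + 8 = 72 energy units
constant = p(3) * 2                 # 27 * 2 = 54 energy units

# Convexity of p implies the constant-speed schedule uses less energy.
assert constant < alternating
```

This is the key consequence of convexity for scheduling: among all ways of finishing a fixed amount of work within a fixed window, running at a constant speed minimizes energy.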

Power consumption is usually estimated by the well-known cube-root rule: the speed s at which a processor operates is roughly proportional to the cube root of the power p; equivalently, p(s) = s³. For example, if we double the speed of the processor, we halve the time spent on a task, but the power consumption rate increases eightfold, so the total energy needed increases fourfold. At first glance, one may think the problem can be solved by operating the processor as slowly as possible. Unfortunately, the problem is not so simple, since there are usually other, orthogonal objectives that aim at providing some quality of service, such as deadline feasibility, makespan (finish time), response time, or throughput. It is therefore crucial that the right speed is used. The scheduling algorithm has to determine the speed at which the processor should run at every time unit; this is known as dynamic voltage scaling (DVS) or dynamic speed scaling.
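The doubling arithmetic above can be checked in a few lines of Python (a minimal sketch; the function name `energy` and the default cubic power function are illustrative assumptions):

```python
def energy(work, speed, power=lambda s: s ** 3):
    """Energy to complete `work` units at constant `speed`:
    running time is work / speed, drawn at power(speed)."""
    return power(speed) * (work / speed)

# Doubling the speed halves the time but multiplies power by 8,
# so total energy goes up by a factor of 4.
assert energy(10, 2.0) == 4 * energy(10, 1.0)
```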

Classical job scheduling assumes processors run at the same fixed speed throughout, and the decision to be made is which job to run at each time unit. With DVS, the scheduling algorithm must also determine the speed at which the processor runs. We consider both the off-line and the on-line versions of the problem, with different optimization objectives such as minimizing total energy consumption while completing all jobs by their deadlines, or minimizing total energy plus response time.
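For the off-line objective of minimizing total energy while meeting all deadlines (under a convex power function), the classical optimal algorithm is YDS, due to Yao, Demers, and Shenker: repeatedly find the "critical interval" of highest work density, run its jobs at that density, and contract the interval out of the timeline. The following is a minimal, unoptimized sketch (the function name `yds` and the job representation are illustrative; jobs are (release, deadline, work) triples with release < deadline and positive work):

```python
def yds(jobs):
    """Return {job index: speed} assigned by the YDS critical-interval
    algorithm. jobs: list of (release, deadline, work) triples."""
    jobs = list(enumerate(jobs))  # keep original indices through removals
    speeds = {}
    while jobs:
        # Candidate interval endpoints are release times and deadlines.
        times = sorted({t for _, (r, d, _) in jobs for t in (r, d)})
        best, best_set, lo, hi = 0.0, [], 0, 0
        for i, t1 in enumerate(times):
            for t2 in times[i + 1:]:
                # Jobs whose whole [release, deadline] window fits in [t1, t2].
                inside = [(idx, j) for idx, j in jobs
                          if j[0] >= t1 and j[1] <= t2]
                density = sum(j[2] for _, j in inside) / (t2 - t1)
                if density > best:
                    best, best_set, lo, hi = density, inside, t1, t2
        # Jobs of the critical interval run at its density (by EDF within it).
        for idx, _ in best_set:
            speeds[idx] = best
        done = {idx for idx, _ in best_set}

        def shrink(t, lo=lo, hi=hi):
            # Contract [lo, hi] out of the timeline for the remaining jobs.
            if t <= lo:
                return t
            if t >= hi:
                return t - (hi - lo)
            return lo

        jobs = [(idx, (shrink(r), shrink(d), w))
                for idx, (r, d, w) in jobs if idx not in done]
    return speeds
```

For example, with jobs [(0, 2, 4), (0, 4, 2)], the critical interval is [0, 2] with density 2, so the first job runs at speed 2 and, after contraction, the second runs at speed 1. This naive implementation enumerates all interval pairs each round; more careful implementations run in low-polynomial time.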

Energy Efficient GPU Computing

Power management is important not only to conserve energy but also to reduce temperature. Temperature matters because a processor's lifetime can be severely shortened if it exceeds its thermal threshold. It is therefore wise to use multiple processors to provide high processing power while keeping the temperature of each individual processor low. In this respect, a low-cost option is to use the GPU (graphics processing unit) for general-purpose computation. The main question is how to ensure energy efficient computation on GPUs.


Additional information

Publications (journals and conferences)
For a full list of publications, see DBLP.

Maintained by Prudence Wong