
Event-driven power management

We introduce two new models for system-level power management that can represent system transitions with general distributions. Both are event driven, and both identify the policy that optimally minimizes energy consumption under a performance constraint.

One approach is based on the Time-Indexed Semi-Markov Decision Process (TISMDP) model. This model allows transitions into low-power states from any state, but it is also more complex than our other approach.


Another approach is based on renewal theory. Renewal theory studies stochastic systems that have a state, called the renewal state, in which the process statistically begins anew. The time between successive visits to the renewal state is called the renewal time, and one cycle from the renewal state, through the other states, and back is called a renewal. In policy optimization for dynamic power management, the complete cycle of transitions from the idle state, through the other power states, and back into the idle state can be viewed as one renewal of the system. A drawback of this optimization method is that the decision to transition into a low-power state can be made in only one state: idle.
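The renewal view is what makes the optimization tractable: by the renewal-reward theorem, the long-run average power equals the expected energy per renewal cycle divided by the expected cycle length. The sketch below estimates that ratio by Monte Carlo for a toy timeout policy; the power values, arrival and service rates, wakeup overhead, and timeout are all illustrative assumptions, not parameters of our measured device models.

    import random

    # Hypothetical per-state power levels (W) and transition overhead;
    # the real models use distributions fitted to measured traces.
    P_ACTIVE, P_IDLE, P_SLEEP = 2.1, 0.9, 0.1
    T_WAKEUP = 0.3  # seconds spent waking up, billed at active power here

    def simulate_renewal_cycle(timeout):
        """One renewal: idle -> (maybe sleep) -> serve a request -> idle.
        Returns (energy used, cycle length) for the cycle."""
        idle_time = random.expovariate(0.5)     # time until the next request
        service_time = random.expovariate(2.0)  # time to serve that request
        if idle_time > timeout:                 # timeout rule, for illustration
            energy = (P_IDLE * timeout
                      + P_SLEEP * (idle_time - timeout)
                      + P_ACTIVE * (T_WAKEUP + service_time))
            return energy, idle_time + T_WAKEUP + service_time
        energy = P_IDLE * idle_time + P_ACTIVE * service_time
        return energy, idle_time + service_time

    # Renewal-reward theorem: long-run average power
    #   = E[energy per cycle] / E[cycle length].
    cycles = [simulate_renewal_cycle(timeout=1.5) for _ in range(100_000)]
    avg_power = sum(e for e, _ in cycles) / sum(t for _, t in cycles)
    print(f"estimated long-run average power: {avg_power:.3f} W")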


Policy decisions are made only upon request arrival or upon finishing serving a request, instead of at every time increment as in discrete-time models. Since policy decisions are made in an event-driven manner, more power is saved by avoiding the periodic policy re-evaluations that discrete-time models force.
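A minimal sketch of this event-driven evaluation is given below; the event names and the randomized decision rule are hypothetical placeholders for a policy produced by the optimizer.

    import random

    def power_manager(events, decide):
        """Evaluate the policy only at events (request arrival or service
        completion), never on a periodic clock tick."""
        state = "active"
        for event in events:
            if event == "request_arrival":
                state = "active"      # an arriving request must be served
            elif event == "service_done":
                state = decide()      # decision point: stay idle or power down
            print(event, "->", state)

    # Hypothetical randomized decision rule, standing in for a policy
    # table produced by the optimizer: sleep with probability 0.3.
    def decide():
        return "sleep" if random.random() < 0.3 else "idle"

    power_manager(["request_arrival", "service_done",
                   "request_arrival", "service_done"], decide)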

In both cases, the policy optimization problem can be solved exactly and in polynomial time by solving a linear program. Since both approaches guarantee optimal solutions, they give the same solution to a given optimization problem. The main advantage of the renewal model is that it guarantees globally optimal results with a very fast optimization time.
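As an illustration of the linear-programming formulation, the toy sketch below optimizes action frequencies for a single decision state with scipy; the cost numbers and the performance bound are invented for the example, and the real formulations include balance constraints over all states and time indices. It also shows a characteristic feature of these optimal policies: the solution may be randomized rather than deterministic.

    import numpy as np
    from scipy.optimize import linprog

    # Minimal sketch: one decision state (idle) with two actions.
    # x[0] = frequency of "stay awake", x[1] = frequency of "go to sleep".
    # All numbers are illustrative, not taken from the measured device models.
    energy = np.array([0.9, 0.1])  # expected energy cost of each action
    delay  = np.array([0.0, 0.5])  # expected wakeup-latency penalty

    res = linprog(
        c=energy,                          # minimize expected energy
        A_ub=delay.reshape(1, -1),         # subject to the performance
        b_ub=[0.3],                        #   constraint: delay penalty <= 0.3
        A_eq=np.ones((1, 2)), b_eq=[1.0],  # action frequencies sum to 1
        bounds=[(0, None)] * 2,
    )
    print(res.x)  # -> [0.4, 0.6]: the optimum is a randomized policy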


RESULTS:

We measured large power savings on three different devices: a laptop hard disk, a desktop hard disk, and a WLAN card. We also present simulation results showing power savings when our policy is implemented in the SmartBadge portable system.

Hard disks:

We measured and simulated three different policies based on stochastic models and compared them with two bounds: the always-on and oracle policies. The always-on policy leaves the hard disk in the active state, and thus saves no power. The oracle policy gives the lowest possible power consumption, as it transitions the disk into the sleep state with perfect knowledge of the future; it is computed off-line using a previously collected trace. Obviously, the oracle policy is an abstraction that cannot be used for run-time DPM.
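A minimal sketch of how such an oracle bound can be computed from a trace is shown below, assuming the classic break-even rule: sleep exactly during those idle periods long enough to amortize the transition energy. The power and transition numbers are illustrative, not the measured disk parameters.

    # Off-line oracle bound: with every idle-period length known in
    # advance, sleep only when the period exceeds the break-even time.
    P_IDLE, P_SLEEP = 0.9, 0.1   # W, illustrative values
    E_TRANSITION = 1.5           # J, sleep + wakeup overhead, illustrative

    # Break-even time: idle period beyond which sleeping saves energy.
    T_BE = E_TRANSITION / (P_IDLE - P_SLEEP)

    def oracle_energy(idle_periods):
        """Lowest achievable idle energy given perfect future knowledge."""
        total = 0.0
        for t in idle_periods:
            if t > T_BE:
                total += E_TRANSITION + P_SLEEP * t   # worth sleeping
            else:
                total += P_IDLE * t                   # stay idle
        return total

    trace = [0.4, 5.2, 1.1, 12.7, 0.2]  # idle-period lengths (s) from a trace
    print(f"oracle lower bound: {oracle_energy(trace):.2f} J")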

  • measurements within 11% of the ideal oracle policy
  • a factor of 2.4 lower power than the always-on policy
  • a factor of 1.7 lower power than the default time-out policy

WLAN card:

  • up to 5 times lower power consumption relative to the default policy

SmartBadge:

  • power savings of up to 70%

Finally, the comparison of the policies obtained for the SmartBadge with the renewal model and the TISMDP model clearly illustrates that whenever more than one decision point is available, the TISMDP model should be used, as it can exploit the extra degrees of freedom and thus obtain an optimal power management policy.


HARDWARE IMPLEMENTATION:

Realizing part or all of the controller in hardware lowers the control overhead, with very minor additions to an already existing hardware power manager (e.g., ARM cores) or an on-chip FPGA. The optimal controller has three components: the random number generator, the policy, and the timer. The same policy evaluates even faster when synthesized into gates using Synopsys tools, as shown in the table below; even the largest design takes only 15 registers and 855 gates. A software sketch of the three controller components follows the table.

Local PM Policy Synopsys Synthesis Results:

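Below is a software sketch of how the three components interact; in hardware, the random draw would typically come from an LFSR, the policy from a small lookup table, and the timer from a counter. The table contents and probabilities are illustrative, not a synthesized policy.

    import random

    # Illustrative time-indexed policy: idle-timer index -> probability of
    # issuing the sleep command. Not a synthesized policy; values are made up.
    POLICY_TABLE = {0: 0.0, 1: 0.3, 2: 0.7, 3: 1.0}

    def controller_step(timer_index):
        """One policy evaluation: the timer selects a table entry and a
        random draw is compared against it (the time-indexed part of
        TISMDP). In hardware, an LFSR supplies the draw."""
        threshold = POLICY_TABLE[min(timer_index, max(POLICY_TABLE))]
        return "sleep" if random.random() < threshold else "idle"

    # The timer advances through the indices while the device remains idle;
    # the first "sleep" result would trigger the actual state transition.
    for tick in range(5):
        print(tick, controller_step(tick))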
