Abstract-Traditional dynamic power management (DPM) stochastic policies cannot accurately identify the moment of decision or the target state transition. To address this, an improved dynamic power management algorithm, SMBPP (System Message Based Predict Policy), is presented in this paper. First, the existing DPM policy algorithms, which neglect the application characteristics of the workload, are reviewed. Second, based on system message information, an equipment utilization rate distribution for each task is established and updated according to the actual idle interval times. Third, a predict policy based on task equipment utilization is proposed, and the influence of the policy parameters on system sensitivity is analyzed. Finally, the algorithm is applied to a wind power bearing condition monitoring device. Experimental results show that, under the given performance constraint, the algorithm identifies the moment of decision and the state transition, is more stable, and reduces power consumption more effectively.

Keywords-DPM; equipment utilization rate; predict policy
I. INTRODUCTION

With the development of semiconductor technology and the improvement of embedded device performance, power consumption under system performance constraints has become a focus of embedded system research [1]. At this stage, low-power system design policies generally include dynamic power management (DPM) and dynamic voltage scaling (DVS), with DVS usually treated as a kind of DPM.

DPM policies can generally be divided into three categories, the first two of which are sketched in code at the end of this section: (a) Timeout policy: the basic idea is to put the device into a low-power state after it has been idle for a timeout threshold; this category includes the fixed timeout threshold method and adaptive timeout threshold policies [2]. (b) Predict policy: its essence is to forecast the length of the idle time before making a decision; if the predicted value is large enough, the PMC (power manageable component) is switched directly to the corresponding sleep mode [3]. (c) Stochastic Markov policy: by establishing a Markov model to describe the stochastic behavior of device operation requests and service, the DPM decision problem is treated as a controlled Markov chain, which allows a more accurate choice of decision time and device state transitions [4].

The timeout policy is simple in principle and widely applied, but it is an exploratory policy. With the predict policy, once the threshold prediction is inaccurate, it may be counterproductive and increase power consumption. The study of stochastic Markov policies has been very active in recent years. They can be divided into the following categories according to how the DPM decision model is established: DPM models based on a modified Markov stochastic process; DPM models based on a continuous-time Markov decision process; and DPM models based on a discrete-time Markov decision process. A DPM model based on a time-indexed semi-Markov decision process adds parameters of historical events, but in essence it is a kind of random timeout policy. Wu Qi theoretically proved that the optimal DPM policy is deterministic.
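To make categories (a) and (b) concrete, the following C sketch contrasts a fixed-timeout policy with an exponential-average predict policy. It is a minimal illustration of the taxonomy above, not the paper's SMBPP algorithm; the threshold values, the break-even time, the smoothing factor ALPHA, and the sample idle-time trace are all assumed for illustration.

/* Illustrative sketch of two classic DPM policies (not SMBPP).
 * All constants below are assumed values, not taken from the paper. */
#include <stdio.h>

#define T_TIMEOUT   50.0   /* (a) fixed timeout threshold, ms (assumed)      */
#define T_BREAKEVEN 30.0   /* break-even time of the sleep state, ms (assumed)*/
#define ALPHA       0.5    /* smoothing factor for exponential averaging      */

/* (a) Timeout policy: sleep once the device has already been idle
 * longer than the fixed threshold. */
int timeout_should_sleep(double idle_so_far_ms) {
    return idle_so_far_ms >= T_TIMEOUT;
}

/* (b) Predict policy: keep an exponential average of past idle intervals;
 * sleep immediately at the start of an idle period if the predicted idle
 * length exceeds the break-even time of the low-power state. */
static double predicted_idle_ms = 0.0;

void predict_update(double last_idle_ms) {
    predicted_idle_ms = ALPHA * last_idle_ms + (1.0 - ALPHA) * predicted_idle_ms;
}

int predict_should_sleep(void) {
    return predicted_idle_ms > T_BREAKEVEN;
}

int main(void) {
    /* Replay a hypothetical trace of observed idle intervals (ms). */
    double trace[] = {12.0, 80.0, 95.0, 8.0, 110.0};
    for (int i = 0; i < 5; i++) {
        printf("idle=%6.1f ms  predicted=%6.1f ms  sleep_now=%d\n",
               trace[i], predicted_idle_ms, predict_should_sleep());
        predict_update(trace[i]);
    }
    return 0;
}

The sketch also shows why a mispredicted threshold is costly, as noted above: whenever predict_should_sleep() returns 1 but the actual idle interval turns out shorter than T_BREAKEVEN, the state-transition overhead exceeds the energy saved, increasing total power consumption.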