This brief proposes a pricing-based energy control strategy to remove the peak load in a smart grid. In response to the price, energy consumers adjust their energy consumption to trade off the electricity cost against the load curtailment cost. Because the price depends on the total load, the consumers interact with one another. We formulate these interactions as a noncooperative game and give a sufficient condition for the game to admit a unique equilibrium. We then develop a distributed energy control algorithm and provide a sufficient condition for its convergence. The algorithm runs at the beginning of each time slot (e.g., every 15 min). Finally, the energy control strategy is applied to control the energy consumption of consumers with heating, ventilation, and air conditioning (HVAC) systems. The numerical results show that the strategy is effective in removing the peak load and matching supply with demand, and that the algorithm converges to the equilibrium.
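The kind of price-based game described above can be sketched in a few lines. The concrete model below is an illustrative assumption, not the brief's actual formulation: a linear price `a * total_load`, quadratic curtailment costs with coefficients `c[i]`, demands `d[i]`, and a Jacobi-style best-response iteration standing in for the distributed algorithm.

```python
def best_response(i, x, a, c, d):
    """Consumer i's optimal consumption given the others' loads,
    minimizing  a*(x_i + X_others)*x_i + c_i*(d_i - x_i)**2
    (linear price a*total_load, quadratic curtailment cost).
    Setting the derivative to zero gives the closed form below."""
    x_others = sum(x) - x[i]
    xi = (2 * c[i] * d[i] - a * x_others) / (2 * (a + c[i]))
    return min(max(xi, 0.0), d[i])  # clip to the feasible range [0, d_i]

def distributed_energy_control(a, c, d, iters=100):
    """Jacobi best-response iteration; it contracts to the unique
    equilibrium when the price coefficient a is small relative to
    the curtailment coefficients c_i."""
    x = list(d)  # start from full demand
    for _ in range(iters):
        x = [best_response(i, x, a, c, d) for i in range(len(d))]
    return x

# Hypothetical example: three consumers in one time slot.
a, c, d = 0.1, [1.0, 1.5, 2.0], [10.0, 8.0, 12.0]
x_eq = distributed_energy_control(a, c, d)
```

At the returned point, each consumer's load equals its own best response, i.e., the iteration has reached a Nash equilibrium of the sketched game.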
A personal comfort model is an approach to thermal comfort modeling, for thermal environmental design and control, that predicts an individual's thermal comfort response instead of the average response of a large population. We developed personal thermal comfort models using lab-grade wearables during normal daily activities. We collected physiological signals (e.g., skin temperature, heart rate) from 14 subjects (6 female and 8 male adults) and environmental parameters (e.g., air temperature, relative humidity) for 2-4 weeks (at least 20 hours per day). We then trained a model for each of the 14 subjects with different machine-learning algorithms to predict their thermal preference. The results show that the median prediction power reaches up to 24%/78%/0.79 (Cohen's kappa/accuracy/AUC) with all features considered, and 21%/71%/0.70 after 200 subjective votes. We also explored the importance of different features for prediction performance by pooling all subjects into one dataset. When all features are included for the entire dataset, personal comfort models achieve the highest performance of 35%/76%/0.80 with the most predictive algorithm. Personal comfort models display the highest prediction power when occupants' thermal sensations lie outside thermal neutrality. Skin temperature measured at the ankle is more predictive than skin temperature measured at the wrist. We suggest that Cohen's kappa or AUC be employed to assess the performance of personal thermal comfort models on imbalanced datasets, because these metrics exclude random success.
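The metric argument in the last sentence can be made concrete: on an imbalanced dataset, a predictor that always outputs the majority class scores high accuracy but zero Cohen's kappa. A minimal pure-Python sketch, with made-up thermal preference votes for illustration:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement beyond what chance alone would give,
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n  # observed agreement
    t_cnt, p_cnt = Counter(y_true), Counter(y_pred)
    # expected chance agreement from the marginal label frequencies
    p_e = sum(t_cnt[c] / n * p_cnt[c] / n for c in set(y_true) | set(y_pred))
    return (p_o - p_e) / (1 - p_e) if p_e < 1.0 else 1.0

# Imbalanced votes: 90% "no change". A majority-class predictor looks
# good by accuracy but has no real skill.
y_true = ["no change"] * 90 + ["warmer"] * 10
y_pred = ["no change"] * 100
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# accuracy = 0.9, but cohens_kappa(y_true, y_pred) = 0.0
```

Here the expected chance agreement is already 0.9, so the observed 0.9 accuracy yields a kappa of exactly 0, which is the "random success" the abstract warns about.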
Given a dataset D containing millions of data points and a data consumer who is willing to pay $X to train a machine learning (ML) model over D, how should we distribute this $X to each data point to reflect its "value"? In this paper, we define the "relative value of data" via the Shapley value, as it uniquely possesses properties with appealing real-world interpretations, such as fairness, rationality, and decentralizability. For general, bounded utility functions, the Shapley value is known to be challenging to compute: obtaining Shapley values for all N data points requires O(2^N) model evaluations for exact computation and O(N log N) for (ε, δ)-approximation. In this paper, we focus on one popular family of ML models relying on K-nearest neighbors (KNN). The most surprising result is that for unweighted KNN classifiers and regressors, the Shapley values of all N data points can be computed, exactly, in O(N log N) time: an exponential improvement in computational complexity! Moreover, for (ε, δ)-approximation, we develop an algorithm based on locality-sensitive hashing (LSH) with only sublinear complexity O(N^{h(ε,K)} log N) when ε is not too small and K is not too large. We empirically evaluate our algorithms on up to 10 million data points; even our exact algorithm is up to three orders of magnitude faster than the baseline approximation algorithm, and the LSH-based approximation algorithm accelerates the value calculation even further. We then extend our algorithms to other scenarios, such as (1) weighted KNN classifiers, (2) settings where data points are held by different data curators, and (3) settings where data analysts providing computation also require proper valuation. Some of these extensions, although also improved exponentially, are less practical for exact computation (e.g., O(N^K) complexity for weighted KNN).
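The exact O(N log N) result for unweighted KNN comes from a closed-form recursion over points sorted by distance to the test point. The sketch below follows the published form of that recursion; the variable names, the choice of utility (fraction of the K nearest neighbors with the correct label), and the O(2^N) brute-force cross-check are our illustrative additions.

```python
import math
from itertools import combinations

def knn_utility(subset, dists, labels, y_test, K):
    """Utility of a coalition: fraction of its (up to K) nearest
    points whose label matches the test label."""
    if not subset:
        return 0.0
    nearest = sorted(subset, key=lambda i: dists[i])[:K]
    return sum(labels[i] == y_test for i in nearest) / K

def knn_shapley_exact(dists, labels, y_test, K):
    """Exact Shapley values for unweighted KNN in O(N log N):
    sort once by distance to the test point, then fill in values
    from the farthest point inward by a rank-based recursion."""
    N = len(dists)
    order = sorted(range(N), key=lambda i: dists[i])  # nearest first
    s = [0.0] * N
    s[order[-1]] = (labels[order[-1]] == y_test) / N  # farthest point
    for rank in range(N - 1, 0, -1):  # 1-based ranks N-1 .. 1
        a, b = order[rank - 1], order[rank]
        s[a] = s[b] + ((labels[a] == y_test) - (labels[b] == y_test)) \
            / K * min(K, rank) / rank
    return s

def shapley_brute_force(dists, labels, y_test, K):
    """Definition-based O(2^N) Shapley values, for cross-checking."""
    N = len(dists)
    phi = [0.0] * N
    for i in range(N):
        rest = [j for j in range(N) if j != i]
        for r in range(N):
            for S in combinations(rest, r):
                w = math.factorial(r) * math.factorial(N - r - 1) / math.factorial(N)
                phi[i] += w * (knn_utility(list(S) + [i], dists, labels, y_test, K)
                               - knn_utility(list(S), dists, labels, y_test, K))
    return phi
```

On a toy instance the recursion agrees with the brute-force definition to machine precision, and the values sum to the utility of the full dataset (the efficiency axiom).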
We thus propose a Monte Carlo approximation algorithm, which is O(N (log N)^2 / (log K)^2) times more efficient than the baseline approximation algorithm.
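For settings where no closed form applies, the textbook permutation-sampling estimator gives the flavor of Monte Carlo Shapley approximation; this generic sketch is not the specific variance-tuned algorithm proposed above, and the KNN utility used is an illustrative assumption.

```python
import random

def knn_utility(subset, dists, labels, y_test, K):
    """Illustrative utility: fraction of the coalition's (up to K)
    nearest points whose label matches the test label."""
    if not subset:
        return 0.0
    nearest = sorted(subset, key=lambda i: dists[i])[:K]
    return sum(labels[i] == y_test for i in nearest) / K

def monte_carlo_shapley(dists, labels, y_test, K, n_perms=200, seed=0):
    """Permutation-sampling Shapley estimate: average each point's
    marginal utility over random arrival orders."""
    rng = random.Random(seed)
    N = len(dists)
    phi = [0.0] * N
    for _ in range(n_perms):
        perm = list(range(N))
        rng.shuffle(perm)
        prefix, v_prev = [], 0.0
        for i in perm:
            prefix.append(i)
            v_cur = knn_utility(prefix, dists, labels, y_test, K)
            phi[i] += v_cur - v_prev  # marginal contribution of i
            v_prev = v_cur
    return [p / n_perms for p in phi]
```

Because the marginal contributions along each permutation telescope to the utility of the full dataset, the estimates satisfy the efficiency axiom exactly, even with few samples; only the per-point attributions carry sampling noise.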