Mobile edge computing is emerging as a new computing paradigm that provides an enhanced experience to mobile users via low-latency connections and augmented computation capacity. Because the volume of user requests is time-varying while the computation capacity of edge hosts is limited, the Cloud Assisted Mobile Edge (CAME) computing framework is introduced to improve the scalability of the edge platform. By outsourcing mobile requests to clouds offering various types of instances, the CAME framework can accommodate dynamic mobile requests with diverse quality-of-service requirements. To provide guaranteed services at minimal system cost, the edge resource provisioning and cloud outsourcing of the CAME framework should be carefully designed in a cost-efficient manner. Specifically, two fundamental questions must be answered: (1) What is the optimal edge computation capacity configuration? (2) Which types of cloud instances should be rented, and in what quantity of each type? To address these questions, we formulate resource provisioning in the CAME framework as an optimization problem. By exploiting the piecewise convex property of this problem, we develop Optimal Resource Provisioning (ORP) algorithms for different instance types, which optimize the computation capacity of edge hosts while dynamically adjusting the cloud tenancy strategy. The proposed algorithms are proved to have polynomial computational complexity. To evaluate the performance of the ORP algorithms, extensive simulations and experiments are conducted based on widely used traffic models and on Google cluster usage trace logs, respectively. The results show that the proposed ORP algorithms outperform the local-first and cloud-first benchmark algorithms in system flexibility and cost efficiency.
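The core trade-off described above, provisioning edge capacity up front versus outsourcing excess demand to cloud instances, can be illustrated with a minimal sketch. This is not the paper's ORP algorithm; the cost model (a single linear edge cost and a single on-demand cloud price) and all names are assumptions, used only to show how convexity of the cost in the edge capacity lets a simple search find the minimum.

```python
# Illustrative sketch (an assumption, not the paper's ORP algorithm):
# pick an edge capacity minimizing edge provisioning cost plus the cost
# of outsourcing excess demand to on-demand cloud instances.

def total_cost(edge_capacity, demand_trace, edge_unit_cost, cloud_unit_cost):
    """Cost of provisioning `edge_capacity` units at the edge and
    outsourcing any excess demand in the trace to the cloud."""
    edge_cost = edge_unit_cost * edge_capacity
    outsourced = sum(max(d - edge_capacity, 0) for d in demand_trace)
    return edge_cost + cloud_unit_cost * outsourced

def optimal_edge_capacity(demand_trace, edge_unit_cost, cloud_unit_cost):
    """Ternary search over the cost curve: the cost is convex in the
    capacity c because max(d - c, 0) is convex in c."""
    lo, hi = 0, max(demand_trace)
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        c1 = total_cost(m1, demand_trace, edge_unit_cost, cloud_unit_cost)
        c2 = total_cost(m2, demand_trace, edge_unit_cost, cloud_unit_cost)
        if c1 <= c2:
            hi = m2
        else:
            lo = m1
    return min(range(lo, hi + 1),
               key=lambda c: total_cost(c, demand_trace,
                                        edge_unit_cost, cloud_unit_cost))
```

When cloud instances are expensive relative to edge capacity, the search provisions enough edge capacity for the peak; when the edge is the expensive side, it outsources everything.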
Aspect-level sentiment classification is an interesting but challenging research problem: predicting the sentiment polarity toward a specific aspect term in an opinionated sentence. Attention-based recurrent neural networks have previously been proposed for this problem and have shown great promise, because the attention mechanism can identify the words that contribute more to the prediction than others. However, the major drawback of these attention-based approaches is that explicit position context is ignored. Drawing inspiration from how position context is modeled in information retrieval and question answering, we hypothesize that context words near the aspect deserve much more attention than those far away, especially when a review sentence is long or contains multiple aspect terms. Based on this conjecture, in this paper we put forward a new attentive LSTM model, dubbed PosATT-LSTM, which not only accounts for the importance of each context word but also incorporates position-aware vectors that represent the explicit position context between the aspect and its context words. We conduct substantial experiments on the SemEval 2014 datasets, and the encouraging results indicate the efficacy of our proposed approach.
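The intuition that words closer to the aspect term should receive larger attention weights can be sketched as follows. This is a minimal illustration, not the PosATT-LSTM model itself: the linear decay formula and the way position weights modulate attention scores are assumptions made for the example.

```python
# Minimal sketch (an assumption, not the paper's exact formulation):
# weight context positions by their distance to the aspect span, then
# use the weights to modulate raw attention scores before the softmax.
import math

def position_weights(seq_len, aspect_start, aspect_end):
    """Linearly decaying weight 1 - dist/seq_len, where dist is the
    token offset to the nearest token of the aspect span."""
    weights = []
    for i in range(seq_len):
        if aspect_start <= i <= aspect_end:
            dist = 0
        elif i < aspect_start:
            dist = aspect_start - i
        else:
            dist = i - aspect_end
        weights.append(1.0 - dist / seq_len)
    return weights

def position_aware_attention(scores, weights):
    """Scale each raw attention score by its position weight, then
    normalize with a softmax so the weights sum to one."""
    scaled = [s * w for s, w in zip(scores, weights)]
    exps = [math.exp(s) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]
```

With uniform raw scores, the softmax then concentrates attention on the tokens nearest the aspect, which is exactly the behavior the conjecture above calls for in long sentences with multiple aspect terms.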