Trajectories of moving objects are collected in many applications. Raw trajectory data is typically very large and has to be simplified before use. In this paper, we introduce the notion of direction-preserving trajectory simplification, and show both analytically and empirically that it can support a broader range of applications than traditional position-preserving trajectory simplification. We present a polynomial-time algorithm for optimal direction-preserving simplification, and another approximate algorithm with a quality guarantee. Extensive experimental evaluation with real trajectory data shows the benefit of the new techniques.
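As a hedged illustration of the direction-preserving idea, a minimal greedy sketch might keep a trajectory vertex only when the movement heading turns by more than a tolerance. This is not the paper's optimal or approximate algorithm; the function name and the tolerance parameter are assumptions for illustration.

```python
import math

def simplify_by_direction(points, tol_deg=15.0):
    """Greedy sketch: keep a vertex when the movement direction turns by
    more than tol_deg degrees relative to the last kept direction.
    Illustrative only; not the paper's optimal algorithm."""
    if len(points) <= 2:
        return list(points)
    kept = [points[0]]
    last_heading = None
    for prev, curr in zip(points, points[1:]):
        heading = math.degrees(math.atan2(curr[1] - prev[1], curr[0] - prev[0]))
        if last_heading is None:
            last_heading = heading
        else:
            diff = abs(heading - last_heading) % 360
            diff = min(diff, 360 - diff)  # handle wrap-around at +/-180 degrees
            if diff > tol_deg:
                kept.append(prev)        # direction changed: keep the turn point
                last_heading = heading
    kept.append(points[-1])
    return kept
```

On a straight segment this keeps only the endpoints; at a sharp turn it keeps the turning point, which is the kind of information a position-preserving simplifier may discard.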
Abstract—Peer review has been the most common practice for judging papers submitted to conferences for decades. An extremely important task in peer review is to assign submitted papers to reviewers with the appropriate expertise, which is referred to as paper-reviewer assignment. In this paper, we study the paper-reviewer assignment problem from both the goodness aspect and the fairness aspect. For the goodness aspect, we propose to maximize the topic coverage of the paper-reviewer assignment. This objective is new, and the problem based on it is shown to be NP-hard. To solve this problem efficiently, we design an approximate algorithm which gives a 1/3-approximation. For the fairness aspect, we perform a detailed study of conflict-of-interest (COI) types and discuss several issues related to using COI, which, we hope, can raise open discussions among researchers on the COI study. Finally, we conducted experiments on real datasets, which verified the effectiveness of our algorithm and also revealed some interesting results on COI.
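The topic-coverage objective can be made concrete with a much simpler greedy heuristic. This is a sketch under assumed inputs (topic sets per paper and per reviewer); it does not reproduce the paper's actual algorithm or its 1/3-approximation guarantee.

```python
def greedy_assignment(paper_topics, reviewer_topics, reviews_per_paper=2):
    """Sketch: for each paper, greedily pick the reviewer whose expertise
    adds the most not-yet-covered paper topics. Illustrative only; not
    the 1/3-approximation algorithm from the paper."""
    assignment = {}
    for paper, topics in paper_topics.items():
        covered, chosen = set(), []
        for _ in range(reviews_per_paper):
            best = max(
                (r for r in reviewer_topics if r not in chosen),
                key=lambda r: len((reviewer_topics[r] & topics) - covered),
            )
            chosen.append(best)
            covered |= reviewer_topics[best] & topics
        assignment[paper] = chosen
    return assignment
```

For example, a paper on {db, ml} is covered by combining a db expert with an ml expert rather than by two reviewers sharing one topic.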
• A novel extended encoder-decoder long short-term memory neural network (ED-LSTME) for ionospheric TEC forecasting over China is developed
• ED-LSTME shows a strong capability in improving TEC forecasting across different geographical locations, seasons, and geomagnetic conditions
• ED-LSTME is robust and outperforms all six selected baselines in terms of forecasting performance
Accepted Article: This article has been accepted for publication and undergone full peer review but has not been through the copyediting, typesetting, pagination, and proofreading process, which may lead to differences between this version and the Version of Record.
Abstract—Viral marketing has attracted considerable attention in recent years due to its novel idea of leveraging the social network to propagate awareness of products. Specifically, viral marketing first targets a limited number of users (seeds) in the social network by providing incentives, and these targeted users then initiate the process of awareness spread by propagating the information to their friends via their social relationships. Extensive studies have been conducted on maximizing the awareness spread given the number of seeds. However, all of them fail to consider the common scenario of viral marketing where companies hope to use as few seeds as possible while still influencing at least a certain number of users. In this paper, we propose a new problem, called -MIN-Seed, whose objective is to minimize the number of seeds while ensuring that at least the required number of users are influenced. -MIN-Seed, unfortunately, is proved to be NP-hard in this work. We therefore develop a greedy algorithm that provides error guarantees for -MIN-Seed. Furthermore, for the problem setting where the threshold equals the number of all users in the social network, denoted by Full-Coverage, we design other efficient algorithms. Extensive experiments were conducted on real datasets to verify our algorithms.
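The greedy idea of seed minimization can be sketched as follows. The `influence_of` oracle, all names, and the toy deterministic diffusion model below are assumptions for illustration, not the paper's method or its error-guarantee analysis.

```python
def min_seed_greedy(influence_of, candidates, threshold):
    """Greedy sketch: repeatedly add the candidate whose inclusion yields
    the largest influenced set, until at least `threshold` users are
    influenced. `influence_of(seeds)` returns the set of influenced users."""
    seeds = []
    while len(influence_of(seeds)) < threshold:
        best = max(
            (c for c in candidates if c not in seeds),
            key=lambda c: len(influence_of(seeds + [c])),
        )
        seeds.append(best)
    return seeds

# Toy deterministic diffusion: a seed influences itself and its neighbors.
graph = {1: {2, 3}, 2: {1}, 3: {1}, 4: {5}, 5: {4}}
influence = lambda seeds: set(seeds).union(*(graph[s] for s in seeds)) if seeds else set()
```

With a threshold covering all five users, the sketch picks one seed per connected component rather than the maximum number of seeds a budgeted formulation would allow.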
In this paper, we study the optimal location query problem based on road networks. Specifically, we have a road network on which some clients and servers are located. Each client finds the server that is closest to her for service, and her cost of getting served is equal to the (network) distance between the client and the server serving her multiplied by her weight or importance. The optimal location query problem is to find a location for setting up a new server such that the maximum cost among clients being served by the servers (including the new server) is minimized. This problem has been studied before, but the state-of-the-art is still not efficient enough. In this paper, we propose an efficient algorithm for the optimal location query problem, which is based on a novel idea of nearest location component. We also discuss three extensions of the optimal location query problem, namely the optimal multiple-location query problem, the optimal location query problem on 3D road networks, and the optimal location query problem with another objective. Extensive experiments were conducted which showed that our algorithms are faster than the state-of-the-art by at least an order of magnitude on large real benchmark datasets. For example, on our largest real datasets, the state-of-the-art ran for more than 10 hours, but our algorithm finished within 3 minutes (i.e., >200 times faster).
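The cost objective described above can be sketched directly: each client pays her weight times the network distance to her nearest server, and the objective value is the maximum such cost. This is a toy evaluation of the objective with assumed data structures, not the paper's nearest-location-component algorithm.

```python
import heapq

def dijkstra(graph, source):
    """Shortest network distances from source; graph maps node -> {nbr: w}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def max_client_cost(graph, clients, servers):
    """Objective from the abstract: the maximum, over clients, of
    weight * network distance to the nearest server."""
    dists = {s: dijkstra(graph, s) for s in servers}
    return max(
        w * min(dists[s].get(c, float("inf")) for s in servers)
        for c, w in clients.items()
    )
```

A query would then search for the new-server location that minimizes this value; on a path network a-b-c-d with a heavy client at d, adding a server at d drops the maximum cost accordingly.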
Abstract—With the proliferation of spatial-textual data such as location-based services and geo-tagged websites, spatial keyword queries are ubiquitous in real life. One example of a spatial keyword query is the so-called collective spatial keyword query (CoSKQ): given a query consisting of a query location and several query keywords, it finds a set of objects that collectively covers the query keywords and has the smallest cost with respect to the query location. In the literature, many different functions were proposed for defining the cost, and correspondingly, many different approaches were developed for the CoSKQ problem. In this paper, we study the CoSKQ problem systematically by proposing a unified cost function and a unified approach for the CoSKQ problem (with the unified cost function). The unified cost function includes all existing cost functions as special cases, and the unified approach solves the CoSKQ problem with the unified cost function in a unified way. Experiments were conducted on both real and synthetic datasets which verified our proposed approach.
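To give a flavor of the cost functions involved, here is one classic CoSKQ-style cost, sketched under the assumption of Euclidean distances: the largest query-object distance plus the largest pairwise distance within the chosen set. This is only one candidate cost, not necessarily the unified function the paper proposes.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def maxsum_cost(query_loc, objects):
    """Sketch of one classic CoSKQ cost: the farthest object's distance
    from the query plus the diameter of the chosen object set."""
    query_dist = max(dist(query_loc, o) for o in objects)
    pairwise = max(dist(a, b) for a in objects for b in objects)
    return query_dist + pairwise
```

A unified cost function would parameterize how such components (query proximity, set compactness) are combined, so that each existing function falls out as a special case.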
Online bipartite graph matching is attracting growing research attention due to the development of dynamic task assignment in sharing economy applications, where tasks need to be assigned dynamically to workers. Past studies lack practicability in terms of both problem formulation and solution framework. On the one hand, some problem settings in prior online bipartite graph matching research are impractical for real-world applications. On the other hand, existing solutions to online bipartite graph matching are inefficient due to unnecessary real-time decision making. In this paper, we propose the dynamic bipartite graph matching (DBGM) problem to be better aligned with real-world applications and devise a novel adaptive batch-based solution framework with a constant competitive ratio. As an effective and efficient implementation of the solution framework, we design a reinforcement learning based algorithm, called Restricted Q-learning (RQL), which makes near-optimal decisions on batch splitting. Extensive experimental results on both real and synthetic datasets show that our methods outperform the state-of-the-art methods in terms of both effectiveness and efficiency.
• We propose the dynamic bipartite graph matching (DBGM) problem, which is a more practical formulation of dynamic task assignment in emerging intelligent transportation and spatial crowdsourcing applications.
• We devise a novel adaptive batch-based framework to solve the DBGM problem and prove that its performance is guaranteed by a constant competitive ratio 1/(C−1) under the adversarial model, where C is the maximum duration of a worker/task.
• We propose an effective and efficient RL-based algorithm, Restricted Q-learning (RQL), to retrieve a near-optimal batch-based strategy.
• We validate the effectiveness and efficiency of our methods on synthetic and real datasets. Experimental results show that our methods outperform the state-of-the-art methods in terms of overall revenue and running time.
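The batch-based idea above can be sketched as follows. A fixed batch size stands in for the learned (RQL) batch-splitting strategy, and a greedy heaviest-edge-first matcher stands in for an optimal per-batch matching; all names are illustrative assumptions.

```python
def batch_match(events, weight, batch_size=2):
    """Sketch of batch-based bipartite matching: buffer arriving workers
    and tasks, and once per batch run a greedy maximum-weight matching
    (heaviest available worker-task edge first)."""
    workers, tasks, matches = [], [], []

    def flush():
        edges = sorted(
            ((weight(w, t), w, t) for w in workers for t in tasks),
            reverse=True,
        )
        used_w, used_t = set(), set()
        for _, w, t in edges:
            if w not in used_w and t not in used_t:
                matches.append((w, t))
                used_w.add(w)
                used_t.add(t)
        workers[:] = [w for w in workers if w not in used_w]
        tasks[:] = [t for t in tasks if t not in used_t]

    for i, (kind, ident) in enumerate(events, start=1):
        (workers if kind == "worker" else tasks).append(ident)
        if i % batch_size == 0:
            flush()
    flush()  # match whatever remains in the last partial batch
    return matches
```

Deciding where to cut the event stream into batches is exactly the splitting decision that the paper delegates to reinforcement learning.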
In the rest of this paper, we review related work in Sec. II and formally define the DBGM problem in Sec. III. We introduce the adaptive batch-based framework and analyze its competitive ratio in Sec. IV, and propose an RL-based solution to find the batch splitting strategy in Sec. V. We evaluate our solutions in Sec. VI and finally conclude in Sec. VII.