Raw GPS trajectory data are often very large and consume excessive storage space. The efficiency and accuracy of activity-pattern analysis or individual-environment interaction modeling using such data may be compromised by data size and computational demands. Line generalization algorithms can be used to simplify GPS trajectories, but traditional algorithms focus on the geometric characteristics of linear features. Trajectory data may record information beyond location, such as time and elevation, as well as inferred information such as speed, transportation mode, and activities. Effective trajectory simplification should preserve these characteristics in addition to the location and orientation of spatial-temporal movement. This paper proposes an Enhanced Douglas-Peucker (EDP) algorithm that applies a set of Enhanced Spatial-Temporal Constraints (ESTC) when simplifying trajectory data. These constraints ensure that the essential properties of a trajectory are preserved by retaining critical points. Further, this study argues that a speed profile can uniquely identify a trajectory and can therefore be used to evaluate the effectiveness of a trajectory simplification. The proposed ESTC-EDP simplification method is applied to two example GPS trajectories. The simplification results are reported and compared with those from the traditional DP algorithm, and the effectiveness of the simplification is evaluated.
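The baseline Douglas-Peucker step that EDP extends can be sketched as follows. This is a minimal, purely geometric Python version with illustrative function names; the abstract's spatial-temporal constraints (time, speed, transportation mode) are not implemented here.

```python
import math

def perp_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(points, tol):
    # Recursively keep the point farthest from the anchor-floater segment
    # whenever it exceeds the tolerance; otherwise drop the interior points.
    if len(points) < 3:
        return list(points)
    d_max, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > d_max:
            d_max, idx = d, i
    if d_max <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right  # avoid duplicating the split point
```

An ESTC-style extension would additionally mark points as critical (e.g., speed or heading changes, mode transitions) and exempt them from removal.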
As one of the key operators of automated map generalization, line-simplification algorithms have been widely researched over the past decades. Although many of the currently available algorithms achieve satisfactory performance on certain data types and selected test areas, it remains challenging to solve the problems of (a) how to properly divide a cartographic line when it is too long to be processed directly, and (b) how to parameterize adaptively for various geo-data in different areas. To address these two problems, this paper proposes a new line-simplification approach based on the Oblique-Dividing-Curve (ODC) method. In the proposed model, a cartographic line is divided into a series of monotonic curves by the ODC method. The curves are then categorized into groups according to their shapes, sizes, and other geometric characteristics. Curves in different groups trigger different simplification strategies and associated criteria. Whenever a curve is simplified, the whole simplified cartographic line is re-divided and the simplification process restarts; that is, the approach operates iteratively until the final result is achieved. Experimental evidence demonstrates that the proposed approach can handle the holistic bend trend of the whole cartographic line during simplification, and thereby provides considerably improved performance in maintaining essential shape/salient characteristics and keeping topological consistency. Moreover, the simplification results are not sensitive to the parameterization of the approach.
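The abstract does not specify the ODC division procedure itself. As a rough illustration only (an assumption, not the paper's method), one simplified way to divide a polyline into curves of consistent bend direction is to cut at vertices where the turn direction flips:

```python
def turn_sign(a, b, c):
    # Sign of the cross product (b-a) x (c-b): +1 left turn, -1 right, 0 straight.
    cross = (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])
    return (cross > 0) - (cross < 0)

def split_monotonic(points):
    # Split a polyline into curves of consistent turn direction,
    # cutting at vertices where the turn sign flips. Consecutive
    # curves share their boundary vertex.
    if len(points) < 3:
        return [list(points)]
    curves, start, prev = [], 0, 0
    for i in range(1, len(points) - 1):
        s = turn_sign(points[i - 1], points[i], points[i + 1])
        if s != 0 and prev != 0 and s != prev:
            curves.append(points[start:i + 1])
            start = i
        if s != 0:
            prev = s
    curves.append(points[start:])
    return curves
```

Each resulting curve could then be classified by size and shape and simplified with its own criteria, as the abstract describes.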
The extraction of building skeleton lines is a key step in building spatial analysis and is widely used for building matching and updating. Several skeleton-line extraction methods for vector data have been established, including the improved constrained Delaunay triangulation (CDT), as well as raster-data skeleton-line extraction methods based on image-processing technologies. However, no existing study has attempted to combine these methods to extract building skeleton lines. This study aimed to develop a building skeleton-line extraction method based on vector–raster data integration. The research objects were buildings extracted from remote sensing images. First, vector–raster data mapping relationships were identified. Second, the buildings were triangulated using CDT. The extraction results of the Rosenfeld thinning algorithm for raster data were then used to remove redundant triangles. Finally, the Shi–Tomasi corner detection algorithm was used to detect corners, and the building skeleton lines were extracted by adjusting the connection method of the type-three triangles in the CDT. The experimental results demonstrate that the proposed method can effectively extract the skeleton lines of complex vector buildings. Moreover, the extracted skeleton lines contained few burrs and were robust to noise.
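As a rough illustration of the Shi–Tomasi score behind the corner-detection step, the following NumPy sketch computes the minimum eigenvalue of the local structure tensor. The window size and gradient scheme are illustrative assumptions; a production pipeline would typically use an off-the-shelf implementation such as OpenCV's `goodFeaturesToTrack`.

```python
import numpy as np

def shi_tomasi_response(img, win=3):
    # Shi–Tomasi corner score: the minimum eigenvalue of the structure
    # tensor, summed over a win x win neighborhood. High values occur
    # where gradients vary in two directions (corners), near-zero along
    # straight edges.
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # Simple box filter via shifted sums over the padded array.
        out = np.zeros_like(a)
        r = win // 2
        padded = np.pad(a, r)
        for dy in range(win):
            for dx in range(win):
                out += padded[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    trace = Sxx + Syy
    root = np.sqrt((Sxx - Syy) ** 2 + 4 * Sxy ** 2)
    return (trace - root) / 2.0  # smaller eigenvalue of the 2x2 tensor
```

Thresholding this response on a rasterized building mask yields candidate corner pixels that can then guide the CDT-based skeleton connection.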
As a perception-enabling technology of the Internet of Things, RFID can quickly identify target objects. The tag-to-tag collision problem seriously degrades the identification performance of an RFID system, preventing the reader from accurately identifying tags within a given time. Mainstream anticollision algorithms are limited by a performance bottleneck under the standard framework. In this paper, we analyze the features and merits of three kinds of algorithms in detail and propose a new algorithm architecture for RFID anticollision. Through extensive experimental comparison, we show that the new architecture effectively improves the performance of DFSA algorithms. Finally, we summarize future research trends in RFID anticollision algorithms.
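As background for the DFSA family discussed above, the following Python sketch simulates Dynamic Frame Slotted ALOHA with Schoute's backlog estimate (roughly 2.39 tags per collided slot). The initial frame size and estimator choice are illustrative assumptions; this is not the paper's proposed architecture.

```python
import random

def dfsa_identify(num_tags, frame_size=16, seed=0):
    # Dynamic Frame Slotted ALOHA: each frame, every unidentified tag
    # picks a random slot. A slot with exactly one tag (singleton)
    # identifies that tag; collided tags retry in the next frame, whose
    # size is set from Schoute's backlog estimate (~2.39 tags per
    # collided slot). Returns the number of frames used.
    rng = random.Random(seed)
    remaining, frames = num_tags, 0
    while remaining > 0:
        frames += 1
        slots = [0] * frame_size
        for _ in range(remaining):
            slots[rng.randrange(frame_size)] += 1
        identified = sum(1 for s in slots if s == 1)
        collided = sum(1 for s in slots if s > 1)
        remaining -= identified
        # Resize the next frame to the estimated backlog (at least 1 slot).
        frame_size = max(1, round(2.39 * collided))
    return frames
```

Because frame size tracks the estimated backlog, throughput stays near the slotted-ALOHA optimum of one tag per e slots; the bottleneck the abstract mentions is this theoretical ceiling of the standard framework.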