With modern LiDAR technology the amount of topographic data, in the form of massive point clouds, has increased dramatically. One of the most fundamental GIS tasks is to construct a grid digital elevation model (DEM) from these 3D point clouds. In this paper we present a simple yet very fast algorithm for constructing a grid DEM from massive point clouds using natural neighbor interpolation (NNI). We use a graphics processing unit (GPU) to significantly speed up the computation. To handle the large data sets and to deal with graphics hardware limitations, clever blocking schemes are used to partition the point cloud. For example, using standard desktop computers and graphics hardware, we construct a high-resolution grid with 150 million cells from two billion points in less than thirty-seven minutes. This is about one-tenth of the time required for the same computer to perform a standard linear interpolation, which produces a much less smooth surface.
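The abstract mentions blocking schemes for partitioning the point cloud so that blocks fit within graphics hardware limits. As a rough illustration only (the paper's actual scheme is not given here), a minimal sketch of tiling points into square blocks with an overlap "halo", so each block can later be interpolated independently; all names and parameters are assumptions:

```python
from collections import defaultdict

def partition_points(points, block_size, halo):
    """Assign each (x, y, z) point to every block whose halo-expanded
    footprint contains it, so blocks can be processed independently.

    points:     iterable of (x, y, z) tuples
    block_size: side length of a square block
    halo:       overlap margin so interpolation near a block edge
                still sees points from neighboring blocks
    """
    blocks = defaultdict(list)
    for x, y, z in points:
        # Range of block indices whose halo-expanded area contains (x, y).
        bx_lo = int((x - halo) // block_size)
        bx_hi = int((x + halo) // block_size)
        by_lo = int((y - halo) // block_size)
        by_hi = int((y + halo) // block_size)
        for bx in range(bx_lo, bx_hi + 1):
            for by in range(by_lo, by_hi + 1):
                blocks[(bx, by)].append((x, y, z))
    return blocks
```

Points near a block boundary are duplicated into the neighboring block, which trades a little memory for the ability to interpolate each block without cross-block communication.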
We consider the problem of automatically cleaning massive sonar data point clouds, that is, the problem of automatically removing noisy points that appear, for example, as a result of scans of (shoals of) fish, multiple reflections, scanner self-reflections, refraction in gas bubbles, and so on. We describe a new algorithm that avoids the problems of previous local-neighbourhood-based algorithms. Our algorithm is theoretically I/O-efficient, that is, it is capable of efficiently processing massive sonar point clouds that do not fit in internal memory but must reside on disk. The algorithm is also relatively simple and thus practically efficient, partly due to the development of a new simple algorithm for computing the connected components of a graph embedded in the plane. A version of our cleaning algorithm has already been incorporated in a commercial product.
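The abstract's new contribution is an I/O-efficient connected-components algorithm for planar-embedded graphs; that algorithm is not reproduced here. For reference, the standard in-memory version of the task, labeling connected components with union-find, can be sketched as follows (e.g., to group candidate noise points into clusters):

```python
def connected_components(n, edges):
    """Label the connected components of a graph with vertices 0..n-1,
    using union-find with path compression and union by size."""
    parent = list(range(n))
    size = [1] * n

    def find(v):
        root = v
        while parent[root] != root:
            root = parent[root]
        while parent[v] != root:       # path compression
            parent[v], v = root, parent[v]
        return root

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            if size[ru] < size[rv]:    # union by size
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]

    # Two vertices share a label iff they are in the same component.
    return [find(v) for v in range(n)]
```

This runs in near-linear time in memory but has poor locality on disk-resident data, which is exactly the gap the paper's I/O-efficient algorithm addresses.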
A fundamental problem in analyzing trajectory data is to identify common patterns between pairs or among groups of trajectories. In this paper, we consider the problem of matching similar portions between a pair of trajectories, each observed as a sequence of points sampled from it. We present new measures of trajectory similarity, both local and global, between a pair of trajectories to distinguish between similar and dissimilar portions. We then use this model to perform segmentation of a set of trajectories into fragments, contiguous portions of trajectories shared by many of them. Our model for similarity is robust under noise and sampling rate variations. The model also yields a score which can be used to rank multiple pairs of trajectories according to similarity, e.g., in clustering applications. We present quadratic-time algorithms to compute the similarity between trajectory pairs under our measures, together with algorithms to identify fragments in a large set of trajectories efficiently using the similarity model. Finally, we present an extensive experimental study evaluating the effectiveness of our approach on real datasets, comparing it with earlier approaches. Our experiments show that our model for similarity is highly accurate in distinguishing similar and dissimilar portions as compared to earlier methods, even with sparse sampling. Further, our segmentation algorithm is able to identify a small set of fragments capturing the common parts of trajectories in the dataset.
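The similarity measures themselves are the paper's contribution and are not given in the abstract. As a generic stand-in that illustrates the same quadratic-time dynamic-programming structure, a dynamic-time-warping sketch between two sampled trajectories (this is a standard measure, not the paper's):

```python
import math

def dtw_distance(traj_a, traj_b):
    """Quadratic-time dynamic time warping between two point sequences.
    A generic alignment-based similarity, shown only to illustrate the
    O(n*m) DP structure typical of pairwise trajectory comparison."""
    n, m = len(traj_a), len(traj_b)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(traj_a[i - 1], traj_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # advance along A only
                                  dp[i][j - 1],      # advance along B only
                                  dp[i - 1][j - 1])  # match both points
    return dp[n][m]
```

Identical trajectories score 0, and the score grows with pointwise divergence, so it can be used to rank pairs by similarity in the spirit the abstract describes.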
Abstract. In the resilient memory model any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted cells. An upper bound, δ, on the number of corruptions and O(1) reliable memory cells are provided. In this model, a data structure is said to be resilient if it gives the correct output on the set of uncorrupted elements. We propose two optimal resilient static dictionaries, a randomized one and a deterministic one. The randomized dictionary supports searches in O(log n + δ) expected time using O(log δ) random bits in the worst case, under the assumption that corruptions are not performed by an adaptive adversary. The deterministic static dictionary supports searches in O(log n + δ) time in the worst case. We also introduce a deterministic dynamic resilient dictionary supporting searches in O(log n + δ) time in the worst case, which is optimal, and updates in O(log n + δ) amortized time. Our dynamic dictionary supports range queries in O(log n + δ + k) time in the worst case, where k is the size of the output.