The recent availability of detailed geographic data permits terrain applications to process large areas at high resolution. However, the massive data volumes involved present significant challenges, demanding algorithms optimized for both data movement and computation. One such application is viewshed computation, that is, determining all the points visible from a given point p. In this paper, we present an efficient algorithm to compute viewsheds on terrain stored in external memory. In the usual case where the observer's radius of interest is smaller than the terrain size, the algorithm's complexity is Θ(scan(n²)), where n² is the number of points in an n × n DEM and scan(n²) is the minimum number of I/O operations required to read n² contiguous items from external memory. This is much faster than existing published algorithms.
We present a better algorithm and implementation for external-memory viewshed computation. It is about four times faster than the most recent and most efficient published methods, and it is also much simpler. Since processing large datasets can take hours, this improvement is significant. To reduce the total number of I/O operations, our method subdivides the terrain into blocks, which are stored in a special data structure managed as a cache memory. The viewshed is the region of the terrain visible to a fixed observer, who may be on or above the terrain. Its applications range from visual nuisance abatement to radio transmitter siting and surveillance.
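The visibility test underlying a viewshed can be sketched as follows. This is a minimal in-memory line-of-sight version for illustration only: it marches along the straight line from the observer to each target cell and checks that no intermediate cell rises above the sight line. The papers' actual contributions (external-memory layout, block cache, sweep order) are omitted, and `observer_height` is an assumed parameter.

```python
import numpy as np

def viewshed(elev, oi, oj, observer_height=0.0):
    """Boolean visibility grid for an observer at cell (oi, oj).

    In-memory sketch only: the external-memory block cache described
    in the abstracts above is not modeled here.
    """
    n, m = elev.shape
    oz = elev[oi, oj] + observer_height
    visible = np.zeros((n, m), dtype=bool)
    for ti in range(n):
        for tj in range(m):
            dist = max(abs(ti - oi), abs(tj - oj))
            if dist == 0:
                visible[ti, tj] = True
                continue
            tz = elev[ti, tj]
            blocked = False
            for s in range(1, dist):
                t = s / dist
                i = round(oi + t * (ti - oi))
                j = round(oj + t * (tj - oj))
                # Height of the sight line at this intermediate cell.
                line_z = oz + t * (tz - oz)
                if elev[i, j] > line_z:
                    blocked = True
                    break
            visible[ti, tj] = not blocked
    return visible
```

On flat terrain every cell is visible; raising one cell shadows the cells behind it along that ray.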
We describe a surface compression technique to lossily compress elevation datasets. Our approach first approximates the uncompressed terrain using an over-determined system of linear equations based on the Laplacian partial differential equation. The approximation is then refined against the uncompressed terrain using an error metric. These two steps alternate until the approximation meets the error target. We then further compress the result to achieve a better overall compression ratio. We present experiments and measurements using several metrics, and our method gives convincing results.
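The over-determined Laplacian system can be sketched as below: one smoothness equation per cell (the cell's value should equal the average of its neighbors) plus one value equation per known point, solved by least squares. This is an illustrative toy on a dense matrix; the `smoothness` weight is an assumed knob, and real ODETLAP implementations use sparse solvers and the refinement loop described above.

```python
import numpy as np

def odetlap(n, known, smoothness=1.0):
    """Reconstruct an n x n grid from sparse known points.

    `known` maps (i, j) -> elevation.  Each cell contributes a
    Laplacian row (c * z[i,j] - sum of its c neighbors ~ 0), and
    each known point contributes a data row (z[i,j] ~ value).
    """
    def idx(i, j):
        return i * n + j
    rows, rhs = [], []
    # Smoothness rows over the whole grid (boundary cells use the
    # neighbors they actually have).
    for i in range(n):
        for j in range(n):
            nbrs = [(i + di, j + dj)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < n and 0 <= j + dj < n]
            r = np.zeros(n * n)
            r[idx(i, j)] = len(nbrs) * smoothness
            for a, b in nbrs:
                r[idx(a, b)] = -smoothness
            rows.append(r)
            rhs.append(0.0)
    # Data rows for the known elevations.
    for (i, j), v in known.items():
        r = np.zeros(n * n)
        r[idx(i, j)] = 1.0
        rows.append(r)
        rhs.append(v)
    z, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return z.reshape(n, n)
```

With a handful of equal-valued known points, the least-squares solution fills the whole grid with that constant, since the constant field satisfies every equation exactly.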
We present an algorithm (and implementation) that sites multiple observers (perhaps hundreds) on a DEM terrain that is too large to store in internal memory. Tests show it uses a median of fifteen percent fewer observers to obtain the same joint visibility index (coverage) on huge terrains, compared to a naive partitioning of the terrain into subregions. This permits more efficient positioning of facilities such as mobile phone towers, fire observation towers, and surveillance systems.
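A common baseline for multi-observer siting is greedy maximum coverage, sketched below. This is an assumed in-memory stand-in, not the paper's external-memory method: `viewsheds` maps each candidate site to its precomputed set of visible cells, and we repeatedly pick the candidate that adds the most new coverage.

```python
def site_observers(viewsheds, target_coverage):
    """Greedy multi-observer siting sketch.

    viewsheds: dict mapping candidate id -> set of visible cells.
    Picks candidates until `target_coverage` cells are jointly
    visible or no candidate adds new coverage.
    """
    covered, chosen = set(), []
    while len(covered) < target_coverage:
        # Candidate whose viewshed adds the most uncovered cells.
        best = max(viewsheds, key=lambda c: len(viewsheds[c] - covered))
        gain = viewsheds[best] - covered
        if not gain:
            break  # nothing left to add
        chosen.append(best)
        covered |= gain
    return chosen, covered
```

The greedy rule gives the classic (1 - 1/e) approximation guarantee for coverage, which is why it is a natural reference point for the improvement the abstract reports.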
We introduce a parallel approximation of an Over-determined Laplacian Partial Differential Equation solver (ODETLAP) applied to the compression and restoration of terrain data used for Geographical Information Systems (GIS). ODETLAP can be used to reconstruct a compressed elevation map, or to generate a dense regular grid from airborne Light Detection and Ranging (LIDAR) point cloud data. With previous methods, the time to execute ODETLAP does not scale well with the size of the input elevation map, resulting in running times that are prohibitively long for large data sets. Our algorithm divides the data set into patches, runs ODETLAP on each patch, and then merges the patches together. This method gives two distinct speed improvements. First, we provide scalability by reducing the complexity such that the execution time grows almost linearly with the size of the input, even when run on a single processor. Second, we are able to calculate ODETLAP on the patches concurrently in a parallel or distributed environment. Our new patch-based implementation takes 2 seconds to run ODETLAP on an 800 × 800 elevation map using 128 processors, while the original version of ODETLAP takes nearly 10 minutes on a single processor (271 times longer). We demonstrate the effectiveness of the new algorithm by running it on data sets as large as 16000 × 16000 on a cluster of computers. We also discuss our preliminary results from running on an IBM Blue Gene/L system with 32,768 processors.
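The divide/solve/merge scheme can be sketched as follows. Here `solve_patch` is a placeholder for a per-patch ODETLAP solve; the patch size, overlap, and averaged merge are assumptions for illustration, and because each patch is processed independently the loop body is what would be farmed out to processors.

```python
import numpy as np

def patch_apply(grid, solve_patch, patch=64, overlap=8):
    """Split-solve-merge sketch of patch-based processing.

    Overlapping patches are solved independently (hence
    parallelizable) and merged by averaging in the overlap regions.
    """
    n, m = grid.shape
    out = np.zeros((n, m))
    weight = np.zeros((n, m))
    step = patch - overlap
    for i0 in range(0, n, step):
        for j0 in range(0, m, step):
            i1, j1 = min(i0 + patch, n), min(j0 + patch, m)
            # Independent per-patch solve; this is the parallel unit.
            out[i0:i1, j0:j1] += solve_patch(grid[i0:i1, j0:j1])
            weight[i0:i1, j0:j1] += 1.0
    return out / weight
```

With an identity `solve_patch`, the merge reproduces the input exactly, which is a convenient sanity check on the bookkeeping.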
We examine a smugglers-and-border-guards scenario. We place observers on a terrain so as to maximize their joint visible coverage, and then compute the path a smuggler would take to avoid detection while also minimizing path length. We also examine how our results are affected by using a lossy representation of the terrain instead. We propose three new application-specific error metrics for evaluating terrain compression. Our target terrain applications are the optimal placement of observers on a landscape and a smuggler's navigation through the terrain. Instead of using standard metrics such as average or maximum elevation error, we optimize our compression for the specific real-world application of smugglers and border guards.
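One way to trade path length against exposure is a shortest-path search with a detection penalty, sketched below. This is an assumed cost model (unit step length plus a `detect_cost` penalty on cells inside the observers' joint viewshed), not the paper's exact formulation.

```python
import heapq

def stealth_path(visible, start, goal, detect_cost=25.0):
    """Dijkstra sketch for a detection-avoiding path.

    visible: 2D list of bools, True where the observers can see.
    Each step costs 1.0, plus detect_cost if it enters a visible
    cell, so the cheapest path trades detour length for exposure.
    """
    n, m = len(visible), len(visible[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == goal:
            break
        if d > dist.get((i, j), float('inf')):
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m:
                nd = d + 1.0 + (detect_cost if visible[a][b] else 0.0)
                if nd < dist.get((a, b), float('inf')):
                    dist[(a, b)] = nd
                    prev[(a, b)] = (i, j)
                    heapq.heappush(pq, (nd, (a, b)))
    # Reconstruct the path from goal back to start.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```

When a direct route crosses watched cells, the returned path detours around them as long as the detour costs less than the detection penalty.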