Road-matching processes establish links between multi-sourced road lines that represent the same real-world entities. Several road-matching methods have been developed over the last three decades. The main issue in this process is selecting the most appropriate method; the selection depends on the data and requires a pre-process (i.e., accuracy assessment). This paper presents a new matching method for roads composed of different patterns. The proposed method matches road lines incrementally (i.e., from the most similar match to the least similar). In the experimental testing, three road networks in Istanbul, Turkey, composed of tree, cellular, and hybrid patterns and provided by the municipality (authority), OpenStreetMap (volunteered), TomTom (private), and Basarsoft (private), were used. The similarity scores were determined using Hausdorff distance, orientation, sinuosity, mean perpendicular distance, mean length of triangle edges, and modified degree of connectivity. While the first four stages determined certain matches with regard to the scores, the last stage determined them with a criterion for overlapping areas among the buffers of the candidates. The results were evaluated against manual matching. According to precision, recall, and F-value, the proposed method gives satisfactory results on different types of road patterns.
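Two of the similarity measures listed above, Hausdorff distance and sinuosity, can be sketched for polylines given as vertex lists (a minimal illustration, not the paper's implementation; the function names and the vertex-based approximation of the Hausdorff distance are assumptions):

```python
import math

def hausdorff_distance(line_a, line_b):
    """Hausdorff distance between two polylines, approximated
    on their vertices: the largest nearest-neighbour distance
    in either direction."""
    def directed(p_set, q_set):
        return max(min(math.dist(p, q) for q in q_set) for p in p_set)
    return max(directed(line_a, line_b), directed(line_b, line_a))

def sinuosity(line):
    """Ratio of the polyline length to the straight-line
    distance between its endpoints (>= 1 for a valid line)."""
    length = sum(math.dist(a, b) for a, b in zip(line, line[1:]))
    straight = math.dist(line[0], line[-1])
    return length / straight if straight > 0 else float("inf")
```

In the method described above, several such scores are combined per candidate pair across the matching stages; here each measure is shown in isolation.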
In this paper, a new approach to the generalization of contours is described. The aim of this approach is to obtain contours that are both simplified and smoothed, lying on a minimum number of characteristic points and inside the error bands. Characteristic points of contours are defined in relation to the skeleton lines of the terrain and determined using the deviation angles at the contour points. Error bands for contours are constructed by means of the steepest slope lines and the mean square planimetric errors at the contour points. The new approach is compared with the Li-Openshaw algorithm based on experimental results.
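The deviation-angle test for characteristic points can be sketched as follows (an illustrative reading of the idea with an assumed angular threshold; the paper's actual procedure also draws on the skeleton lines):

```python
import math

def deviation_angle(p_prev, p, p_next):
    """Angle (degrees) by which the contour deviates from a
    straight course at vertex p."""
    a1 = math.atan2(p[1] - p_prev[1], p[0] - p_prev[0])
    a2 = math.atan2(p_next[1] - p[1], p_next[0] - p[0])
    d = abs(a2 - a1)
    return math.degrees(min(d, 2 * math.pi - d))

def characteristic_points(line, threshold_deg=30.0):
    """Indices of vertices whose deviation angle meets the
    threshold; endpoints are always kept."""
    keep = {0, len(line) - 1}
    for i in range(1, len(line) - 1):
        if deviation_angle(line[i - 1], line[i], line[i + 1]) >= threshold_deg:
            keep.add(i)
    return sorted(keep)
```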
The requirements for the simplification of contours are explained, and existing approaches to the generalization (i.e., simplification and smoothing) of contours are briefly summarized. Skeleton lines (i.e., drainage and ridge lines) are taken to provide information for the determination of characteristic parts of contours. Characteristic points are determined automatically while skeleton lines are derived from the contours, in accordance with the method developed by Aumann, Ebner, and Tang (1991). Three widely used algorithms for the simplification of contours - nth point, distance tolerance, and Douglas-Peucker - are examined. They are analysed with respect to the retention of characteristic parts of contours, based on case studies. Finally, the algorithms are modified so as to consider the determined characteristic points. A new simplification criterion is included in the algorithms, ensuring that they retain the characteristic parts of contours.
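One way such a modification can look, using Douglas-Peucker as the base algorithm, is to force the recursive split to occur at a characteristic point whenever one lies inside the current span, so characteristic vertices can never be discarded (a hedged sketch under that assumption, not the paper's exact criterion):

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    if a == b:
        return math.dist(p, a)
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / math.dist(a, b)

def douglas_peucker_keep(line, tolerance, keep_idx, lo=0, hi=None):
    """Douglas-Peucker simplification modified so that vertex
    indices in keep_idx (characteristic points) are always
    retained. Returns the sorted indices of retained vertices."""
    if hi is None:
        hi = len(line) - 1
    if hi <= lo + 1:
        return [lo, hi] if hi > lo else [lo]
    # Split at a characteristic point if one lies strictly inside
    # the span; otherwise at the farthest vertex, as usual.
    forced = [i for i in range(lo + 1, hi) if i in keep_idx]
    if forced:
        split = forced[len(forced) // 2]
    else:
        split, d_max = lo, 0.0
        for i in range(lo + 1, hi):
            d = point_line_distance(line[i], line[lo], line[hi])
            if d > d_max:
                split, d_max = i, d
        if d_max <= tolerance:
            return [lo, hi]
    left = douglas_peucker_keep(line, tolerance, keep_idx, lo, split)
    right = douglas_peucker_keep(line, tolerance, keep_idx, split, hi)
    return left[:-1] + right
```

With an empty `keep_idx` this reduces to the standard Douglas-Peucker algorithm; marking a vertex as characteristic guarantees it survives regardless of the tolerance.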
Multi-representation databases (MRDBs) are used in several geographical information system applications for different purposes. MRDBs are mainly obtained through model and cartographic generalizations. Simplification is the essential operator of cartographic generalization, and streams and lakes are essential features in hydrography. In this study, a new algorithm was developed for the simplification of streams and lakes. In this algorithm, deviation angles and error bands are used to determine the characteristic vertices and the planimetric accuracy of the features, respectively. The algorithm was tested using a high-resolution national hydrography dataset of Pomme de Terre, a sub-basin in the USA. To assess the performance of the new algorithm, the Bend Simplify and Douglas-Peucker algorithms, the medium-resolution hydrography dataset of the sub-basin, and Töpfer's radical law were used. For quantitative analysis, the vertex numbers, the lengths, and the sinuosity values were computed. Consequently, it was shown that the new algorithm was able to meet the main requirements (i.e., accuracy, legibility and aesthetics, and storage).
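Töpfer's radical law, used above as a benchmark for how much content the simplified dataset should retain, relates feature counts at two scales through the square root of the ratio of their scale denominators. A minimal sketch (parameter names are illustrative):

```python
import math

def topfer_radical_law(n_source, scale_source, scale_target):
    """Expected feature count at the target scale by Töpfer's
    radical law: n_t = n_s * sqrt(M_s / M_t), where M_s and M_t
    are the source and target scale denominators."""
    return n_source * math.sqrt(scale_source / scale_target)
```

For example, 1000 features mapped at 1:25,000 would reduce to about 500 at 1:100,000.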