Many contour-based image corner detectors are based on the curvature scale-space (CSS). We identify the weaknesses of CSS-based detectors. First, curvature, by definition, is highly sensitive to local variation and noise on the curve unless appropriate smoothing is carried out beforehand. In addition, the calculation of curvature involves derivatives of up to second order, which may cause instability and errors in the result. Second, Gaussian smoothing changes the curve, and it is difficult to select an appropriate smoothing scale, resulting in poor performance of the CSS corner detection technique. We propose a complete corner detection technique based on chord-to-point distance accumulation (CPDA) for discrete curvature estimation. The CPDA discrete curvature estimate is less sensitive to local variation and noise on the curve, and it avoids the undesirable effects of Gaussian smoothing. We provide a comprehensive performance study. Our experiments show that the proposed technique performs better than the existing CSS-based and other related methods in terms of both average repeatability and localization error.
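The CPDA idea can be sketched as follows: a chord of fixed length slides along the curve, and for each curve point the perpendicular distances to every chord position that spans it are accumulated; corners are then taken at local maxima of the (normalized, multi-chord-length) accumulated values. Below is a minimal single-chord sketch in Python; the chord length and the L-shaped test curve are illustrative choices, not the paper's parameters.

```python
import numpy as np

def point_to_chord_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    d = b - a
    n = np.hypot(*d)
    if n == 0:
        return np.hypot(*(p - a))
    # |2D cross product| / chord length
    return abs(d[0] * (p[1] - a[1]) - d[1] * (p[0] - a[0])) / n

def cpda_curvature(curve, chord_len):
    """Accumulated chord-to-point distance at every curve point.

    curve: (N, 2) array of ordered points on a planar curve.
    chord_len: number of curve samples spanned by the sliding chord.
    """
    n = len(curve)
    h = np.zeros(n)
    for k in range(n):
        # Slide the chord over all positions that keep point k strictly
        # between the chord endpoints.
        for j in range(max(0, k - chord_len + 1), min(k, n - chord_len)):
            h[k] += point_to_chord_dist(curve[k], curve[j], curve[j + chord_len])
    return h
```

On an L-shaped polyline, the accumulated distance peaks at the corner point, while points on the straight segments score near zero; the full CPDA detector multiplies normalized accumulations from several chord lengths before picking maxima.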
This paper presents an automatic building detection technique using LIDAR data and multispectral imagery. Two masks are obtained from the LIDAR data: a 'primary building mask' and a 'secondary building mask'. The primary building mask indicates the void areas where the laser does not reach below a certain height threshold. The secondary building mask indicates the filled areas, from which the laser reflects, above the same threshold. Line segments are extracted from around the void areas in the primary building mask. Line segments around trees are removed using the normalized difference vegetation index derived from the orthorectified multispectral images. The initial building positions are obtained from the remaining line segments. The complete buildings are then detected from their initial positions using the two masks and the multispectral images in the YIQ colour system. It is experimentally shown that the proposed technique can successfully detect urban residential buildings when assessed in terms of 15 indices, including completeness, correctness and quality.
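As a hedged illustration of the vegetation-removal step, the normalized difference vegetation index (NDVI) can be computed per pixel from the near-infrared and red bands; the 0.2 threshold below is a common illustrative choice, not necessarily the value used in the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir, red, threshold=0.2):
    """Boolean mask of likely vegetation; the threshold is scene-dependent."""
    return ndvi(nir, red) > threshold
```

Line segments falling inside the vegetation mask would then be discarded before estimating initial building positions.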
Automatic 3D extraction of building roofs from remotely sensed data is important for many applications including city modelling. This paper proposes a new method for automatic 3D roof extraction through an effective integration of LIDAR (Light Detection And Ranging) data and multispectral orthoimagery. Using the ground height from a DEM (Digital Elevation Model), the raw LIDAR points are separated into two groups. The first group contains the ground points that are exploited to constitute a 'ground mask'. The second group contains the non-ground points which are segmented using an innovative image line guided segmentation technique to extract the roof planes. The image lines are extracted from the grey-scale version of the orthoimage and then classified into several classes such as 'ground', 'tree', 'roof edge' and 'roof ridge' using the ground mask and colour and texture information from the orthoimagery. During segmentation of the non-ground LIDAR points, the lines from the latter two classes are used as baselines to locate the nearby LIDAR points of the neighbouring planes. For each plane a robust seed region is thereby defined using the nearby non-ground LIDAR points of a baseline and this region is iteratively grown to extract the complete roof plane. Finally, a newly proposed rule-based procedure is applied to remove planes constructed on trees. Experimental results show
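The seed-region growing step described above can be sketched roughly as follows; the plane model, distance tolerance, and neighbourhood radius here are illustrative assumptions, not the paper's actual rules.

```python
import numpy as np

def grow_plane(points, seed_idx, normal, d, dist_tol=0.15, nbr_radius=1.0):
    """Iteratively grow a seed region into a full roof plane.

    points: (N, 3) non-ground LIDAR points.
    seed_idx: indices of the seed region assumed to lie on one plane.
    normal, d: plane model n.p = d fitted to the seed (unit normal).
    dist_tol, nbr_radius: illustrative tolerances (metres).
    """
    in_plane = set(seed_idx)
    frontier = set(seed_idx)
    while frontier:
        new = set()
        for i in frontier:
            # candidate neighbours within nbr_radius of an accepted point
            close = np.where(np.linalg.norm(points - points[i], axis=1) < nbr_radius)[0]
            for j in close:
                # accept neighbours that are close to the fitted plane
                if j not in in_plane and abs(points[j] @ normal - d) < dist_tol:
                    new.add(int(j))
        in_plane |= new
        frontier = new
    return sorted(in_plane)
```

Points far from the plane (for example, returns from an overhanging tree) are never absorbed, which is what allows the later rule-based step to treat tree-only planes separately.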
Automatic extraction of building roofs from remote sensing data is important for many applications, including 3D city modeling. This paper proposes a new method for automatic segmentation of raw LIDAR (light detection and ranging) data. Using the ground height from a DEM (digital elevation model), the raw LIDAR points are separated into two groups. The first group contains the ground points that form a "building mask". The second group contains non-ground points that are clustered using the building mask. A cluster of points usually represents an individual building or tree. During segmentation, the planar roof segments are extracted from each cluster of points and refined using rules, such as the coplanarity of points and their locality. Planes on trees are removed using information such as area and point height difference. Experimental results on nine areas of six different data sets show that the proposed method can successfully remove vegetation and thus offers a high success rate for building detection (about 90% correctness and completeness) and roof plane extraction (about 80% correctness and completeness), even when the LIDAR point density is as low as four points/m². Thus, the proposed method can be exploited in various applications.
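The initial separation of raw LIDAR points into ground and non-ground groups can be sketched as simple height thresholding against the DEM; the 2.5 m threshold below is an illustrative assumption, not the paper's value.

```python
import numpy as np

def separate_lidar_points(points, ground_height, threshold=2.5):
    """Split raw LIDAR points into ground and non-ground groups.

    points: (N, 3) array of x, y, z LIDAR returns.
    ground_height: DEM ground elevation sampled at each point, shape (N,).
    threshold: height above ground (m) below which a return is 'ground';
               an illustrative value only.
    """
    above = points[:, 2] - ground_height
    ground = points[above < threshold]
    non_ground = points[above >= threshold]
    return ground, non_ground
```

The ground group then defines the mask, while the non-ground group is clustered into per-building (or per-tree) point sets for plane extraction.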
Corner detectors have many applications in computer vision and image identification and retrieval. Contour-based corner detectors directly or indirectly estimate a significance measure (e.g., curvature) on the points of a planar curve, and select the curvature extrema points as corners. While an extensive number of contour-based corner detectors have been proposed over the last four decades, there is no comparative study of recently proposed detectors. This paper is an attempt to fill this gap. The general framework of contour-based corner detection is presented, and two major issues, curve smoothing and curvature estimation, which have major impacts on corner detection performance, are discussed. A number of promising detectors are compared using both automatic and manual evaluation systems on two large datasets. It is observed that while the detectors using indirect curvature estimation techniques are more robust, the detectors using direct curvature estimation techniques are faster.
Some performance evaluation systems for building extraction techniques are manual in the sense that only visual results are provided or human judgment is employed. Many evaluation systems that employ one or more thresholds to ascertain whether an extracted building or roof plane is correct are subjective and cannot be applied in general. There are only a small number of automatic and threshold-free evaluation systems, but these do not necessarily consider all special cases, e.g., when over- and under-segmentation occurs during the extraction of roof planes. This paper proposes an automatic and threshold-free evaluation system that offers robust object-based evaluation of building extraction techniques. It makes one-to-one correspondences between extracted and reference entities using the maximum overlaps. Its application to the evaluation of a building extraction technique shows that it estimates different performance indicators, including segmentation errors. Consequently, it can be employed for bias-free evaluation of other techniques whose outputs consist of polygonal entities.
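A minimal sketch of one-to-one correspondence by maximum overlap follows, assuming entities are given as labelled raster masks; the paper works with polygonal entities, and the greedy rule below is an illustrative stand-in for its exact procedure.

```python
import numpy as np

def match_by_max_overlap(extracted, reference):
    """One-to-one matching between labelled masks via maximum overlap.

    extracted, reference: integer label images (0 = background).
    Returns a dict mapping extracted label -> reference label.
    Greedy on overlap area, so each entity is matched at most once.
    """
    overlaps = {}
    for e in np.unique(extracted):
        if e == 0:
            continue
        for r in np.unique(reference[extracted == e]):
            if r == 0:
                continue
            overlaps[(e, r)] = np.sum((extracted == e) & (reference == r))
    matches, used_e, used_r = {}, set(), set()
    # Largest overlaps first; skip pairs whose entities are already taken.
    for (e, r), area in sorted(overlaps.items(), key=lambda kv: -kv[1]):
        if e not in used_e and r not in used_r:
            matches[e] = r
            used_e.add(e)
            used_r.add(r)
    return matches
```

Unmatched extracted labels count toward false positives, unmatched reference labels toward false negatives, and many-to-one overlap patterns indicate over- or under-segmentation.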
Effective separation of buildings from trees is a major challenge in image-based automatic building detection. This paper presents a three-step method for effective separation of buildings from trees using aerial imagery and lidar data. First, it uses cues such as height to remove low objects such as bushes, and width to exclude trees with small horizontal coverage. The height threshold is also used to generate a ground mask, in which buildings are found to be more separable than in the so-called normalized DSM. Second, image entropy and colour information are jointly applied to remove easily distinguishable trees. Finally, an innovative rule-based procedure is employed using the edge orientation histogram from the imagery to eliminate false positive candidates. The improved building detection algorithm has been tested on different test areas, and it is shown that the algorithm offers a high building detection rate in complex scenes that are hilly and densely vegetated.
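The image-entropy cue in the second step can be illustrated as follows: textured tree crowns yield high Shannon entropy, while uniform roof patches yield low entropy. A minimal sketch, where the 256-bin grey-level histogram is an assumption:

```python
import numpy as np

def local_entropy(gray, bins=256):
    """Shannon entropy (bits) of a grey-scale patch.

    High values suggest texture such as tree foliage; low values
    suggest uniform surfaces such as roof planes.
    """
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # log of zero-probability bins is undefined
    return float(-np.sum(p * np.log2(p)))
```

A candidate region whose entropy exceeds a scene-dependent threshold, and whose colour also matches vegetation, would be rejected as a tree.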
There are many applications, such as image copyright protection, where transformed images of a given test image need to be identified. The solution to this identification problem consists of two main stages. In stage one, certain representative features, such as corners, are detected in all images. In stage two, the representative features of the test image and the stored images are compared to identify the transformed images for the test image. Curvature scale-space (CSS) corner detectors look for curvature maxima or inflection points on planar curves. However, the arc-length used to parameterize the planar curves by the existing CSS detectors is not invariant to geometric transformations such as scaling. As a solution to stage one, this paper presents an improved CSS corner detector using the affine-length parameterization, which is relatively invariant to affine transformations. We then present an improved corner matching technique as a solution to stage two. Finally, we apply the proposed corner detection and matching techniques to identify the transformed images for a given image and report promising results.
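Affine-length parameterization replaces the Euclidean arc length ds with dσ = |x′y″ − x″y′|^(1/3) dt, which is relatively invariant under affine maps (it scales by |det A|^(1/3) rather than changing shape). A minimal numerical sketch; the finite differences via np.gradient and the trapezoidal accumulation are implementation assumptions:

```python
import numpy as np

def affine_arc_length(curve):
    """Cumulative affine arc length of a planar curve.

    curve: (N, 2) array of ordered points.
    Returns an (N,) array of cumulative sigma values starting at 0.
    """
    x, y = curve[:, 0], curve[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # affine 'speed' |x'y'' - x''y'|^(1/3)
    speed = np.abs(dx * ddy - ddx * dy) ** (1.0 / 3.0)
    # trapezoidal accumulation into a cumulative parameter
    return np.concatenate(([0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]))))
```

Uniformly scaling the curve by s multiplies the total affine length by s^(2/3) (since det A = s²), so resampling curves at equal sigma intervals before CSS smoothing makes the detected corners far more stable under such transformations than arc-length resampling.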