The MPEG-7 Visual Standard specifies a set of descriptors for measuring similarity between images or video. Among them, the Edge Histogram Descriptor represents the spatial distribution of edges with a histogram of local edge distributions in an image. Since the Edge Histogram Descriptor recommended in the MPEG-7 standard captures only the local edge distribution, the matching performance for image retrieval may not be satisfactory. This paper proposes generating global and semi-global edge histograms directly from the local histogram bins to increase the matching performance. The global, semi-global, and local histograms are then combined to measure image similarity, and the result is compared with the MPEG-7 descriptor using the local-only histogram. Since we exploit the absolute locations of edges in the image as well as their global composition, the proposed matching method can retrieve semantically similar images. Experiments on MPEG-7 test images show that the proposed method improves retrieval performance by 0.04 in ANMRR, a difference that is clearly noticeable on visual inspection.
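The grouping idea can be sketched as follows. The standard EHD stores 80 local bins (16 sub-images × 5 edge types); a global histogram averages each edge type over all sub-images, and semi-global histograms average over row, column, and cluster groupings. This is a minimal numpy sketch: the function names, the particular 13 semi-global groupings, and the weighting of the global bins are illustrative assumptions, not the normative MPEG-7 matching rule.

```python
import numpy as np

def expand_ehd(local_bins):
    """Derive global and semi-global histograms from the 80 local EHD
    bins (16 sub-images x 5 edge types).  The 13 semi-global regions
    used here (4 rows, 4 columns, 5 two-by-two clusters) are an
    illustrative assumption."""
    h = np.asarray(local_bins, dtype=float).reshape(4, 4, 5)

    # Global histogram: average each edge type over all 16 sub-images.
    global_hist = h.mean(axis=(0, 1))                       # 5 bins

    groups = []
    for r in range(4):                                      # 4 row groups
        groups.append(h[r, :, :].mean(axis=0))
    for c in range(4):                                      # 4 column groups
        groups.append(h[:, c, :].mean(axis=0))
    # 5 cluster groups: four 2x2 corners and the 2x2 center.
    for r0, c0 in [(0, 0), (0, 2), (2, 0), (2, 2), (1, 1)]:
        groups.append(h[r0:r0 + 2, c0:c0 + 2, :].mean(axis=(0, 1)))
    semi_global = np.concatenate(groups)                    # 13 x 5 = 65 bins

    return global_hist, semi_global

def ehd_distance(a, b, w_global=5.0):
    """L1 distance over local, weighted global, and semi-global bins.
    The weight w_global is a hypothetical parameter."""
    ga, sa = expand_ehd(a)
    gb, sb = expand_ehd(b)
    return (np.abs(np.asarray(a, float) - np.asarray(b, float)).sum()
            + w_global * np.abs(ga - gb).sum()
            + np.abs(sa - sb).sum())
```

Because the global and semi-global bins are derived from the local ones, no extra bins need to be stored in the descriptor itself; only the matching stage changes.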
Calibration between a color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low-resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new calibration-board design that exploits 2D-3D correspondences. The 3D corresponding points are estimated from the laser points scanned on a polygonal planar board with adjacent sides of known length. Since the side lengths are known, we can estimate each vertex of the board as the meeting point of the two projected sides of the polygonal board. The vertices estimated from the range data and those detected in the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 vertical sensors show robust results.
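The vertex-estimation step can be illustrated with a small sketch: fit a 3D line to the scanned points on each of two adjacent sides, then take the (pseudo-)intersection of the two lines, i.e. the midpoint of the shortest segment between them, as the vertex. The function names and the PCA-based line fit are my own illustration under these assumptions, not the paper's exact procedure.

```python
import numpy as np

def fit_line(points):
    """Fit a 3D line to scanned laser points by PCA: returns a point
    on the line (the centroid) and a unit direction vector."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction of the centered point cloud.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def board_vertex(side_a_pts, side_b_pts):
    """Estimate a board vertex as the pseudo-intersection of the two
    fitted side lines (midpoint of the shortest connecting segment),
    which tolerates slightly skew lines from noisy range data."""
    p1, d1 = fit_line(side_a_pts)
    p2, d2 = fit_line(side_b_pts)
    # Solve [d1 -d2] [t s]^T = p2 - p1 in the least-squares sense.
    a = np.stack([d1, -d2], axis=1)            # 3 x 2 system matrix
    t, s = np.linalg.lstsq(a, p2 - p1, rcond=None)[0]
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))
```

Once vertices are recovered in both the range data and the image, the resulting 2D-3D correspondences feed a standard extrinsic calibration (e.g. a PnP solver).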
The presence of haze in the atmosphere degrades the quality of images captured by visible camera sensors. The removal of haze, called dehazing, is typically performed under a physical degradation model, which requires solving an ill-posed inverse problem. To relieve the difficulty of the inverse problem, a novel prior called the dark channel prior (DCP) was recently proposed and has received a great deal of attention. The DCP is derived from the characteristic of natural outdoor images that the intensity value of at least one color channel within a local window is close to zero. Based on the DCP, dehazing is accomplished through four major steps: atmospheric light estimation, transmission map estimation, transmission map refinement, and image reconstruction. This four-step dehazing process makes it possible to provide a step-by-step approach to the complex solution of the ill-posed inverse problem. It also enables us to shed light on the systematic contributions of recent research related to the DCP at each step of the dehazing process. Our detailed survey and experimental analysis of DCP-based methods will help readers understand the effectiveness of each individual step of the dehazing process and will facilitate the development of advanced dehazing algorithms.
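The four steps above can be sketched in a minimal numpy implementation. This is a hedged illustration, not any surveyed method in particular: the patch size, omega, and t0 values are typical choices from the DCP literature, and the refinement step (soft matting or guided filtering in practice) is omitted to keep the sketch short.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel min over color channels, followed by a
    min filter over a local window (naive loop for clarity)."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for y in range(mins.shape[0]):
        for x in range(mins.shape[1]):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    """Sketch of the four DCP steps on a float RGB image in [0, 1]."""
    dc = dark_channel(img, patch)
    # Step 1 -- atmospheric light: mean color of the brightest 0.1%
    # of pixels in the dark channel (a common heuristic).
    n = max(1, int(dc.size * 0.001))
    idx = np.argsort(dc.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Step 2 -- transmission estimate from the normalized dark channel.
    t = 1.0 - omega * dark_channel(img / A, patch)
    # Step 3 -- refinement (guided filter / soft matting) is omitted here.
    # Step 4 -- scene radiance recovery: J = (I - A) / max(t, t0) + A.
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

The lower bound t0 prevents division by near-zero transmission in dense-haze regions, which would otherwise amplify noise in the reconstructed radiance.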