The EU-supported TeDUB (Technical Drawings Understanding for the Blind) project is developing a software system that makes technical diagrams accessible to blind and visually impaired people. It consists of two separate modules: one that analyses drawings either semi-automatically or automatically, and one that presents the results of this analysis to blind users and lets them interact with them. The system can analyse and present diagrams from a number of formally defined domains. A diagram enters the system in one of two forms: either as a bitmap image, which does not explicitly contain the semantic structure of its content and therefore has to be interpreted by the system, or in a semantically enriched format that already provides this structure. The TeDUB system gives blind users an interface to navigate and annotate these diagrams using a number of input and output devices. Extensive user evaluations have been carried out, and the overall positive response from the participants demonstrates the effectiveness of the approach.
Much recent research has addressed the matching of images and their structures. Although the approaches differ considerably, most methods rely on some form of point selection from which descriptors or a hierarchy are derived. We focus here on methods related to detecting points and regions in an affine invariant way. Most previous research has concentrated on intensity-based methods. In this work, however, we show that color information can make a significant contribution to feature detection and matching. Our color-based detection algorithms detect the most distinctive features, and the experiments suggest that optimal performance requires a tradeoff between invariance and distinctiveness, achieved by appropriately weighting the intensity and color information.
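The abstract does not give the actual weighting scheme, but the idea of blending intensity and color information can be illustrated with a minimal sketch. Here, `weighted_saliency` and its weight `w` are hypothetical names: gradient energy is computed once on the intensity image and once per color channel, and a single weight trades the two off. An isoluminant edge (a color change with no brightness change) shows why color can contribute: it is invisible to the intensity term.

```python
import numpy as np

def gradient_energy(channel):
    """Squared gradient magnitude of a single 2-D channel."""
    gy, gx = np.gradient(channel.astype(float))
    return gx ** 2 + gy ** 2

def weighted_saliency(rgb, w=0.5):
    """Blend intensity and color gradient energy.

    `w` is a hypothetical weight: w=0 uses only intensity
    (more invariant), w=1 uses only color (more distinctive).
    """
    intensity = rgb.mean(axis=2)
    e_int = gradient_energy(intensity)
    # one simple choice of color energy: sum of per-channel energies
    e_col = sum(gradient_energy(rgb[..., c]) for c in range(3))
    return (1 - w) * e_int + w * e_col

# Toy isoluminant edge: left half pure red, right half pure green.
# Mean intensity is 1/3 everywhere, so the intensity term sees nothing.
rgb = np.zeros((10, 10, 3))
rgb[:, :5, 0] = 1.0
rgb[:, 5:, 1] = 1.0
s_intensity = weighted_saliency(rgb, w=0.0)  # flat: no response
s_color = weighted_saliency(rgb, w=1.0)      # responds at the color edge
```

With `w=0` the saliency map is identically zero on this image, while `w=1` fires along the red/green boundary, which is the kind of case where color-based detection adds distinctiveness.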
Global features are commonly used to describe image content. The problem with this approach is that such features cannot capture parts of the image with differing characteristics, so local computation of image information is necessary. By using salient points to represent local information, more discriminative features can be computed. This research builds on an existing affine invariant local feature detector in which the features are assumed to be intensity corners. First, the existing algorithm is extended with the intensity-based SUSAN corner detector, which differs fundamentally from the original Harris corner detector. Second, the algorithm is extended to incorporate color information into the detection process. This yields a comparison between three detection algorithms: two intensity-based algorithms using the Harris and SUSAN detectors, respectively, and a color-based algorithm using two color-extended Harris detectors. The algorithms are compared in terms of the invariance and distinctiveness of the detected regions and their computational complexity.
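The Harris detector mentioned above can be sketched in a few lines: it smooths the products of image gradients into a structure tensor M and scores each pixel with the classic cornerness measure R = det(M) - k·trace(M)². This is a minimal illustration, not the paper's implementation: a 3x3 box filter stands in for the Gaussian window, and `k=0.04` is the commonly used default.

```python
import numpy as np

def box3(a):
    """3x3 box filter via shifted sums (stand-in for a Gaussian window)."""
    p = np.pad(a, 1, mode="edge")
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def harris_response(img, k=0.04):
    """Harris cornerness R = det(M) - k * trace(M)^2 per pixel."""
    Iy, Ix = np.gradient(img.astype(float))
    # smoothed structure tensor entries
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# Toy image: a bright square on a dark background.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
# R is positive at the square's corners and negative along its edges,
# which is how Harris separates corners from straight edges.
```

SUSAN, by contrast, is not gradient-based at all: it counts the pixels in a circular mask whose brightness is similar to the center (the "univalue segment assimilating nucleus"), which is the fundamental difference the abstract refers to.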