Color vision deficiency (CVD) is caused by anomalies in the cone cells of the human retina and affects approximately 200 million individuals worldwide. Although previous studies have proposed compensation methods, contrast enhancement and naturalness preservation have not been adequately addressed at the same time in state-of-the-art work. This paper focuses on compensation for red-green dichromats and proposes a recoloring algorithm that combines contrast enhancement and naturalness preservation in a unified optimization model. In this implementation, representative color extraction and edit propagation methods are introduced to maintain global and local information in the recolored image. The quantitative evaluation results showed that the proposed method is competitive with state-of-the-art methods. A subjective experiment was also conducted, and its results revealed that the proposed method obtained the best scores in preserving both naturalness and information for individuals with severe red-green CVD.
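As a rough illustration of the representative color extraction step mentioned above, such a pipeline could begin with a small clustering pass over the image's pixels. The k-means approach and every name below are illustrative assumptions, not the paper's actual extraction method:

```python
import random

def representative_colors(pixels, k=4, iters=10, seed=0):
    """Extract k representative RGB colors via a toy k-means pass.

    `pixels` is a list of (r, g, b) tuples. This is an illustrative
    stand-in; the paper does not specify its clustering procedure here.
    """
    random.seed(seed)
    centers = random.sample(pixels, k)
    for _ in range(iters):
        # assign each pixel to its nearest center (squared distance)
        clusters = [[] for _ in range(k)]
        for p in pixels:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # move each center to the mean of its cluster; keep empty clusters put
        centers = [
            tuple(sum(c[d] for c in cl) / len(cl) for d in range(3)) if cl
            else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers
```

The extracted palette could then be recolored and the edits propagated back to the full image, which is where the abstract's edit propagation step would take over.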
Human anatomical specimen museums are commonly used by medical, nursing, and paramedical students. Through dissection and prosection, the specimens housed in these museums allow students to appreciate the complex relationships of organs and structures in more detail than textbooks can provide. However, it may be difficult for students, particularly novices, to identify the various parts of these anatomical structures without additional explanations from a docent or supplemental illustrations. Recently, augmented reality (AR) has been used in many museum exhibits to display virtual objects in videos captured from the real world, a technology that can significantly enhance the learning experience. In this study, three AR-based support systems for tours in medical specimen museums were developed, and their usability and effectiveness for learning were examined. The first system was constructed using AR markers: it displayed virtual label information for a specimen when its marker was captured by a tablet camera. Individual AR markers were required for all specimens, however, and their presence in and on the prosected specimens could be obtrusive. The second system instead used the specimen image itself as an image marker, as most specimens were displayed in cross section; visitors could then obtain the AR label information without any markers intruding on the display or the anatomical specimens. The third system comprised a head-mounted display combined with a natural click interface, providing visitors with an environment for natural manipulation of virtual objects and future scalability.
Several image recoloring methods have been proposed to compensate for the loss of contrast caused by color vision deficiency (CVD). However, these methods only work for dichromacy (in which one of the three types of cone cells completely loses its function), while the majority of CVD cases are anomalous trichromacy (in which one cone type only partially loses its function). In this paper, a novel degree-adaptable recoloring algorithm is presented, which recolors images by minimizing an objective function constrained by contrast enhancement and naturalness preservation. To assess the effectiveness of the proposed method, a quantitative evaluation using common metrics and subjective studies involving 14 volunteers with varying degrees of CVD were conducted. The results of the evaluation experiments show that the proposed personalized recoloring method outperforms state-of-the-art methods, achieving desirable contrast enhancement adapted to different degrees of CVD while preserving naturalness as much as possible.
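A toy, one-dimensional sketch may help fix the idea of a degree-adaptable objective. Here a scalar stands in for a color's red-green component, severity compresses differences linearly, and the weights, function names, and grid search are all illustrative assumptions rather than the paper's formulation:

```python
def perceived(c, degree):
    """Toy CVD simulation: severity `degree` in [0, 1] linearly compresses
    the red-green component (degree=1 loosely approximates dichromacy)."""
    return c * (1.0 - 0.5 * degree)

def energy(shift, c1, c2, degree, w_contrast=1.0, w_natural=0.05):
    """Contrast term: the perceived difference after recoloring should match
    the difference seen by normal vision. Naturalness term: keep edits small."""
    target = abs(c1 - c2)
    got = abs(perceived(c1, degree) - perceived(c2 + shift, degree))
    return w_contrast * (target - got) ** 2 + w_natural * shift ** 2

def best_shift(c1, c2, degree):
    # a coarse grid search stands in for the paper's optimization procedure
    return min((s / 10 for s in range(-100, 101)),
               key=lambda s: energy(s, c1, c2, degree))
```

With `degree=0` (normal vision) the optimum is no edit at all, while larger degrees push the second color further away to restore the perceived contrast, which is the degree-adaptable behavior the abstract describes.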
We present stego-texture, a unique texture synthesis method that allows users to deliver personalized messages with beautiful, decorative textures. Our approach was inspired by the success of recent work generating marbling textures using mathematical functions. We were able to transform an input image or a text message into an intricate texture by combining the seven basic, reversible functions provided in the system. Later, the input image or message could be recovered by reversing the process of these functions. During the design process, the parameters of the operations were automatically recorded, encrypted, and invisibly embedded into the final pattern to create a stego-texture. In this way, the receiver could extract the hidden message from the stego-texture without the need for extra information from the sender. To ensure that the delivered message is unnoticeably covered by the texture, we propose a new technique for automatically creating a background that is harmonious with the message based on a set of visual perception cues.
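The record-and-reverse mechanism can be sketched with two stand-in operations on a 2-D point; "translate" and "scale" below are illustrative substitutes for the system's seven marbling functions, which are not reproduced here:

```python
def apply_ops(point, ops):
    """Apply a recorded sequence of reversible (name, parameter) operations
    to a 2-D point. Illustrative stand-ins for the system's functions."""
    x, y = point
    for name, p in ops:
        if name == "translate":
            x, y = x + p[0], y + p[1]
        elif name == "scale":
            x, y = x * p, y * p
    return (x, y)

def invert_ops(point, ops):
    """Undo each recorded operation in reverse order, recovering the input —
    the property that lets the receiver reconstruct the hidden message."""
    x, y = point
    for name, p in reversed(ops):
        if name == "translate":
            x, y = x - p[0], y - p[1]
        elif name == "scale":
            x, y = x / p, y / p
    return (x, y)
```

Because every operation is invertible and the parameter list is embedded in the pattern itself, replaying the list backwards recovers the original input exactly, with no side channel from the sender.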
An artist usually does not draw all the areas in a picture homogeneously but tries to make the work more expressive by emphasizing what is important while eliminating irrelevant details. Creating expressive painterly images with such an accentuation effect remains a challenge because of the subjectivity of information selection. This paper presents a novel technique for automatically converting an input image into a pencil drawing with such an emphasis and elimination effect. The proposed technique utilizes a saliency map, a computational model of visual attention, to predict the focus of attention in the input image. A new level-of-detail controlling algorithm using a multi-resolution pyramid is also developed for locally adapting the rendering parameters, such as the density, orientation, and width of pencil strokes, to the degree of attention defined by the saliency map. Experimental results show that the images generated with the proposed method present a visual effect similar to that of real pencil drawing and can successfully direct the viewer's attention toward the focus.
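The saliency-driven level-of-detail idea can be sketched as a simple mapping from a per-region saliency value to a pyramid level and a stroke density. The linear mappings and all constants below are illustrative assumptions, not the paper's actual control algorithm:

```python
def detail_level(saliency, max_level=3):
    """Map saliency in [0, 1] to a pyramid level: salient regions get the
    finest level (0), low-saliency regions a coarse level (toy mapping)."""
    s = min(max(saliency, 0.0), 1.0)
    return round((1.0 - s) * max_level)

def stroke_density(saliency, base=0.2, gain=0.8):
    # denser strokes where attention is predicted to fall; sparse elsewhere
    return base + gain * min(max(saliency, 0.0), 1.0)
```

In a full renderer, other stroke parameters (orientation, width) would be modulated by the same saliency signal so that detail fades smoothly away from the predicted focus of attention.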
This paper introduces novel methods to improve the aesthetic appearance of rendered grayscale digital images on woven fabric by smoothly changing tones and improving the reproducibility of fine details. The methods are based on stepping dithering, a recently developed dithering method for automatically generating jacquard weave patterns for arbitrary given images. The existing stepping dithering method suffers from two problems. The first problem is the visually unappealing repetition of patterns for input images containing low frequency, smooth gradation regions. The second problem is the low reproducibility of small structures with high frequency relative to mask size. This paper proposes new methods for faithfully rendering arbitrary natural images on jacquard fabric by solving the pattern repetition and low reproducibility problems. The new methods combine two approaches. The first problem is addressed by optimizing the distribution of thresholds in dither masks, while the second problem is addressed by adopting a dynamic binarizing process for an appropriate area of the stepping dither mask. The experiments described herein show that the proposed method successfully improves the appearance of the resulting woven fabric.
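The binarization step that stepping dithering builds on can be sketched as plain ordered dithering against a tiled threshold mask. This is a toy version: the paper's stepping dither masks, threshold-distribution optimization, and dynamic binarization are more involved.

```python
def dither(gray, mask):
    """Binarize a grayscale image (row-major lists, values in 0..1) against
    a threshold mask tiled across the image. Each output cell becomes 1
    (warp-visible) where the pixel exceeds its local threshold, else 0."""
    h, w = len(gray), len(gray[0])
    mh, mw = len(mask), len(mask[0])
    return [[1 if gray[y][x] > mask[y % mh][x % mw] else 0 for x in range(w)]
            for y in range(h)]
```

The pattern-repetition problem the paper targets shows up exactly here: a fixed mask tiled over a smooth gradient produces visibly periodic weave patterns, which is why the proposed method optimizes the threshold distribution within the mask instead.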