We describe a color image reconstruction method that enables both direct visualization and direct digital image acquisition from the same oral tissue by using various light sources and color compensating filters. In this method, the image of oral tissue illuminated by white light-emitting diodes (LEDs) through a blue color compensating filter shows a larger color difference between normal and inflamed tissue. The enhanced visualization arises from color mixing between the reddish normal tissue and the bluish white light of the LEDs. With our method, we evaluate the perceived tissue reflectance at each pixel of the image and the color reproduction under different illumination spectra. Our approach to enhancing the visually perceived color difference between normal and inflamed oral tissue optimizes the illumination and observation conditions so that a significant optical contrast in the illuminated spectrum reaches the observer's eyes. Compared with a conventional daylight LED flashlight, an LED with a blue filter as the illuminant for oral cavity examination enhances the color difference between normal and inflamed tissues by 32%.
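The color difference reported above is conventionally quantified in the CIELAB space. A minimal sketch of that evaluation follows, assuming sRGB pixel values under a D65 white point and the simple CIE76 ΔE*ab metric; the sample colors are hypothetical, not taken from the paper:

```python
import math

def _lin(c):
    # Undo sRGB gamma to get linear-light channel values.
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(r, g, b):
    # sRGB -> XYZ (D65) -> CIELAB, using the standard matrices/constants.
    r, g, b = _lin(r), _lin(g), _lin(b)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(rgb1, rgb2):
    # CIE76 color difference: Euclidean distance in L*a*b*.
    L1, a1, b1 = rgb_to_lab(*rgb1)
    L2, a2, b2 = rgb_to_lab(*rgb2)
    return math.sqrt((L1 - L2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)

# Hypothetical per-pixel tissue colors under two illuminants:
normal, inflamed = (200, 80, 80), (150, 60, 120)
print(delta_e76(normal, inflamed))
```

In such an evaluation, the per-pixel ΔE would be averaged over corresponding tissue regions for each illuminant, and the illuminant maximizing the normal-versus-inflamed difference selected.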
The active shape model (ASM) has been successfully applied to locate facial landmarks. However, under exaggerated facial expressions such as surprise, laughter, and raised eyebrows, it is prone to detection errors. To overcome this difficulty, we propose a two-stage facial landmark detection algorithm. In the first stage, we detect the individual salient facial landmarks with a commonly used AdaBoost-based algorithm. All the salient facial landmarks are corner-type points: the left/right eye inner and outer corners, the left/right eyebrow inner and outer corners, and the left/right mouth corners. From these 10 salient landmarks, a global active shape model of facial landmarks is constructed. In the second stage, the individual detection results serve as the initial positions of the active shape model, which are then refined iteratively by an ASM algorithm. Experimental results demonstrate that the proposed method achieves very good performance in locating facial landmarks and consistently and considerably outperforms the traditional ASM method.
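The key refinement step in the second stage is the standard ASM shape constraint: project the detected landmark positions onto the learned shape subspace and clip each mode coefficient to plausible limits. A minimal sketch, assuming a trained mean shape, orthonormal PCA modes `P`, and per-mode standard deviations `sig` (all toy values here, not the paper's trained model):

```python
import numpy as np

def constrain_shape(x_obs, mean, P, sig):
    """One ASM shape-constraint step.

    x_obs : observed landmark vector (e.g. stacked x/y coords)
    mean  : mean shape from training
    P     : columns are orthonormal PCA shape modes
    sig   : per-mode standard deviations; coefficients clipped to +/-3 sigma
    """
    b = P.T @ (x_obs - mean)          # project onto shape modes
    b = np.clip(b, -3 * sig, 3 * sig) # reject implausible deformations
    return mean + P @ b               # reconstruct constrained shape

# Toy 2-D example: mean at origin, identity modes, unit sigmas.
mean = np.zeros(2)
P = np.eye(2)
sig = np.ones(2)
inside = constrain_shape(np.array([0.5, 0.2]), mean, P, sig)   # unchanged
outside = constrain_shape(np.array([10.0, 0.0]), mean, P, sig) # clipped
```

In a full ASM loop, this step alternates with a local image search that proposes new `x_obs` positions around each current landmark until convergence.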
This paper describes typical research on Chinese optical character recognition in Taiwan. Chinese characters can be represented by a set of basic line segments called strokes. Several approaches to the recognition of handwritten Chinese characters by stroke analysis are described here. A typical optical character recognition (OCR) system consists of four main parts: image preprocessing, feature extraction, radical extraction, and matching. Image preprocessing converts the input image into a format suitable for data processing. Feature extraction extracts stable features from the Chinese character. Radical extraction decomposes the Chinese character into radicals. Finally, matching recognizes the Chinese character. The reasons for using strokes as the features for Chinese character recognition are as follows. First, all Chinese characters can be represented by a combination of strokes. Second, algorithms developed around strokes do not have to be modified when the number of characters increases. Therefore, the algorithms described in this paper are suitable for recognizing large sets of Chinese characters.
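The matching stage described above can be illustrated with a deliberately simplified sketch: represent each character by a histogram of its stroke direction labels and classify by nearest histogram. The direction labels, templates, and distance are hypothetical stand-ins, assuming preprocessing and stroke extraction have already run; real systems use much richer stroke features and ordering:

```python
from collections import Counter

def stroke_histogram(strokes):
    # strokes: list of direction labels, e.g. "H" horizontal, "V" vertical.
    return Counter(strokes)

def classify(strokes, templates):
    # templates: dict mapping a character to its reference stroke list.
    h = stroke_histogram(strokes)
    def dist(char):
        th = stroke_histogram(templates[char])
        # L1 distance between stroke-direction histograms.
        return sum(abs(h[k] - th[k]) for k in set(h) | set(th))
    return min(templates, key=dist)

# Toy templates (hypothetical stroke decompositions):
templates = {
    "十": ["H", "V"],
    "三": ["H", "H", "H"],
    "工": ["H", "H", "V"],
}
print(classify(["H", "V"], templates))        # matches 十
print(classify(["H", "H", "H"], templates))   # matches 三
```

Because such a matcher compares stroke-level features rather than whole-character shapes, adding new characters only requires adding templates, which is the scalability argument the paper makes for stroke-based recognition.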