Underwater images exhibit blur and color cast caused by light absorption and scattering in the water medium. When restoring underwater images through the image formation model (IFM), the scene depth map is crucial for estimating the transmission map and the background light intensity. In this paper, we propose a rapid and effective scene depth estimation model for underwater images based on the underwater light attenuation prior (ULAP), and train the model coefficients with supervised linear regression. With the estimated depth map, the background light (BL) and the transmission maps (TMs) for the R, G, and B channels are easily obtained to recover the true scene radiance under water. To evaluate the benefit of restoring underwater images with our estimated depth map, three assessment metrics demonstrate that, compared with four state-of-the-art image restoration methods, our method enhances the perceptual quality of the results with less running time.
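The pipeline described above — a linear depth prior followed by inversion of the image formation model — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the regression coefficients below are hypothetical placeholders standing in for the trained ULAP values, and `t_min` is an assumed lower bound to keep the inversion stable.

```python
import numpy as np

def ulap_depth(img):
    """Estimate a relative scene depth map from an RGB image in [0, 1].

    Sketch of a ULAP-style linear model: depth is predicted from the
    maximum of the G/B intensities and the R intensity. The coefficients
    below are illustrative placeholders, not the trained values.
    """
    theta0, theta1, theta2 = 0.5, 0.5, -0.5   # hypothetical coefficients
    m = np.maximum(img[..., 1], img[..., 2])  # max of G and B intensities
    v = img[..., 0]                           # R intensity
    return theta0 + theta1 * m + theta2 * v

def restore(img, background_light, transmission, t_min=0.1):
    """Invert the simplified image formation model:
    I = J * t + B * (1 - t)  ->  J = (I - B) / max(t, t_min) + B.
    `transmission` is a per-pixel map; a single map is applied to all
    channels here for simplicity.
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]
    return (img - background_light) / t + background_light
```

With a transmission map of all ones (no attenuation), `restore` returns the input unchanged, which is a quick sanity check of the inversion.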
Underwater images play a key role in ocean exploration, but often suffer from severe quality degradation due to light absorption and scattering in the water medium. Although major breakthroughs have been made recently in the general area of image enhancement and restoration, the applicability of new methods to improving the quality of underwater images has not been specifically examined. In this paper, we review the image enhancement and restoration methods that tackle typical underwater image impairments, including some extreme degradations and distortions. First, we introduce the key causes of quality reduction in underwater images, in terms of the underwater image formation model (IFM). Then, we review underwater restoration methods, considering both IFM-free and IFM-based approaches. Next, we present an experimental comparative evaluation of state-of-the-art IFM-free and IFM-based methods, including the prior-based parameter estimation algorithms of the IFM-based methods, using both subjective and objective analysis (the code used is freely available at https://github.com/wangyanckxx/Single-Underwater-Image-Enhancement-and-Color-Restoration). Building on this study, we pinpoint the key shortcomings of existing methods and draw recommendations for future research in this area. Our review of underwater image enhancement and restoration provides researchers with the necessary background to appreciate the challenges and opportunities in this important field.
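The degradation process that the IFM describes can be illustrated with a short forward simulation. This is a hedged sketch of the simplified, widely used form of the model — per-channel transmission decaying exponentially with depth — and the attenuation coefficients and background light values in the example are assumed for illustration, not measured constants.

```python
import numpy as np

def degrade(radiance, background_light, beta, depth):
    """Simulate the simplified underwater image formation model:

        I_c(x) = J_c(x) * t_c(x) + B_c * (1 - t_c(x)),
        t_c(x) = exp(-beta_c * d(x))

    where J is the scene radiance, B the background light, beta the
    per-channel attenuation coefficients (illustrative values only),
    and d the scene depth map.
    """
    beta = np.asarray(beta, dtype=float)
    t = np.exp(-beta[None, None, :] * depth[..., None])  # per-channel transmission
    return radiance * t + np.asarray(background_light) * (1.0 - t)
```

At zero depth the transmission is 1 everywhere and the simulated image equals the scene radiance; as depth grows, each channel fades toward the background light at its own rate, which is why red vanishes first in deep water.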
INDEX TERMS: Underwater image formation model, single underwater image enhancement, single underwater image restoration, background light estimation, transmission map estimation
Facial medical analysis, including the inspection of the face and inner facial components, has always been a primary part of the diagnostic method in Traditional Chinese Medicine (TCM). The existing literature focuses only on detecting or segmenting single facial organs such as the tongue, eyes, or lips. In this paper, we make the first attempt to handle multiple organs simultaneously and develop an end-to-end hybrid network with context aggregation (named TCMINet) to achieve face parsing for Traditional Chinese Medicine Inspection (TCMI). Additionally, we construct a new dataset named TCMID to overcome the lack of accurately annotated data. To verify the generalization ability of TCMINet, we manually relabel images in two popular face parsing datasets, LFW-PL and HELEN, for testing. Extensive ablation evaluations and experimental comparisons demonstrate that the proposed TCMINet outperforms state-of-the-art methods under various evaluation metrics. It runs at 267 ms per face (512 × 512 image) on an Nvidia Titan Xp GPU, making it feasible to integrate into engineering solutions.