Image-to-image conversion tasks have become more accurate and sophisticated than ever thanks to advances in deep learning. However, because a typical deep learning model is trained to perform only one task, multiple trained models are required even when the tasks are related to each other. For example, U-Net, a popular image-to-image convolutional neural network, is normally trained for a single task. Building on U-Net, this study proposes a model that outputs variable results using only one trained model. The proposed method produces a continuously changing output controlled by an external tuning parameter. We confirm the robustness of the proposed model by evaluating it on binarization and background blurring. These evaluations show that the model generates well-predicted outputs not only for trained tuning parameters but also for untrained ones. Furthermore, the proposed model can generate extrapolated outputs outside the learning range.

INDEX TERMS Image-to-image conversion, multiple tasks, U-Net, image binarization, background blur
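The abstract does not specify how the external tuning parameter enters the network; a common way to condition a U-Net-style model on a scalar is to append it to the input as an extra constant channel, so one set of weights can produce a continuum of outputs. The following is a minimal sketch of that conditioning step only (the function name and the channel-concatenation mechanism are assumptions, not the paper's stated method):

```python
import numpy as np

def with_tuning_channel(image, t):
    # Append a constant channel filled with the tuning parameter t,
    # so a single trained network sees the parameter as part of its input.
    # NOTE: assumed conditioning mechanism, not confirmed by the abstract.
    h, w, c = image.shape
    param = np.full((h, w, 1), t, dtype=image.dtype)
    return np.concatenate([image, param], axis=-1)

img = np.random.rand(64, 64, 3).astype(np.float32)
x = with_tuning_channel(img, 0.25)
print(x.shape)  # (64, 64, 4)
```

Varying `t` at inference time (including values never seen in training) is what would let such a model interpolate, and potentially extrapolate, between task behaviors.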
This paper proposes a robust vehicle detection method using AdaBoost and CLAHE (contrast-limited adaptive histogram equalization). We propose two techniques to detect vehicles effectively. First, rainy and nighttime conditions are judged by converting RGB values to brightness. Second, taillights are detected and a region of interest (ROI) is designated using CLAHE. We then choose the AdaBoost algorithm after comparing it with traditional vehicle detection methods such as the Gaussian mixture model (GMM) and optical flow. The proposed method achieves a precision of 0.85 and a recall of 0.87, outperforming both GMM and optical flow.
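The first step above, judging rainy or nighttime scenes from RGB brightness, can be sketched as a mean-luma threshold check. The luma weights below are the standard ITU-R BT.601 coefficients, but the threshold value and function names are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mean_brightness(rgb):
    # Convert RGB to brightness using ITU-R BT.601 luma weights,
    # then average over all pixels.
    weights = np.array([0.299, 0.587, 0.114])
    return float((rgb @ weights).mean())

def is_night(rgb, threshold=60.0):
    # threshold=60.0 is an assumed cutoff for illustration only.
    return mean_brightness(rgb) < threshold

dark = np.full((10, 10, 3), 30.0)
bright = np.full((10, 10, 3), 200.0)
print(is_night(dark), is_night(bright))  # True False
```

A real system would likely calibrate this threshold per camera and combine it with other cues (e.g. taillight response) before switching detection modes.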
In this paper, a text region extraction system for high-contrast text images in self-driving cars is proposed. The maximally stable extremal regions (MSER) method is commonly used to extract text regions, and images must be converted to grayscale before MSER processing. However, MSER on grayscale images captures regions of interest poorly under adverse conditions such as high contrast, low luminance, and strong light reflection. An MSER system with contrast-limited adaptive histogram equalization (CLAHE) is therefore proposed in place of conventional MSER: CLAHE is utilized as a pre-processing step for MSER text region detection. The proposed method achieves a precision of 81% and a recall of 82%, compared with 63% and 55%, respectively, for MSER on plain grayscale images.
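CLAHE improves on plain histogram equalization by working on local tiles and clipping each histogram so noise is not over-amplified. As a rough illustration of the clip-and-equalize idea only (a global, single-tile simplification, not the tiled CLAHE the paper uses, and not its implementation), one can clip the histogram, redistribute the excess, and equalize through the cumulative distribution:

```python
import numpy as np

def clip_limited_equalize(gray, clip_limit=0.01):
    # Simplified, global variant of CLAHE's core step: clip the histogram
    # so no bin exceeds clip_limit of all pixels, spread the clipped excess
    # evenly across bins, then equalize via the cumulative distribution.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    limit = max(1, int(clip_limit * gray.size))
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess // 256
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[gray]

rng = np.random.default_rng(0)
low_contrast = np.clip(rng.normal(120, 5, (64, 64)), 0, 255).astype(np.uint8)
enhanced = clip_limited_equalize(low_contrast)
# The narrow intensity range is stretched, raising local contrast
print(low_contrast.std(), enhanced.std())
```

Real CLAHE (e.g. OpenCV's `cv2.createCLAHE`) applies this per tile with bilinear interpolation between tiles, which is what makes it effective on unevenly lit road scenes before MSER is run.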