Over the last decade, automatic image colorization has attracted significant interest across several application areas, including the restoration of aged or degraded images. The problem is highly ill-posed because of the large degrees of freedom in assigning color information. Many recent approaches to automatic colorization either involve images that share a common theme or require heavily processed inputs such as semantic maps. In our approach, we attempt to fully generalize the colorization procedure using a conditional Deep Convolutional Generative Adversarial Network (DCGAN). The network is trained on publicly available datasets such as CIFAR-10 and Places365, and the results of the generative model are compared against those of traditional deep neural networks.
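The conditioning step this abstract describes can be illustrated by how training pairs are built: the generator receives the grayscale image as its condition and must produce the color version. A minimal sketch, assuming standard ITU-R BT.601 luminance weights for the grayscale conversion (the function name is illustrative, not from the paper):

```python
import numpy as np

def make_pairs(rgb_batch):
    """Build (grayscale condition, color target) pairs for a conditional
    GAN colorizer. The generator is conditioned on the grayscale channel;
    the discriminator judges (grayscale, color) pairs together."""
    # ITU-R BT.601 luminance weights (sum to 1.0, so gray stays in [0, 1]).
    gray = rgb_batch @ np.array([0.299, 0.587, 0.114])
    # Keep an explicit channel axis so the condition can be concatenated
    # with generator inputs or discriminator inputs along channels.
    return gray[..., None], rgb_batch
```

At training time, each grayscale condition would be paired either with the real color image or with the generator's output when fed to the discriminator.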
Nonlocal-means (NL-means) is an image denoising method that replaces each pixel with a weighted average of all the pixels in the image. Unfortunately, the method requires computing the weighting terms for all possible pairs of pixels, making it computationally expensive. Some shortcuts assign a weight of zero to any pixel pair whose neighbourhood averages are too dissimilar. In this paper, we propose an alternative strategy that uses the SVD to eliminate dissimilar pixel pairs more efficiently. Experiments comparing this method against other NL-means speed-up strategies show that its more refined discrimination between similar and dissimilar pixel neighbourhoods significantly improves the denoising effect.
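The idea can be sketched as follows: project each patch onto its top few singular directions, and skip any candidate pair whose low-dimensional codes differ by more than a threshold before computing the full patch distance. This is a minimal illustrative sketch, not the paper's implementation; all parameter names and default values (`patch`, `search`, `h`, `k`, `tau`) are assumptions:

```python
import numpy as np

def nl_means_svd(img, patch=3, search=7, h=0.15, k=4, tau=0.5):
    """NL-means denoising with an SVD-based pre-selection step."""
    pad = patch // 2
    padded = np.pad(img, pad, mode='reflect')
    rows, cols = img.shape

    # Extract every patch as a flat vector.
    patches = np.empty((rows, cols, patch * patch))
    for i in range(rows):
        for j in range(cols):
            patches[i, j] = padded[i:i + patch, j:j + patch].ravel()

    # SVD of the centered patch matrix -> k-dimensional patch codes.
    flat = patches.reshape(-1, patch * patch)
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    codes = ((flat - mean) @ vt[:k].T).reshape(rows, cols, k)

    out = np.zeros_like(img)
    half = search // 2
    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(0, i - half), min(rows, i + half + 1)
            j0, j1 = max(0, j - half), min(cols, j + half + 1)
            wsum, acc = 0.0, 0.0
            for m in range(i0, i1):
                for n in range(j0, j1):
                    # Cheap test in SVD space: give dissimilar pairs
                    # zero weight without computing the full distance.
                    if np.sum((codes[i, j] - codes[m, n]) ** 2) > tau:
                        continue
                    d2 = np.mean((patches[i, j] - patches[m, n]) ** 2)
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    acc += w * img[m, n]
            out[i, j] = acc / wsum if wsum > 0 else img[i, j]
    return out
```

The sketch restricts the average to a search window, as most practical NL-means variants do; the SVD test replaces the neighbourhood-average test described above as the pair-rejection criterion.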
This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type, the goal is to classify images into four categories: normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. Due to the large size of each image in the training dataset, we propose a patch-based technique consisting of two consecutive convolutional neural networks. The first "patch-wise" network acts as an auto-encoder that extracts the most salient features of image patches, while the second "image-wise" network classifies the whole image. The first network is pre-trained and aimed at extracting local information, while the second network obtains global information about an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method yields 95% accuracy on the validation set, compared to the 77% accuracy previously reported in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018.
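The first step of such a patch-based pipeline is decomposing the large histology image into overlapping patches that the patch-wise network can process; the per-patch features are then assembled and passed to the image-wise classifier. A minimal sketch of the decomposition step, with illustrative patch size and stride (not taken from the paper):

```python
import numpy as np

def extract_patches(image, patch_size=512, stride=256):
    """Split a large image into overlapping square patches.

    Each patch would be fed to the patch-wise feature-extraction
    network; the resulting feature vectors are then stacked and
    passed to the image-wise classification network."""
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size,
                                 left:left + patch_size])
    return np.stack(patches)
```

Overlapping strides (stride < patch_size) help ensure that structures falling on patch boundaries are still seen whole by at least one patch.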
The presence of an iodinated contrast agent in the breast produced small but significant changes in the power-law parameters of unprocessed CEDM images compared to the precontrast images. Image subtraction in SE CEDM significantly reduced anatomical noise compared to conventional DM, with both α and β reduced by about a factor of 2. The data presented here, and in Part II of this work, will be useful for modeling CEDM backgrounds, for system characterization, and for lesion detectability experiments using models that account for anatomical noise.
Greyscale image colorization for image restoration applications has seen significant improvements in recent years. However, many learning-based techniques struggle to colorize sparse inputs effectively. With the consistent growth of the anime industry, the ability to colorize sparse input such as line art can significantly reduce cost and redundant work for production studios by eliminating the manual in-between frame colorization process. Simply applying existing methods yields inconsistent colors between related frames, resulting in a flicker effect in the final video. To successfully automate key areas of large-scale anime production, the colorization of line art must be temporally consistent between frames. This paper proposes a method to colorize line-art frames in an adversarial setting, creating temporally coherent video for large-scale anime production by improving existing image-to-image translation methods. We show that by adding an extra condition to the generator and discriminator, we can effectively create temporally consistent video sequences from anime line art.
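One plausible form of the extra condition described above is the previously colorized frame, stacked with the current line-art frame along the channel axis so the networks can match colors across frames. A minimal sketch under that assumption (names, shapes, and the flicker proxy below are illustrative, not from the paper):

```python
import numpy as np

def conditioned_input(line_art, prev_frame):
    """Concatenate the current line-art frame with the previously
    colorized frame along channels: the extra condition given to the
    generator (and, with its output, to the discriminator)."""
    return np.concatenate([line_art, prev_frame], axis=-1)

def flicker(frames):
    """Mean absolute change between consecutive frames: a simple
    proxy for temporal inconsistency (lower means less flicker)."""
    return float(np.mean(np.abs(np.diff(frames, axis=0))))
```

A flicker-style metric like this could be used to compare per-frame colorization against the temporally conditioned variant on a held-out sequence.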
The objective of this paper is to explore, through a qualitative study of small regional airports, how sustainability issues are taken into account in remote small- and medium-sized enterprises (SMEs). Based on 42 semi-structured interviews conducted with managers of small regional Canadian airports and experts in this area, this study shows a quasi-absence of specific measures for sustainability, despite the seriousness of environmental issues, which tend to be subordinated to economic priorities and operational activities. The paper contributes to the literature on sustainability in SMEs by focusing on passive organizations located in remote areas and the complex reasons underlying their weak or absent environmental commitment. The paper sheds light on the essential role of stakeholders in providing the resources and skills necessary for developing sustainability initiatives in passive SMEs. The study's managerial contributions and implications for stakeholders are also discussed.