<abstract>
<p>The coronavirus disease 2019 (COVID-19) outbreak has resulted in countless infections and deaths worldwide, posing increasing challenges for the health care system. The use of artificial intelligence to assist diagnosis not only achieved high accuracy but also saved time and effort during the sudden outbreak phase, when doctors and medical equipment were in short supply. This study aimed to propose a weakly supervised COVID-19 classification network (W-COVNet). The network was divided into three main modules: a weakly supervised feature selection module (W-FS), a deep learning bilinear feature fusion module (DBFF) and a Grad-CAM++-based network visualization module (Grad-V). The first module, W-FS, removed redundant background features from computed tomography (CT) images, performed feature selection and retained the core feature regions. The second module, DBFF, used two symmetric networks to extract different features and thus obtain rich complementary features. The third module, Grad-V, allowed lesions to be visualized in unlabeled images. A fivefold cross-validation experiment showed an average classification accuracy of 85.3%, and a comparison with seven advanced classification models showed that our proposed network performed better.</p>
</abstract>
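The abstract gives no implementation details for the DBFF module beyond "two symmetric networks" whose features are fused bilinearly. The following is a minimal PyTorch-style sketch of how such a bilinear fusion of two symmetric feature extractors could look; the class name `BilinearFusionNet`, the choice of ResNet-18 backbones, the input resolution and the two-class output are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch (assumption, not the authors' code): bilinear fusion of two
# symmetric CNN branches for two-class CT classification (COVID / non-COVID).
import torch
import torch.nn as nn
import torchvision.models as models

class BilinearFusionNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two symmetric backbones extract complementary features
        # (ResNet-18 is a hypothetical choice; the paper does not specify).
        self.branch_a = nn.Sequential(*list(models.resnet18(weights=None).children())[:-2])
        self.branch_b = nn.Sequential(*list(models.resnet18(weights=None).children())[:-2])
        self.classifier = nn.Linear(512 * 512, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fa = self.branch_a(x)                                   # (N, 512, H, W)
        fb = self.branch_b(x)                                   # (N, 512, H, W)
        n, c, h, w = fa.shape
        fa = fa.reshape(n, c, h * w)
        fb = fb.reshape(n, c, h * w)
        # Bilinear pooling: outer product of the two feature maps, averaged over locations.
        bilinear = torch.bmm(fa, fb.transpose(1, 2)) / (h * w)  # (N, 512, 512)
        bilinear = bilinear.reshape(n, -1)
        # Signed square root and L2 normalization, as commonly used with bilinear features.
        bilinear = torch.sign(bilinear) * torch.sqrt(torch.abs(bilinear) + 1e-10)
        bilinear = nn.functional.normalize(bilinear)
        return self.classifier(bilinear)

# Example: a batch of 4 three-channel CT slices at 224x224 resolution.
logits = BilinearFusionNet()(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```

In this sketch the two branches see the same input and learn different filters; other variants (e.g., feeding each branch a differently preprocessed view of the CT slice) would fit the same fusion scheme.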