Overuse of antibiotics in clinical medicine has contributed to the global spread of multidrug-resistant bacterial pathogens, including Acinetobacter baumannii. We present a case of an 88-year-old Chinese man who developed hospital-acquired pneumonia caused by carbapenem-resistant A. baumannii (CRAB). A personalized lytic pathogen-specific single-phage preparation was nebulized to the patient continuously for 16 days in combination with tigecycline and polymyxin E. The treatment was well tolerated and resulted in clearance of the pathogen and clinical improvement of the patient’s lung function.
When traditional super-resolution reconstruction methods are applied to infrared thermal images, they often ignore the poor image quality caused by the imaging mechanism, making it difficult to obtain high-quality reconstructions even when trained on simulated inverse degradation processes. To address these issues, we propose a thermal infrared image super-resolution reconstruction method based on multimodal sensor fusion, which enhances the resolution of thermal infrared images and relies on multimodal sensor information to reconstruct high-frequency details, thereby overcoming the limitations of the imaging mechanism. First, we design a novel super-resolution reconstruction network consisting of primary feature encoding, super-resolution reconstruction, and high-frequency detail fusion subnetworks. We design hierarchical dilated distillation modules and a cross-attention transformation module to extract and transmit image features, enhancing the network's ability to express complex patterns. Then, we propose a hybrid loss function to guide the network in extracting salient features from thermal infrared images and reference images while maintaining accurate thermal information. Finally, we propose a learning strategy that ensures high-quality super-resolution reconstruction even in the absence of reference images. Extensive experimental results show that the proposed method yields superior reconstruction quality compared with competing methods, demonstrating its effectiveness.
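The abstract does not give the exact form of the hybrid loss, but one plausible instance, combining a pixel-wise L1 term against the high-resolution thermal target (to keep thermal intensity accurate) with a gradient term against the reference image (to recover high-frequency detail), can be sketched as follows. The function names, the weighting `alpha`, and the specific terms are illustrative assumptions, not the paper's definition:

```python
import numpy as np

def gradients(img):
    # Forward-difference horizontal and vertical gradients.
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return gx, gy

def hybrid_loss(sr, hr, ref, alpha=0.8):
    # Illustrative hybrid loss (assumed form, not the paper's exact loss):
    # pixel term preserves absolute thermal intensity against the HR target,
    # gradient term encourages edges to follow the reference modality.
    pixel = np.abs(sr - hr).mean()
    sx, sy = gradients(sr)
    rx, ry = gradients(ref)
    grad = np.abs(sx - rx).mean() + np.abs(sy - ry).mean()
    return alpha * pixel + (1.0 - alpha) * grad
```

A perfect reconstruction of the target with reference-consistent edges drives both terms to zero; `alpha` trades off thermal fidelity against borrowed high-frequency detail.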
Deep convolutional neural networks have achieved great success in the single-image super-resolution task. Among well-known super-resolution methods, deep learning-based algorithms clearly show the most advanced performance. However, the current state-of-the-art algorithms use complex networks with large numbers of parameters, which makes them difficult to deploy on mobile devices. To solve this problem, we propose a lightweight dual-residual network (LDRN) for single-image super-resolution, which achieves better reconstruction quality than most current advanced lightweight algorithms. Owing to its fewer parameters and lower computational expense, our network is readily suited to real-time and mobile applications. Building on the residual module, we propose a new residual unit that uses two depthwise separable (DW) convolutions to strike a better balance between feature extraction capacity and lightweight performance. We further design a dual-stream residual block containing a multiplication branch and an addition branch, which improves reconstruction performance more effectively than expanding the network width. In addition, we design a new up-sampling module that simplifies previous up-sampling methods. Extensive experimental results show that our network outperforms most existing state-of-the-art algorithms in both reconstruction quality and lightweight performance.
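The parameter savings that motivate replacing standard convolutions with depthwise separable (DW) convolutions can be shown with a small counting sketch. The channel and kernel sizes below are illustrative, not taken from the paper:

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k*k*c_in filter per output channel
    # (bias terms omitted for clarity).
    return c_out * c_in * k * k

def dw_separable_params(c_in, c_out, k):
    # Depthwise separable convolution splits the above into:
    #   depthwise: one k*k filter applied per input channel,
    #   pointwise: a 1x1 convolution that mixes channels.
    depthwise = c_in * k * k
    pointwise = c_out * c_in
    return depthwise + pointwise

# Example: 64 -> 64 channels with a 3x3 kernel.
standard = conv_params(64, 64, 3)        # 36864 weights
separable = dw_separable_params(64, 64, 3)  # 4672 weights
```

For this configuration the DW separable form uses roughly 7.9x fewer weights, which is why stacking two such convolutions inside a residual unit keeps the network lightweight.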
Image fusion benefits many applications and is one of the most common and critical computer vision challenges. An ideal infrared and visible image fusion result should include the important infrared targets while preserving as much visible textural detail as possible. A novel infrared and visible image fusion framework is proposed for this purpose. In this paper, the proposed fusion network (MIFFuse) is an end-to-end, multi-level fusion network for infrared and visible images. The presented approach makes effective use of the intermediate convolution layers' output features to preserve the primary image fusion information. We also build a cat_block to swap information between two paths, gaining more sufficient information during the convolution steps. To further reduce the model's running time, the proposed method reduces the number of feature channels while maintaining fusion accuracy. Extensive experiments on the TNO and CVC-14 image fusion datasets show that our MIFFuse outperforms the other methods in terms of both subjective visual effects and quantitative metrics. Furthermore, MIFFuse is approximately twice as fast as the most recent state-of-the-art methods. Our code and models can be found at https://github.com/depeng6/MIFFuse.

INDEX TERMS end-to-end framework, multi-level features, image fusion, concatenation block.
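One plausible reading of the cat_block's information swap between the infrared and visible paths is a cross-path channel concatenation, sketched below in NumPy. This is an illustrative assumption about the block's structure, not the paper's exact design (which would also include convolutions after the concatenation):

```python
import numpy as np

def cat_block(feat_ir, feat_vis):
    # Swap information between the two paths: each path's features are
    # concatenated with the other path's along the channel axis, so both
    # branches see both modalities. Features have shape (channels, H, W).
    out_ir = np.concatenate([feat_ir, feat_vis], axis=0)
    out_vis = np.concatenate([feat_vis, feat_ir], axis=0)
    return out_ir, out_vis
```

After the swap, each branch carries twice the channels; a follow-up convolution that reduces channels (as the abstract's channel-reduction step suggests) would keep the running time low.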