Microbially induced corrosion (MIC) of metal surfaces caused by biofilms has wide-ranging consequences. Analyzing biofilm images to understand the distribution of morphological components (microbial cells, MIC byproducts, and metal surface areas not occluded by cells) can provide insights for assessing the performance of coatings and for developing new strategies for corrosion prevention. We present an automated approach based on self-supervised deep learning to analyze scanning electron microscope (SEM) images and detect cells and MIC byproducts. The approach yields models that detect cells, MIC byproducts, and non-occluded surface areas in SEM images with high accuracy from a low volume of data, while requiring minimal manual annotation effort from experts. We develop deep learning pipelines built on both contrastive (MoCoV2) and non-contrastive (Barlow Twins) self-supervised methods and generate models that classify image patches into three labels: cells, MIC byproducts, and non-occluded surface areas. Experimental results on a dataset of seven grayscale SEM images show that both the Barlow Twins and MoCoV2 models outperform state-of-the-art supervised learning models, improving prediction accuracy by approximately 8% and 6%, respectively. The self-supervised pipelines achieve this superior performance while requiring experts to annotate only about 10% of the input data. We also conducted a qualitative expert assessment of the approach and validated the classification outputs generated by the self-supervised models. This is perhaps the first application of self-supervised learning to the classification of biofilm image components, and our results show that self-supervised methods are highly effective for this task while minimizing expert annotation effort.
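The redundancy-reduction objective behind Barlow Twins can be summarized compactly. The sketch below is a generic NumPy illustration of the published Barlow Twins loss, not code from the paper: it standardizes two batches of embeddings from two augmented views, forms their cross-correlation matrix, and penalizes diagonal entries for deviating from 1 (invariance) and off-diagonal entries for deviating from 0 (redundancy reduction). The weight `lam` is the usual trade-off hyperparameter.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins loss on two (batch_size x dim) embedding batches
    produced from two augmented views of the same images (sketch)."""
    n, _ = z_a.shape
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    # Cross-correlation matrix between the two views.
    c = (z_a.T @ z_b) / n
    # Invariance term: diagonal pulled toward 1.
    on_diag = np.sum((1.0 - np.diag(c)) ** 2)
    # Redundancy term: off-diagonal pushed toward 0.
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)
    return on_diag + lam * off_diag
```

Identical views of the same batch yield a near-identity cross-correlation matrix and hence a near-zero loss, while unrelated embeddings score much higher; in training, this loss is minimized over the encoder so that the two views agree without collapsing to redundant features.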
Scanning electron microscopy (SEM) has been used extensively to image and study bacterial cells at high resolution. Segmenting bacterial cells in SEM images is an essential task for distinguishing an object of interest and its specific region. The segmentation results can then be used to retrieve quantitative measures (e.g., cell length, area, and cell density) that support accurate decision-making about cellular objects. However, bacterial segmentation is a complex task: the foreground and background are similar in intensity and texture, and most clustered bacterial cells in images partially overlap one another. Traditional approaches for identifying cell regions in microscopy images are labor-intensive and depend heavily on the professional knowledge of researchers. To mitigate these challenges, in this study we tested a U-Net-based semantic segmentation architecture followed by a morphological post-processing step that resolves over-segmentation, achieving accurate cell segmentation of SEM-acquired images of bacterial cells grown in a rotary culture system. The approach achieved an 89.52% Dice similarity score on bacterial cell segmentation with low segmentation error rates, and it showed significant performance improvements when validated against several approaches for segmenting overlapping cell objects.
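The Dice similarity score reported above is a standard overlap metric for binary masks. As a generic illustration (not the paper's evaluation code), it can be computed as twice the intersection of the predicted and ground-truth masks divided by the sum of their sizes:

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary segmentation
    masks: 2|A∩B| / (|A| + |B|), in [0, 1]."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom
```

A score of 1.0 means the predicted cell mask exactly matches the ground truth; the 89.52% figure above indicates that predicted and annotated cell regions overlap almost completely on average.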
The success of deep networks for the semantic segmentation of images is limited by the availability of annotated training data. The manual annotation of images for segmentation is a tedious and time-consuming task that often requires skilled users with significant domain expertise to create high-quality annotations over hundreds of images. In this paper, we propose the segmentation with scant pixel annotations (SSPA) approach to generate high-performing segmentation models from a scant set of expert-annotated images. The models are trained on images with automatically generated pseudo-labels together with a scant set of expert-annotated images selected using an entropy-based algorithm. For each chosen image, experts are directed to assign labels to a particular group of pixels, while a set of replacement rules that leverage the patterns learned by the model automatically assigns labels to the remaining pixels. The SSPA approach integrates active learning and semi-supervised learning with pseudo-labels, where expert annotations are not required up front but are generated on demand. Extensive experiments on biomedical and biofilm datasets show that the SSPA approach achieves state-of-the-art performance with experts cumulatively annotating less than 5% of the pixels of the training data.
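The entropy-based selection step can be sketched generically: rank unlabeled images by the mean per-pixel entropy of the model's softmax output and send the most uncertain ones to the experts. This is an illustrative sketch of a standard entropy-based active-learning criterion, and the actual SSPA selection algorithm may differ in its details:

```python
import numpy as np

def select_most_uncertain(prob_maps, k):
    """Rank images by mean per-pixel prediction entropy and return the
    indices of the k most uncertain images (active-learning sketch).

    prob_maps: list of (H, W, num_classes) softmax probability maps.
    """
    scores = []
    for probs in prob_maps:
        # Shannon entropy per pixel; small epsilon avoids log(0).
        entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1)
        scores.append(entropy.mean())
    # Highest mean entropy = least confident = most informative.
    return list(np.argsort(scores)[::-1][:k])
```

Images on which the model is confident everywhere score near zero and stay pseudo-labeled; images with diffuse, near-uniform predictions score highest and are routed to experts for partial annotation.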
High Efficiency Video Coding (HEVC) is the most recent video codec standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goal of this newly introduced standard is to cater to high-resolution video in low-bandwidth environments with a higher compression ratio. This paper provides a performance comparison between the HEVC and H.264/AVC video compression standards in terms of objective quality, delay, and complexity in a broadcasting environment. The experimental investigation was carried out using six test sequences in the random access configuration of the HEVC test model (HM), the HEVC reference software, and under similar configuration settings in the Joint Scalable Video Model (JSVM), the official scalable H.264/AVC reference implementation, running in single-layer mode. According to the results obtained, HM achieves more than double the compression ratio of JSVM, delivering the same video quality at half the bitrate; however, HM encodes up to two times slower than JSVM. Hence, the application scenarios for HM and JSVM should be selected judiciously, considering the availability of system resources. For instance, HM is not suitable for low-delay applications, but it can be used effectively in low-bandwidth environments.
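The arithmetic relating the two headline claims is worth making explicit: at equal quality, doubling the compression ratio is the same statement as halving the bitrate. A minimal illustration with hypothetical numbers (not the paper's measurements):

```python
def compression_ratio(raw_bits, encoded_bits):
    """Compression ratio: size of the raw stream over the encoded size."""
    return raw_bits / encoded_bits

def bitrate_saving_pct(bitrate_ref, bitrate_test):
    """Percentage bitrate saving of a test codec versus a reference
    codec at equal objective quality."""
    return 100.0 * (bitrate_ref - bitrate_test) / bitrate_ref
```

For example, if a hypothetical JSVM encoding compresses a raw stream 25:1, an HM encoding of the same content at the same quality and half the bitrate compresses it 50:1, i.e., a 50% bitrate saving and double the compression ratio.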