In vivo confocal microscopy (IVCM) is a noninvasive, reproducible, and inexpensive diagnostic tool for corneal diseases. However, the ease and breadth of image acquisition in IVCM impose a heavy image-analysis workload on ophthalmologists, a problem that neural networks could help address. We developed a novel deep learning algorithm based on generative adversarial networks (GANs) and compared its accuracy for automatic segmentation of subbasal nerves in IVCM images with that of a fully convolutional neural network (U-Net) based method.
Methods: We collected IVCM images from 85 subjects. The U-Net and GAN-based segmentation methods were trained and tested under the supervision of three clinicians for segmentation of corneal subbasal nerves. Nerve segmentation results for the GAN- and U-Net-based methods were compared with the clinicians' assessments using Pearson's correlation, Bland-Altman analysis, and receiver operating characteristic (ROC) statistics. Additionally, different types of noise were applied to the IVCM images to evaluate each algorithm's robustness to the noise typical of biomedical imaging.
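The noise-robustness evaluation described in the Methods can be sketched as follows; the function names and sigma values are illustrative assumptions, not the study's actual protocol:

```python
import numpy as np

def add_speckle_noise(image, sigma=0.1, rng=None):
    """Multiplicative (speckle) noise: each pixel is scaled by 1 + a Gaussian term."""
    rng = np.random.default_rng(rng)
    noisy = image * (1.0 + rng.normal(0.0, sigma, image.shape))
    return np.clip(noisy, 0.0, 1.0)

def add_gaussian_noise(image, sigma=0.05, rng=None):
    """Additive Gaussian noise, another common corruption in biomedical imaging."""
    rng = np.random.default_rng(rng)
    return np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)

# Corrupt a toy image (intensities in [0, 1]) before feeding it to a trained model.
img = np.full((4, 4), 0.5)
speckled = add_speckle_noise(img, sigma=0.2, rng=0)
```

Evaluating a trained segmenter on such corrupted copies of the test set is one common way to probe noise robustness.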
Results: The GAN-based algorithm showed correlation and Bland-Altman results similar to those of U-Net, but significantly higher accuracy in ROC analysis. Additionally, U-Net's performance deteriorated significantly under the applied noise, especially speckle noise, compared with the GAN-based method.
Conclusions: This study is the first application of GAN-based algorithms to IVCM images. The GAN-based algorithm demonstrated higher accuracy than U-Net for automatic corneal nerve segmentation, in both patient-acquired and noise-corrupted images. This GAN-based segmentation method can serve as a facilitating diagnostic tool in ophthalmology clinics.
Translational Relevance: Generative adversarial networks are emerging deep learning models for medical image processing and could become important clinical tools for rapid segmentation and analysis of corneal subbasal nerves in IVCM images.
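The statistical comparisons this abstract reports (Pearson correlation, Bland-Altman analysis, ROC statistics) can be sketched on scalar per-image measurements such as nerve density. This is a minimal NumPy illustration, not the authors' analysis code, and the AUC helper assumes untied scores:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation between two raters' per-image measurements."""
    return float(np.corrcoef(a, b)[0, 1])

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two raters."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, bias - spread, bias + spread

def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) identity; assumes untied scores."""
    labels = np.asarray(labels, bool)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

In practice, libraries such as SciPy and scikit-learn provide tie-aware versions of these statistics.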
Depth map estimation and 3-D reconstruction from a single face image, or a few, is an important research field in computer vision. Many approaches have been proposed and developed over the last decade, but issues such as robustness remain to be resolved through further research. With the advent of GPU computing, convolutional neural networks have been applied to many computer vision problems. More recently, conditional generative adversarial networks (CGANs) have attracted attention because they adapt easily to many image-to-image problems, and they have been applied to a wide variety of tasks, such as background masking, segmentation, medical image processing, and super-resolution. In this work, we developed a GAN-based method for depth map estimation from any given single face image. We tested many GAN variants for the depth estimation task and conclude that the conditional Wasserstein GAN structure offers the most robust approach. We also compared the method with two other state-of-the-art methods, one based on deep learning and one on traditional approaches, and showed experimentally that the proposed method offers great opportunities for estimating face depth maps from face images.
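A minimal sketch of the conditional Wasserstein GAN objective this abstract identifies as most robust: the critic and generator losses below follow the standard WGAN formulation, and the L1 depth-fidelity term (with an assumed pix2pix-style weight `lam`) stands in for the conditioning. The paper's actual architecture and hyperparameters are not given here.

```python
import numpy as np

def critic_loss(real_scores, fake_scores):
    """Wasserstein critic loss: minimizing this maximizes E[D(real)] - E[D(fake)].
    (Training would also need a gradient penalty or weight clipping, omitted here.)"""
    return float(np.mean(fake_scores) - np.mean(real_scores))

def generator_loss(fake_scores):
    """The generator minimizes -E[D(fake)] so generated depth maps score like real ones."""
    return float(-np.mean(fake_scores))

def depth_l1(pred_depth, true_depth, lam=100.0):
    """Conditional fidelity term: weighted L1 distance to the ground-truth depth map.
    The weight `lam` is an assumed pix2pix-style hyperparameter."""
    pred = np.asarray(pred_depth, float)
    true = np.asarray(true_depth, float)
    return float(lam * np.mean(np.abs(pred - true)))
```

In a full training loop, the generator would minimize `generator_loss(...) + depth_l1(...)` while the critic minimizes `critic_loss(...)`, alternating updates.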
INDEX TERMS: 3D face reconstruction, generative adversarial networks, deep learning.
Verification and validation (V&V) of systems, and systems of systems, in an industrial context has never been as important as it is today. Recent developments in automated cyber-physical systems, digital twin environments, and Industry 4.0 applications require effective and comprehensive V&V mechanisms. Verification and Validation of Automated Systems' Safety and Security (VALU3S), a Horizon 2020 Electronic Components and Systems for European Leadership Joint Undertaking (ECSEL-JU) project started in May 2020, aims to create and evaluate a multi-domain V&V framework that facilitates evaluation of automated systems from the component level to the system level, reducing the time and effort needed to evaluate these systems. VALU3S focuses on V&V for the requirements of safety, cybersecurity, and privacy (SCP). This paper elaborates one of the 13 use cases of VALU3S to identify the SCP issues in an automated robot inspection cell that is actively used for quality-control assessment of automotive body-in-white. The joint study takes a collaborative approach that brings together V&V methods and workflows for safe trajectory planning and execution of the robotic arms, fault injection techniques, cyber-physical security vulnerability assessment, anomaly detection, and the SCP countermeasures required for remote control and inspection. The paper also presents cross-links with ECSEL-JU goals and current advancements in the market and the scientific and technological state of play.
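Of the V&V techniques listed, anomaly detection is the most self-contained to illustrate. The z-score detector below, applied to a stream of robot telemetry readings, is a deliberately simple stand-in and not a method from the VALU3S project:

```python
import numpy as np

def zscore_anomalies(readings, threshold=3.0):
    """Return indices of telemetry samples whose absolute z-score exceeds `threshold`.
    Assumes the readings are not constant (nonzero standard deviation)."""
    readings = np.asarray(readings, float)
    z = np.abs((readings - readings.mean()) / readings.std())
    return np.flatnonzero(z > threshold)
```

A production system in a robot cell would use richer models (multivariate, temporal), but the interface — telemetry in, flagged sample indices out — is the same.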