Background: Artificial intelligence can be trained to outperform dermatologists in image-based skin cancer diagnostics. However, the networks' sensitivity to biases and overfitting may hamper their clinical applicability. Objectives: The aim of this study was to explain the potential consequences of implementing convolutional neural networks for stand-alone melanoma diagnostics and skin lesion triage. Methods: In this algorithm validation study on retrospective data, we reproduced and evaluated the performance of state-of-the-art artificial intelligence (convolutional neural networks) for skin cancer diagnostics. The networks were trained on 25,331 annotated dermoscopic skin lesion images from an open-source data set (ISIC-2019) and tested on a novel data set (AISC-2021) consisting of 26,591 annotated dermoscopic skin lesion images. We tested the trained algorithms' ability to generalize to new data and their diagnostic performance in two simulations (melanoma diagnostics and skin lesion triage). Results: The trained algorithms performed significantly less accurately on images of nevi, melanomas and actinic keratoses from the AISC-2021 data set than on those from the ISIC-2019 data set (p < 0.003). Almost one-third (31.1%) of the melanomas were misclassified during the melanoma diagnostics simulation, irrespective of their Breslow thickness. Furthermore, the algorithms marked 92.7% of the lesions 'suspicious' during the triage simulation, which yielded a triage sensitivity and specificity of 99.7% and 8.2%, respectively. Conclusions: Although state-of-the-art artificial intelligence outperforms dermatologists on image-based skin lesion classification within an artificial
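The triage sensitivity and specificity quoted above follow the standard confusion-matrix definitions. As an illustration only, a minimal sketch with hypothetical counts (not the study's actual data):

```python
def triage_metrics(tp, fn, tn, fp):
    """Standard screening metrics from confusion-matrix counts.

    tp: malignant lesions flagged 'suspicious'
    fn: malignant lesions missed
    tn: benign lesions correctly cleared
    fp: benign lesions flagged 'suspicious'
    """
    sensitivity = tp / (tp + fn)          # fraction of malignancies caught
    specificity = tn / (tn + fp)          # fraction of benign lesions cleared
    flagged = (tp + fp) / (tp + fn + tn + fp)  # overall 'suspicious' rate
    return sensitivity, specificity, flagged


# Hypothetical counts chosen to mirror the reported sensitivity/specificity:
sens, spec, flagged = triage_metrics(tp=997, fn=3, tn=82, fp=918)
```

A high 'suspicious' rate with near-perfect sensitivity but very low specificity, as in the abstract, is exactly what these formulas produce when almost everything is flagged.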
When doctors are trained to diagnose a specific disease, they learn faster when presented with cases in order of increasing difficulty. This creates the need to automatically estimate how difficult a given case is for doctors to classify. In this paper, we introduce methods for estimating how hard it is for a doctor to diagnose a case represented by a medical image, both when ground truth difficulties are available for training and when they are not. Our methods are based on embeddings obtained with deep metric learning. Additionally, we introduce a practical method for obtaining ground truth human difficulty for each image case in a dataset using self-assessed certainty. We apply our methods to two different medical datasets, achieving high Kendall rank correlation coefficients on both and outperforming existing methods by a large margin on this problem and data.
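The abstract's evaluation metric (Kendall rank correlation between predicted and ground-truth difficulty rankings) and the idea of scoring difficulty from embedding-space geometry can be sketched in a few lines. This is a hypothetical illustration, not the authors' method: the margin-style proxy below simply calls a case harder when its nearest same-class neighbour is farther away than its nearest other-class neighbour.

```python
import numpy as np

def kendall_tau(a, b):
    # Naive O(n^2) Kendall rank correlation between two score lists:
    # +1 for each concordant pair, -1 for each discordant pair.
    n = len(a)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
    return s / (n * (n - 1) / 2)

def difficulty_from_embeddings(emb, labels):
    # Margin-style proxy on metric-learning embeddings: a case is
    # harder when its nearest same-class neighbour is farther away
    # than its nearest other-class neighbour.
    emb = np.asarray(emb, dtype=float)
    labels = np.asarray(labels)
    scores = []
    for i in range(len(emb)):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf  # exclude the point itself
        same = d[labels == labels[i]].min()
        other = d[labels != labels[i]].min()
        scores.append(same - other)  # larger => harder
    return np.array(scores)
```

Predicted difficulty scores would then be compared to ground-truth difficulties (e.g. from self-assessed certainty) via `kendall_tau`.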
SUMMARY This paper presents a novel solution for precision draping of prepreg composite fabrics onto double-curved molds. Our contributions relate to system design, covering hardware and software components, and to system integration. On the hardware side, we describe the design and implementation of a drape tool with up to 120 suction cups positioned individually by linear actuators. On the software side, we present the design and implementation of the software architecture, along with the necessary sensing and mathematical-modeling algorithms. The system's essential components were verified individually, and the entire integrated system was successfully validated in proof-of-concept experiments performed on an experimental physical model of the system.
The most prevalent routine for camera calibration is based on the detection of well-defined feature points on a purpose-made calibration artifact. These could be checkerboard saddle points, circles, rings or triangles, often printed on a planar structure. The feature points are first detected and then used in a nonlinear optimization to estimate the internal camera parameters. We propose a new method for camera calibration using the principle of inverse rendering. Instead of relying solely on detected feature points, we use an estimate of the internal parameters and the pose of the calibration object to implicitly render a non-photorealistic equivalent of the optical features. This enables us to compute pixel-wise differences in the image domain without interpolation artifacts. We can then improve our estimate of the internal parameters by minimizing pixel-wise least-squares differences. In this way, our model optimizes a meaningful metric in the image space, assuming the normally distributed noise characteristic of camera sensors. We demonstrate using synthetic and real camera images that our method improves the accuracy of estimated camera parameters as compared with current state-of-the-art calibration routines. Our method also estimates these parameters more robustly in the presence of noise and in situations where the number of calibration images is limited.
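The core idea above, rendering a non-photorealistic prediction from the current parameter estimate and minimizing pixel-wise least-squares differences against the observed image, can be illustrated with a deliberately tiny 1D toy. Everything here (the Gaussian-splat "renderer", the grid search over a single focal-length parameter) is a hypothetical stand-in for the paper's actual pipeline, which optimizes the full set of internal parameters:

```python
import numpy as np

def render(f, points_3d, width=64):
    # Toy non-photorealistic "renderer": pinhole-project 3D points with
    # focal length f and splat a unit Gaussian blob at each projected
    # pixel coordinate along a 1D image line.
    img = np.zeros(width)
    xs = np.arange(width)
    for X, Y, Z in points_3d:
        u = f * X / Z + width / 2  # 1D pinhole projection
        img += np.exp(-0.5 * (xs - u) ** 2)
    return img

def calibrate(observed, points_3d, f_grid):
    # Pick the focal length whose rendering minimizes the pixel-wise
    # least-squares error against the observed image.
    errors = [np.sum((render(f, points_3d) - observed) ** 2) for f in f_grid]
    return f_grid[int(np.argmin(errors))]

# Usage: simulate an observation at f = 50, then recover f.
pts = [(1.0, 0.0, 5.0), (-0.8, 0.0, 4.0)]
observed = render(50.0, pts)
f_hat = calibrate(observed, pts, np.linspace(30.0, 70.0, 401))
```

The real method would replace the grid search with nonlinear least-squares over all internal parameters and the calibration object's pose, but the objective, a pixel-wise image-domain residual, is the same in spirit.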
When 3D scanning objects, the objective is usually to obtain a continuous surface. However, most surface scanning methods, such as structured light scanning, yield a point cloud. Obtaining a continuous surface from a point cloud requires a subsequent surface reconstruction step, which is directly affected by any error from the computation of the point cloud. In this work, we propose a one-step approach in which we compute the surface directly from structured light images. Our method minimizes the least-squares error between photographs and renderings of a triangle mesh, where the vertex positions of the mesh are the parameters of the minimization problem. To ensure fast iterations during optimization, we use differentiable rendering, which computes images and gradients in a single pass. We present simulation experiments demonstrating that our method for computing a triangle mesh has several advantages over approaches that rely on an intermediate point cloud. Our method can produce accurate reconstructions when initializing the optimization from a sphere. We also show that our method is good at reconstructing sharp edges and that it is robust with respect to image noise. In addition, our method can improve the output from other reconstruction algorithms if we use these for initialization.
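The optimization loop described above, with vertex positions as the parameters of a photometric least-squares problem, can be sketched with a tiny 1D analogue. This is a hypothetical illustration, not the paper's implementation: a height profile stands in for the triangle mesh, linear interpolation stands in for rasterization, and finite differences stand in for the single-pass gradients a differentiable renderer would provide.

```python
import numpy as np

def render_profile(heights, width=32):
    # Toy "renderer": linearly interpolate vertex heights across the
    # image width (a 1D stand-in for rasterizing a triangle mesh).
    x = np.linspace(0, len(heights) - 1, width)
    return np.interp(x, np.arange(len(heights)), heights)

def fit_mesh(observed, n_vertices=5, lr=0.05, iters=500):
    # Minimize the pixel-wise least-squares error between the observed
    # image and renderings, with vertex heights as the parameters.
    h = np.zeros(n_vertices)  # flat initialization (cf. sphere init)
    eps = 1e-4
    for _ in range(iters):
        base = np.sum((render_profile(h, len(observed)) - observed) ** 2)
        g = np.zeros_like(h)
        for i in range(len(h)):
            hp = h.copy()
            hp[i] += eps
            loss_p = np.sum((render_profile(hp, len(observed)) - observed) ** 2)
            g[i] = (loss_p - base) / eps  # finite-difference gradient
        h -= lr * g
    return h

# Usage: simulate an observation from a known profile, then recover it.
true_heights = np.array([0.0, 1.0, 0.5, 2.0, 1.0])
observed = render_profile(true_heights, 32)
recovered = fit_mesh(observed)
```

In the actual method, differentiable rendering supplies these gradients in a single pass, which is what makes per-iteration cost practical for full triangle meshes and real structured light images.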