Oral Squamous Cell Carcinoma (OSCC) is a common type of cancer of the oral epithelium. Despite its high impact on mortality, screening methods for the early diagnosis of OSCC often lack accuracy, so most OSCCs are diagnosed at a late stage. Early detection and accurate outline estimation of OSCCs would lead to a better curative outcome and a reduction in recurrence rates after surgical treatment. Confocal Laser Endomicroscopy (CLE) records sub-surface micro-anatomical images for in vivo cell structure analysis. Recent CLE studies showed great prospects for reliable, real-time ultrastructural imaging of OSCC in situ. We present and evaluate a novel automatic approach for OSCC diagnosis using deep learning on CLE images. The method is compared against textural feature-based machine learning approaches that represent the current state of the art. For this work, CLE image sequences (7894 images) from patients diagnosed with OSCC were obtained from four specific locations in the oral cavity, including the OSCC lesion. The presented approach outperforms the state of the art in CLE image recognition with an area under the curve (AUC) of 0.96 and a mean accuracy of 88.3% (sensitivity 86.6%, specificity 90%).
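As a brief illustration of how the reported sensitivity, specificity, and accuracy follow from per-image predictions, the sketch below computes these metrics from a toy confusion matrix. The labels and predictions are made up for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical per-image labels and predictions (1 = carcinoma, 0 = healthy);
# illustrates how sensitivity, specificity, and accuracy are derived.
y_true = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])

tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives

sensitivity = tp / (tp + fn)        # true positive rate
specificity = tn / (tn + fp)        # true negative rate
accuracy = (tp + tn) / len(y_true)

print(sensitivity, specificity, accuracy)  # 0.8 0.8 0.8
```

In the study these metrics are reported per image; with highly imbalanced classes, sensitivity and specificity are more informative than raw accuracy.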
Nanoparticles are important in various technological fields such as energy, electronics, medicine, and many more. [1][2][3][4][5] However, as a consequence of industrial processes and man-made pollution, unwanted nanoparticle size distributions and concentrations [6] give rise to concerns with respect to human health and environmental pollution. While the nanoparticles' physicochemical properties (size, shape, surface chemistry, etc.) determine the quality of products, [7,8] such characteristics are also important for evaluating the biological impact of nanoparticles at the molecular, cellular, and systemic level in any risk assessment for environmental and human health. [9] To characterize nanoparticles in a dynamic context and on a case-by-case basis, the scientific community frequently applies microscopic imaging techniques with nanometer-scale spatial resolution, including those based on focused electron or ion beams in scanning electron microscopes (SEMs) or helium ion microscopes (HIMs). [10] Given the substantial information content of digital images, these techniques often benefit from, or require, automated high-throughput data analysis that enables the accurate and robust identification of large numbers of particles.

Nanoparticles occur in various environments as a consequence of man-made processes, which raises concerns about their impact on the environment and human health. To allow for proper risk assessment, a precise and statistically relevant analysis of particle characteristics (such as size, shape, and composition) is required, which would greatly benefit from automated image analysis procedures. While deep learning shows impressive results in object detection tasks, its applicability is limited by the amount of representative, experimentally collected and manually annotated training data. Here, an elegant, flexible, and versatile method to bypass this costly and tedious data acquisition process is presented.
It is shown that rendering software can be used to generate realistic, synthetic training data for a state-of-the-art deep neural network. With this approach, a segmentation accuracy comparable to that of manual annotations is achieved for the toxicologically relevant metal-oxide nanoparticle ensembles chosen as examples. The presented study paves the way toward the use of deep learning for automated, high-throughput particle detection in a variety of imaging techniques, such as microscopies and spectroscopies, and for a wide range of applications, including the detection of micro- and nanoplastic particles in water and tissue samples.
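The key idea of synthetic training data is that ground-truth masks come for free: because the image is rendered, every particle pixel is known exactly. The minimal NumPy sketch below mimics this with flat 2-D disks on a noisy background; the actual study uses a 3-D rendering pipeline, so all shapes, intensities, and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_particle_image(size=128, n_particles=10):
    """Render a toy SEM-like image of bright circular particles on a noisy
    background, together with its pixel-exact ground-truth mask.
    Illustrative stand-in for the rendering-based pipeline: in a real
    setup the renderer would model 3-D shape, shading, and detector noise."""
    yy, xx = np.mgrid[0:size, 0:size]
    image = rng.normal(0.2, 0.05, (size, size))  # dark, noisy background
    mask = np.zeros((size, size), dtype=bool)
    for _ in range(n_particles):
        cx, cy = rng.integers(0, size, 2)        # random particle center
        r = rng.integers(4, 12)                  # random particle radius
        disk = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
        image[disk] = rng.normal(0.8, 0.05)      # bright particle interior
        mask |= disk                             # exact segmentation label
    return image.clip(0.0, 1.0), mask

img, mask = synthetic_particle_image()
```

Pairs of `img` and `mask` generated this way can be fed directly to a segmentation network, replacing manual annotation entirely.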
Diagnosis and severity staging of Parkinson's disease (PD) rely mainly on subjective clinical examination. To better monitor disease progression and therapy success in PD patients, new objective and rater-independent parameters are required. Surface electromyography (EMG) during dynamic movements is one possible modality. However, EMG signals are often difficult to understand and interpret clinically. In this study, pattern recognition was applied to find suitable parameters to differentiate PD patients from healthy controls. EMG signals were recorded from five patients with PD and five younger healthy controls while they performed a series of standardized gait tests. Wireless surface electrodes were placed bilaterally on the tibialis anterior and the gastrocnemius medialis and lateralis. Accelerometers were positioned on both heels and used for step segmentation. Statistical and frequency features were extracted and used to train a Support Vector Machine classifier. Sensitivity and specificity were both high at 0.90 using leave-one-subject-out cross-validation. Feature selection revealed kurtosis and mean frequency as the best features, with a significant difference in kurtosis (p=0.013). Evaluated on a larger population, this approach could lead to objective diagnostic and staging tools for PD.
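The two features the study found most discriminative, excess kurtosis and mean frequency, are straightforward to compute per step segment. The sketch below is a pure-NumPy illustration under assumed parameters (1 kHz sampling, step segmentation already done via the heel accelerometers); it is not the study's implementation.

```python
import numpy as np

def emg_features(signal, fs=1000.0):
    """Compute excess kurtosis (time domain) and mean frequency of the
    power spectrum for one EMG step segment. Sampling rate fs is an
    assumption; segmentation into steps is presumed already done."""
    x = signal - signal.mean()
    # excess kurtosis: fourth standardized moment minus 3
    kurtosis = np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0
    # mean frequency: power-weighted average of the spectrum
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mean_freq = np.sum(freqs * spectrum) / np.sum(spectrum)
    return kurtosis, mean_freq

# Sanity check with a known signal: a pure 50 Hz tone has mean frequency
# 50 Hz and excess kurtosis -1.5.
t = np.arange(0, 1, 1 / 1000.0)
k, mf = emg_features(np.sin(2 * np.pi * 50 * t))
```

Feature vectors built this way (per step, per muscle) would then be passed to a Support Vector Machine and evaluated with leave-one-subject-out cross-validation, as described in the abstract.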
The forming limit curve (FLC) is used to model the onset of sheet metal instability during forming processes, e.g., in finite element analysis, and is usually determined by evaluating strain distributions derived from optical measurement systems during Nakajima tests. Current methods comprise the standardized DIN EN ISO 12004-2 approach and time-dependent approaches, which heuristically limit the evaluation area to a fraction of the available information and show weaknesses for brittle materials without a pronounced necking phase. To address these limitations, supervised and unsupervised pattern recognition methods were introduced recently. However, these approaches still depend on prior knowledge, time, and localization information. This study overcomes these limitations by adopting a Siamese convolutional neural network (CNN) as a feature extractor. Suitable features are learned automatically in a supervised setup from the extreme cases of the homogeneous and inhomogeneous forming phases. Using robust Student's t mixture models, the learned features are clustered in an unsupervised manner into three distributions that cover the complete forming process. Because the method is independent of location and time, the knowledge learned from specimens formed until fracture can be transferred to other forming processes that were stopped prematurely and assessed using metallographic examinations, enabling probabilistic cluster membership assignments for each frame of the forming sequence. The generalization of the method to unseen materials is evaluated in multiple experiments and additionally tested on the aluminum alloy AA5182, which is characterized by Portevin–Le Chatelier effects.
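The probabilistic cluster membership per frame can be pictured as the responsibilities of a three-component mixture evaluated on a 1-D feature trajectory. The sketch below uses Gaussian components for brevity instead of the robust Student's t components of the study, and all component parameters and feature values are made up for illustration.

```python
import numpy as np

# Toy stand-in for the clustering step: for a 1-D per-frame feature,
# compute soft memberships under three components (e.g., homogeneous
# forming / diffuse necking / local necking). Gaussian components are
# used here instead of the paper's robust Student's t mixture.
means = np.array([0.0, 0.5, 1.0])      # assumed component centers
stds = np.array([0.15, 0.15, 0.15])    # assumed component spreads
weights = np.array([1 / 3, 1 / 3, 1 / 3])

def memberships(feature):
    # responsibility of each component for one frame's feature value;
    # the common 1/sqrt(2*pi) factor cancels in the normalization
    dens = weights * np.exp(-0.5 * ((feature - means) / stds) ** 2) / stds
    return dens / dens.sum()

frames = np.linspace(0.0, 1.0, 5)      # feature per frame of a sequence
resp = np.array([memberships(f) for f in frames])
```

Early frames are assigned almost entirely to the first (homogeneous) component and the last frames to the third, while intermediate frames receive split memberships, which is what allows a prematurely stopped test to be placed on the same probabilistic scale.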