Ultrasound elastography can quantify the stiffness distribution of tissue lesions and complements conventional B-mode ultrasound for breast cancer screening. The development of computer-aided diagnosis has improved the reliability of such systems, while machine learning, including deep learning, has further extended their power by facilitating automated segmentation and tumour classification. The objective of this review was to summarize the application of machine learning models to ultrasound elastography systems for breast tumour classification. Review databases included PubMed, Web of Science, CINAHL, and EMBASE. Thirteen (n = 13) articles were eligible for review. Shear-wave elastography was investigated in six articles, whereas seven studies focused on strain elastography (five freehand and two acoustic radiation force). A traditional computer vision workflow was common in strain elastography, with separate image segmentation, feature extraction, and classification steps using algorithm-based methods, neural networks, or support vector machines (SVMs). Shear-wave elastography studies often adopted a deep learning model, the convolutional neural network (CNN), which integrates these tasks. All of the reviewed articles achieved sensitivity ≥ 80%, while only half of them attained an acceptable specificity of ≥ 95%. Deep learning models did not necessarily perform better than the traditional computer vision workflow. Nevertheless, there were inconsistencies and insufficiencies in reporting and calculation, such as the testing dataset, cross-validation, and methods to avoid overfitting. Most of the studies did not report loss functions or hyperparameters. Future studies may consider using deep networks with an attention layer to locate the targeted object automatically, and online training to facilitate efficient re-training on sequential data.
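The traditional computer vision workflow described above, hand-crafted feature extraction followed by an SVM classifier with cross-validation, can be sketched as follows. This is a minimal illustration on synthetic "elastogram" arrays; the features and data are placeholders, not taken from any reviewed study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def extract_features(image):
    """Toy stand-in for hand-crafted elastogram features
    (e.g., mean stiffness, stiffness variance, edge density)."""
    return np.array([image.mean(),
                     image.std(),
                     np.abs(np.diff(image, axis=0)).mean()])

# Synthetic "elastograms": benign lesions softer (lower values) than malignant.
benign = [rng.normal(0.3, 0.10, (32, 32)) for _ in range(40)]
malignant = [rng.normal(0.7, 0.15, (32, 32)) for _ in range(40)]

X = np.array([extract_features(im) for im in benign + malignant])
y = np.array([0] * 40 + [1] * 40)

# Cross-validation guards against the overfitting risks noted in the review.
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print(scores.mean())
```

A CNN, by contrast, learns the feature extractor and classifier jointly from pixels, which is why the reviewed deep learning studies could merge these separate steps.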
Swallowing disorders, especially dysphagia, might lead to malnutrition and dehydration and could potentially lead to fatal aspiration. Benchmark swallowing assessments, such as videofluoroscopy or endoscopy, are expensive and invasive. Wearable technologies using acoustic and accelerometric sensors could offer opportunities for accessible, home-based, long-term assessment. Identifying valid swallow events is the first step before enabling the technology for clinical applications. The objective of this review is to summarize the evidence on acoustics-based and accelerometry-based wearable technology for swallow detection, in addition to device configurations, modeling, and assessment protocols. Two authors independently searched electronic databases, including PubMed, Web of Science, and CINAHL. Eleven (n = 11) articles were eligible for review. In addition to wet swallowing events, studies also attempted to recognize dry (saliva) swallows and non-swallowing events (e.g., reading, yawning), while some attempted to classify the types of swallowed foods. Only about half of the studies reported that the device attained an accuracy level of >90%, while a few studies reported poor performance with an accuracy of <60%. The reviewed articles were at high risk of bias because of small sample sizes and imbalanced class sizes. There was high heterogeneity in assessment protocols, which calls for standardization of swallowing, dry-swallowing, and non-swallowing tasks. The current wearable technology and the credibility of relevant research must improve to achieve accurate swallowing detection before translation into clinical screening for dysphagia and other swallowing disorders.
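The imbalanced-class problem flagged above is worth making concrete: when non-swallowing segments vastly outnumber swallows, raw accuracy can look excellent even when most swallows are missed. The counts below are illustrative, not drawn from any reviewed study.

```python
def metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from a 2x2 confusion table."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)   # proportion of true swallows detected
    specificity = tn / (tn + fp)   # proportion of non-swallows rejected
    return accuracy, sensitivity, specificity

# Hypothetical recording: 20 true swallows among 980 non-swallow segments.
acc, sens, spec = metrics(tp=8, fn=12, tn=960, fp=20)
print(f"accuracy={acc:.3f} sensitivity={sens:.2f} specificity={spec:.2f}")
# Accuracy is 0.968 even though 60% of swallows (12 of 20) were missed.
```

This is why sensitivity and specificity, rather than accuracy alone, are the more informative figures for swallow-detection studies with skewed event counts.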
Dysphagia is one of the most common problems among older adults and might lead to aspiration pneumonia and eventual death. It calls for a feasible, reliable, and standardized screening or assessment method to prompt rehabilitation measures and mitigate the risks of dysphagia complications. Computer-aided screening using wearable technology could be the solution but is not yet clinically applicable because of the heterogeneity of assessment protocols. The aim of this paper is to formulate and unify a swallowing assessment protocol, named the Comprehensive Assessment Protocol for Swallowing (CAPS), by integrating existing protocols and standards. The protocol consists of two phases: a pre-test phase and an assessment phase. The pre-test phase involves applying different texture or thickness levels of food/liquid and determining the bolus volume required for the subsequent assessment. The assessment phase involves dry (saliva) swallowing, wet swallowing of different food/liquid consistencies, and non-swallowing tasks (e.g., yawning, coughing, speaking). The protocol is designed to train swallowing/non-swallowing event classifiers, facilitating future long-term continuous monitoring and paving the way towards continuous dysphagia screening.
Aspiration caused by dysphagia is a prevalent problem that causes serious health consequences and even death. Traditional diagnostic instruments can induce pain, discomfort, nausea, and radiation exposure. The emergence of wearable technology with computer-aided screening might facilitate continuous or frequent assessments to prompt early and effective management. The objectives of this review are to summarize these systems for identifying aspiration risks in dysphagic individuals and to examine their accuracy. Two authors independently searched electronic databases, including CINAHL, Embase, IEEE Xplore® Digital Library, PubMed, Scopus, and Web of Science (PROSPERO reference number: CRD42023408960). The risk of bias and applicability were assessed using QUADAS-2. Nine (n = 9) articles applied accelerometers and/or acoustic devices to identify aspiration risks in patients with neurodegenerative problems (e.g., dementia, Alzheimer's disease) or neurogenic problems (e.g., stroke, brain injury), in addition to some children with congenital abnormalities, using videofluoroscopic swallowing study (VFSS) or fiberoptic endoscopic evaluation of swallowing (FEES) as the reference standard. All studies employed a traditional machine learning approach with a feature extraction process. The support vector machine (SVM) was the most commonly used machine learning model. A meta-analysis was conducted to evaluate classification accuracy in identifying risky swallows. Nevertheless, we decided not to draw conclusions from the meta-analysis findings (pooled diagnostic odds ratio: 21.5, 95% CI, 2.7–173.6) because the studies had unique methodological characteristics and major differences in their sets of parameters/thresholds, in addition to substantial heterogeneity and variation, with sensitivity levels ranging from 21.7% to 90.0% between studies. Small sample sizes could be a critical problem in existing studies (median = 34.5, range 18–449), especially for machine learning models.
Only two out of the nine studies had an optimized model with sensitivity over 90%. There is a need to enlarge the sample size for better generalizability and to optimize signal processing, segmentation, feature extraction, classifiers, and their combinations to improve assessment performance. Systematic Review Registration: (https://www.crd.york.ac.uk/prospero/), identifier (CRD42023408960).
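The diagnostic odds ratio (DOR) pooled in the meta-analysis above is computed from a 2x2 confusion table, with a 95% confidence interval via the usual log-odds normal approximation. The sketch below uses hypothetical counts for a single study, not figures from the review.

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP*TN)/(FP*FN), with a 95% CI from the
    standard error of the log odds ratio."""
    dor = (tp * tn) / (fp * fn)
    se_log = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    lo = math.exp(math.log(dor) - 1.96 * se_log)
    hi = math.exp(math.log(dor) + 1.96 * se_log)
    return dor, lo, hi

# Hypothetical study: 18 risky swallows detected, 6 missed,
# 22 safe swallows correctly rejected, 4 false alarms.
dor, lo, hi = diagnostic_odds_ratio(tp=18, fp=4, fn=6, tn=22)
print(f"DOR={dor:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

Small cell counts inflate the standard error of log(DOR), which is one reason the small samples noted above (median n = 34.5) yield such a wide pooled interval (2.7–173.6).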
Elastography complements traditional medical imaging modalities by mapping tissue stiffness to identify tumors in the endocrine system, and machine learning models can further improve diagnostic accuracy and reliability. Our objective in this review was to summarize the applications and performance of machine-learning-based elastography in the classification of endocrine tumors. Two authors independently searched electronic databases, including PubMed, Scopus, Web of Science, IEEE Xplore, CINAHL, and EMBASE. Eleven (n = 11) articles were eligible for the review, of which eight (n = 8) focused on thyroid tumors and three (n = 3) considered pancreatic tumors. In all thyroid studies, the researchers used shear-wave ultrasound elastography, whereas the pancreas researchers applied strain elastography with endoscopy. Traditional machine learning approaches or deep feature extractors were used to extract predetermined features, followed by classifiers. The deep learning approaches applied included the convolutional neural network (CNN) and the multilayer perceptron (MLP). Some researchers considered mixed or sequential training of B-mode and elastographic ultrasound data, or fused data from different image segmentation techniques in their machine learning models. All reviewed methods achieved an accuracy of ≥80%, but only three were ≥90% accurate. The most accurate thyroid classification (94.70%) was achieved by a sequentially trained CNN; the most accurate pancreas classification (98.26%) was achieved using a CNN–long short-term memory (LSTM) model integrating elastography with B-mode and Doppler images.
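The modality-fusion idea above, combining elastography-derived and B-mode-derived information in one model, can be illustrated at its simplest by concatenating feature vectors from two modalities before classification. This is a hedged sketch on synthetic data with a small MLP, not a reconstruction of the reviewed CNN–LSTM architectures.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 120
labels = rng.integers(0, 2, n)

# Toy feature vectors: each modality carries a partial, noisy class signal.
elasto = labels[:, None] * 0.8 + rng.normal(0, 1.0, (n, 5))
bmode = labels[:, None] * 0.8 + rng.normal(0, 1.0, (n, 5))
fused = np.hstack([elasto, bmode])  # early fusion by concatenation

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
single = cross_val_score(clf, elasto, labels, cv=5).mean()
both = cross_val_score(clf, fused, labels, cv=5).mean()
print(f"elastography only: {single:.2f}, fused with B-mode: {both:.2f}")
```

The reviewed studies go further, learning fused representations end-to-end (e.g., sequential CNN training, or CNN–LSTM over multiple image types), but the underlying rationale is the same: complementary modalities supply class signal that neither carries alone.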