Objective: The purpose of this study was to determine whether CT interpretation with imaging pattern analysis can differentiate Kikuchi disease (KD) from the two more frequently encountered differential diagnoses of cervical lymphadenopathy: tuberculous lymphadenopathy (TL) and reactive hyperplasia (RH).

Materials and Methods: Between January 2012 and July 2015, 20 patients with KD (6 men, 14 women; mean age, 27.80 years), 36 patients with RH (10 men, 26 women; mean age, 33.08 years), and 34 patients with TL (17 men, 17 women; mean age, 39.82 years) were pathologically diagnosed using US-guided fine-needle aspiration biopsy, core-needle biopsy, or surgical excisional biopsy. We recorded the total number, location, and size of the affected cervical lymph nodes, and two radiologists reviewed the characteristic imaging findings, including the presence of necrosis, cortical enhancement pattern, perinodal infiltration, conglomeration, and nodal calcification, to reach a consensus. In addition, we compared two attenuation indices measured on the nonnecrotic portion of the affected lymph nodes: nodal cortical attenuation (NCA) and the ratio of NCA to the attenuation of the adjacent muscle (NCA/M).

Results: Conglomeration, enhancement pattern, and NCA/M were independent CT features predictive in distinguishing KD from RH. Age and enhancement pattern discriminated KD from TL. Only the mean NCA/M was a statistically significant CT feature (p = .008) in differentiating KD from both RH and TL. The mean NCA/M of KD (1.67 ± 0.20) was significantly higher than that of RH (1.49 ± 0.20) or TL (1.47 ± 0.21).

Conclusion: Our results indicate that in cases of nonnecrotic lymphadenopathy, a higher NCA/M index can differentiate KD from RH and TL. In addition, in cases of necrotic lymphadenopathy, the enhancement pattern, which reflects the degree of necrosis, discriminated between KD and TL.
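The NCA/M index described above is a simple ratio of two region-of-interest attenuation measurements. A minimal sketch follows; the function name and the example Hounsfield-unit values are illustrative assumptions, not measurements from the study (only the group means 1.67, 1.49, and 1.47 come from the abstract).

```python
# Hypothetical sketch of the NCA/M index: nodal cortical attenuation (NCA,
# in Hounsfield units) divided by the attenuation of the adjacent muscle.

def nca_m_index(nodal_cortical_hu: float, adjacent_muscle_hu: float) -> float:
    """Ratio of nodal cortical attenuation to adjacent-muscle attenuation."""
    if adjacent_muscle_hu <= 0:
        raise ValueError("muscle attenuation must be positive")
    return nodal_cortical_hu / adjacent_muscle_hu

# Illustrative ROI measurements (not from the study):
kd_like = nca_m_index(100.0, 60.0)  # about 1.67, near the reported KD mean
rh_like = nca_m_index(89.0, 60.0)   # about 1.48, near the RH/TL means

print(round(kd_like, 2), round(rh_like, 2))
```

Under these assumed values, the KD-like node sits above the RH/TL-like node, mirroring the direction of the group difference reported in the Results.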
Purpose: This study aimed to propose an effective end-to-end process for medical imaging using an independent task learning (ITL) algorithm and to evaluate its performance in maxillary sinusitis applications.

Materials and Methods: For the internal dataset, 2122 Waters’ view X-ray images, comprising 1376 normal and 746 sinusitis images, were divided into training (n = 1824) and test (n = 298) datasets. For external validation, 700 images (379 normal and 321 sinusitis) from three different institutions were evaluated. The automatic diagnosis algorithm comprised four processing steps: 1) preprocessing for ITL, 2) facial patch detection, 3) maxillary sinusitis detection, and 4) a localization report with the sinusitis detector.

Results: The accuracy of facial patch detection, the first step in the end-to-end algorithm, was 100%, 100%, 99.5%, and 97.5% for the internal set and external validation sets #1, #2, and #3, respectively. The accuracy and area under the receiver operating characteristic curve (AUC) of maxillary sinusitis detection were 88.93% (0.89), 91.67% (0.90), 90.45% (0.86), and 85.13% (0.85) for the internal set and external validation sets #1, #2, and #3, respectively. The accuracy and AUC of the fully automatic sinusitis diagnosis system, including site localization, were 79.87% (0.80), 84.67% (0.82), 83.92% (0.82), and 73.85% (0.74) for the internal set and external validation sets #1, #2, and #3, respectively.

Conclusion: ITL application for maxillary sinusitis showed reasonable performance in internal and external validation tests compared with applications used in previous studies.
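The four processing steps above form a sequential pipeline. The sketch below is a hypothetical control-flow illustration only: the step implementations are stubs (the study's actual detectors are deep learning models), and all function names and the dictionary-based image stand-in are assumptions.

```python
# Hypothetical sketch of the four-step pipeline described in the abstract:
# preprocessing, facial patch detection, sinusitis detection, localization.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Report:
    patch_found: bool
    sinusitis: bool
    site: Optional[str]  # e.g. "left", "right"

def preprocess(image):
    # Step 1: e.g. resize/normalize the Waters' view image (stub).
    return image

def detect_facial_patch(image):
    # Step 2: crop the facial region of interest (stub: pass-through).
    return image

def detect_sinusitis(patch) -> bool:
    # Step 3: binary sinusitis classification (stub).
    return patch.get("opacified", False)

def localize(patch) -> Optional[str]:
    # Step 4: report which maxillary sinus is affected (stub).
    return patch.get("side")

def diagnose(image) -> Report:
    x = preprocess(image)
    patch = detect_facial_patch(x)
    if patch is None:
        return Report(False, False, None)
    positive = detect_sinusitis(patch)
    return Report(True, positive, localize(patch) if positive else None)

print(diagnose({"opacified": True, "side": "left"}))
```

The design point the abstract's results reflect is that errors compound along such a pipeline: overall end-to-end accuracy (about 74–85%) is lower than that of any individual step.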
Purpose: Intracranial vertebral artery dissection (VAD) is increasingly recognized as a leading cause of Wallenberg syndrome and subarachnoid hemorrhage. Conventional angiography is considered the standard diagnostic modality, but the diagnosis of VAD remains challenging. This study aimed to compare the diagnostic performance of high-resolution vessel wall imaging (HR-VWI) with that of digital subtraction angiography (DSA) for intracranial VAD.

Materials and Methods: Twenty-four patients with 27 VADs who underwent both HR-VWI and DSA within 2 weeks were consecutively enrolled from March 2016 to September 2020. HR-VWI and DSA were used to diagnose VAD and to categorize its angiographic features as either definite or suspicious dissection. HR-VWI features were additionally used for direct evaluation of the arterial wall. The reference standard was the clinicoradiologic diagnosis. Two independent raters evaluated the angiographic features and dissection signs, and interrater agreement was assessed. Each subject was also dichotomized as suspicious or definite VAD on each modality, and the diagnoses from HR-VWI and DSA were compared with the final consensus diagnosis.

Results: HR-VWI showed higher agreement with the final diagnosis (90.6% vs. 53.1%) and better interrater reliability (kappa value (κ) = 0.91; 95% confidence interval (CI) = 0.64–1.00) than DSA (κ = 0.58; 95% CI = 0.35–1.00). HR-VWI also provided more detailed identification of dissection signs (77.7% vs. 22.2%) with better reliability (κ = 0.88; 95% CI = 0.58–1.00 vs. κ = 0.75; 95% CI = 0.36–1.00). HR-VWI was comparable to DSA for depicting the angiographic features of VAD.

Conclusions: HR-VWI may be useful for evaluating VAD, with better diagnostic confidence than DSA.
In recent years, artificial intelligence, especially object detection-based deep learning in computer vision, has advanced significantly, driven by growing computing power and the widespread use of graphics processing units. Object detection-based deep learning techniques have been applied in various fields, including the medical imaging domain, where remarkable achievements in disease detection have been reported. However, applying deep learning does not always guarantee satisfactory performance, and researchers have relied on trial and error to identify the factors contributing to performance degradation and to improve their models. Moreover, because of the black-box problem, the intermediate processes of a deep learning network cannot be comprehended by humans; as a result, identifying the problems in a poorly performing deep learning model can be challenging. This article highlights potential issues that may cause performance degradation at each deep learning step in the medical imaging domain and discusses the factors that must be considered to improve model performance. Researchers beginning deep learning research can reduce the amount of trial and error required by understanding the issues discussed in this study.