“…In the field of cardiac diseases, [112] introduced an automatic heart segmentation approach in 2022 with high accuracy. This approach shows promise in the field of cardiology, but it also faces challenges in terms of generalization and integration.…”
This scientific review presents a comprehensive overview of medical imaging modalities and their diverse applications in artificial intelligence (AI)-based disease classification and segmentation. The paper begins by explaining the fundamental concepts of AI, machine learning (ML), and deep learning (DL), and summarizes their main types to establish a solid foundation for the subsequent analysis. The primary focus of this study is a systematic review of research articles that examine disease classification and segmentation in different anatomical regions using AI methodologies. The analysis includes a thorough examination of the results reported in each article, extracting important insights and identifying emerging trends. Moreover, the paper critically discusses the challenges encountered in these studies, including data availability and quality, model generalization, and interpretability, with the aim of guiding technique selection. The analysis highlights the prominence of hybrid approaches, which integrate ML and DL techniques, in achieving effective and relevant results across various disease types. The promising potential of these hybrid models opens up new opportunities for future research in medical diagnosis. Additionally, addressing the limited availability of annotated medical images through medical image synthesis and transfer learning techniques is identified as a crucial focus for future research efforts.
“…Its U-shaped design and inclusion of skip connections allow for effective extraction and fusion of multi-scale features. Consequently, this architecture demonstrates enhanced segmentation performance and robustness in a variety of medical image segmentation tasks, such as brain [14], lung [15], and heart [16]. The U-Net network structure comprises two main components: an encoder and a decoder.…”
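The encoder–decoder structure with skip connections described above can be sketched minimally in NumPy. This is a toy illustration of the idea only: the encoder reduces resolution, the decoder restores it, and the skip connection fuses the encoder's high-resolution features with the decoder output. The function names, the single-channel feature map, max pooling, and nearest-neighbour upsampling are illustrative assumptions, not the implementation used in the cited works.

```python
import numpy as np

def downsample(x):
    # Encoder step: 2x2 max pooling halves the spatial resolution.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    # Decoder step: nearest-neighbour upsampling doubles the resolution.
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Toy single-channel "feature map" standing in for an image.
feat = np.arange(16, dtype=float).reshape(4, 4)

enc = downsample(feat)         # encoder path: 4x4 -> 2x2
dec = upsample(enc)            # decoder path: 2x2 -> 4x4
fused = np.stack([feat, dec])  # skip connection: encoder features
                               # concatenated with the decoder output
```

In a real U-Net the pooling and upsampling are interleaved with learned convolutions, and the concatenated tensor is processed by further convolution blocks; the sketch only shows the multi-scale fusion pattern the quotation refers to.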
Using deep learning to segment oral CBCT images for clinical diagnosis and treatment is an important research direction in clinical dentistry. However, blurred contours and scale differences limit the accuracy of current methods at the crown edge and the root, making these regions difficult-to-segment samples in the oral CBCT segmentation task. To address these problems, this paper proposes a Difficult-to-Segment Focus Network (DSFNet) for segmenting oral CBCT images. The network uses a Feature Capturing Module (FCM) to efficiently capture local and long-range features, enhancing feature extraction, and a Multi-Scale Feature Fusion Module (MFFM) to merge multi-scale feature information. To further increase the loss contribution of difficult-to-segment samples, a hybrid loss function combining Focal Loss and Dice Loss is proposed. With this hybrid loss, DSFNet achieves a 91.85% Dice Similarity Coefficient (DSC) and a 0.216 mm Average Surface-to-Surface Distance (ASSD) on oral CBCT segmentation tasks. Experimental results show that the proposed method outperforms current dental CBCT image segmentation techniques and has real-world applicability.
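A hybrid of Focal Loss and Dice Loss of the kind the abstract describes can be sketched as follows. This is a minimal NumPy illustration, not DSFNet's implementation: the mixing weight `alpha`, the focusing parameter `gamma=2.0`, and the simple weighted sum are assumptions, since the abstract does not specify how the two terms are combined.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # pred: predicted foreground probabilities in [0, 1]; target: binary mask.
    # 1 - DSC, so perfect overlap gives a loss near 0.
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    # Cross-entropy scaled by (1 - p_t)^gamma, which down-weights easy
    # pixels so hard (difficult-to-segment) pixels dominate the loss.
    pred = np.clip(pred, eps, 1.0 - eps)
    pt = np.where(target == 1, pred, 1.0 - pred)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def hybrid_loss(pred, target, alpha=0.5):
    # Assumed combination: a weighted sum of the two terms.
    return alpha * focal_loss(pred, target) + (1.0 - alpha) * dice_loss(pred, target)
```

The design intent is complementary: Dice Loss optimizes region overlap directly, while the focal term concentrates gradient on pixels the model still gets wrong, such as crown edges and roots.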
“…In the field of medical imaging, U-Net is widely used for the segmentation of medical images such as CT and MRI, including specific regions such as liver [21], lung [22], and heart [23]. In addition, there are some studies that combine U-Net with other neural networks to further improve the performance of the model, such as Attention UNet [24], TransUNet [25], and Swin-UNet [26].…”
In clinical dentistry, Cone Beam Computed Tomography (CBCT) is a useful tool for measuring various dimensions of the oral cavity, including height and thickness, providing invaluable guidance for risk assessment in orthodontic treatment, treatment-plan selection, and implant treatment. However, segmenting the tooth region from CBCT images is a daunting task due to complex root morphology and indistinct boundaries between the root and the alveolar bone. Manual annotation of the tooth area is resource-intensive, and deep learning-based segmentation methods are susceptible to noise, which reduces their efficiency. To tackle these complexities, this paper proposes a multi-filter attention module that effectively suppresses noise in CBCT images by combining multiple filters with self-attention. Additionally, an Improved U-Net model is proposed in which the original convolution block of the U-Net is replaced with a Double ConvNeXt block to improve network performance. Experimentally, the proposed Improved U-Net achieved a Dice Similarity Coefficient of 86.95% in oral CBCT image segmentation, surpassing existing models and affirming the effectiveness of the proposed model and method.
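The Dice Similarity Coefficient reported by both segmentation papers above is a standard overlap metric and is straightforward to compute. The sketch below shows its usual definition on binary masks; the function name and the epsilon smoothing term are illustrative choices, not taken from the cited papers.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-6):
    # DSC = 2|A ∩ B| / (|A| + |B|): 1.0 means perfect overlap
    # between the predicted and ground-truth masks, 0.0 means none.
    pred_mask = pred_mask.astype(bool)
    gt_mask = gt_mask.astype(bool)
    inter = np.logical_and(pred_mask, gt_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + gt_mask.sum() + eps)

# Toy 2x2 masks: identical masks score ~1.0, disjoint masks score ~0.0.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 1], [0, 0]])
c = np.array([[0, 0], [1, 1]])
```

A DSC of 86.95% therefore means that, summed over the test set, the predicted tooth masks and the manual annotations share roughly 87% of their combined foreground area.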