In this research, we applied a deep learning framework to differentiate distinct types of breast lesions and nodules acquired with ultrasound imaging. A biopsy-proven benchmark dataset was built from 5151 patient cases containing a total of 7408 breast ultrasound images of semi-automatically segmented mass lesions. The dataset comprised 4254 benign and 3154 malignant lesions. The developed method includes histogram equalization, image cropping, and margin augmentation. The GoogLeNet convolutional neural network was trained on the database to differentiate benign from malignant tumors. Networks were trained on the data both with and without augmentation, and both showed an area under the curve of over 0.9. The networks achieved an accuracy of about 0.9 (90%), a sensitivity of 0.86, and a specificity of 0.96. Although target regions of interest (ROIs) were selected by radiologists, meaning that radiologists must still point out the location of each ROI, the classification of malignant lesions showed promising results. If used by radiologists in clinical practice, this method could classify malignant lesions in a short time and support radiologists' diagnoses in discriminating malignant lesions. The proposed method can therefore work in tandem with human radiologists to improve performance, which is a fundamental purpose of computer-aided diagnosis.
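The preprocessing steps named in the abstract (histogram equalization, ROI cropping, margin augmentation) can be illustrated with a minimal sketch. This is not the authors' implementation; the functions, the SSD-free pure-Python representation of images as nested lists, and the exact margin-handling policy are illustrative assumptions.

```python
def equalize_histogram(img, levels=256):
    """Global histogram equalization for a 2D grayscale image
    (list of lists of ints in [0, levels-1])."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # map each gray level through the normalized CDF
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]

def crop_with_margin(img, box, margin):
    """Crop an ROI (x0, y0, x1, y1) enlarged by `margin` pixels on each
    side, clamped to image bounds -- one plausible reading of
    'image cropping and margin augmentation' (assumption)."""
    h, w = len(img), len(img[0])
    x0, y0, x1, y1 = box
    x0, y0 = max(0, x0 - margin), max(0, y0 - margin)
    x1, y1 = min(w, x1 + margin), min(h, y1 + margin)
    return [row[x0:x1] for row in img[y0:y1]]
```

Varying `margin` over several values yields multiple crops per lesion, which is one common way such margin-based augmentation is realized before feeding crops to a CNN such as GoogLeNet.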
Abstract—Smart houses are considered a good alternative for the independent life of older persons and persons with disabilities. Numerous intelligent devices, embedded into the home environment, can provide the resident with both movement assistance and 24-h health monitoring. Modern home-installed systems tend to be not only physically versatile in functionality but also emotionally human-friendly, i.e., they may be able to perform their functions without disturbing the user and without causing him/her any pain, inconvenience, or movement restriction, instead possibly providing him/her with comfort and pleasure. Through an extensive survey, this paper analyzes the building blocks of smart houses, with particular attention paid to the health monitoring subsystem as an important component, by addressing the basic requirements of various sensors implemented from both research and clinical perspectives. The paper then discusses some important issues in the future development of an intelligent residential space with a human-friendly health monitoring system. Index Terms—Health monitoring, intelligent house, smart house, wearable sensor.
Ultrasound (US)-based thermal imaging is very sensitive to tissue motion, which is a major obstacle to applying US temperature monitoring in noninvasive thermal therapy of in vivo subjects. In this study, we aim to develop a motion compensation method for stable US thermal imaging in in vivo subjects. Based on the assumption that the major tissue motion is approximately periodic, being caused by respiration, we propose a motion compensation method for change in backscattered energy (CBE) with multiple reference frames. Among the stored reference frames, the reference most similar to the current frame is selected to subtract respiration-induced motion. Since exhaustively searching all stored reference frames can impede real-time thermal imaging, we accelerate the reference search using a motion-mapped reference model. We tested our method in six tumor-bearing mice with high-intensity focused ultrasound (HIFU) sonication of the tumor volume until the temperature had increased by 7°C. The proposed motion compensation was evaluated by root-mean-square error (RMSE) analysis between the temperature estimated by CBE and the temperature measured by thermocouple. As a result, the mean ± SD RMSE in the heating range was 1.1 ± 0.1°C with the proposed method, while the corresponding result without motion compensation was 4.3 ± 2.6°C. In addition, the motion-mapped reference model reduced the total processing time per thermal-image frame compared with exhaustive reference searching, enabling motion-compensated thermal imaging at 15 frames per second with 150 reference frames under a 50% HIFU duty ratio.
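The core idea of multiple-reference motion compensation can be sketched as follows. This is a simplified illustration, not the paper's method: the sum-of-squared-differences similarity criterion and the per-pixel dB formulation of CBE are assumptions standing in for the actual motion-mapped reference model and energy computation.

```python
import math

def select_reference(current, references):
    """Return the index of the stored reference frame most similar to the
    current frame, here by minimum sum of squared differences (SSD) --
    an assumed stand-in for the paper's similarity criterion."""
    def ssd(a, b):
        return sum((x - y) ** 2
                   for ra, rb in zip(a, b)
                   for x, y in zip(ra, rb))
    return min(range(len(references)), key=lambda i: ssd(current, references[i]))

def cbe_db(current, reference, eps=1e-12):
    """Change in backscattered energy (CBE) in dB: per-pixel ratio of
    backscattered energy in the current frame to the motion-matched
    reference (eps avoids log of zero)."""
    return [[10.0 * math.log10((c * c + eps) / (r * r + eps))
             for c, r in zip(crow, rrow)]
            for crow, rrow in zip(current, reference)]
```

By subtracting against the reference frame acquired at the same respiratory phase, the periodic motion component largely cancels, leaving the temperature-induced change in backscattered energy.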
Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques, called Positioning and Sweeping auto-registration, have been developed. Purpose To compare the accuracy and time required for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy of focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). The number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
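The paired comparison in this study uses the Wilcoxon signed rank test. A minimal sketch of the test statistic is given below; the data values are purely illustrative (not the study's measurements), and in practice one would use a statistics library such as `scipy.stats.wilcoxon` rather than this hand-rolled version.

```python
def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank statistic W for paired samples:
    zero differences are dropped, tied |differences| receive average
    ranks, and W = min(sum of positive ranks, sum of negative ranks)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted((abs(d), i) for i, d in enumerate(diffs))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and order[j][0] == order[i][0]:
            j += 1
        avg = (i + j + 1) / 2.0  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k][1]] = avg
        i = j
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)
```

With 18 paired fusion times per method, the statistic would be compared against the Wilcoxon critical values (or a normal approximation) to obtain the reported P values.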
Reviews of the most recent applications of deep learning to ultrasound imaging are presented. Architectures of deep learning networks are briefly explained for the medical imaging application categories of classification, detection, segmentation, and generation. Ultrasonography applications are then reviewed and summarized for image processing and diagnosis, along with representative case studies of the breast, thyroid, heart, kidney, liver, and fetal head. Efforts on workflow enhancement are also reviewed, with emphasis on view recognition, scanning guidance, image quality assessment, and quantification and measurement. Finally, some future prospects are presented for image quality enhancement, diagnostic support, and workflow efficiency, along with remarks on hurdles, benefits, and necessary collaborations.