Objective: Alterations of Young's modulus (YM) and Poisson's ratio (PR) in biological tissues are often early indicators of the onset of pathological conditions. Knowledge of these parameters has proven to be of great clinical significance for the diagnosis, prognosis and treatment of cancers. Currently, however, there are no non-invasive modalities that can image and quantify these parameters in vivo without assuming incompressibility of the tissue, an assumption that is rarely justified in human tissues. Methods: In this paper, we develop a new method to simultaneously reconstruct the YM and PR of a tumor and of its surrounding tissues, irrespective of the boundary conditions and of the shape of the tumor, which is handled via an ellipsoidal approximation. This new non-invasive method allows the generation of high-spatial-resolution YM and PR maps from axial and lateral strain data obtained via ultrasound elastography. The method was validated using finite element (FE) simulations and controlled experiments performed on phantoms with known mechanical properties. Its clinical feasibility was also demonstrated in an orthotopic mouse model of breast cancer. Results: Our results from simulations and controlled experiments demonstrate that the proposed reconstruction technique is accurate and robust. Conclusion: The proposed technique could address the clinical need for a non-invasive modality capable of imaging, quantifying and monitoring the mechanical properties of tumors with high spatial resolution and in real time. Significance: This technique can have a significant impact on the clinical translation of elasticity imaging methods.
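The abstract does not give the reconstruction equations, but the link between the measured strains and the two parameters can be illustrated under strong simplifying assumptions. The sketch below is not the authors' ellipsoidal-approximation method: it estimates an effective PR map as the negative lateral-to-axial strain ratio and a relative YM map from an assumed uniform uniaxial stress, ignoring exactly the boundary-condition and inclusion-shape effects the paper accounts for. All array names and the `applied_stress` parameter are hypothetical.

```python
import numpy as np

def naive_ym_pr_maps(axial_strain, lateral_strain, applied_stress=1.0e3):
    """Pixel-wise effective PR and relative YM maps from strain elastograms.

    A minimal sketch assuming uniform uniaxial stress and linear isotropic
    elasticity. axial_strain and lateral_strain are 2-D arrays sharing the
    same sign convention; applied_stress is the assumed surface stress [Pa].
    """
    eps = 1e-12  # guard against division by zero in low-strain regions
    pr = -lateral_strain / (axial_strain + eps)       # effective Poisson's ratio
    ym = applied_stress / np.abs(axial_strain + eps)  # relative Young's modulus
    return pr, ym

# Hypothetical usage on synthetic strain fields:
axial = np.full((128, 128), -0.01)     # 1% axial compression
lateral = np.full((128, 128), 0.0045)  # lateral expansion
pr_map, ym_map = naive_ym_pr_maps(axial, lateral)  # pr_map ~ 0.45 everywhere
```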
The automatic diagnosis of various retinal diseases from fundus images is important to support clinical decision-making. However, developing such automatic solutions is challenging due to the requirement of a large amount of human-annotated data. Recently, unsupervised/self-supervised feature learning techniques have received a lot of attention, as they do not need massive annotations. Most current self-supervised methods are applied to a single imaging modality, and no existing method utilizes multi-modal images for better results. Considering that the diagnosis of various vitreoretinal diseases can greatly benefit from another imaging modality, e.g., fundus fluorescein angiography (FFA), this paper presents a novel self-supervised feature learning method that effectively exploits multi-modal data for retinal disease diagnosis. To achieve this, we first synthesize the corresponding FFA modality and then formulate a patient feature-based softmax embedding objective. Our objective learns both modality-invariant features and patient-similarity features. Through this mechanism, the neural network captures the semantically shared information across different modalities and the apparent visual similarity between patients. We evaluate our method on two public benchmark datasets for retinal disease diagnosis. The experimental results demonstrate that our method clearly outperforms other self-supervised feature learning methods and is comparable to the supervised baseline. Our code is available on GitHub.
Index Terms: Retinal disease diagnosis, self-supervised learning, multi-modal data
I. INTRODUCTION
Color fundus photography has been widely used in clinical practice to evaluate various conventional ophthalmic diseases, e.g., age-related macular degeneration (AMD) [1], pathologic myopia (PM) [2], and diabetic retinopathy [3, 4]. Recently, deep learning has shown very good performance on a variety of automatic ophthalmic disease detection problems from fundus images [5-7], and these techniques can help ophthalmologists in decision making. The success is attributed to representative features learned from fundus images, which requires a large amount of training data with massive human annotations. However, it is tedious and expensive to annotate fundus images, since experts are needed to provide reliable labels. Hence, in this paper, our goal is to learn representative features from the data itself, without any human annotations.
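The patient feature-based softmax embedding objective is only named in the abstract. As a plausible sketch, the PyTorch loss below treats each patient's fundus embedding and synthesized-FFA embedding as a positive pair and all other patients as negatives, which jointly encourages modality-invariant and patient-discriminative features; the function name, temperature value, and symmetric formulation are assumptions, not the authors' exact objective.

```python
import torch
import torch.nn.functional as F

def cross_modal_softmax_loss(f_cfp, f_ffa, temperature=0.1):
    """Softmax embedding loss over paired modalities.

    f_cfp, f_ffa: (N, D) embeddings of N patients' fundus images and
    their synthesized FFA counterparts. Each patient's two modalities
    are pulled together and pushed away from all other patients.
    """
    f_cfp = F.normalize(f_cfp, dim=1)          # unit-norm embeddings
    f_ffa = F.normalize(f_ffa, dim=1)
    logits = f_cfp @ f_ffa.t() / temperature   # (N, N) scaled cosine similarities
    targets = torch.arange(f_cfp.size(0), device=f_cfp.device)
    # symmetric: CFP->FFA retrieval and FFA->CFP retrieval
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```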
Ultrasound poroelastography aims to assess the poroelastic behavior of biological tissues via estimation of the local temporal axial strains and effective Poisson's ratios (EPR). Currently, reliable estimation of EPR using ultrasound is a challenging task due to the limited quality of lateral strain estimation. In this paper, we propose a new two-step EPR estimation technique based on dynamic programming elastography (DPE) and Horn-Schunck (HS) optical flow estimation. In the proposed method, DPE is used to estimate the integer axial and lateral displacements, while HS is used to obtain subsample axial and lateral displacements from the motion-compensated pre-compression and post-compression radio frequency data. Axial and lateral strains are then calculated using Kalman filter-based least-squares estimation. The proposed two-step technique was tested using finite element simulations, controlled experiments and in vivo experiments, and its performance was statistically compared with that of analytic minimization (AM) and a correlation-based method (CM). Our results indicate that our technique provides EPR elastograms of higher quality and accuracy than those produced by AM and CM. In terms of signal-to-noise ratio and elastographic contrast-to-noise ratio, respectively, the proposed method improves on AM by an average of 30% and 75% and on CM by 100% and 169% in simulated data, and improves on AM by 30% and 67% and on CM by 230% and 525% in experiments. Based on these results, the proposed method may be the preferred choice in experimental poroelastography applications.
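As a rough illustration of the final stage of such a pipeline, the sketch below converts already-estimated axial and lateral displacement fields into strains with a sliding-window least-squares slope and forms the EPR as their negative ratio. It substitutes a plain windowed fit for the paper's Kalman filter-based least-squares estimator, and the function names and window size are assumptions.

```python
import numpy as np

def lsq_strain(disp, axis, win=9, pitch=1.0):
    """Sliding-window least-squares slope of displacement -> strain.

    disp: 2-D displacement field; axis: 0 for axial, 1 for lateral;
    win: odd window length in samples; pitch: sample spacing.
    """
    half = win // 2
    x = (np.arange(win) - half) * pitch      # centered abscissa
    denom = np.sum(x * x)
    strain = np.zeros_like(disp, dtype=float)
    d = np.moveaxis(disp, axis, 0)           # view: estimation axis first
    out = np.moveaxis(strain, axis, 0)       # view into strain
    for i in range(half, d.shape[0] - half):
        seg = d[i - half:i + half + 1]
        seg = seg - seg.mean(axis=0)         # remove window mean
        out[i] = np.tensordot(x, seg, axes=(0, 0)) / denom  # LSQ slope
    return strain

def epr_map(axial_disp, lateral_disp, win=9):
    """Effective Poisson's ratio: negative lateral/axial strain ratio."""
    ax = lsq_strain(axial_disp, axis=0, win=win)
    lat = lsq_strain(lateral_disp, axis=1, win=win)
    return -lat / (ax + 1e-12)  # small constant guards near-zero strain
```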
Background: The ongoing COVID-19 pandemic has created several challenges, including a financial burden that may result in mental health conditions. Aim: This study was undertaken to gauge mental health difficulties during the COVID-19 pandemic among wage earners, as they are responsible for maintaining their families' finances in this critical situation. Method: This cross-sectional study was conducted through an online survey; a total of 707 Bangladeshi wage earners were enrolled between 20 May 2020 and 30 May 2020. The questionnaire had sections on sociodemographic information, COVID-19-related questions, and the PHQ-9 and GAD-7 scales to assess depressive symptoms and anxiety, respectively. STATA version 14.1 was used to carry out all the analyses. Results: The study revealed that 58.6% and 55.9% of the respondents had moderate to severe anxiety and depressive symptoms, respectively. A total monthly income of less than 30,000 BDT (353.73 USD) was associated with increased odds of suffering from depressive symptoms (OR = 4.12; 95% CI: 2.68-6.34) and anxiety (OR = 3.31; 95% CI: 2.17-5.03). Participants who did not receive any salary, had no income source during lockdown, faced financial problems, or had an inadequate food supply were more likely to suffer from anxiety and depressive symptoms (p ≤ .01). Perceiving the upcoming financial crisis as a stressor was a potential risk factor for anxiety (OR = 1.91; 95% CI: 1.32-2.77) and depressive symptoms (OR = 1.50; 95% CI: 1.04-2.16). Conclusion: Wage earners in a low-resource setting like Bangladesh require mental health attention. Furthermore, financial support from the state or their workplaces may help them deal with mental health difficulties during this pandemic.
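For readers unfamiliar with the reported statistics, the sketch below shows how an unadjusted odds ratio and its 95% confidence interval are computed from a 2x2 exposure-outcome table using the standard Woolf logit method; the example counts are made up and do not reproduce the study's estimates.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with Woolf 95% CI from a 2x2 table.

    a: exposed with outcome, b: exposed without outcome,
    c: unexposed with outcome, d: unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: low income vs. higher income by depressive symptoms
print(odds_ratio_ci(220, 130, 175, 182))  # -> OR ~ 1.76 with its 95% CI
```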