BACKGROUND: Activation mapping using noninvasive electrocardiographic imaging (ECGi) has recently been used to describe the physiology of different cardiac abnormalities. These descriptions differ from prior invasive studies, and the two methods have not been thoroughly compared in a clinical setting.

OBJECTIVE: The goal of the present study was to validate noninvasive activation mapping in a clinical setting through direct comparison with invasive epicardial contact measurements.

METHODS: Fifty-nine maps were obtained in 55 patients and aligned on a common geometry. Nearest-neighbor interpolation was used to avoid map smoothing. Quantitative comparison was performed by computing between-map correlation coefficients and absolute activation time errors.

RESULTS: The mean activation time error was 20.4 ± 8.6 ms, and the between-map correlation was poor (0.03 ± 0.43). The results indicated high interpatient variability (correlation −0.68 to 0.82), with wide QRS patterns and paced rhythms demonstrating significantly better mean correlation (0.68 ± 0.17). Errors were greater in scarred regions (21.9 ± 10.8 ms vs 17.5 ± 6.7 ms; P < .01). Fewer epicardial breakthroughs were imaged using noninvasive mapping (1.3 ± 0.5 vs 2.3 ± 0.7; P < .01). Primary breakthrough locations were imaged 75.7 ± 38.1 mm apart. Lines of conduction block (jumps of 50 ms between contiguous points) due to structural anomalies were recorded in 27 of 59 contact maps and were not visualized at these same sites noninvasively. Instead, artificial lines appeared in 33 of 59 noninvasive maps in regions of reduced bipolar voltage amplitudes (P = .03). An in silico model confirmed these artificial constructs.

CONCLUSION: Overall, agreement between ECGi activation mapping and contact mapping is poor and heterogeneous. The between-map correlation is good for wide QRS patterns. Lines of block and epicardial breakthrough sites imaged using ECGi are inaccurate. Further work is required to improve the accuracy of the technique.
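A minimal sketch of the comparison pipeline described in METHODS, assuming both maps have already been registered onto a common geometry; the array names and the use of a k-d tree for the nearest-neighbor step are illustrative assumptions, not the study's implementation.

```python
# Minimal sketch of the map-comparison metrics: nearest-neighbour resampling
# (no smoothing) followed by mean absolute activation-time error and
# between-map Pearson correlation. All variable names are illustrative.
import numpy as np
from scipy.spatial import cKDTree


def nearest_neighbour_resample(src_points, src_times, dst_points):
    """Assign each destination vertex the activation time of its nearest source point."""
    tree = cKDTree(src_points)
    _, idx = tree.query(dst_points)
    return src_times[idx]


def compare_maps(contact_times, ecgi_times):
    """Return mean absolute activation-time error (ms) and Pearson correlation."""
    abs_error = np.mean(np.abs(contact_times - ecgi_times))
    corr = np.corrcoef(contact_times, ecgi_times)[0, 1]
    return abs_error, corr
```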
Purpose: Standard image reconstruction methods for fluorescence diffuse optical tomography (fDOT) generally make use of L2 regularization. A better choice is to replace the L2 term with a total variation functional, which effectively removes noise while preserving edges. Among the wide range of approaches available, the recently introduced Split Bregman method has been shown to be optimal and efficient. Furthermore, additional constraints can be easily included. We propose using the Split Bregman method to solve the fDOT image reconstruction problem with a nonnegativity constraint that forces the reconstructed fluorophore concentration to be nonnegative.

Methods: The proposed method is tested with simulated and experimental data, and the results are compared with those yielded by an equivalent unconstrained optimization approach based on the Gauss–Newton (GN) method, in which the negative part of the solution is projected to zero after each iteration. In addition, the dependence of the method on the parameters that weight the data fidelity and nonnegativity constraints is analyzed.

Results: Split Bregman yielded a lower solution error norm and a better full width at tenth maximum for simulated data, and a higher signal-to-noise ratio for experimental data. It is also shown that the method reached an optimal solution independently of the data fidelity parameter, as long as the number of iterations was properly selected, and that there is a linear relation between the number of iterations and the inverse of the data fidelity parameter.

Conclusions: Split Bregman allows the addition of a nonnegativity constraint, leading to improved image quality.
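A minimal sketch of how a nonnegativity constraint can be handled with a Split Bregman-style splitting for a generic linear model A x ≈ b. The paper's actual formulation is applied to the fDOT forward operator and also carries the total variation term and data-fidelity weighting discussed above; the function below only illustrates the splitting/projection mechanics, and all names and parameter values are assumptions.

```python
# Split Bregman-style iteration for min_x (mu/2)||Ax - b||^2  subject to  x >= 0,
# obtained by splitting d = x and enforcing nonnegativity on the auxiliary variable.
import numpy as np


def split_bregman_nonneg(A, b, mu=1.0, lam=1.0, n_iter=50):
    n = A.shape[1]
    x = np.zeros(n)
    d = np.zeros(n)      # auxiliary (split) variable, kept nonnegative
    bb = np.zeros(n)     # Bregman variable
    AtA = A.T @ A
    Atb = A.T @ b
    lhs = mu * AtA + lam * np.eye(n)   # constant system matrix; factor once in practice
    for _ in range(n_iter):
        # quadratic subproblem in x
        x = np.linalg.solve(lhs, mu * Atb + lam * (d - bb))
        # subproblem in d: projection onto the nonnegative orthant
        d = np.maximum(x + bb, 0.0)
        # Bregman update
        bb = bb + x - d
    return np.maximum(x, 0.0)
```

In practice the quadratic subproblem would be solved with a precomputed factorization or an iterative solver, since the fDOT system matrix is large; the linear relation between iteration count and the inverse of the data fidelity weight reported in the Results is consistent with this kind of splitting, where a smaller weight simply requires more outer iterations to reach the same solution.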
Fluorescence diffuse optical tomography (fDOT) is an imaging modality that provides images of the fluorochrome distribution within the object of study. The image reconstruction problem is ill-posed and highly underdetermined, and regularisation techniques therefore need to be used. In this paper we use a nonlinear anisotropic diffusion regularisation term that incorporates anatomical prior information. We introduce a split operator method that reduces the nonlinear inverse problem to two simpler problems, allowing a fast and efficient solution of the fDOT problem. We tested our method on simulated, phantom, and ex vivo mouse data, and found that it provides reconstructions with better spatial localisation and size of the fluorochrome inclusions than the standard Tikhonov penalty term.
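A minimal sketch of an operator-splitting loop in the spirit described above: the reconstruction alternates a linear data-fidelity update with a nonlinear anisotropic (Perona–Malik-type) diffusion step. The paper additionally drives the diffusivity with anatomical priors, which is omitted here; the boundary handling, step sizes, and all names are assumptions rather than the authors' scheme.

```python
# Operator splitting: alternate a gradient step on ||Ax - b||^2 (data fidelity)
# with explicit anisotropic-diffusion smoothing steps (regularisation).
import numpy as np


def diffusion_step(img, kappa=0.1, dt=0.1):
    """One explicit Perona-Malik diffusion step on a 2D image (periodic borders via np.roll)."""
    gn = np.roll(img, -1, axis=0) - img
    gs = np.roll(img, 1, axis=0) - img
    ge = np.roll(img, -1, axis=1) - img
    gw = np.roll(img, 1, axis=1) - img
    c = lambda g: np.exp(-(g / kappa) ** 2)   # edge-stopping diffusivity
    return img + dt * (c(gn) * gn + c(gs) * gs + c(ge) * ge + c(gw) * gw)


def split_reconstruction(A, b, shape, n_outer=20, step=1e-2, n_diff=5):
    """Alternate a data-fit gradient step with diffusion smoothing."""
    x = np.zeros(shape)
    for _ in range(n_outer):
        r = A @ x.ravel() - b
        x = x - step * (A.T @ r).reshape(shape)   # linear (data-fidelity) subproblem
        for _ in range(n_diff):
            x = diffusion_step(x)                  # nonlinear (regularisation) subproblem
    return x
```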
When dealing with ill-posed problems such as fluorescence diffuse optical tomography (fDOT), the choice of the regularization parameter is extremely important for computing a reliable reconstruction. Several automatic methods for selecting the regularization parameter have been introduced over the years, and their performance depends on the particular inverse problem. Herein, a U-curve-based algorithm for the selection of the regularization parameter is applied to fDOT for the first time. To increase computational efficiency for large systems, it is desirable to restrict the search to an interval of regularization parameter values. The U-curve provided a suitable selection of the regularization parameter in terms of Picard's condition, image resolution, and image noise. Results are shown on both phantom and mouse data.
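A minimal sketch of how a U-curve criterion can be evaluated over a grid of Tikhonov parameters. The criterion below, U(λ) = 1/‖Ax_λ − b‖² + 1/‖x_λ‖², is the standard U-curve form from the regularization literature; the SVD-based sweep, the grid, and all names are illustrative assumptions rather than the paper's implementation.

```python
# U-curve selection of the Tikhonov parameter: minimise
# U(lam) = 1/||A x_lam - b||^2 + 1/||x_lam||^2 over a grid of candidate values,
# with x_lam the solution of min ||Ax - b||^2 + lam ||x||^2 computed via the SVD.
import numpy as np


def u_curve_lambda(A, b, lambdas):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Utb = U.T @ b
    best_lam, best_val = None, np.inf
    for lam in lambdas:
        f = s / (s ** 2 + lam)               # Tikhonov filter factors
        x = Vt.T @ (f * Utb)
        res = np.linalg.norm(A @ x - b) ** 2
        sol = np.linalg.norm(x) ** 2
        val = 1.0 / res + 1.0 / sol          # U-curve criterion
        if val < best_val:
            best_lam, best_val = lam, val
    return best_lam
```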
Reconstruction algorithms for imaging fluorescence in the near-infrared range usually normalize the fluorescence light with respect to the excitation light. Using this approach, we investigated the influence of absorption and scattering heterogeneities on quantification accuracy when assuming a homogeneous model, and explored possible reconstruction improvements obtained by using a heterogeneous model. To do so, we created several computer-simulated phantoms: a homogeneous slab phantom (P1), slab phantoms including a region with a two- to six-fold increase in scattering (P2) and in absorption (P3), and an atlas-based mouse phantom that modeled different liver and lung scattering (P4). For P1, reconstruction with the wrong optical properties yielded quantification errors that increased almost linearly with the scattering coefficient, while they were mostly negligible with respect to the absorption coefficient. This observation agreed with the theoretical results. Taking the quantification of a homogeneous phantom as a reference, the relative quantification errors obtained when wrongly assuming homogeneous media were in the range +41 to +94% (P2), 0.1 to −7% (P3), and −39 to +44% (P4). Using a heterogeneous model, the overall error ranged from −7 to 7%. In conclusion, this work demonstrates that assuming homogeneous media leads to noticeable quantification errors that can be reduced by adopting heterogeneous models.
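For context, the normalization referred to above is commonly written as the normalized Born ratio, a standard expression from the fDOT literature added here for clarity rather than taken from this abstract: for a source at $\mathbf{r}_s$ and a detector at $\mathbf{r}_d$,

\[
U^{\mathrm{nB}}(\mathbf{r}_s, \mathbf{r}_d) \;=\; \frac{U^{\mathrm{fl}}(\mathbf{r}_s, \mathbf{r}_d)}{U^{\mathrm{exc}}(\mathbf{r}_s, \mathbf{r}_d)},
\]

where $U^{\mathrm{fl}}$ and $U^{\mathrm{exc}}$ are the measured fluorescence and excitation fields. Dividing by the excitation measurement cancels source strength and detector coupling factors common to both fields, which is part of the appeal of this normalization.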