Imaging conductivity from current density magnitude using neural networks*

2022 | DOI: 10.1088/1361-6420/ac6d03

Abstract: Conductivity imaging represents one of the most important tasks in medical imaging. In this work we develop a neural-network-based reconstruction technique for imaging the conductivity from the magnitude of the internal current density. This is achieved by formulating the problem as a relaxed weighted least-gradient problem and then approximating its minimizer by standard fully connected feedforward neural networks. We derive bounds on two components of the generalization error, i.e., approximation error and st…
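The abstract describes an unsupervised scheme: the weighted least-gradient functional $J(u) = \int_\Omega a\,|\nabla u|\,dx$, with weight $a = |J|$ the measured current density magnitude, is minimized over fully connected feedforward networks, and the conductivity is recovered from the minimizer. The following NumPy sketch illustrates only the shape of such a scheme; the network sizes, the boundary penalty, the synthetic weight `a_magnitude`, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): Monte Carlo estimate of a
# relaxed weighted least-gradient loss
#   J(u) = mean_x a(x) |grad u(x)|  +  penalty * boundary misfit,
# with u represented by a small fully connected feedforward network.

rng = np.random.default_rng(0)

# network u_theta: R^2 -> R, tanh activations
sizes = [2, 16, 16, 1]
params = [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def u_theta(x):
    """Forward pass; x has shape (N, 2), returns shape (N,)."""
    h = x
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b).ravel()

def grad_u(x, eps=1e-4):
    """Central finite-difference gradient of u_theta, shape (N, 2)."""
    g = np.empty_like(x)
    for i in range(2):
        e = np.zeros(2)
        e[i] = eps
        g[:, i] = (u_theta(x + e) - u_theta(x - e)) / (2 * eps)
    return g

def a_magnitude(x):
    """Stand-in for the measured |J|: a smooth positive field for the demo."""
    return 1.0 + 0.5 * np.sin(np.pi * x[:, 0]) * np.sin(np.pi * x[:, 1])

def empirical_loss(n_interior=512, penalty=10.0):
    """Empirical weighted least-gradient loss on the unit square."""
    x = rng.uniform(0.0, 1.0, size=(n_interior, 2))   # interior samples
    interior = np.mean(a_magnitude(x) * np.linalg.norm(grad_u(x), axis=1))
    # toy Dirichlet data u = 0 on one boundary edge, enforced by penalty
    xb = np.column_stack([rng.uniform(0.0, 1.0, 64), np.zeros(64)])
    boundary = np.mean(u_theta(xb) ** 2)
    return interior + penalty * boundary

loss = empirical_loss()
```

In a full implementation one would minimize this loss over the network weights with a stochastic optimizer (resampling the collocation points each step) and then read off the conductivity from the minimizer; here only the loss evaluation is sketched.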

Cited by 8 publications (12 citation statements) | References 67 publications
“…nonconvexity of the loss landscape, the optimizer may fail to find a global minimizer of the empirical loss $\widehat{J}_{\gamma}$ but instead only an approximate local minimizer. This phenomenon has been observed across a broad range of neural solvers based on DNNs [37, 16, 25]. Table 3(b) and (c) shows that the $L^2(\Omega)$ error $e(\hat{q})$ of the reconstruction $\hat{q}$ does not vary much with different DNN architectures and numbers of sampling points.…”
mentioning · confidence: 72%
“…In recent years, the use of DNNs for solving direct and inverse problems for PDEs has received a lot of attention; see [15, 43] for overviews. Existing neural inverse schemes using DNNs roughly fall into two groups: supervised approaches (see, e.g., [40, 28, 22]) and unsupervised approaches (see, e.g., [7, 8, 34, 49, 25, 35]). Supervised methods exploit the availability of (abundant) paired training data to extract problem-specific features and are concerned with learning approximate inverse operators.…”
mentioning · confidence: 99%
“…Recently, DNNs became very popular for inverse problems and attracted many researchers (see, e.g., [1, 6, 8, 12, 13, 17, 19, 24, 25] and the references therein). These methods usually build a single DNN.…”
Section: Introduction · mentioning · confidence: 99%