2021
DOI: 10.1137/20m1338460
On Learned Operator Correction in Inverse Problems

Cited by 40 publications (70 citation statements). References 37 publications.
“…This is possibly because CNNs can learn and compensate non-Gaussian modelling errors more efficiently than BAE, which assumes modelling errors as Gaussian. The authors refer to [50], [51] for more discussions on modelling error corrections using CNNs.…”
Section: Discussion
confidence: 99%
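The contrast drawn in the statement above, between the Bayesian approximation error (BAE) approach and learned corrections, can be made concrete. Below is a minimal NumPy sketch of the BAE idea with a purely illustrative linear forward operator (all dimensions, operators, and noise levels are assumptions, not taken from the cited works): the modelling error is sampled over prior draws, a Gaussian is fitted to it, and the fitted statistics are used to whiten a least-squares reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, n_samples = 8, 12, 500

# Illustrative "accurate" and "approximate" linear forward operators.
A_true = rng.normal(size=(m, n))
A_approx = A_true + 0.1 * rng.normal(size=(m, n))

# BAE: fit a Gaussian to the modelling error
# eps(x) = A_true @ x - A_approx @ x over draws from a prior.
xs = rng.normal(size=(n_samples, n))
eps = xs @ (A_true - A_approx).T
mu = eps.mean(axis=0)                    # error mean
cov = np.cov(eps, rowvar=False)          # error covariance (m x m)

# Corrected data model y ~ A_approx @ x + mu + noise, with the noise
# covariance inflated by cov; solve by whitened least squares.
noise_cov = 0.01 * np.eye(m)
Lw = np.linalg.cholesky(np.linalg.inv(noise_cov + cov))  # whitener

x_star = rng.normal(size=n)
y = A_true @ x_star + 0.1 * rng.normal(size=m)
x_bae, *_ = np.linalg.lstsq(Lw.T @ A_approx, Lw.T @ (y - mu), rcond=None)
```

A CNN-based correction would instead learn the (possibly non-Gaussian) error structure directly, which is the advantage the quoted passage points to.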
“…While the scope of digital twins' applications spans beyond SHM alone, their basic aim, to provide information on the current or future state of an asset by combining real-time data with a physical/data-driven model, offers many potential avenues for engagement with the inverse problems community. Nonetheless, in specifically considering a classical SHM application, such as damage localization [230], developments stemming from the inverse community, including, for example, state estimation [231][232][233], uncertainty/model error approximation/compensation [194,234,235], regularization [236] and model reduction [237], have excellent potential for enriching or enhancing digital twin frameworks. As a whole, the future outlook for the integration and advancement of inverse methodologies in SHM is very bright.…”
Section: (E) Digital Twins and Outlook
confidence: 99%
“…Especially for cases involving intractable forward problems, model reduction techniques have been promising [13,[299][300][301], but these are either difficult to design by hand or are restricted by overly simplistic assumptions. Here, data-driven approaches are a powerful alternative for compensating modelling errors [194,234,302] or reducing the computational cost of iterative optimization schemes by model approximations [32,303]. Finally, we note that recent developments in geometric learning extend deep networks on Euclidean meshes to general meshes, such as finite elements, by embedding them into graph structures that essentially use the underlying geometry [304,305].…”
Section: (A) Machine-learned Inversion
confidence: 99%
“…This can be a strength when the original updates δx k+1 converge toward the true solution. Alternatively, if the forward model is not accurate, the GCN can compensate and correct for the wrong components and extract useful information for the updates, acting as a learned model correction [13], [14]. We will see this correcting nature in the experiments (e.g.…”
Section: B Network Structure
confidence: 99%
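The learned model correction the statement refers to can be illustrated with a toy linear stand-in for the correction network (everything here is an assumption for illustration; the cited works use learned, nonlinear correctors): a linear map F is regressed so that F applied to inaccurate-model outputs matches accurate-model outputs, and reconstruction then uses the corrected operator.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 10

# Illustrative accurate and inaccurate linear forward operators.
A_true = rng.normal(size=(m, n))
A_wrong = A_true + 0.2 * rng.normal(size=(m, n))

# "Learn" a linear correction F with F @ A_wrong ~ A_true by
# regressing accurate-model outputs on inaccurate-model outputs.
xs = rng.normal(size=(200, n))           # training draws
Y_wrong = xs @ A_wrong.T
Y_true = xs @ A_true.T
G, *_ = np.linalg.lstsq(Y_wrong, Y_true, rcond=None)
F = G.T                                  # learned corrector

# Reconstruct using the corrected operator F @ A_wrong.
x_star = rng.normal(size=n)
y = A_true @ x_star                      # noiseless data
A_corr = F @ A_wrong
x_rec, *_ = np.linalg.lstsq(A_corr, y, rcond=None)
```

Because an exact linear correction exists in this toy setting (F = A_true pinv(A_wrong)), the regression recovers it and the corrected reconstruction matches the truth; with a genuinely nonlinear model mismatch, a learned network takes the place of F.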
“…where γ ∈ R+ is the smoothing parameter that can be varied [18], and E = diag((Lσ)² + γ) is a diagonal matrix. We will compare the GCNM reconstructions to TV reconstructions using (13).…”
Section: B The Inverse Problem for EIT
confidence: 99%
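The smoothed-TV weighting quoted above can be sketched generically; the exact form of (13) is not reproduced here, so the following 1-D, lagged-diffusivity-style iteration is only an assumed, illustrative reading of E = diag((Lσ)² + γ), with hypothetical operators and parameter values.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 40, 60
gamma, alpha = 1e-3, 1e-1              # smoothing and reg. parameters

# 1-D forward-difference operator L (TV regularizer).
L = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

# Illustrative linear forward operator and piecewise-constant truth.
A = rng.normal(size=(m, n)) / np.sqrt(m)
sigma_true = np.concatenate([np.zeros(n // 2), np.ones(n - n // 2)])
y = A @ sigma_true + 0.01 * rng.normal(size=m)

# Reweighted (smoothed-TV) iteration: at each step, build
# E = diag((L sigma)^2 + gamma) from the previous iterate and
# solve the resulting weighted quadratic problem.
sigma = np.zeros(n)
for _ in range(20):
    E = np.diag((L @ sigma) ** 2 + gamma)
    W = np.linalg.inv(np.sqrt(E))      # E^{-1/2} weighting
    H = A.T @ A + alpha * L.T @ W @ L
    sigma = np.linalg.solve(H, A.T @ y)
```

The weighting is large where the gradient of the current iterate is small (strong smoothing of flat regions) and close to 1 at jumps, which is the usual edge-preserving behaviour of smoothed TV; γ keeps E invertible.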