2023
DOI: 10.48550/arxiv.2303.10440
Preprint

Differentiable Rendering for 3D Fluorescence Microscopy

Abstract: Differentiable rendering is a growing field that is at the heart of many recent advances in solving inverse graphics problems, such as the reconstruction of 3D scenes from 2D images. By making the rendering process differentiable, one can compute gradients of the output image with respect to the different scene parameters efficiently using automatic differentiation. Interested in the potential of such methods for the analysis of fluorescence microscopy images, we introduce deltaMic, a microscopy renderer that …
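As a minimal, hedged sketch of the idea summarized in the abstract, and not of deltaMic's actual interface, the JAX snippet below fits a toy 1D fluorescence profile to an observed one by differentiating an image-matching loss with respect to the scene parameters; the Gaussian object model, the PSF width, and all names are illustrative assumptions.

```python
# Toy sketch of the differentiable-rendering idea (not deltaMic's actual API).
# A 1D "fluorescence" profile is rendered from a few scene parameters, blurred by a
# Gaussian point-spread function, and fitted to an observed profile by gradient
# descent; the gradients come from automatic differentiation.
import jax
import jax.numpy as jnp

def render(params, x):
    """Render a Gaussian object (center, width, intensity) convolved with a Gaussian PSF."""
    center, width, intensity = params
    obj = intensity * jnp.exp(-0.5 * ((x - center) / width) ** 2)
    taps = jnp.arange(-10.0, 11.0)              # PSF support, in samples
    psf = jnp.exp(-0.5 * (taps / 2.0) ** 2)
    psf = psf / psf.sum()
    return jnp.convolve(obj, psf, mode="same")  # blurring keeps the pipeline differentiable

def loss(params, x, observed):
    """Mean squared error between the rendered and the observed image."""
    return jnp.mean((render(params, x) - observed) ** 2)

x = jnp.linspace(0.0, 50.0, 200)
observed = render(jnp.array([25.0, 3.0, 1.0]), x)   # stand-in for a measured image

params = jnp.array([20.0, 5.0, 0.5])                # initial guess of the scene parameters
grad_fn = jax.jit(jax.grad(loss))                   # d(loss)/d(params) via autodiff

for _ in range(2000):                               # plain gradient descent
    params = params - 0.5 * grad_fn(params, x, observed)

print(params)  # the guess is pulled toward the generating parameters [25, 3, 1]
```

The actual renderer operates on full 3D scenes (the citation statements below describe it as an efficient loss comparing a mesh with an image); the toy example only mirrors the gradient flow from the image loss back to the scene parameters.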

Cited by 3 publications (2 citation statements)
References 57 publications

Citation statements
“…A possible generic avenue to solve these problems may lie in a fully variational approach, where a mathematical loss between the microscopy images and the meshes could be constrained by an arbitrary mechanical model to allow direct gradient-based optimization of its spatio-temporal parameters. Our recent effort to design such an efficient loss for comparing a mesh and an image may begin to fill this gap [83]. Importantly, the current force inference method we introduced will remain a fundamental building block to this research field, providing already accurate geometric and mechanical maps, which will form an ideal initial guess to refined but more computationally expensive iterative methods.…”
Section: Discussion (mentioning, confidence: 99%)
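A minimal, hedged sketch of the kind of composite objective this statement describes: an image-matching term plus a mechanical-model penalty, optimized jointly by gradient descent. The roughness penalty, the identity stand-in for the rendering step, and all names below are hypothetical placeholders, not the method of the cited works.

```python
# Illustrative composite objective: image-matching term plus mechanical-model
# penalty, optimized by gradient descent. Every function is a hypothetical stand-in.
import jax
import jax.numpy as jnp

def image_term(scene_params, observed):
    """Placeholder render-and-compare term (e.g. the toy render/loss pair above)."""
    rendered = scene_params                      # identity "rendering" for brevity
    return jnp.mean((rendered - observed) ** 2)

def mechanical_energy(scene_params):
    """Placeholder mechanical model: penalize spatial roughness of the parameters."""
    return jnp.mean(jnp.diff(scene_params) ** 2)

def total_loss(scene_params, observed, weight=0.1):
    return image_term(scene_params, observed) + weight * mechanical_energy(scene_params)

observed = jnp.sin(jnp.linspace(0.0, 3.0, 50))   # stand-in measurement
params = jnp.zeros(50)                           # spatio-temporal parameters to optimize
step = jax.jit(jax.grad(total_loss))             # gradients of the constrained objective

for _ in range(500):                             # direct gradient-based optimization
    params = params - 1.0 * step(params, observed)
```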
“…Jakob et al. (2022) and Vicini (2022) improved computational efficiency with respect to scene parameter optimization. Present research on differentiable renderers is mostly focused on reconstructing the shape and appearance of real-world objects from very high-resolution RGB images (Ichbiah et al., 2023; Jiang et al., 2020; Luan et al., 2021; Petersen et al., 2022). Salesin et al. (2024) will adapt these methods for retrieving aerosol size and refractive index in a simple atmosphere-ocean scene in future work.…”
Section: Its Sensitivity Quantification Needs To Know Its Derivatives (mentioning, confidence: 99%)