2018
DOI: 10.1145/3197517.3201333

End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging

Abstract: Fig. 1. One of the applications of the proposed end-to-end computational camera design paradigm is achromatic extended depth of field. When capturing an image with a regular singlet lens (top left), out-of-focus regions are blurry and chromatic aberrations further degrade the image quality. With our framework, we optimize the profile of a refractive optical element that achieves both depth and chromatic invariance. This element is fabricated using diamond turning (right) or using photolithography. After proces…


Cited by 293 publications (167 citation statements)
References 53 publications
“…Deep Optics: Deep learning can be used for jointly training camera optics and CNN-based estimation methods. This approach was recently demonstrated for applications in extended depth of field and super-resolution imaging [39], image classification [2], and multicolor localization microscopy [25]. For example, Hershko et al. [25] proposed to learn a custom diffractive phase mask that produced highly wavelength-dependent point spread functions (PSFs), allowing for color recovery from a grayscale camera.…”
Section: Computational Photography for Depth Estimation
confidence: 99%
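The Hershko et al. idea quoted above can be sketched in miniature: if each wavelength of a point source leaves a distinct blur signature on a grayscale sensor, color can be recovered by matching the observed blur against per-wavelength PSF templates. The three-tap PSFs below are invented stand-ins for illustration, not the learned phase-mask PSFs from the paper:

```python
import numpy as np

# Hypothetical wavelength-dependent grayscale PSFs, one per color; in
# Hershko et al. these signatures come from a learned diffractive phase mask.
PSFS = {
    "red":   np.array([0.05, 0.90, 0.05]),
    "green": np.array([0.25, 0.50, 0.25]),
    "blue":  np.array([0.45, 0.10, 0.45]),
}

def recover_color(grayscale_blur):
    # Color from a grayscale sensor: match the observed blur of a point
    # source against each wavelength's template (smallest residual wins).
    return min(PSFS, key=lambda c: np.sum((grayscale_blur - PSFS[c]) ** 2))

rng = np.random.default_rng(0)
observed = PSFS["blue"] + 0.05 * rng.normal(size=3)  # noisy grayscale measurement
color = recover_color(observed)
```

The template-matching decoder here stands in for the reconstruction network; the point is only that distinct per-wavelength PSFs make color identifiable from a single grayscale measurement.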
“…Inspired by recent work on deep optics [2,39,12], we interpret the monocular depth estimation problem with coded defocus blur as an optical-encoder, electronic-decoder system that can be trained in an end-to-end manner. Although co-designing optics and image processing is a core idea in computational photography, only differentiable estimation algorithms, such as neural networks, allow for true end-to-end computational camera designs.…”
Section: Introduction
confidence: 99%
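A minimal sketch of such an optical-encoder/electronic-decoder pipeline, assuming a one-parameter Gaussian-blur "lens" and a linear deconvolution filter in place of the CNN; both are optimized jointly by gradient descent, with finite-difference gradients so the demo needs no autodiff library. All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def psf(width, size=9):
    # One-parameter stand-in optic: a Gaussian blur whose width plays the
    # role of the lens profile being optimized end-to-end.
    t = np.arange(size) - size // 2
    k = np.exp(-t ** 2 / (2.0 * width ** 2))
    return k / k.sum()

def loss(params, scenes):
    width, w = max(params[0], 0.3), params[1:]        # keep the optic physical
    total = 0.0
    for x in scenes:
        y = np.convolve(x, psf(width), mode="same")   # optical encoder
        x_hat = np.convolve(y, w, mode="same")        # linear "decoder"
        total += np.mean((x_hat - x) ** 2)
    return total / len(scenes)

def grad_fd(f, p, eps=1e-4):
    # Central finite differences over all parameters (optic + decoder).
    g = np.zeros_like(p)
    for i in range(p.size):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (f(p + d) - f(p - d)) / (2 * eps)
    return g

scenes = [rng.normal(size=64) for _ in range(8)]       # random 1-D "scenes"
params = np.concatenate([[2.0], np.eye(9)[4]])         # wide blur + identity decoder
f = lambda p: loss(p, scenes)
loss_before = f(params)
for _ in range(300):
    params = params - 0.02 * grad_fd(f, params)        # joint end-to-end update
loss_after = f(params)
```

Joint descent both sharpens the optic (shrinks the blur width) and tunes the decoder filter, which is the essence of the encoder-decoder view in the quote above.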
“…There are several recent works that consider the use of machine learning to jointly optimize hardware and software for imaging tasks [5,6,7,8,9,10,11]. These approaches aim to find a fixed set of optical parameters that are optimal for a particular task.…”
Section: Previous Work
confidence: 99%
“…These methods require a huge number of training examples to properly learn the millions of parameters that model the reconstruction process, and they often do not transfer well to the experimental setting. In comparison, model-based methods [14], [15] are able to efficiently learn the experimental design with very few training examples and have been shown to learn designs that do transfer well to the experimental setting. This is achieved by unrolling the image reconstruction process [16]-[19] into a Physics-based Neural Network (PbNN) [14] and then learning the experimental design parameters to maximize system performance.…”
Section: Introduction
confidence: 99%
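The unrolling idea in the statement above can be sketched with ISTA: each iteration becomes one "layer" whose data-fidelity step contains the imaging physics (the forward operator A), and the per-layer step sizes and thresholds are the parameters a physics-based network would learn. Here they are hand-set so the sketch stays self-contained; A, the sparse signal, and all values below are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of the L1 penalty (the nonlinearity of each layer).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unrolled_ista(y, A, steps, thresholds):
    # One layer per iteration; in a PbNN the per-layer steps/thresholds
    # are learned, while A (the physics) is baked into every layer.
    x = np.zeros(A.shape[1])
    for step, tau in zip(steps, thresholds):
        grad = A.T @ (A @ x - y)          # gradient from the forward model
        x = soft_threshold(x - step * grad, tau)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 20)) / np.sqrt(60)   # illustrative forward operator
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.0, -0.5, 2.0]         # sparse ground truth
y = A @ x_true                                 # noiseless measurement

L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
n_layers = 100
x_hat = unrolled_ista(y, A, [1.0 / L] * n_layers, [1e-3 / L] * n_layers)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Because the loop is a fixed, finite computation graph, gradients with respect to the steps and thresholds (or to experimental-design parameters inside A) can be backpropagated through it, which is what makes the design learnable with few training examples.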