In the emerging field of computational imaging, rapid prototyping of new camera concepts becomes increasingly difficult since the signal processing is intertwined with the physical design of a camera. As novel computational cameras capture information beyond a traditional two-dimensional image, ground truth data that can be used to thoroughly benchmark a new system design is also hard to acquire. We propose to bridge this gap by using simulation. In this article, we present a raytracing framework tailored for the design and evaluation of computational imaging systems. We show that, depending on the application, the image formation on a sensor and phenomena like image noise have to be simulated accurately in order to achieve meaningful results, while other aspects, such as photorealistic scene modeling, can be omitted. Therefore, we focus on accurately simulating the essential components of computational cameras, namely apertures, lenses, spectral filters and sensors. Besides simulating the imaging process, the framework can generate various kinds of ground truth data, which can be used to evaluate and optimize the performance of a particular imaging system. Due to its modularity, the framework is easy to extend to the needs of other fields of application. We make the source code of our simulation framework publicly available and encourage other researchers to use it to design and evaluate their own cameras.
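The abstract stresses that sensor image formation and image noise must be simulated accurately. A minimal sketch of what such a sensor stage might look like is given below; the function name, parameters, and default values are illustrative assumptions, not the framework's actual API. It models the standard chain of photon shot noise (Poisson), Gaussian read noise, full-well saturation, and ADC quantization.

```python
import numpy as np

def simulate_sensor(irradiance, exposure=1.0, read_noise=2.0,
                    full_well=10000, bit_depth=12, rng=None):
    """Hypothetical sensor model (illustrative, not the framework's API).

    irradiance : expected photo-electrons per pixel per unit exposure.
    Returns quantized digital numbers as uint16.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Photon shot noise: electron counts are Poisson-distributed
    electrons = rng.poisson(irradiance * exposure).astype(float)
    # Additive Gaussian read noise from the readout electronics
    electrons += rng.normal(0.0, read_noise, electrons.shape)
    # Full-well capacity clips the collected charge
    electrons = np.clip(electrons, 0, full_well)
    # ADC: map the full-well range onto the available digital codes
    max_dn = 2 ** bit_depth - 1
    dn = np.round(electrons * max_dn / full_well)
    return np.clip(dn, 0, max_dn).astype(np.uint16)
```

For a flat test patch, the simulated output fluctuates around the noise-free level with variance dominated by shot noise, which is the behavior a benchmark of noise-sensitive algorithms would need to reproduce.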
Computational imaging adjusts the traditional optics of a camera in order to process the light signal prior to digitization. A major obstacle is the effort required to subsequently adjust the optical components, which limits such cameras to very narrow fields of application. This article analyzes a camera that uses a transparent liquid crystal display instead of a mechanical aperture. The easy adjustability of the optical transfer function enables flexible application and adaptation to varying ambient conditions. The performance of the camera is exemplarily analyzed in the scope of depth estimation with the depth-from-defocus method. Our algorithm estimates the scale of the point spread function in a single image by comparing the power spectral density of the observed image with the known aperture shape used to capture the image. The analysis of a test scene shows that the camera is capable of reproducing previous simulation results and that object distances can be reliably estimated from single images.
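The core of the depth-from-defocus step described above can be sketched as follows: compare the power spectral density (PSD) of the observed image against the PSD predicted by the known aperture shape at several candidate blur scales, and pick the best match. This is a simplified illustration under assumed conventions (a circular aperture, an L2 error on normalized PSDs); the article's actual algorithm and aperture codes may differ.

```python
import numpy as np

def psf_from_aperture(scale, size=64):
    # Assumed circular-aperture defocus PSF: a uniform disk
    # whose radius (in pixels) encodes the blur scale.
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    psf = ((x ** 2 + y ** 2) <= scale ** 2).astype(float)
    return psf / psf.sum()

def estimate_psf_scale(image, candidate_scales):
    """Pick the candidate blur scale whose predicted PSD best
    matches the observed image's PSD (illustrative sketch)."""
    img_psd = np.abs(np.fft.fft2(image)) ** 2
    img_psd /= img_psd.sum()
    best_scale, best_err = None, np.inf
    for s in candidate_scales:
        # PSD predicted by the known aperture at this blur scale
        otf = np.fft.fft2(psf_from_aperture(s, size=image.shape[0]))
        model_psd = np.abs(otf) ** 2
        model_psd /= model_psd.sum()
        err = np.sum((img_psd - model_psd) ** 2)
        if err < best_err:
            best_scale, best_err = s, err
    return best_scale
```

Once the blur scale is known, the object distance follows from the thin-lens geometry of the camera; the sketch deliberately stops at scale estimation, which is the part the abstract describes.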
This article presents an introductory microcontroller programming course on digital signal processing at undergraduate university level. The course is intended to provide insight into information technology and to prepare students for more complex exercises later in their studies. We present solutions to overcome pedagogical obstacles such as the fear of new technologies, and to minimise technological incompatibilities between different operating systems when setting up a programming tool chain. This leads to an increased scalability of the course, allowing hundreds of students to attend each year; in the case presented here, the average number of participants was 300. The problem-oriented task assignments are described, leading up to a final creative improvement task for which the students' solutions are analysed. The course is evaluated and an outlook on further improvements is given.