Spectral imaging can reveal a wealth of hidden detail about the world around us, but it is currently confined to laboratory environments by the need for complex, costly and bulky cameras. Imec has developed a unique spectral sensor concept in which the spectral unit is monolithically integrated on top of a standard CMOS image sensor at wafer level, enabling the design of compact, low-cost, high-acquisition-speed spectral cameras with high design flexibility. Imec has previously demonstrated this flexibility in three spectral camera architectures: firstly, a high spatial and spectral resolution scanning camera; secondly, a multichannel snapshot multispectral camera; and thirdly, a per-pixel mosaic snapshot spectral camera. These snapshot spectral cameras sense an entire multispectral data cube at one discrete point in time, extending the domain of spectral imaging towards dynamic, video-rate applications. This paper describes the integration of our per-pixel mosaic snapshot spectral sensors inside a tiny, portable and extremely user-friendly camera. Our prototype demonstrator cameras can acquire multispectral image cubes, either of 272x512 pixels over 16 bands in the VIS (470-620nm) or of 217x409 pixels over 25 bands in the VNIR (600-900nm), at 170 cubes per second under normal machine vision illumination levels. The cameras themselves, based on Ximea xiQ camera bodies, are extremely compact, measuring only 26x26x30mm, and can be operated from a laptop over USB3, making them easily deployable in very diverse environments.
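The per-pixel mosaic concept above can be illustrated with a small sketch. This is not imec's actual processing pipeline, only a minimal model: we assume a repeating k x k filter mosaic in which each macro-pixel carries k^2 band filters, so a raw sensor frame can be rearranged into a spatial-spectral cube by regrouping pixels.

```python
import numpy as np

def mosaic_to_cube(raw, k):
    """Rearrange a raw per-pixel mosaic frame into a multispectral cube.

    Assumes a k x k repeating filter mosaic: band b of the macro-pixel
    at (y, x) sits at raw[y*k + b // k, x*k + b % k].
    Returns an array of shape (H/k, W/k, k*k).
    """
    h, w = raw.shape
    assert h % k == 0 and w % k == 0
    # Split the frame into macro-pixels, then flatten the k x k filter
    # pattern of each macro-pixel into the trailing band axis.
    cube = raw.reshape(h // k, k, w // k, k).transpose(0, 2, 1, 3)
    return cube.reshape(h // k, w // k, k * k)

# Example: a hypothetical 1088x2048 sensor with a 4x4 (16-band) mosaic
# yields a 272x512x16 cube, matching the VIS prototype dimensions above.
raw = np.arange(1088 * 2048, dtype=np.uint32).reshape(1088, 2048)
cube = mosaic_to_cube(raw, 4)
print(cube.shape)  # (272, 512, 16)
```

Note that spatial resolution is traded for spectral resolution: each 4x4 macro-pixel produces one 16-band spatial sample, which is what makes single-exposure (snapshot) acquisition possible.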
Although the potential of spectral imaging has been demonstrated in research environments, its adoption by industry has so far been limited by the lack of high-speed, low-cost and compact spectral cameras. We have previously presented work to overcome this limitation by monolithically integrating optical interference filters on top of standard CMOS image sensors for high-resolution pushbroom hyperspectral cameras. These cameras require scanning of the scene and therefore introduce operator complexity, as the scanning motion must be synchronized and aligned with the camera. This typically leads to motion blur, reduced SNR in high-speed applications and detection latency, and overall restricts the types of applications that can use such a system. This paper introduces a novel snapshot multispectral imager concept based on optical filters monolithically integrated on top of a standard CMOS image sensor. By using monolithic integration for the dedicated, high-quality spectral filters at its core, it enables the use of mass-produced fore-optics, reducing the total system cost. It overcomes the problems of scanning systems through snapshot acquisition, in which an entire multispectral data cube is sensed at one discrete point in time. This is achieved by applying a novel, tiled filter layout and an optical sub-system which simultaneously duplicates the scene onto each filter tile. Through the use of monolithically integrated optical filters it retains the qualities of compactness, low cost and high acquisition speed, differentiating it from other snapshot spectral cameras based on heterogeneously integrated custom optics. Moreover, thanks to a simple cube assembly process, it enables real-time, low-latency operation.
Our prototype camera can acquire multispectral image cubes of 256x256 pixels over 32 bands in the spectral range of 600-1000nm, at speeds ranging from about 30 cubes per second under daylight conditions up to 340 cubes per second at the higher illumination levels typically used in machine vision applications.
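The "simple cube assembly process" for the tiled filter layout can be sketched as follows. This is an illustrative model, not imec's actual implementation: we assume the fore-optics duplicate the scene onto a regular grid of filter tiles, so each tile of the raw frame holds one spectral band of the same scene, and cube assembly reduces to cropping and stacking tiles.

```python
import numpy as np

def tiles_to_cube(raw, tile_h, tile_w):
    """Assemble a multispectral cube from a tiled snapshot frame.

    Assumes the optical sub-system duplicates the scene onto a regular
    grid of filter tiles, so each (tile_h x tile_w) tile of the raw
    frame holds one spectral band. Bands are ordered row-major over
    the tile grid.
    """
    rows = raw.shape[0] // tile_h
    cols = raw.shape[1] // tile_w
    bands = [raw[r*tile_h:(r+1)*tile_h, c*tile_w:(c+1)*tile_w]
             for r in range(rows) for c in range(cols)]
    return np.stack(bands, axis=-1)  # shape (tile_h, tile_w, rows*cols)

# Example: a hypothetical 1024x2048 frame tiled into a 4x8 grid of
# 256x256 tiles gives a 256x256x32 cube, matching the prototype above.
raw = np.arange(1024 * 2048, dtype=np.uint32).reshape(1024, 2048)
cube = tiles_to_cube(raw, 256, 256)
print(cube.shape)  # (256, 256, 32)
```

Because every band is a plain crop of the same exposure, assembly is a constant-time indexing operation per band, which is what makes the real-time, low-latency operation mentioned above plausible.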
3D graphics has found its way to mobile devices such as Personal Digital Assistants (PDAs) and cellular phones. Given their limited battery capacity, these devices typically have fewer computational resources available than their counterparts connected to a power supply. Additionally, the workload of 3D graphics applications changes drastically over time. These different and changing conditions make the creation of 3D content a real challenge for content creators. To allow the rendering of arbitrary content on a mobile device without the need for ad-hoc content creation, we present a framework that adapts the resolution of 3D objects to the available processing resources. An MPEG-4 scalable geometry decoder is used to change the resolution, and an analytical model of the workload of a mobile renderer is presented for controlling the scalable decoder. Because of the scarce computational resources, a good balance between accuracy and complexity is needed. The presented approach has an error and a complexity overhead of less than 10% for most practical cases.
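The control loop described above, using an analytical workload model to pick the geometry resolution that fits the available resources, can be sketched minimally. The linear cost model and its coefficients below are illustrative assumptions, not the paper's actual model or measured values:

```python
def predicted_cost(vertices, triangles, a=1.2e-6, b=0.8e-6, c=2.0e-3):
    """Toy analytical workload model: estimated render time in seconds
    for one object. Coefficients a, b, c are illustrative placeholders
    that a real system would calibrate on the target renderer."""
    return a * vertices + b * triangles + c

def select_lod(lods, budget):
    """Pick the most detailed resolution level whose predicted render
    time fits within the per-frame time budget.

    lods: list of (vertices, triangles) pairs, ordered coarse -> fine.
    Returns the index of the chosen level (0 = coarsest fallback).
    """
    best = 0
    for i, (v, t) in enumerate(lods):
        if predicted_cost(v, t) <= budget:
            best = i
    return best

# Example: three hypothetical resolutions of one object and a
# per-object budget of 8 ms within the frame time.
lods = [(500, 900), (2000, 3800), (8000, 15500)]
print(select_lod(lods, 0.008))  # 1 (the mid-resolution level fits)
```

In a full system the decoder would be driven to the selected level each frame, so the accuracy/complexity trade-off of the model itself matters: the cheaper the model, the less overhead it adds to the renderer it is trying to protect.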