Abstract: Event cameras are novel sensors that report brightness changes in the form of a stream of asynchronous "events" instead of intensity frames. They offer significant advantages with respect to conventional cameras: high temporal resolution, high dynamic range, and no motion blur. While the stream of events encodes in principle the complete visual signal, the reconstruction of an intensity image from a stream of events is an ill-posed problem in practice. Existing reconstruction approaches are based on hand-crafted…
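To make concrete why this direct reconstruction is ill-posed, here is a minimal Python sketch; the function name, the (t, x, y, polarity) event-tuple layout, and the 0.2 contrast threshold are illustrative assumptions, not from the paper. The unknown initial brightness and unknown per-pixel threshold leave the result defined only up to an offset and scale, and noise accumulates without bound.

```python
import numpy as np

def integrate_events(events, resolution, contrast=0.2):
    """Directly integrate events into a log-brightness image.

    events: iterable of (t, x, y, polarity) tuples, polarity in {-1, +1}.
    Both the initial brightness (assumed zero here) and the contrast
    threshold are unknown in practice, which is one reason this direct
    reconstruction is ill-posed.
    """
    height, width = resolution
    log_brightness = np.zeros((height, width), dtype=np.float64)
    for t, x, y, polarity in events:
        # Each event signals a log-brightness change of one contrast step.
        log_brightness[y, x] += polarity * contrast
    return np.exp(log_brightness)  # linear intensity, up to offset/scale

# Two positive events and one negative event on a 2x2 sensor.
events = [(0.001, 0, 0, +1), (0.002, 0, 0, +1), (0.003, 1, 1, -1)]
print(integrate_events(events, resolution=(2, 2)))
```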
“…In recent years, this technology has attracted a lot of attention from academia and industry. This is due to the availability of prototype event cameras and the advantages that they offer to tackle problems that are difficult with standard frame-based image sensors (that provide stroboscopic synchronous sequences of pictures), such as high-speed motion estimation [6], [7] or high dynamic range (HDR) imaging [8].…”
Section: Introduction and Applications (mentioning, confidence: 99%)
“…Event cameras are used for object tracking [12], [13], surveillance and monitoring [14], and object/gesture recognition [15], [16], [17]. They are also profitable for depth estimation [18], [19], structured light 3D scanning [20], optical flow estimation [21], [22], HDR image reconstruction [8], [23], [24] and Simultaneous Localization and Mapping (SLAM) [25], [26], [27]. Event-based vision is a growing field of research, and other applications, such as image deblurring [28] or star tracking [29], [30], will appear as event cameras become widely available [9].…”
Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (in the order of µs), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low-latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
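As a concrete illustration of the event encoding described above, the following minimal sketch shows one common way of converting the asynchronous stream into a fixed-size grid that frame-based pipelines (e.g., convolutional networks) can consume; the Event record and events_to_frame helper are illustrative assumptions, not an API from the survey.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    t: float       # timestamp (microsecond resolution in hardware)
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def events_to_frame(events, resolution):
    """Accumulate signed event counts into a fixed-size 2D grid.

    Grid representations like this are one common way to hand the
    asynchronous event stream to conventional frame-based pipelines.
    """
    height, width = resolution
    frame = np.zeros((height, width), dtype=np.int32)
    for ev in events:
        frame[ev.y, ev.x] += ev.polarity
    return frame

stream = [Event(1e-6, 3, 2, +1), Event(5e-6, 3, 2, +1), Event(9e-6, 0, 0, -1)]
print(events_to_frame(stream, resolution=(4, 4)))
```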
“…This could be tackled by engineering new tools to help with the calibration, either in the setup itself or computationally [58]. Finally, the drastic change in data, from images to events, imposes the development of a new framework for processing. Importantly, it has been demonstrated here that the data can be analysed to extract relevant information (e.g., particle focusing position, particle velocity) and can also be directly compared to images (e.g., comparison with a composite image from fluorescent imaging).…”
Visualising fluids and particles within channels is a key element of microfluidic work. Current imaging methods for particle image velocimetry often require expensive high-speed cameras with powerful illuminating sources, thus...
“…The integrator accumulates incoming photons as photoelectrons, and the decimator uses a decimation factor 2^D to divide the event frequency, determining the threshold of light the pixel must take in before firing an event [9]. Hence such a sensor is not invariant to scene illumination, but can directly encode scene luminosity without the need for conventional active pixels like DAVIS [3,5,10], estimation [7], or a complex neural network to reconstruct video [6]. The incident light intensity, I, of a pixel may be computed by dividing the decimation factor by the time delta, Δt, between two consecutive events for that pixel, written as I ∝ 2^D/Δt.…”
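A short worked sketch of the stated relation I ∝ 2^D/Δt; the exponent D (and its default value of 8) is an assumption made for illustration, since the superscript was lost in extraction, and pixel_intensity is a hypothetical helper name.

```python
def pixel_intensity(t_prev, t_curr, decimation_exponent=8):
    """Estimate relative incident light at a pixel from two consecutive
    event timestamps, following I proportional to 2^D / delta-t: the
    pixel fires once it has integrated 2^D photoelectrons, so a shorter
    inter-event interval means a brighter pixel. The result is
    proportional to intensity, not radiometrically calibrated.
    """
    dt = t_curr - t_prev
    if dt <= 0:
        raise ValueError("event timestamps must be strictly increasing")
    return (2 ** decimation_exponent) / dt

# Events 1 ms apart with decimation factor 2^8 = 256.
print(pixel_intensity(t_prev=0.000, t_curr=0.001))  # -> 256000.0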
Event cameras are biologically-inspired sensors that upend the framed, synchronous nature of traditional cameras. Singh et al. proposed a novel sensor design wherein incident light values may be measured directly through continuous integration, with individual pixels' light sensitivity being adjustable in real time [8], allowing for extremely high frame rate and high dynamic range video capture. Arguing the potential usefulness of this sensor, this paper introduces a system for simulating the sensor's event outputs and pixel firing rate control from 3D-rendered input images. CCS CONCEPTS • Computing methodologies → Image compression; Discrete-event simulation; Agent / discrete models.
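The following toy integrate-and-fire loop sketches the idea of generating such a sensor's event outputs from rendered frames; the linear photoelectron-accumulation model, the gain constant, and all function names are assumptions for illustration, not the paper's simulator.

```python
import numpy as np

def simulate_events(frames, frame_dt, decimation_exponent=8, gain=5000.0):
    """Toy integrate-and-fire event simulation from rendered frames.

    Each pixel accumulates 'photoelectrons' in proportion to its rendered
    intensity; whenever the accumulator crosses the threshold 2^D, it
    emits an event and the threshold is subtracted.
    """
    threshold = 2 ** decimation_exponent
    accumulator = np.zeros_like(frames[0], dtype=np.float64)
    events = []  # (time, x, y) tuples
    for i, frame in enumerate(frames):
        accumulator += gain * frame * frame_dt  # charge gathered this frame
        counts = (accumulator // threshold).astype(int)  # events due per pixel
        for y, x in np.argwhere(counts > 0):
            events.extend([(i * frame_dt, x, y)] * counts[y, x])
            accumulator[y, x] -= counts[y, x] * threshold
    return events

# A bright pixel (1.0) fires within ten frames; a dim one (0.1) does not.
frames = [np.array([[1.0, 0.1]])] * 10
print(simulate_events(frames, frame_dt=0.01))
```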