Proceedings of the 2010 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas.2010.5537149
Activity-driven, event-based vision sensors

Abstract: The four chips [1][2][3][4] presented in the special session on "Activity-driven, event-based vision sensors" quickly output compressed digital data in the form of events. These sensors reduce redundancy and latency and increase dynamic range compared with conventional imagers. The digital sensor output is easily interfaced to conventional digital post-processing, where it reduces the latency and cost of post-processing compared to imagers. The asynchronous data could spawn a new area of DSP that breaks from c…

Cited by 198 publications (120 citation statements)
References 41 publications (55 reference statements)
“…These last two inclusions overcome size constraints in the ADC block of Chapter 2, and output bandwidth (BW) limitations at its synchronous readout when upscaling the FPA size in high-speed applications with sparse activity in the focal plane. In this sense, a promising workaround is the use of address-event representation (AER) communication protocols at pixel level [119] which, besides simplifying A/D conversion by moving part of its signal processing outside the pixel, also adapt transmission capacity to the visual contents themselves.…”
Section: Imager Architecture and Operation Proposal
Confidence: 99%
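The quoted passage describes address-event representation, in which an active pixel asynchronously places its own address on a shared bus rather than waiting for a synchronous scan. A minimal sketch of one possible address packing follows; the word layout and bit widths are illustrative assumptions, not the encoding of any particular sensor.

```python
# Hedged sketch of AER address packing: each event is a single word carrying
# the (x, y) address of the pixel that fired. The 7-bit x field used here is
# an assumption for illustration, not a real device format.
def encode_aer(x: int, y: int, x_bits: int = 7) -> int:
    """Pack a pixel address into one AER word (y in the high bits)."""
    return (y << x_bits) | x

def decode_aer(word: int, x_bits: int = 7) -> tuple[int, int]:
    """Recover (x, y) from an AER word."""
    return word & ((1 << x_bits) - 1), word >> x_bits

word = encode_aer(5, 3)
assert decode_aer(word) == (5, 3)
```

Because only active pixels emit words, the bus bandwidth consumed scales with scene activity rather than with array size, which is the adaptation to "visual contents themselves" the quote refers to.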
“…Event-based (retinal) vision sensors [1], such as the Dynamic Vision Sensor [2], offer great potential for robotic applications: since only pixel-level brightness changes are transmitted, less bandwidth is required and less data must be processed. In addition, these changes are transmitted at the time they occur with minimal latency, which is on the order of a few microseconds.…”
Section: A Motivation
Confidence: 99%
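The event stream described above can be pictured as a sequence of timestamped, signed brightness-change records rather than full frames. A minimal sketch, assuming an illustrative event layout (the field names are not any sensor's real API):

```python
# Hypothetical sketch of an event-camera output stream: each event carries a
# pixel address, a polarity (brightness increase or decrease), and a
# microsecond timestamp. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 = brightness increase, -1 = decrease
    t_us: int      # timestamp in microseconds

def accumulate(events, width: int, height: int):
    """Integrate a stream of events into a simple 2-D change map."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        frame[e.y][e.x] += e.polarity
    return frame

stream = [Event(1, 0, +1, 10), Event(1, 0, +1, 25), Event(0, 1, -1, 40)]
print(accumulate(stream, 2, 2))  # → [[0, 2], [-1, 0]]
```

Note that static pixels contribute nothing to the stream, which is why bandwidth and downstream processing scale with scene activity rather than resolution.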
“…Unlike a standard image sensor, which synchronously captures a complete image in every time step, these retinas only emit spikes to indicate relevant changes in the scene. According to a classification made by Delbruck, Linares-Barranco, Culurciello and Posch (2010), there are four types of neuromorphic retinas. Spatial contrast and spatial difference sensors avoid spatial information redundancy by only transmitting intensity ratios and differences, respectively.…”
Section: Neuromorphic Sensors
Confidence: 99%
“…For a detailed discussion of neuromorphic sensor devices, we refer the reader to the work of Liu and Delbruck (2010). A direct comparison of different types of neuromorphic retina sensors is available in Delbruck et al (2010).…”
Section: Neuromorphic Sensors
Confidence: 99%