Abstract: Despite the marvels brought by conventional frame-based cameras, they have significant drawbacks: data redundancy and temporal latency. This causes problems in applications where low-latency transmission and high-speed processing are mandatory. Following this line of thought, the neurobiological principles of the biological retina have been adapted to achieve data sparsity and high dynamic range at the pixel level. These bio-inspired neuromorphic vision sensors alleviate the m…
“…Unfortunately, frame-based vision has some disadvantages, e.g., high data redundancy, high bandwidth demand in short-latency use cases, or limited dynamic range [41]. The DVS (see Figure 10), sometimes called a "silicon retina" [41,42], functions differently. Each pixel of the sensor operates separately and emits its events immediately when the pixel illuminance changes [43].…”
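The per-pixel behaviour described in the quote can be sketched as a minimal event-generation model: each pixel compares its current log-intensity against the value at its last event and fires an ON or OFF event when the change exceeds a contrast threshold. This is an illustrative simplification (the threshold value `theta` and the two-frame formulation are assumptions, not details from the cited papers):

```python
import numpy as np

def dvs_events(log_I_prev, log_I_curr, theta=0.2):
    """Emit per-pixel events where the log-intensity change exceeds theta.

    Returns (xs, ys, pol): coordinates of pixels that fired and their
    polarity (+1 for brightness increase, -1 for decrease)."""
    diff = log_I_curr - log_I_prev
    on = diff >= theta     # ON events: brightness increased
    off = diff <= -theta   # OFF events: brightness decreased
    ys, xs = np.nonzero(on | off)
    pol = np.where(diff[ys, xs] > 0, 1, -1)
    return xs, ys, pol
```

Because only changed pixels produce output, a static scene yields no data at all, which is the source of the sparsity and low latency discussed above.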
Tracking the trajectory of the load carried by the rotary crane is an important problem that allows reducing the possibility of its damage by hitting an obstacle in its working area. On the basis of the trajectory, it is also possible to determine an appropriate control system that would allow for the safe transport of the load. This work concerns research on the load motion carried by a rotary crane. For this purpose, the laboratory crane model was designed in Solidworks software, and numerical simulations were made using the Motion module. The developed laboratory model is a scaled equivalent of the real Liebherr LTM 1020 object. The crane control included two movements: changing the inclination angle of the crane’s boom and rotation of the jib with the platform. On the basis of the developed model, a test stand was built, which allowed for the verification of numerical results. Event visualization and trajectory tracking were made using a dynamic vision sensor (DVS) and the Tracker program. Based on the obtained experimental results, the developed numerical model was verified. The proposed trajectory tracking method can be used to develop a control system to prevent collisions during the crane’s duty cycle.
“…Over the last decade, an increasing number of studies have used event-based data for computer vision, with performance sometimes better than that obtained from more classical frame-based cameras in applications such as object recognition (Neil and Liu, 2016; Stromatias et al., 2017) or visual odometry (Gallego and Scaramuzza, 2017; Nguyen et al., 2019). These studies were all based on deep convolutional neural networks or SNNs, coupled with supervised learning or classification approaches (see Lakshmi et al., 2019). For example, Zhu et al. (2019) used an artificial neural network (ANN) to predict the optic flow from event-based data collected from a camera mounted on top of a car moving within an urban environment (see also Zhu et al., 2018).…”
We developed a spiking neural network (SNN) composed of two layers that processes event-based data captured by a dynamic vision sensor under navigation conditions. The network was trained using a biologically plausible, unsupervised learning rule: spike-timing-dependent plasticity (STDP). With this approach, neurons in the network naturally become selective to different components of optic flow, and a simple classifier can predict self-motion properties from the population's output spiking activity. Our network has a simple architecture and a small number of neurons; it is therefore easy to implement on a neuromorphic chip and could be used in embedded applications requiring low energy consumption.
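The STDP rule mentioned above can be illustrated with a standard pair-based update, where a synapse is strengthened when a presynaptic spike precedes the postsynaptic one and weakened otherwise. This is a generic textbook sketch, not the specific rule or parameters of the cited work; the constants `a_plus`, `a_minus`, and `tau` are illustrative:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP weight update.

    dt = t_post - t_pre (ms). Pre-before-post (dt > 0) potentiates the
    synapse; post-before-pre (dt <= 0) depresses it. The weight is kept
    within [w_min, w_max]."""
    dw = np.where(dt > 0,
                  a_plus * np.exp(-dt / tau),
                  -a_minus * np.exp(dt / tau))
    return np.clip(w + dw, w_min, w_max)
```

Repeated application of such a rule to event-driven spike trains is what lets neurons become selective to recurring input patterns (here, components of optic flow) without any labels.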
“…Event camera-based algorithms for single or multiple object detection, pose estimation, and tracking (MOT) can be classified into three categories: feature-based, artificial neural network-based, and time surface-based [35]. Studies focusing on robot pose estimation using event cameras have been reported in the literature [59]–[61].…”
Section: A. Robotic Systems With Event Cameras
“…The event camera used in this study was affixed to a stationary mount on the ceiling to provide a fixed frame of reference. When an event camera moves, the background suffers from clutter, making it difficult to distinguish the object of interest [35].…”
This paper presents a real-time method to detect and track multiple mobile ground robots using event cameras. The method uses density-based spatial clustering of applications with noise (DBSCAN) to detect the robots and a single k-dimensional (k-d) tree to accurately keep track of them as they move in an indoor arena. Robust detections and tracks are maintained in the face of event camera noise and lack of events (due to robots moving slowly or stopping). An off-the-shelf RGB camera-based tracking system was used to provide ground truth. Experiments with up to 4 robots were performed to study the effects of (i) varying DBSCAN parameters, (ii) the event accumulation time, (iii) the number of robots in the arena, (iv) the speed of the robots, and (v) variation in ambient light conditions on detection and tracking performance, as well as (vi) the effect of alternative clustering algorithms on detection performance. The experimental results showed 100% detection and tracking fidelity in the face of event camera noise and robots stopping for tests involving up to 3 robots (and upwards of 93% for 4 robots). When the lighting conditions were varied, a graceful degradation in detection and tracking fidelity was observed.
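The detection-plus-association pipeline described in the abstract can be sketched in a few lines: accumulated event coordinates are clustered with DBSCAN into candidate robots, and existing tracks are matched to the resulting centroids with a k-d tree nearest-neighbour query. This uses scikit-learn and SciPy as stand-ins for the paper's implementation, and the parameter values (`eps`, `min_samples`, `max_dist`) are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import cKDTree

def detect_robots(events_xy, eps=5.0, min_samples=10):
    """Cluster accumulated event (x, y) coordinates with DBSCAN and
    return one centroid per cluster (label -1 = noise, discarded)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(events_xy).labels_
    return np.array([events_xy[labels == k].mean(axis=0)
                     for k in set(labels) if k != -1])

def associate(track_centroids, detections, max_dist=20.0):
    """Match each existing track to its nearest detection using a k-d
    tree; tracks with no detection within max_dist stay unmatched."""
    tree = cKDTree(detections)
    dists, idx = tree.query(track_centroids, distance_upper_bound=max_dist)
    return [(t, i) for t, (d, i) in enumerate(zip(dists, idx))
            if np.isfinite(d)]
```

Discarding DBSCAN's noise label (-1) is what gives the pipeline its robustness to spurious sensor events, and the distance gate in the k-d tree query keeps a track alive (unmatched) rather than jumping to a distant cluster when its robot stops emitting events.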