Bus arrival prediction has important implications for public travel, urban dispatch, and the mitigation of traffic congestion. The factors affecting urban traffic conditions are complex and changeable, and prediction becomes harder as the prediction distance grows. Forecasts based on historical data respond slowly to short-term changes, while predictions based on real-time vehicle speed are insufficient over long distances. Therefore, an arrival prediction method based on a temporal feature vector and another based on a spatial feature vector are proposed to address, respectively, the long-range dependence of bus arrivals and road incidents. Combining the advantages of these two prediction models, this paper proposes a comprehensive prediction model based on spatial-temporal feature vectors that uses a long short-term memory (LSTM) network and an artificial neural network (ANN). Long-distance arrival prediction is realized in the temporal-feature dimension and short-distance arrival prediction in the spatial-feature dimension, together yielding the overall bus arrival prediction. Experiments conducted on a real-world dataset show that the proposed method achieves high accuracy on bus arrival prediction problems. INDEX TERMS Artificial neural networks, bus arrival prediction, LSTM, spatial-temporal feature vector.
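The abstract's core idea of combining a long-distance temporal predictor with a short-distance spatial predictor can be sketched as a distance-gated fusion. This is a minimal illustrative sketch, not the paper's actual model: the function name, the 1 km switching threshold, and the linear blending scheme are all assumptions introduced here for clarity.

```python
# Hypothetical sketch: fuse a temporal (long-distance, history-based) and a
# spatial (short-distance, real-time) arrival-time estimate by the bus's
# remaining distance to the stop. Threshold and weights are illustrative
# assumptions, not parameters from the paper.

def fuse_predictions(t_temporal, t_spatial, distance_m, switch_m=1000.0):
    """Blend two arrival-time estimates (seconds) by remaining distance.

    Far from the stop, trust the history-based temporal model; close to
    the stop, trust the real-time spatial model; blend linearly between.
    """
    if distance_m >= 2 * switch_m:
        return t_temporal
    if distance_m <= switch_m:
        return t_spatial
    # Linear blend inside the transition band [switch_m, 2 * switch_m)
    w = (distance_m - switch_m) / switch_m  # 0 -> spatial, 1 -> temporal
    return w * t_temporal + (1 - w) * t_spatial
```

In practice the two inputs would come from the trained LSTM (temporal) and ANN (spatial) models described in the abstract; the gating above merely illustrates how their complementary ranges could be combined.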
Taking inspiration from biology to solve engineering problems using the organizing principles of biological neural computation is the aim of the field of neuromorphic engineering. This field has demonstrated success in sensor-based applications (vision and audition) as well as in cognition and actuation. This paper is focused on mimicking the approach-detection functionality of the retina, which is computed by one type of Retinal Ganglion Cell (RGC), and on its application to robotics. These RGCs transmit action potentials when an expanding object is detected. In this work we compare software and hardware-logic FPGA implementations of this approach-detection function, and measure the hardware latency when it is applied to robots as an attention/reaction mechanism. The visual input for these cells comes from an asynchronous event-driven Dynamic Vision Sensor, which leads to an end-to-end event-based processing system. The software model has been developed in Java and runs with an average processing time of 370 ns per event on a NUC embedded computer. The output firing rate for an approaching object depends on the cell parameters that represent the number of input events needed to reach the firing threshold. For the hardware implementation, on a Spartan 6 FPGA, the processing time is reduced to 160 ns/event with the clock running at 50 MHz. The entropy has been calculated to demonstrate that, because of several bioinspired characteristics, the system's response to approaching objects is not totally deterministic. It has been measured that a Summit XL mobile robot can react to an approaching object in 90 ms, which can be used as an attentional mechanism. This is faster than similar event-based approaches in robotics and comparable to human reaction latencies to visual stimuli. In [1], Muench et al. identified a ganglion cell type in the mouse retina, which they called the approach sensitivity cell (AC) because it is sensitive to the approaching motion of objects (expanding objects).
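The firing rule described above — a cell that accumulates input events until a configurable threshold is reached, then emits a spike — can be sketched as a simple integrate-and-fire model. This is an illustrative sketch only; the class name, threshold value, and reset behavior are assumptions for clarity, not the paper's actual cell model.

```python
# Hypothetical sketch of the approach-sensitivity cell's firing rule:
# events consistent with an expanding edge charge the cell, and crossing
# a configurable threshold emits a spike and resets the charge.
# The threshold and the charge dynamics are illustrative assumptions.

class ApproachCell:
    def __init__(self, threshold=10):
        self.threshold = threshold  # input events needed to fire
        self.charge = 0             # accumulated excitation

    def on_event(self, excitatory):
        """Process one DVS event; return True if the cell spikes."""
        self.charge += 1 if excitatory else -1
        if self.charge < 0:
            self.charge = 0  # charge cannot go negative
        if self.charge >= self.threshold:
            self.charge = 0  # reset after emitting a spike
            return True
        return False
```

With a threshold of 3, three consecutive excitatory events produce a spike on the third event; inhibitory events (e.g. from a contracting edge) discharge the cell, which is one source of the non-deterministic, stimulus-dependent firing rate the abstract mentions.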
The detection of approaching motion elicits behaviors such as startle and protective motor responses in animals and humans. These responses are also important for predicting collisions. The same function is required in autonomous vehicles and mobile robots for obstacle detection and avoidance, which is one of the most important tasks in environment perception. This task is typically carried out using range sensors or computer vision. Range sensors are based on time of flight; the most widely used are ultrasonic range sensors (sonars), 2D and 3D laser range sensors (LIDAR), and structured-light vision sensors (such as the Microsoft Kinect). With all of them, obstacles can be detected at ranges from a few millimeters to several meters and within a field of view of up to 360°, in the case of 3D LIDAR. Computer vision is based on processing the information (frames) captured by CCD cameras, and its goal is to extract salient information. To detect objects in the robot's pathway, two approaches are possible: the use of a single camera or two camera...