Echo State Networks (ESNs) and a Nonlinear Auto-Regressive Moving Average model with eXogenous inputs (NARMAX) have been applied to multi-sensor time-series data arising from a test footbridge that has been subjected to multiple potentially damaging interventions. The aim of the work was to automatically classify known potentially damaging events, while also allowing engineers to observe and localise any long-term damage trends. The techniques reported here used data from ten temperature sensors as inputs and were tasked with predicting the output signal from eight tilt sensors embedded at various points over the bridge. Initially, interventions were identified by both ESNs and NARMAX. In addition, training ESNs using data up to the first event and examining the ESNs' subsequent predictions allowed inferences to be made not only about when and where the interventions occurred, but also about the level of damage caused, without requiring any prior data preprocessing or extrapolation. Finally, ESNs were successfully used as classifiers to characterise the various types of intervention that had taken place.
Abstract: Echo State Networks (ESNs) have been applied to time-series data arising from a structural health monitoring multi-sensor array placed on a test footbridge that was subjected to a number of potentially damaging interventions over a three-year period. The time-series data, sampled approximately every five minutes from ten temperature sensors, were used as inputs, and the ESNs were tasked with predicting the expected output signal from eight tilt sensors that were also placed on the footbridge. The networks were trained using temperature and tilt sensor data up to the first intervention, and subsequent discrepancies in the ESNs' prediction accuracy allowed inferences to be made about when further interventions occurred and about the level of damage caused. Comparing the error in the signals with the location of each of the tilt sensors allowed damaged regions to be determined.
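The detection scheme described above — train a predictor on pre-intervention data, then flag periods where the per-sensor prediction error grows beyond its baseline — can be sketched as follows. All sensor data, the threshold rule, and the function name are hypothetical placeholders, not the authors' actual pipeline:

```python
import numpy as np

def detect_interventions(y_true, y_pred, baseline_end, k=3.0):
    """Flag timesteps where per-sensor prediction error exceeds a
    threshold learned from the pre-intervention (baseline) period.

    y_true, y_pred : (T, n_sensors) measured and predicted tilt signals.
    baseline_end   : index of the first intervention; errors before it
                     define "normal" behaviour.
    k              : number of baseline standard deviations for the threshold.
    """
    err = np.abs(y_true - y_pred)                 # (T, n_sensors)
    mu = err[:baseline_end].mean(axis=0)          # per-sensor baseline mean
    sigma = err[:baseline_end].std(axis=0)        # per-sensor baseline spread
    # A sensor that stays flagged localises the damaged region; the size
    # of the exceedance indicates the level of damage.
    return err > mu + k * sigma

# Hypothetical example: 8 tilt sensors; sensor 2 shifts after t = 600,
# simulating an intervention the predictor cannot track.
rng = np.random.default_rng(1)
y_true = rng.normal(0.0, 0.1, (1000, 8))
y_pred = y_true + rng.normal(0.0, 0.01, (1000, 8))   # near-perfect predictor
y_true[600:, 2] += 0.5                               # simulated intervention
flags = detect_interventions(y_true, y_pred, baseline_end=500)
```

Thresholding each sensor independently is what allows the error pattern to be compared with sensor locations to localise damage.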
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Conflict of Interest: Adam Wootton declares that he has no conflict of interest. Sarah Taylor received a research grant from NERC FSF. Charles Day declares that he has no conflict of interest. Peter Haycock declares that he has no conflict of interest.

Structured Abstract

Background: Static pattern recognition requires a machine to classify an object on the basis of a combination of attributes, and is typically performed using machine learning techniques such as Support Vector Machines and Multilayer Perceptrons. Unusually, in this study we applied a successful time-series processing neural network architecture, the Echo State Network (ESN), to a static pattern recognition task.

Methods: The networks were presented with clamped input data patterns but, in this work, were allowed to run until their output units delivered a stable set of output activations, in a similar fashion to previous work that focused on the behaviour of ESN reservoir units. Our aim was to see whether the short-term memory developed by the reservoir under clamped inputs could deliver improved overall classification accuracy. The study used a challenging, high-dimensional, real-world plant species spectroradiometry classification dataset, with the objective of accurately detecting one of the world's top 100 invasive plant species.

Results: Surprisingly, the ESNs performed equally well with both unsettled and settled reservoirs. Delivering a classification accuracy of 96.60%, the clamped ESNs outperformed three widely used machine learning techniques, namely Support Vector Machines, Extreme Learning Machines and Multilayer Perceptrons. In contrast to past work, where inputs were clamped until the reservoir stabilised, it was found that similar classification accuracy (96.49%) could be obtained by clamping the input patterns for just two repeats.
Conclusions: The chief contribution of this work is the demonstration that a recurrent architecture can achieve good classification accuracy even while its reservoir is still in an unsettled state.
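The clamped-input procedure from the Methods section can be sketched as below: a static pattern is held at the inputs for a fixed number of reservoir updates, and only the reservoir-to-output readout is trained. The toy dataset, dimensions, and ridge-regression readout are illustrative assumptions, not the paper's spectroradiometry setup:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_res, n_classes = 20, 100, 2

# Fixed random input and reservoir weights; reservoir rescaled to
# spectral radius 0.9 for fading memory.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def clamped_state(u, repeats=2):
    """Clamp one static input pattern for a fixed number of updates and
    return the resulting (not necessarily settled) reservoir state."""
    x = np.zeros(n_res)
    for _ in range(repeats):
        x = np.tanh(W_in @ u + W @ x)
    return x

# Hypothetical toy data: class determined by the sign of the first feature.
X = rng.standard_normal((200, n_in))
y = (X[:, 0] > 0).astype(int)
states = np.array([clamped_state(u) for u in X])

# Train only the readout, via ridge regression on one-hot targets.
T = np.eye(n_classes)[y]
W_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(n_res), states.T @ T)
pred = (states @ W_out).argmax(axis=1)
```

With `repeats=2` the reservoir is far from a fixed point, which mirrors the paper's finding that a settled reservoir is not required for accurate classification.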
Figure: The input units (left) are fully connected to the reservoir neurons via randomly weighted connections set at initialisation. The reservoir is sparsely interconnected with randomly weighted connections, with the potential for recurrent loops. Each reservoir node is connected to each output node (right), and these connections are the only ones that are trained.
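The architecture in the caption above maps directly onto a few lines of code. The sketch below, under assumed dimensions matching the bridge study (ten inputs, eight outputs) and an arbitrary reservoir size, shows the three weight matrices and the fact that only the readout is trained:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 10, 200, 8   # e.g. 10 temperature inputs, 8 tilt outputs

# Input -> reservoir: fully connected, random, fixed at initialisation.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

# Reservoir: sparse random recurrent weights (here ~10% connectivity),
# rescaled so the spectral radius is below 1 (the echo state property).
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W[rng.random((n_res, n_res)) > 0.1] = 0.0
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(U):
    """Drive the reservoir with input sequence U (T x n_in); collect states."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(U, Y, ridge=1e-6):
    """Train only the reservoir -> output weights, via ridge regression."""
    X = run_reservoir(U)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

# Hypothetical random data standing in for the temperature/tilt sensors.
U = rng.standard_normal((500, n_in))
Y = rng.standard_normal((500, n_out))
W_out = train_readout(U, Y)
pred = run_reservoir(U) @ W_out
```

Because `W_in` and `W` stay fixed, training reduces to a single linear solve, which is what makes ESNs cheap to train compared with fully trained recurrent networks.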