Structural health monitoring (SHM) strategies should ideally consist of continuous on-line damage detection processes that do not need to rely on comparing newly acquired data with baseline references defined beforehand under the assumption that the structural system is undamaged and unchanged during a given period of time. The present paper addresses the topic of SHM and describes an original strategy for detecting damage at an early stage without relying on the definition of data references. This strategy combines two statistical learning methods: neural networks were used to estimate the structural response, and clustering methods were adopted to automatically classify the neural networks' estimation errors. To ensure an on-line continuous process, these methods were applied sequentially in a moving-windows process. The proposed strategy was tested and validated on numerical and experimental data obtained from a cable-stayed bridge. It proved highly robust to false detections and sensitive to early damage, detecting small stiffness reductions in single stay cables as well as the detachment of neoprene pads in anchoring devices, using only a small number of inexpensive sensors.

Detection approaches rely on signal processing and statistical learning techniques to extract sensitive information from time series acquired on site [8][9][10][11]. Their computational simplicity makes them cost-effective and the most suitable candidates for carrying out automated on-line damage detection [2], based either on modal information [12][13][14] or on statistical and time-series features [2,8,15]. Data-driven SHM approaches rely on two mandatory steps for conducting damage detection: response modelling and statistical classification. The first aims to separate the variations imposed by 'normal' environmental/operational actions from those caused by damage [16].
It relies on training statistical learning algorithms so that they can accurately estimate the 'normal' structural response. Any 'abnormal' variations can afterwards be highlighted by comparing the estimates with the actual responses. The statistical modelling algorithms most often reported in the SHM literature are multi-layer perceptron (MLP) neural networks [17][18][19], support vector regressions [20], linear regressions [2], principal component analysis [21] and auto-associative neural networks [22]. Regardless of the chosen algorithm, response modelling has been reported in the literature as a supervised problem, in which the statistical learning algorithms are trained a priori with reference data, for which the structural systems must be assumed undamaged and unchanged [9,11,23]. Statistical classification consists of discriminating SHM data as related to identical or distinct structural conditions [24,25]. This step has also been addressed with supervised approaches, in which classification algorithms are trained with reference data sets (in general, the same ones used for response modelling) to define boundaries that s...
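The two mandatory steps described above (response modelling to capture the 'normal' behaviour, then classification of the estimation errors over moving windows) can be sketched in miniature as follows. This is an illustrative toy, not the paper's implementation: a simple linear fit stands in for the neural-network regressor, a mean-error threshold stands in for the clustering step, and all data, window sizes and thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monitoring data: temperature (an environmental action) drives
# the "normal" structural response; a small shift late in the record
# plays the role of damage. All values are hypothetical.
n = 400
temp = 20 + 10 * np.sin(np.linspace(0, 8 * np.pi, n))
response = 0.05 * temp + rng.normal(0, 0.01, n)
response[300:] += 0.04  # simulated damage-induced change

window = 100
train_x, train_y = temp[:window], response[:window]

# Response modelling: estimate the 'normal' response from the
# environmental input (a linear fit stands in for the MLP here).
a, b = np.polyfit(train_x, train_y, 1)
errors = response - (a * temp + b)

# Moving-window classification: flag windows whose mean absolute
# estimation error exceeds a data-driven cutoff from the first window.
threshold = np.abs(errors[:window]).mean() + 3 * np.abs(errors[:window]).std()
flags = [np.abs(errors[s:s + window]).mean() > threshold
         for s in range(0, n - window + 1, window)]
print(flags)  # only the final (damaged) window should be flagged
```

The point of the sketch is the structure of the pipeline: estimation errors, not raw responses, carry the damage information, so a shift too small to see in the raw signal becomes separable once the 'normal' variation has been modelled out.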
This article addresses the subject of data-driven structural health monitoring and proposes a real-time strategy for conducting structural assessment without the need to define a baseline period in which the monitored structure is assumed healthy and unchanged. Independence from baseline references is achieved using unsupervised discrimination machine-learning methods, widely known as clustering algorithms, which are able to find groups in data relying only on their intrinsic features and without requiring prior knowledge as input. Real-time capability is based on the definition of symbolic data, which makes it possible to describe large amounts of information without loss of generality or structure-related information. The efficiency of the proposed methodology is illustrated using an experimental case study in which structural changes were imposed on a suspension bridge during an extensive rehabilitation programme. A single-value novelty index capable of describing multisensor data is proposed, and its effectiveness in identifying structural changes in real time, using outlier analysis, is discussed.
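As a rough illustration of such a single-value novelty index over multisensor data, the sketch below uses the Mahalanobis distance, a common choice for collapsing a multivariate observation into one scalar, with an outlier cutoff taken from a reference percentile. The index used in the article may differ; the data, shift magnitudes and percentile here are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical multisensor features: 200 reference observations from
# 4 sensors, plus 50 later observations with a simulated structural change.
reference = rng.normal(0, 1, (200, 4))
changed = rng.normal(0, 1, (50, 4)) + np.array([4.0, 0.0, 3.0, 0.0])

mean = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def novelty_index(x):
    # Mahalanobis distance: one scalar novelty value per multisensor
    # observation (an assumed form of the single-value index).
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

ref_scores = np.array([novelty_index(x) for x in reference])
threshold = np.percentile(ref_scores, 99)  # outlier-analysis cutoff
new_scores = np.array([novelty_index(x) for x in changed])
print((new_scores > threshold).mean())  # fraction flagged as outliers
```

Because the index is a single scalar regardless of the number of sensors, outlier analysis reduces to comparing one time series against one threshold, which is what makes real-time assessment tractable.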
The use of the human iris as a biometric has recently attracted significant interest in the area of security applications. The need to capture an iris without active user cooperation places demands on the optical system. In a traditional optical design, a large imaging volume can be obtained only by trading off imaging resolution and light-collecting capacity. Wavefront Coded imaging, by contrast, is a computational imaging technology capable of expanding the imaging volume while maintaining accurate and robust iris identification. We apply Wavefront Coded imaging to extend the imaging volume of the iris recognition application.
Iris recognition imaging is attracting considerable interest as a viable alternative for personal identification and verification in many defense and security applications. However, current iris recognition systems suffer from limited depth of field, which makes them difficult for untrained users to operate. Traditionally, the depth of field is increased by reducing the imaging system aperture, which adversely impacts the light-capturing power and thus the system signal-to-noise ratio (SNR). In this paper we discuss a computational imaging system, referred to as Wavefront Coded® imaging, for increasing the depth of field without sacrificing the SNR or the resolution of the imaging system. This system employs a specially designed Wavefront Coded lens customized for iris recognition. We present experimental results that show the benefits of this technology for biometric identification.
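The aperture versus depth-of-field tradeoff described above can be quantified with the standard thin-lens approximation: total depth of field grows linearly with the f-number, while light-gathering power falls with its square. The sketch below uses hypothetical lens parameters chosen only to illustrate the scaling, not any system from the paper.

```python
# Illustrative depth-of-field vs light tradeoff for a conventional lens.
# Standard thin-lens approximation; all parameter values are hypothetical.
f = 50e-3   # focal length [m]
c = 5e-6    # acceptable circle of confusion [m]
u = 0.5     # subject distance [m]

def depth_of_field(N):
    # Approximate total DOF: 2*N*c*u^2 / f^2
    # (valid when u >> f and the DOF is small relative to u)
    return 2 * N * c * u**2 / f**2

for N in (2.8, 5.6, 11):
    dof_mm = depth_of_field(N) * 1e3
    light = 1 / N**2  # relative light-gathering power scales as 1/N^2
    print(f"f/{N}: DOF ~ {dof_mm:.1f} mm, relative light {light:.3f}")
```

Stopping down from f/2.8 to f/11 extends the depth of field roughly fourfold but costs roughly sixteenfold in collected light, which is precisely the SNR penalty that Wavefront Coding is designed to avoid.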
In this paper we use our derived approximate representation of the modulation transfer function to analytically solve the problem of extending the depth of field for two cases of interest: uniform-quality imaging and task-based imaging. We derive the optimal result for each case as a function of the problem specifications, compare the two imaging cases, and discuss the advantages of using our optimization approach for each. We also show how the analytical solutions given in this paper can serve as a convenient design tool, in contrast to previous lengthy numerical optimizations.