2021
DOI: 10.1177/14759217211025488

A general framework for supervised structural health monitoring and sensor output validation mitigating data imbalance with generative adversarial networks-generated high-dimensional features

Abstract: This study proposes a novelty-classification framework that applies to structural health monitoring (SHM) and sensor output validation (SOV) problems. The proposed framework has simple high-dimensional features with several advantages. First, the feature extraction method is extensively applicable to instrumented structures. Second, the high-dimensional features’ utilization alleviates one of the main issues of supervised novelty classifications, namely, imbalanced datasets and low-sampled data classes. Recurr…

Cited by 14 publications (19 citation statements)
References 30 publications
“…There is always the possibility that the Generator is not trained well for all parts of that SGD, resulting in misleading generations. This phenomenon is shown in Soleimani et al [9]. In that study, generated data objects were "Capped" to prevent misleading generations from entering the analysis.…”
Section: The Reliability Analysis Methods and Unreliable GAN-Generate…
confidence: 92%
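The "capping" described above can be read as clipping GAN-generated feature vectors to the valid feature range before they enter the downstream analysis, so that poorly trained regions of the generator cannot contribute out-of-range samples. Below is a minimal sketch of that idea; the cap value of ten (borrowed from the feature-extraction statement further down) and the array shapes are assumptions, not the authors' exact implementation.

```python
import numpy as np

def cap_generated_features(generated: np.ndarray, cap: float = 10.0) -> np.ndarray:
    """Clip GAN-generated feature vectors to [0, cap] before they augment the training set."""
    return np.clip(generated, 0.0, cap)

# Illustrative generator output of shape (n_samples, feature_dim); some values
# fall outside the valid range and would otherwise be "misleading generations".
fake_batch = np.abs(np.random.randn(32, 512)) * 15.0
capped_batch = cap_generated_features(fake_batch)
assert capped_batch.min() >= 0.0 and capped_batch.max() <= 10.0
```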
“…The first feature (F I) is high-dimensional (N × D_L/2), made of the magnitudes of half-spectrum FFTs of the input signals, with maximum values suppressed to ten. As discussed in Soleimani et al [9], having the features vary within a fixed range is beneficial for the GAN's training. The second feature (F II) is a reduced representation of F I, made of quartiles of the vibrational energy in each time-series data object, which was tried in prior studies [23].…”
Section: Feature Extraction
confidence: 99%
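A minimal sketch of the two features described in the statement above, under stated assumptions: F I as the clipped magnitudes of the half-spectrum FFT of each signal, and F II as quartiles of each signal's vibrational energy (here taken as the squared samples). The shapes, the cap of ten, and the function names are illustrative, not the cited implementation.

```python
import numpy as np

def feature_I(signals: np.ndarray, cap: float = 10.0) -> np.ndarray:
    """F I: half-spectrum FFT magnitudes, clipped to `cap`. `signals` has shape (N, D_L)."""
    half_spectrum = np.abs(np.fft.rfft(signals, axis=1))[:, : signals.shape[1] // 2]
    return np.clip(half_spectrum, 0.0, cap)                      # shape (N, D_L / 2)

def feature_II(signals: np.ndarray) -> np.ndarray:
    """F II: quartiles of vibrational energy for each time-series data object."""
    energy = signals ** 2                                        # per-sample vibrational energy
    return np.percentile(energy, [25, 50, 75, 100], axis=1).T    # shape (N, 4)

signals = np.random.randn(8, 1024)                               # eight signals of length D_L = 1024
print(feature_I(signals).shape, feature_II(signals).shape)       # (8, 512) (8, 4)
```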
“…High‐dimensional, domain‐specific features obtained through simple feature extraction methods (e.g., fast Fourier transform of a signal) can address the structure‐specific feature extraction challenges if paired with the right detection architecture. In supervised settings, high‐dimensional features have been used with 1D convolutional neural networks (CNNs) (Abdeljaber et al., 2018; Azimi & Pekcan, 2020), 2D CNNs (Yu et al., 2019), or with long short‐term memory (LSTM) units, a variation of recurrent neural networks (RNN) (Soleimani‐Babakamali, Soleimani‐Babakamali, & Sarlo, 2021). Specific target functions are required to employ such deep architectures in unsupervised settings.…”
Section: Introduction
confidence: 99%
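For context on the supervised pairings mentioned above, the sketch below shows one way a 1D CNN can classify such high-dimensional feature vectors directly. The layer sizes, the feature length of 512, and the two-class output are illustrative assumptions, not the architectures of the cited works.

```python
import torch
import torch.nn as nn

class Conv1DClassifier(nn.Module):
    """1D CNN over high-dimensional feature vectors (e.g., FFT magnitudes)."""
    def __init__(self, feature_len: int = 512, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Flatten(),
            nn.Linear(32 * (feature_len // 16), n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feature_len); add a channel dimension for Conv1d
        return self.net(x.unsqueeze(1))

model = Conv1DClassifier()
logits = model(torch.randn(4, 512))                              # shape (4, 2)
```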