2023
DOI: 10.3390/s23135854
A Light-Weight Artificial Neural Network for Recognition of Activities of Daily Living

Abstract: Human activity recognition (HAR) is essential for the development of robots to assist humans in daily activities. HAR is required to be accurate, fast and suitable for low-cost wearable devices to ensure portable and safe assistance. Current computational methods can achieve accurate recognition results but tend to be computationally expensive, making them unsuitable for the development of wearable robots in terms of speed and processing power. This paper proposes a light-weight architecture for recognition of…

Cited by 3 publications (2 citation statements) · References 49 publications (58 reference statements)
“…Furthermore, we compared the performance of the LREAL network with previous methods using ENABL3S according to the recognizable classes, the number of sensors used, and accuracy (Table 8). The results demonstrated that the LREAL network outperformed LDA [8], CNN-LSTM [37], CNN [38], and the Light-Weight Artificial Neural Network (LWANN) [39] in terms of accuracy for steady-state recognition, and surpassed LIR-Net [27], LDA, CNN, LWANN, and CNN-LSTM in terms of accuracy for transition recognition. While LIR-Net achieves better overall performance and accuracy for steady-state recognition compared to LREAL, it only recognizes five steady states and eight types of transitions, compared to the seven steady states and twelve types of transitions recognized by the LREAL network.…”
Section: Overall Network Evaluationmentioning
confidence: 99%
“…The performance of the computational methods used for the recognition process was evaluated using a set of metrics. These metrics include the accuracy of the identification of plastics, precision, recall and F1-score, which are commonly used for performance evaluations of ML methods [45][46][47]. The calculation of these metrics employs information from the correct recognition of the target class or true positive (TP), correct recognition of the non-target class or true negative (TN), incorrect recognition of the target class or false positive (FP) and incorrect recognition of the non-target class or false negative (FN).…”
Section: Performance Metricsmentioning
confidence: 99%
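The metrics described in the statement above follow directly from the four confusion counts. A minimal sketch of those standard formulas is shown below; the function name and the example counts are illustrative assumptions, not taken from the cited study.

```python
# Sketch of the standard binary-classification metrics (accuracy, precision,
# recall, F1-score) computed from TP/TN/FP/FN counts for one target class.
# Counts in the usage example are made up for illustration.

def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute accuracy, precision, recall and F1-score from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # fraction of all correct decisions
    precision = tp / (tp + fp)                   # how many predicted positives are real
    recall = tp / (tp + fn)                      # how many real positives are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical confusion counts for a single class:
m = classification_metrics(tp=90, tn=85, fp=10, fn=15)
print({k: round(v, 3) for k, v in m.items()})
```

In the multi-class HAR setting, these per-class values are typically averaged (macro or weighted) across activity classes to obtain a single score.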