2021
DOI: 10.1016/j.jvcir.2021.103132

A deep genetic algorithm for human activity recognition leveraging fog computing frameworks

Cited by 31 publications (9 citation statements)
References 33 publications
“…In view of this, a method combining data-driven and knowledge-driven approaches was proposed in reference [35]. Indeed, some scholars used deep genetic algorithms or transfer learning for HAR to reduce model performance loss and improve recognition accuracy [36], [37].…”
Section: Other Related Methods (mentioning)
confidence: 99%
“…The authors in [100] proposed a methodology for identifying human activity on a real-time basis. The suggested activity recognition model includes a one-of-a-kind change detection module. SurveilEdge, a collaborative cloud-edge system for real-time queries of large-scale surveillance video streams, was introduced by the authors of [98].…”
Section: Use Case 2: Intelligent Multimedia Processing On Edge For Hu... (mentioning)
confidence: 99%
“…Several human behaviors, including walking, sitting, riding a bike, jogging, eating, reading, and washing, are considered static or dynamic [3]. Training data for HAR are primarily derived from nonvisual wearable sensors, ambient assisted cameras (visual sensors), or a combination of both [4], [5].…”
Section: Introduction (mentioning)
confidence: 99%
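
The introduction excerpt above describes HAR training data drawn from wearable sensors and labelled as static or dynamic activities. A minimal sketch of how such a sensor stream might be segmented into labelled windows is given below; the window length, overlap, and static/dynamic grouping are illustrative assumptions, not values taken from the cited paper.

# Illustrative sketch only: generic sliding-window segmentation of wearable
# accelerometer data for HAR. Parameter values and the activity grouping
# below are assumptions, not drawn from the cited paper.
import numpy as np

STATIC_ACTIVITIES = {"sitting", "reading"}                       # assumed grouping
DYNAMIC_ACTIVITIES = {"walking", "jogging", "riding a bike", "eating", "washing"}

def segment_windows(samples: np.ndarray, labels: list[str],
                    window_size: int = 128, overlap: float = 0.5):
    """Split a (T, 3) accelerometer stream into fixed-length windows.

    Each window is labelled by the majority activity of its samples and
    tagged as 'static' or 'dynamic' according to the sets above.
    """
    step = max(1, int(window_size * (1.0 - overlap)))
    windows, window_labels = [], []
    for start in range(0, len(samples) - window_size + 1, step):
        chunk = samples[start:start + window_size]
        chunk_labels = labels[start:start + window_size]
        majority = max(set(chunk_labels), key=chunk_labels.count)
        kind = "static" if majority in STATIC_ACTIVITIES else "dynamic"
        windows.append(chunk)
        window_labels.append((majority, kind))
    return np.stack(windows), window_labels

if __name__ == "__main__":
    # Synthetic 3-axis accelerometer stream: low-variance 'sitting' samples
    # followed by high-variance 'walking' samples.
    rng = np.random.default_rng(0)
    stream = np.concatenate([rng.normal(0.0, 0.05, (256, 3)),
                             rng.normal(0.0, 1.00, (256, 3))])
    stream_labels = ["sitting"] * 256 + ["walking"] * 256
    X, y = segment_windows(stream, stream_labels)
    print(X.shape, y[:2], y[-2:])

Windowed segments such as these are what a HAR classifier (whether a deep genetic algorithm, a transfer-learning model, or any other approach mentioned in the citation statements) would typically consume as training examples.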