2018 IEEE 15th International Conference on Wearable and Implantable Body Sensor Networks (BSN)
DOI: 10.1109/bsn.2018.8329677
A machine learning approach for gesture recognition with a lensless smart sensor system

Cited by 12 publications (8 citation statements)
References 5 publications
“…Optoelectronics have been widely and consistently used in robotics over recent years, particularly in the research field of collaborative systems, and are shown to increase the safety of human operators. In the future, the price drop of optoelectronic sensors and the release of more compact and easier-to-implement hybrid and data fusion solutions, as well as next-generation wearable lens-less cameras [83, 84, 85], will lead to fewer obstructions in jobsites and improve the practicality of camera-based approaches in other industry sectors.…”
Section: Discussion
confidence: 99%
“…It validated the performance of the Rambus LSS system in gesture recognition applications. A more detailed description of different machine learning approaches and an analysis of the results are provided in [51]. In this work, the classification accuracy as a function of LED position, using Random Forest training [52], is evaluated to validate the performance.…”
Section: Results
confidence: 99%
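The evaluation quoted above — classification accuracy as a function of LED position using a Random Forest — can be sketched as follows. This is an illustrative stand-in, not the paper's pipeline: the feature generator, number of gesture classes, and LED positions are all assumptions.

```python
# Hypothetical sketch: cross-validated Random Forest accuracy per LED
# position. `make_features` is a placeholder for features extracted from
# lensless-sensor frames; it is NOT from the cited work.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_features(n_samples, n_features=32, n_classes=4):
    """Stand-in feature vectors and gesture labels."""
    X = rng.normal(size=(n_samples, n_features))
    y = rng.integers(0, n_classes, size=n_samples)
    return X, y

led_positions = ["left", "center", "right"]  # illustrative positions
for pos in led_positions:
    X, y = make_features(200)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold CV accuracy
    print(f"LED position {pos}: mean CV accuracy = {acc:.2f}")
```

With real data, the per-position accuracies would reveal which LED placement is most informative for the classifier.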
“…Feature extraction gathers the principal characteristics of a gesture and represents them as a single vector that describes it uniquely. A frame-based descriptor approach is used for this purpose [50, 51]. As illustrated in [50], the frame-based descriptor is well suited for extracting features from inertial sensors.…”
Section: Methods
confidence: 99%
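A frame-based descriptor of the kind quoted above can be sketched as: split each sensor channel into fixed-size frames and summarize every frame with simple statistics, concatenating the results into one fixed-length feature vector. The frame count and the choice of statistics here are illustrative assumptions, not the descriptor from [50].

```python
# Minimal frame-based descriptor sketch (assumed design, not from [50]):
# per-frame mean/std/min/max over each channel, concatenated.
import numpy as np

def frame_descriptor(signal, n_frames=8):
    """signal: (n_samples, n_channels) array -> 1-D feature vector."""
    frames = np.array_split(signal, n_frames, axis=0)
    feats = []
    for f in frames:
        feats.extend([f.mean(axis=0), f.std(axis=0),
                      f.min(axis=0), f.max(axis=0)])
    return np.concatenate(feats)

# Example: a 120-sample, 3-axis gesture trace
# -> 8 frames * 4 statistics * 3 channels = 96 features
x = np.random.default_rng(1).normal(size=(120, 3))
vec = frame_descriptor(x)
print(vec.shape)  # (96,)
```

Because the output length depends only on the frame count and channel count, gestures of different durations map to vectors of the same size — which is what lets a fixed-input classifier consume them.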
“…The algorithms for HAR can be classified into shallow and deep learning methods. Common shallow methods in HAR include SVM [13], [20], [23], k-nearest neighbors (kNN) [16], [24], linear discriminant analysis (LDA) [9], and random forest (RF) [21]. Deep learning approaches, such as LSTM [7], [15], CNN-LSTM [25], [27], CNN [22], and convLSTM [26], have shown impressive leaps in performance compared to their shallow counterparts by learning to automatically extract features from raw sensor data, thus removing the need for human experts to provide hand-engineered features.…”
Section: Background, A. Human Activity Recognition (HAR)
confidence: 99%
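The shallow/deep distinction quoted above can be made concrete with a minimal sketch. The data, window size, and feature choice below are assumptions for illustration only; the shallow route feeds hand-engineered statistics to kNN, while a deep model would consume the raw windows directly.

```python
# Illustrative contrast (assumed synthetic data, not from the cited works):
# shallow HAR = hand-engineered features + a classic classifier (here kNN).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# Raw sensor windows: 100 windows x 64 samples x 3 axes, 2 activity classes.
raw = rng.normal(size=(100, 64, 3))
labels = rng.integers(0, 2, size=100)

# Shallow route: a human expert picks features (here, per-axis mean/std)...
features = np.concatenate([raw.mean(axis=1), raw.std(axis=1)], axis=1)
knn = KNeighborsClassifier(n_neighbors=3).fit(features, labels)

# ...whereas a deep model (CNN, LSTM, convLSTM) would take `raw` as input
# and learn its own feature extractor end to end.
print(knn.score(features, labels))
```

The practical trade-off is exactly the one the quoted passage notes: shallow pipelines need expert feature engineering, while deep models trade that effort for data and compute.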