2011
DOI: 10.1007/978-3-642-25167-2_12
Using Active Learning to Allow Activity Recognition on a Large Scale

Cited by 27 publications (29 citation statements).
References 18 publications.
“…But for longer activities such as sleeping or leaving home, we can afford a more coarse-grained discretisation. A time-slot length of 60 s has been shown to be fine-grained enough, with an acceptable error rate, as shown by [28,29]. For this particular dataset, timeslices of 60 s incur a performance error below 2 % [26], which proves to be a good trade-off between loss of information and efficiency.…”
Section: Data Pre-processing
confidence: 92%
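The 60 s discretisation this statement describes can be sketched as follows. This is a minimal illustration, assuming sensor events arrive as (timestamp-in-seconds, sensor id) pairs; the function and variable names are hypothetical and not taken from the cited works:

```python
from collections import defaultdict

SLOT_SECONDS = 60  # 60 s time slots, the granularity discussed above

def discretise(events, slot_seconds=SLOT_SECONDS):
    """Bin (timestamp_seconds, sensor_id) events into fixed-width time slots.

    Returns a dict mapping slot index -> set of sensor ids active in that slot.
    """
    slots = defaultdict(set)
    for ts, sensor in events:
        slots[int(ts // slot_seconds)].add(sensor)
    return dict(slots)

# Toy usage: the first two events fall in the same 60 s slot (indices 0 and 2).
events = [(5, "motion_kitchen"), (42, "fridge_door"), (130, "motion_bedroom")]
print(discretise(events))
```

A coarser `slot_seconds` loses short activities but shrinks the number of instances to classify, which is the trade-off between loss of information and efficiency the statement refers to.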
“…Active learning has been studied in many previous works as a means of improving AR performance with minimal supervision [12,7,1]. It leverages the unlabeled data to identify uncertain/informative instances for labeling.…”
Section: Related Work
confidence: 99%
“…This problem is further referred to as supervised activity discovery. In the AR domain, the problem of minimizing supervision has been studied in the context of active learning [12,7,1]. To improve recognition results with minimal supervision, an active learning system asks the user to label only the most uncertain instances.…”
Section: Introduction
confidence: 99%
“…An information-gain function is used to select the most informative points in the pool, and requests are issued to the user to obtain the corresponding labels. This produces an updated pool, which is evaluated again to obtain a new label, iteratively [1]. Unfortunately, the user's memory cannot be trusted for reliable annotation if the target activity took place in the distant past or if too many other activities took place after it.…”
Section: Technical Perspective
confidence: 99%
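The pool-based query loop described above can be sketched as follows. This is a hedged illustration using Shannon entropy of the classifier's posterior as the informativeness measure; the cited paper's exact information-gain function is not reproduced here, and all names are hypothetical:

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution (an uncertainty measure)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def most_informative(pool_probs):
    """Return the pool index whose prediction is most uncertain (highest entropy).

    In a full active-learning loop, this instance would be shown to the user
    for labeling, the model retrained, and the pool re-evaluated, iteratively.
    """
    return max(range(len(pool_probs)), key=lambda i: entropy(pool_probs[i]))

# Toy pool: classifier posteriors over three activity classes per unlabeled slot.
pool = [
    [0.9, 0.05, 0.05],   # confident prediction -> low entropy
    [0.4, 0.35, 0.25],   # uncertain prediction -> high entropy, queried first
    [0.7, 0.2, 0.1],
]
print(most_informative(pool))  # index of the instance to ask the user about
```

The annotation-reliability problem the statement raises is orthogonal to this selection step: even a well-chosen query is wasted if the user can no longer recall the activity being asked about.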