2020
DOI: 10.1109/tkde.2020.3017122
Representation Learning from Limited Educational Data with Crowdsourced Labels

Cited by 10 publications (10 citation statements) · References 56 publications

“…Many approaches have achieved promising results in tasks such as face recognition (Schroff et al., 2015), person re-identification (Yi et al., 2014), and collaborative filtering (Hsieh et al., 2017). Recently, a body of work has attempted to learn effective embeddings from crowdsourced labels using DML approaches (Xu et al., 2019; Wang et al., 2020b). For example, Xu et al. estimated crowdsourced label confidence and adjusted the DML loss function accordingly (Xu et al., 2019).…”
Section: Deep Metric Learning
confidence: 99%
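
As a concrete illustration of the loss adjustment this statement describes, here is a minimal PyTorch sketch of a confidence-weighted triplet loss. The per-triplet `confidence` weight (e.g. estimated agreement among crowd workers) and all names are illustrative assumptions, not the exact formulation of Xu et al. (2019).

```python
import torch
import torch.nn.functional as F

def confidence_weighted_triplet_loss(anchor, positive, negative,
                                     confidence, margin=1.0):
    # anchor / positive / negative: (B, D) embedding tensors
    # confidence: hypothetical per-triplet weight in [0, 1], e.g. estimated
    # worker agreement (an assumption, not Xu et al.'s exact scheme)
    d_pos = F.pairwise_distance(anchor, positive)  # anchor-positive distances
    d_neg = F.pairwise_distance(anchor, negative)  # anchor-negative distances
    hinge = F.relu(d_pos - d_neg + margin)         # standard triplet hinge
    return (confidence * hinge).mean()             # down-weight noisy triplets
```

Triplets with low label confidence contribute less to the gradient, so unreliable crowdsourced annotations distort the learned embedding space less.
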
“…In this paper, we study and develop solutions that can learn effective neural language representations from crowdsourced labels in an end-to-end manner. Our work focuses on refinements of a popular deep language representation learning paradigm: deep metric learning (DML) (Koch et al., 2015; Xu et al., 2019; Wang et al., 2020b). We aim to develop an algorithm that automatically learns a nonlinear language representation of the crowdsourced data from multiple workers using DNNs.…”
Section: Introduction
confidence: 99%
“…Large-scale Human Evaluation Results. Besides evaluations on the GT set, which is usually limited in educational scenarios (Xu et al., 2019; Wang et al., 2020), we conduct evaluations on the large-scale generated results. We randomly create 100 valid linear equations and ensure that none of them appears in our training set.…”
Section: Make
confidence: 99%
“…Hard Example Mining Strategy: Many instances that can be classified correctly by the model contribute little to the contrastive loss [18,21]. That is to say, a randomly selected instance x_j^c has probably already moved far away from an instance x_i after epochs of training.…”
Section: Multi-task Learning Module
confidence: 99%
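
A minimal sketch of what such in-batch hard negative mining could look like in PyTorch, assuming Euclidean distances over a batch of embeddings; the function and variable names are illustrative, not taken from the cited papers.

```python
import torch

def mine_hard_negatives(embeddings, labels):
    # embeddings: (B, D) tensor; labels: (B,) integer class labels
    dist = torch.cdist(embeddings, embeddings)                  # (B, B) pairwise distances
    negative_mask = labels.unsqueeze(0) != labels.unsqueeze(1)  # True where labels differ
    dist = dist.masked_fill(~negative_mask, float('inf'))       # exclude same-class pairs
    return dist.argmin(dim=1)                                   # hardest negative per anchor
```

Instead of pairing each anchor with a random negative that may already be far away, this selects the closest differently-labeled instance in the batch, which keeps the contrastive loss informative as training progresses.
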
“…To address the above challenges, in this study we propose an end-to-end multi-task framework for automatic dialogic instruction detection from online videos. Specifically, we (1) propose a contrastive-loss-based multi-task framework that distinguishes instances by enlarging the distances between instances of different categories [12,18]; (2) utilize a pre-trained neural language model to robustly handle errors in ASR transcriptions without manual annotation effort [5,15]; and (3) propose a strategy to select and exploit hard instances during training to achieve higher performance [21,18].…”
Section: Introduction
confidence: 99%
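
For reference, here is a minimal PyTorch sketch of the classic pairwise contrastive loss that such a framework builds on: same-category pairs are pulled together and different-category pairs are pushed at least a margin apart. This is the textbook form of the loss, not necessarily the paper's exact multi-task objective.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(x1, x2, same_class, margin=1.0):
    # x1, x2: (B, D) embedding pairs; same_class: (B,) float 0/1 indicator
    d = F.pairwise_distance(x1, x2)
    attract = same_class * d.pow(2)                       # pull same-category pairs together
    repel = (1 - same_class) * F.relu(margin - d).pow(2)  # push different categories apart
    return 0.5 * (attract + repel).mean()
```
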