2021
DOI: 10.1007/978-3-030-69544-6_18

Watch, Read and Lookup: Learning to Spot Signs from Multiple Supervisors

Cited by 20 publications (59 citation statements)
References 37 publications
“…Alternative approaches investigate the use of Multiple Instance Learning [6,28,48]. Other recent contributions leverage words from audio-aligned subtitles with keyword spotting methods based on mouthing cues [1], dictionaries [45] and attention maps generated by transformers [61] to annotate large numbers of signs, as well as to learn domain invariant features for improved sign recognition through joint training [36].…”
Section: Related Work
confidence: 99%
“…Similarly to these works, we also aim to automatically annotate sign language videos by making use of audio-aligned subtitles. To this end, we make use of prior keyword spotting methods [1,45]. However, differently from all the other methods mentioned above, we propose an iterative approach, SPOT-ALIGN, that alternates between repeated sign spotting (to obtain more annotations) and jointly training on the resulting annotations together with dictionary exemplars (to obtain better features for spotting).…”
Section: Related Work
confidence: 99%
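The citation above describes an alternating loop: spot signs with the current model, then retrain on the spotted annotations together with dictionary exemplars. A minimal toy sketch of that alternation, with all names, the dictionary-seeded scoring, and the 0.5 threshold being illustrative stand-ins rather than the authors' actual implementation:

```python
# Toy sketch of a SPOT-ALIGN-style alternating loop. The real method works on
# video features with keyword-spotting models; here a "model" is just a
# per-word confidence score, seeded from dictionary exemplars.

def spot_signs(model, videos):
    # Spotting pass: keep (video, word) pairs the model is confident about.
    return [(v, w) for v, w in videos if model.get(w, 0.0) > 0.5]

def train(model, examples):
    # Training stand-in: boost the score of every word seen in the examples.
    new_model = dict(model)
    for _, w in examples:
        new_model[w] = min(1.0, new_model.get(w, 0.0) + 0.3)
    return new_model

def spot_align(videos, dictionary, rounds=3):
    model = {w: 1.0 for _, w in dictionary}  # dictionary exemplars seed spotting
    annotations = []
    for _ in range(rounds):
        annotations = spot_signs(model, videos)           # 1) more annotations
        model = train(model, annotations + dictionary)    # 2) joint training
    return model, annotations

videos = [("vid1", "hello"), ("vid2", "thanks"), ("vid3", "hello")]
dictionary = [("dict1", "hello")]
model, ann = spot_align(videos, dictionary)
print(len(ann))  # → 2: both "hello" clips are spotted; "thanks" was never seeded
```

The point of the sketch is the control flow, not the scoring: only signs seeded by dictionary exemplars can be spotted, and each round's spotted instances feed the next round's training set.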
“…Key shortcomings of existing datasets include: a lack of diversity (in terms of signing environment, number of signer identities, or both), restricted domain of discourse [5,24] (for example, weather broadcasts) and limited scale [3]. MSASL [22], WLASL [25], BSL-DICT [29] and BSL SignBank [14] cover a wide vocabulary, but are restricted to isolated signs. BSLCORPUS [35] provides fine-grained linguistic annotation of conversations and narratives, but is limited to pairs of signers under lab conditions.…”
Section: Related Work
confidence: 99%
“…However, most other people do not understand any sign language, which severely hinders deaf people from blending into society. To facilitate communication between deaf and hearing people, researchers have paid attention to sign language recognition (SLR), which aims at recognizing words or sentences from videos [2], [3], and sign language annotation, which focuses on temporally locating instances of signs among sequences of continuous gestures [4], [5].…”
confidence: 99%