2020
DOI: 10.1109/taslp.2020.2996082
Multimodal Word Discovery and Retrieval With Spoken Descriptions and Visual Concepts

Cited by 1 publication (1 citation statement)
References 32 publications
“…In the embedding space, the image representation can serve as supervision information to train the speech encoder. This task, which relies on a matching relationship between images and their corresponding spoken descriptions, has spawned several other cross-modal tasks between vision and speech, e.g., the segmentation of objects in an image and of keywords in an utterance [28], [35], and multimodal word discovery [36], [37]. Most recently, Wang et al. [38], [39] proposed the S2IGAN model to generate images from spoken descriptions.…”
Section: B Cross-modal Learning Between Visual and Speech
confidence: 99%
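The matching-based supervision described in the citation statement can be made concrete with a small sketch. Below is a minimal, illustrative PyTorch example, not the cited paper's actual architecture: the encoder design, dimensions, and the symmetric InfoNCE-style contrastive loss are all assumptions chosen to show how image embeddings can supervise a speech encoder in a shared space without transcriptions.

```python
# Minimal sketch of cross-modal matching between images and spoken
# descriptions. Module names, dimensions, and the loss are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechEncoder(nn.Module):
    """Toy speech encoder: bidirectional GRU over acoustic frames,
    mean-pooled over time, projected into the joint embedding space."""
    def __init__(self, n_mels=40, hidden=256, embed_dim=512):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, embed_dim)

    def forward(self, mels):                 # mels: (B, T, n_mels)
        out, _ = self.rnn(mels)              # (B, T, 2*hidden)
        emb = self.proj(out.mean(dim=1))     # temporal mean pooling
        return F.normalize(emb, dim=-1)      # unit-norm embedding

def matching_loss(img_emb, spch_emb, temperature=0.07):
    """Symmetric contrastive loss: matched image/speech pairs lie on
    the diagonal of the similarity matrix; other in-batch pairs act
    as negatives."""
    logits = img_emb @ spch_emb.t() / temperature   # (B, B) similarities
    targets = torch.arange(logits.size(0))          # diagonal = matches
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with random stand-in data; in practice the image embeddings
# would come from a pretrained visual backbone (frozen or fine-tuned).
B, T = 8, 100
img_emb = F.normalize(torch.randn(B, 512), dim=-1)
spch_emb = SpeechEncoder()(torch.randn(B, T, 40))
print(matching_loss(img_emb, spch_emb).item())
```

Because each modality is classified against in-batch negatives, the visual representation alone provides the training signal for the speech side, which is the property that the downstream segmentation and word-discovery tasks cited above build on.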