Web-accessible video sites, such as YouTube, currently rank among the most trafficked sites in the world. Mobile smartphone penetration is also at an all-time high, as is user appetite for innovative mobile services. This paper anticipates the desire of on-the-go users for video content understanding on mobile smartphones. The implemented tool provides an innovative, compact, visual way for users to use their smartphone to understand the content of a video of interest before download, and the approach goes beyond the prevalent VCR-like controls and static keyframes of today. With a large e-commerce ecosystem evolving around mobile video, this work is highly topical. We present our early design and implementation details and show how to support deeper mobile video understanding than the current limited state of the art.
We describe the speech activity detection (SAD), speaker diarization (SD), and automatic speech recognition (ASR) experiments conducted by the Behavox team for the Interspeech 2020 Fearless Steps Challenge (FSC-2). The relatively small amount of labeled data, the large variety of speakers and channel distortions, and the specific lexicon and speaking style resulted in high error rates for systems trained on this data. In addition to approximately 36 hours of annotated NASA mission recordings, the organizers provided a much larger but unlabeled 19k-hour Apollo-11 corpus, which we also explore for semi-supervised training of ASR acoustic and language models, observing more than 17% relative word error rate improvement compared to training on the FSC-2 data only. We also compare several SAD and SD systems to approach the most difficult tracks of the challenge (track 1 for diarization and ASR), where long 30-minute audio recordings are provided for evaluation without segmentation or speaker information. For all systems, we report substantial performance improvements over the FSC-2 baselines, achieving a first-place ranking for SD and ASR and fourth place for SAD in the challenge.