Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services 2021
DOI: 10.1145/3458864.3466904
Lost and Found!

Abstract: We demonstrate an application of finding target persons on a surveillance video. Each visually detected participant is tagged with a smartphone ID and the target person with the query ID is highlighted. This work is motivated by the fact that establishing associations between subjects observed in camera images and messages transmitted from their wireless devices can enable fast and reliable tagging. This is particularly helpful when target pedestrians need to be found on public surveillance footage, without th…
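The abstract describes associating visually detected subjects with the smartphone IDs of their wireless transmissions. One common way to frame such vision-to-device association is as an assignment problem over pairwise similarity scores; the following is a minimal, hypothetical sketch of that framing (the score matrix and its values are illustrative, not taken from the paper):

```python
from itertools import permutations

def associate(score):
    """Brute-force assignment: pair each visual track with the phone ID
    that maximizes the total association score. score[i][j] is the
    (hypothetical) similarity between visual track i and phone j.
    For larger instances the Hungarian algorithm would replace this
    exhaustive search."""
    n = len(score)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(score[i][perm[i]] for i in range(n))
        if total > best:
            best, best_perm = total, perm
    return list(best_perm)

# Toy example: 3 visual tracks vs. 3 smartphone IDs
score = [
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.3],
    [0.1, 0.3, 0.7],
]
print(associate(score))  # → [0, 1, 2]
```

Here track i maps to phone `associate(score)[i]`; a highlighted "query ID" would then select the matched visual track.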

Cited by 2 publications (2 citation statements); references 1 publication.
“…We evaluate our method on two datasets, the Vi-Fi Multi-modal Dataset [18,19] and the H3D Dataset [30], for the problem of layout sequence prediction from noisy mobile data. To assess our proposed method, we require datasets that contain layout sequences (trajectories, bounding boxes, depths, etc.)…”
Section: Experiments 5.1 Datasets
Confidence: 99%
“…[6]. Vi-Tag is the current state-of-the-art method on the Vi-Fi dataset [18,19] for the vision-motion identity association task, which aims to match identities in cameras in the vision modality with wireless signals in the mobile modality. The core of Vi-Tag is the X-Translator model, which translates the visual modality to the mobile modality and finds the most similar wireless signals to match the identities in both modalities.…”
Section: Baselines
Confidence: 99%
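The citation above describes Vi-Tag's final step: after X-Translator maps a visual sequence into the mobile modality, the identity is matched to the most similar wireless signal. A minimal sketch of that nearest-neighbor matching step (the translator network itself is not reproduced; the feature vectors and the use of cosine similarity here are assumptions for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_identity(translated_visual, wireless_signals):
    """Return the index of the wireless signal most similar to a
    visual sequence that has already been translated into the mobile
    modality -- the matching step described for Vi-Tag."""
    sims = [cosine(translated_visual, w) for w in wireless_signals]
    return max(range(len(sims)), key=sims.__getitem__)

# Toy example: the translated visual feature is closest to signal 1
translated = [1.0, 0.0]
signals = [[0.0, 1.0], [0.9, 0.1]]
print(match_identity(translated, signals))  # → 1
```

In practice the features would be learned sequence embeddings rather than 2-D toy vectors, but the argmax-over-similarity structure is the same.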