2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00555

Person-in-WiFi: Fine-Grained Person Perception Using WiFi

Abstract: Figure 1. Person-in-WiFi. Top: WiFi antennas as sensors for person perception; receiver antennas record WiFi signals as inputs to Person-in-WiFi. The remaining rows show the images used to annotate the WiFi signals and the two outputs: person segmentation masks and body poses. […] estimation in an end-to-end manner. Experimental results on over 10^5 frames under 16 indoor scenes demonstrate that Person-in-WiFi achieves person perception comparable to approaches using 2D images.

Cited by 140 publications (74 citation statements). References 59 publications.
“…These algorithms have utilized generator models paired with appropriate targets and loss functions to solve many image-to-image translation problems like image denoising [10,71], image super-resolution [15,28,32], image colorization [27,31], and real-to-art image translations [26,27,76]. While there has been some work that utilizes RF-based techniques employing machine learning to solve through-wall human pose estimation [58,73,74], our paper is the first to present general principles for using ideas of image-to-image translation for the localization problem. Specifically, the data distribution of the indoor WiFi localization data is different from all image translation work and from RF-based pose estimation data.…”
Section: Related Work
confidence: 99%
“…In the case of moving objects, the fluctuation of multipath fading differs depending on the human activity and/or the number of moving people. This characteristic is used for human movement detection, human behavior classification, human or vehicle identification, human counting in a room, and human counting in a passageway [3,5,11–19], among others. In these applications, classification algorithms such as support vector machines (SVMs), recurrent neural networks (RNNs), and long short-term memory (LSTM) networks are used for estimation.…”
Section: Related Studies
confidence: 99%
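The fluctuation-of-multipath-fading idea in the statement above can be sketched in a few lines: when a person moves, the received amplitude fluctuates more, so even a sliding-window variance feature separates "still" from "moving" segments. The synthetic data, window length, and threshold below are illustrative assumptions, not taken from any cited system.

```python
import numpy as np

# Synthetic received-amplitude stream: a quiet channel followed by strong
# fading caused by a moving person. Noise levels are illustrative.
rng = np.random.default_rng(1)
still = 1.0 + 0.01 * rng.standard_normal(200)    # no movement
moving = 1.0 + 0.30 * rng.standard_normal(200)   # person moving
amplitude = np.concatenate([still, moving])

window = 50
def is_moving(segment, threshold=0.01):
    """Flag a window as 'moving' when its amplitude variance is high."""
    return float(np.var(segment)) > threshold

flags = [is_moving(amplitude[i:i + window]) for i in range(0, 400, window)]
print(flags)
```

A deployed classifier would feed richer CSI features into an SVM, RNN, or LSTM as the statement notes, but this variance feature is the intuition those models build on.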
“…For example, three PIR sensors are used to detect people's movement [4]. On the other hand, LiDAR sensors are a crucial component for autonomous cars, but they are more expensive than Wi-Fi devices [5]. We expect Wi-Fi devices with CSI collection to be a cost-effective solution.…”
Section: Introduction
confidence: 99%
“…This article presents some typical deep neural network models used in CSI-based behavior recognition applications, including Autoencoder, Convolutional Neural Network (CNN), LSTM, RNN, Residual Neural Network (ResNet), and Restricted Boltzmann Machine (RBM). In addition, we introduce these applications in detail, such as daily behavior recognition [138]–[142], falling detection [143], [144], syncope detection [145], hand gesture recognition [8], [146]–[149], sign language recognition [150], gait and walking direction recognition [151], [152], human presence detection [153]–[155], crowd counting [156]–[159], user authentication [160]–[163], and respiration monitoring [22].…”
Section: Deep Learning-based Behavior Recognition
confidence: 99%
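The deep models listed above all consume CSI in roughly the same way: a stream of per-subcarrier measurements is cut into fixed-length windows, each stacked into a 2D (subcarrier × time) array that a CNN or recurrent model can ingest. A minimal sketch of that preprocessing, with illustrative sizes that are assumptions rather than values from any cited work:

```python
import numpy as np

# Illustrative CSI stream: 30 subcarriers sampled 1000 times.
subcarriers, samples, win = 30, 1000, 100
rng = np.random.default_rng(2)
csi_stream = rng.standard_normal((subcarriers, samples))

# Cut the stream into non-overlapping windows and stack them into a batch of
# 2D (subcarrier x time) "images" -- the usual input layout for CSI models.
windows = [csi_stream[:, i:i + win] for i in range(0, samples, win)]
batch = np.stack(windows)
print(batch.shape)   # (num_windows, subcarriers, win)
```

In practice the windows usually overlap and carry amplitude/phase channels, but the windowing-and-stacking step is common to the CNN, LSTM, and ResNet pipelines the statement surveys.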