Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services 2021
DOI: 10.1145/3458864.3467679

mmMesh

Abstract: In this paper, we present mmMesh, the first real-time 3D human mesh estimation system using commercial portable millimeter-wave devices. mmMesh is built upon a novel deep learning framework that can dynamically locate the moving subject and capture his/her body shape and pose by analyzing the 3D point cloud generated from the mmWave signals that bounce off the human body. The proposed deep learning framework addresses a series of challenges. First, it encodes a 3D human body model, which enables mmMesh to estim…
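As the abstract notes, the network operates on the 3D point cloud generated from mmWave reflections. Radar frames contain a varying number of detections, while deep models typically require fixed-size inputs; a common preprocessing step is to resample each frame to a fixed point count. This is an illustrative sketch under that assumption — the function name and the value `n_points=128` are not taken from the paper:

```python
import numpy as np

def fix_point_count(frame, n_points=128, rng=None):
    """Resample a (k, 3) mmWave point-cloud frame to exactly
    n_points rows: subsample when the frame is dense, resample
    with replacement when it is sparse. n_points=128 is an
    illustrative choice, not a value from mmMesh."""
    if rng is None:
        rng = np.random.default_rng(0)
    k = frame.shape[0]
    if k >= n_points:
        idx = rng.choice(k, size=n_points, replace=False)
    else:
        idx = rng.choice(k, size=n_points, replace=True)
    return frame[idx]
```

A sparse 40-point frame and a dense 300-point frame both come out as (128, 3) arrays, so consecutive frames can be stacked into one batch tensor.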

Cited by 77 publications (14 citation statements)
References 36 publications
“…Cao et al [20] provided a joint global-local human pose estimation network using a mmWave radar Range-Doppler Matrix (RDM) sequence, which achieves higher estimation accuracy than using the global RDM alone. Xue et al [21] proposed to use the range-azimuth point cloud obtained from mmWave radar, together with a combination of convolutional neural network (CNN) and Long Short-Term Memory (LSTM) networks, to reconstruct a 3D human mesh model. Zhong et al [22] provided a skeletal pose estimation method that uses point convolution to extract local geometric and density information from the point cloud, and they found that this improved pose-estimation accuracy.…”
Section: Human Skeletal Posture Estimation With Linear Array Radar
confidence: 99%
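The range-azimuth point cloud mentioned above is typically rasterized into a 2D grid before being fed to a CNN. The sketch below shows that rasterization step under assumed parameters — the grid resolution (64×64), maximum range, and field of view are illustrative, not values from [21]:

```python
import numpy as np

def range_azimuth_grid(points, r_max=8.0, n_r=64, n_a=64):
    """Rasterize radar detections, given as (range_m, azimuth_rad)
    rows, into a binary occupancy grid suitable as CNN input.
    Assumes azimuths within +/- pi/2; r_max, n_r, n_a are
    illustrative choices, not values from the cited paper."""
    grid = np.zeros((n_r, n_a), dtype=np.float32)
    r = np.clip(points[:, 0], 0.0, r_max - 1e-6)
    a = np.clip(points[:, 1], -np.pi / 2, np.pi / 2 - 1e-6)
    ri = (r / r_max * n_r).astype(int)          # range bin index
    ai = ((a + np.pi / 2) / np.pi * n_a).astype(int)  # azimuth bin index
    grid[ri, ai] = 1.0
    return grid
```

Each detection lights one cell, so a sequence of such grids forms the image-like input that a CNN front end (followed by an LSTM over time) can consume.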
“…Additionally, it conducts the experiments under different conditions, such as poor lighting, rain, smoke, and occlusion with different materials. Similarly, mmMesh [30] uses a TI AWR1843BOOST to collect mmWave data reflected from the human body during 8 daily activities. A VICON motion capture system [37] with a sampling rate of 10 fps is used to obtain high-precision dynamic pose information of the subject, which can be utilized to generate the ground-truth human mesh.…”
Section: Datasets
confidence: 99%
“…Different from the existing vision, IR, and X-ray imaging systems, mmWave signals can penetrate clothes and work in low-visibility conditions. Owing to their better privacy and millimeter-scale ranging resolution, mmWave-based imaging has been widely pursued for pose/posture tracking [30] [112], autonomous driving [114], security applications [65] [113] [119], etc. We summarize the related works in TABLE VIII.…”
Section: Human Imaging
confidence: 99%
“…Nonetheless, 4D-radar-point-cloud-based solutions still pose challenges for high-performance gait recognition. First, the limited number of antennas on commercial mmWave radars results in sparse radar point clouds that lack appearance and geometric information [19]. Second, due to the specular reflection of mmWave signals, only part of the reflections from the human body propagates back to the receiving antennas [20,21].…”
Section: Introduction
confidence: 99%
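One common mitigation for the sparsity described above is to accumulate detections over a short sliding window of frames, trading temporal resolution for point density. A minimal sketch of that idea, assuming a window of 5 frames (the class name and window size are illustrative, not drawn from the cited works):

```python
import numpy as np
from collections import deque

class FrameAccumulator:
    """Sliding-window accumulation of sparse radar point clouds.
    Commercial mmWave radars yield only tens of points per frame;
    stacking the last `window` frames produces a denser cloud for
    downstream gait/pose models."""

    def __init__(self, window=5):
        # deque with maxlen automatically evicts the oldest frame
        self.buf = deque(maxlen=window)

    def push(self, frame):
        # frame: (k, 3) array of x, y, z detections for one radar frame
        self.buf.append(np.asarray(frame, dtype=np.float32))
        # return the accumulated (sum of k_i, 3) point cloud
        return np.vstack(self.buf)
```

Note that accumulation smears points of a moving subject across the window, so in practice it is paired with short windows or motion compensation.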