2023
DOI: 10.7717/peerj-cs.1152

A novel approach for yoga pose estimation based on in-depth analysis of human body joint detection accuracy

Abstract: Virtual motion and pose from images and video can be estimated by detecting body joints and their interconnections. The human body adopts diverse and complicated poses in yoga, which makes their classification challenging. This study estimates yoga poses from images using a neural network. Five different yoga poses (downdog, tree, plank, warrior2, and goddess), in the form of RGB images, are used as the target inputs. The BlazePose model was used to localize the body joints of the yoga poses. It detected a maximum …
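
For context on the method summarized in the abstract, the joint-localization step can be approximated with the MediaPipe Python bindings, which ship the BlazePose model the paper relies on. The sketch below is illustrative only: the image filename and confidence threshold are assumptions, and the paper's downstream neural-network pose classifier is omitted.

```python
import cv2
import mediapipe as mp

# BlazePose (exposed as MediaPipe Pose) localizes 33 body landmarks per image.
mp_pose = mp.solutions.pose

with mp_pose.Pose(static_image_mode=True, model_complexity=2,
                  min_detection_confidence=0.5) as pose:
    # "tree_pose.jpg" is a placeholder for one of the RGB yoga-pose images.
    image = cv2.imread("tree_pose.jpg")
    if image is None:
        raise SystemExit("could not read the example image")

    # MediaPipe expects RGB input, while OpenCV loads BGR.
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.pose_landmarks:
        # Each landmark carries normalized x, y, a depth estimate z, and a
        # visibility score; these joint coordinates are the kind of features
        # a downstream classifier for the five yoga classes would consume.
        for idx, lm in enumerate(results.pose_landmarks.landmark):
            print(f"joint {idx}: x={lm.x:.3f} y={lm.y:.3f} "
                  f"z={lm.z:.3f} visibility={lm.visibility:.2f}")
```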

Cited by 12 publications (6 citation statements) | References 28 publications (32 reference statements)

“…The ground truths dataset was prepared by segmenting the CBCT images (DICOM format) using a third-party A.I. tool (Makesense.AI) [32]. Teeth were grouped as 0 (ERR depth = 0.5 mm), 1 (ERR depth = 1.0 mm), 2 (ERR depth = 2.0 mm), 3 (no ERR).…”
Section: Methods (mentioning)
confidence: 99%

“…tool (Makesense.AI) [31]. Teeth were grouped as 0 (ERR depth = 0.5 mm), 1 (ERR depth = 1.0 mm), 2 (ERR depth = 2.0 mm), 3 (no ERR).…”
Section: Ground Truth Labelling (mentioning)
confidence: 99%

“…Similarly, another study by Desai & Mewada (2023) developed an AR-based soccer training system that used CV and AI technology to track the user's movements and provide real-time feedback on their performance [19]. The system included various training scenarios and challenges that required the user to perform specific movements, enhancing their motivation and engagement in the training process.…”
Section: Related Work (mentioning)
confidence: 99%