2020
DOI: 10.1155/2020/8812928

A Deep Learning‐Based Approach to Enable Action Recognition for Construction Equipment

Abstract: To support smart construction, the digital twin has become a well-recognized concept for virtually representing a physical facility. It is equally important to recognize human actions and the movements of construction equipment in virtual construction scenes. Compared to the extensive research on human action recognition (HAR), which can be applied to identify construction workers, research in the field of construction equipment action recognition (CEAR) is very limited, mainly due to the lack of available d…


Cited by 24 publications (21 citation statements)
References 46 publications (60 reference statements)
“…The CNN-based method for generating the dance spectrum of the lower limbs is proposed in this study. The network structure consists of two convolution layers, a pooling layer, and three fully connected layers; the softmax function outputs the probability value of each human action category, and the category with the largest probability value is taken as the network's prediction [29]. To prevent overfitting, a dropout layer is connected behind each fully connected layer.…”
Section: Preprocessing of 3D Motion Capture Data
confidence: 99%
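
The architecture quoted above is concrete enough to illustrate. Below is a minimal PyTorch sketch of the described layout (two convolution layers, one pooling layer, three fully connected layers each followed by a dropout layer, and a softmax over action categories); the input shape, filter counts, hidden sizes, class count, and all names are illustrative assumptions, not values taken from the cited paper.

```python
# Minimal sketch of the CNN described in the statement above: two convolution
# layers, one pooling layer, three fully connected layers (each followed by a
# dropout layer, as the statement specifies), and a softmax whose largest
# probability gives the predicted action category. Input shape (1x32x32),
# filter counts, hidden sizes, and num_classes are assumptions for illustration.
import torch
import torch.nn as nn

class LowerLimbActionCNN(nn.Module):  # hypothetical name
    def __init__(self, num_classes: int = 10):  # class count assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolution layer 1
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # convolution layer 2
            nn.ReLU(),
            nn.MaxPool2d(2),                              # the single pooling layer
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 256),  # fully connected layer 1 (for 32x32 input)
            nn.ReLU(),
            nn.Dropout(0.5),               # dropout behind FC layer 1
            nn.Linear(256, 128),           # fully connected layer 2
            nn.ReLU(),
            nn.Dropout(0.5),               # dropout behind FC layer 2
            nn.Linear(128, num_classes),   # fully connected layer 3
            nn.Dropout(0.5),               # dropout behind FC layer 3, per the
                                           # statement (a no-op at evaluation time)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax converts logits to per-category probabilities.
        return torch.softmax(self.classifier(self.features(x)), dim=1)

# Usage: the predicted category is the one with the largest probability.
x = torch.randn(1, 1, 32, 32)            # one (assumed) motion-capture image
predicted = LowerLimbActionCNN()(x).argmax(dim=1)
```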
“…It has more advantages than sensor technology (Pradhananga and Teizer, 2013; Yang et al., 2011; Brilakis et al., 2011; Teizer et al., 2010). To date, the literature has studied the identification and classification of construction entities (Memarzadeh et al., 2013; Tajeen and Zhu, 2014), the activity recognition of construction entities (Yang et al., 2014; Akhavian and Behzadan, 2015; Kim and Chi, 2019; Zhang et al., 2020; Kim et al., 2018a), the localization of relevant entities on the construction site (Kim, 2018; Golparvar-Fard et al., 2013), the capture of their trajectories (Angah and Chen, 2020; Roberts and Golparvar-Fard, 2019), and the automatic identification of hazards based on the distance and spatial information between identified entities (Fang et al., 2020). Hyojoo et al. (2019) presented a vision-based collision warning system based on automated 3D position estimation of each worker with monocular vision, intended to protect workers from potentially dangerous situations, such as collisions between equipment and workers in close proximity.…”
Section: Literature Review, 2.1 Application of Computer Vision in Const…
confidence: 99%
“…Greif et al. (2020) developed a digital twin of silos to optimize in-situ logistics. Zhang et al. (2020) created a DT model of construction equipment to enable action recognition. However, the application of DT to deformation monitoring of deep foundation pit excavation is immature and still has some limitations.…”
Section: Related Work
confidence: 99%