Published: 2023
DOI: 10.3934/mbe.2023558
Audio-visual multi-modality driven hybrid feature learning model for crowd analysis and classification

Abstract: The high-pace emergence of advanced software systems, low-cost hardware, and decentralized cloud computing technologies has broadened the horizon for vision-based surveillance, monitoring, and control. However, complex and inferior feature learning over visual artefacts or video streams, especially under extreme conditions, confines the majority of at-hand vision-based crowd analysis and classification systems. Retrieving event-sensitive or crowd-type-sensitive spatio-temporal features f…

Cited by 1 publication (1 citation statement). References: 59 publications.
“…Here, g and d were set to 8 and 4, respectively, and the average values of θ (0 deg, 45 deg, 90 deg, and 135 deg) were used to ensure rotation invariance of the GLCM features. Four common feature analysis models, namely ASM [23], contrast [24], correlation [25], and entropy [26], are employed.…”
Section: SAR Image Texture Features and Gray-Level Co-occurrence Matrix
Confidence: 99%
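The quoted statement summarizes a standard GLCM texture pipeline: a g × g co-occurrence matrix (g = 8 gray levels) built at pixel offset d = 4 for four orientations, with the resulting descriptors averaged over the angles to obtain rotation invariance. The sketch below is a minimal illustration of that setup, not code from the cited paper; it assumes the scikit-image GLCM API, a hypothetical helper name `glcm_texture_features`, and computes entropy directly from the normalized matrix since older `graycoprops` releases do not expose it.

```python
# Minimal sketch of the GLCM setup described in the citing work:
# g = 8 gray levels, offset distance d = 4, angles {0, 45, 90, 135} deg
# averaged for rotation invariance; features: ASM, contrast, correlation, entropy.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(image, levels=8, distance=4):
    """Return rotation-invariant ASM, contrast, correlation, and entropy."""
    # Quantize the (assumed 8-bit) image down to `levels` gray values,
    # as required before building a g x g co-occurrence matrix.
    quantized = (image.astype(np.float64) / 256.0 * levels).astype(np.uint8)

    angles = np.deg2rad([0, 45, 90, 135])
    glcm = graycomatrix(quantized,
                        distances=[distance],
                        angles=angles,
                        levels=levels,
                        symmetric=True,
                        normed=True)          # shape: (levels, levels, 1, 4)

    # Average each property over the four orientations.
    feats = {prop: float(graycoprops(glcm, prop).mean())
             for prop in ("ASM", "contrast", "correlation")}

    # Entropy computed directly: -sum(p * log p) per angle, then averaged.
    p = glcm[:, :, 0, :]                      # (levels, levels, 4)
    entropy_per_angle = -np.sum(p * np.log(p + 1e-12), axis=(0, 1))
    feats["entropy"] = float(entropy_per_angle.mean())
    return feats

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    print(glcm_texture_features(demo))
```

Averaging over the four angles, rather than keeping a separate value per orientation, is what makes the descriptors insensitive to in-plane rotation of the texture, which matches the stated purpose in the citing paper.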