2019
DOI: 10.1007/s10462-018-09678-0
Significance of processing chrominance information for scene classification: a review

Cited by 4 publications (8 citation statements)
References 60 publications
“…According to the quantitative relationship between the OpenGL perspective-imaging view parameters and the interior and exterior orientation elements, the projection and model-view matrices required for camera imaging of the urban virtual geographic scene image are computed from the camera position, attitude, and internal parameters. Following the inverse process of OpenGL urban virtual geographic scene imaging, the projection ray of the target in three-dimensional space is computed and intersected with the three-dimensional scene; this intersection gives the actual real-world coordinates of the synthetically recognized target. The formula for the pixel coordinates of the monitored target on the simulated image is as follows [14]:…”
Section: Research On the Methods Of Synchronous (mentioning)
confidence: 99%
“…Based on the bag-of-words (BoW) model introduced for text classification, a bag-of-visual-words approach has been proposed, wherein different classifiers can be used in two stages for image classification 27–32. In the development of an image classification system using the BoW model, the first stage is the unsupervised formation of clusters, followed by histogram feature generation 33–35. The derived histogram features are then classified using supervised algorithms 24,36,37.…”
Section: Review On Image Classification Systems (mentioning)
confidence: 99%
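A minimal sketch of the two-stage BoW pipeline this statement describes, assuming local descriptors (for example SIFT) have already been extracted for each image; the vocabulary size, helper names, and the use of scikit-learn's KMeans and LinearSVC are illustrative choices rather than the cited systems.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_vocabulary(descriptor_sets, k=200, seed=0):
    """Stage 1: unsupervised clustering of local descriptors into a
    visual vocabulary of k 'visual words'."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_desc)

def bow_histogram(descriptors, vocabulary):
    """Histogram of visual-word occurrences for one image."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)       # L1-normalise

def train_bow_classifier(descriptor_sets, labels, k=200):
    """Stage 2: supervised classification of the histogram features.
    descriptor_sets and labels are placeholders, assumed to come from
    a local feature extractor such as OpenCV SIFT."""
    vocab = build_vocabulary(descriptor_sets, k)
    X = np.array([bow_histogram(d, vocab) for d in descriptor_sets])
    clf = LinearSVC().fit(X, labels)
    return vocab, clf
```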
“…Although the various machine-learning models used for classifying images from multiple classes are mathematically well defined, feature-based classification depends on several parameters. For instance, in SIFT-GMM-based image classification, the system must first be tuned for the parameters associated with feature extraction, followed by tuning of the GMM parameters 20,38,39. Techniques exist for estimating the optimal values of these model parameters.…”
Section: Review On Image Classification Systems (mentioning)
confidence: 99%
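One common way to estimate such GMM parameters is model selection over the number of mixture components, for example by the Bayesian Information Criterion. The sketch below shows that idea with scikit-learn, operating on descriptors assumed to come from a SIFT extractor; the component grid and the BIC criterion are assumptions for illustration, not necessarily the tuning procedure used in the cited works.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_gmm_by_bic(descriptors, component_grid=(8, 16, 32, 64), seed=0):
    """Fit a GMM for each candidate number of components and keep the
    model with the lowest Bayesian Information Criterion."""
    best_model, best_bic = None, np.inf
    for k in component_grid:
        gmm = GaussianMixture(n_components=k, covariance_type='diag',
                              random_state=seed).fit(descriptors)
        bic = gmm.bic(descriptors)
        if bic < best_bic:
            best_model, best_bic = gmm, bic
    return best_model, best_bic
```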
“…The purpose of scene recognition research is to use recognition algorithms to process the semantic information contained in image data effectively, extracting image features and determining the category to which a scene image belongs. For complex scene recognition problems, traditional scene recognition methods [10] increasingly show their limitations. Deep neural networks can learn deep image features from large numbers of sample images and show significant advantages in image recognition, enabling lower-cost, more accurate, and more stable navigation services [11].…”
Section: Introduction (mentioning)
confidence: 99%
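As an illustration of the deep-feature approach referred to here, a CNN pretrained on a large image corpus can be reused as a feature extractor with only its final layer retrained for scene categories. The backbone (torchvision ResNet-18), the class count, and the freezing strategy below are illustrative assumptions, not the method of [11].

```python
import torch
import torch.nn as nn
from torchvision import models

def build_scene_classifier(num_classes, freeze_backbone=True):
    """Transfer-learning sketch: reuse a pretrained CNN as a deep
    feature extractor and retrain only the last layer for scenes."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False          # keep pretrained features fixed
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Example: a hypothetical 10-category scene classifier; inputs are
# normalised 3x224x224 tensors from the usual torchvision transforms.
model = build_scene_classifier(num_classes=10).eval()
logits = model(torch.randn(1, 3, 224, 224))  # shape (1, 10)
```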