2020
DOI: 10.1016/j.procs.2020.03.400
Scene Recognition from Image Using Convolutional Neural Network

Cited by 10 publications (3 citation statements)
References 11 publications
“…Literature [20] introduces a method based on OpenCV that improves face recognition using the sweet face method and the optimal map method, while also enhancing the detection and recognition of side faces, occlusion, and exaggerated expressions to improve accuracy. To address scene recognition, Literature [21] explores the PLACES2 dataset, which contains close to 7 million images of various scenes that can serve as inputs for training and testing; it designs the network architecture with a convolutional neural network to learn the images required for scene recognition, and runs simulation experiments after extensive training to verify the reliability of the dataset. Literature [22] proposed a multi-feature fusion method based on weighted sequence fusion to obtain fused feature vectors from active and passive millimeter-wave images of urban and rural areas; applied to millimeter-wave imaging target recognition, this method is shown to outperform approaches based on the original feature vectors.…”
Section: Introduction
confidence: 99%
“…For example, a pixel in an image can be thought of as a vector of density values, and groups of pixels form features, where a feature is a cluster of shapes such as corners, edges, and rounded regions [5]. The simplest form of deep learning model is the convolutional neural network (CNN), which includes convolution filters, pooling, activation function, dropout, fully connected, and classification layers [6]. In addition, there are various deep learning methods for object detection, including Faster R-CNN, You Only Look Once (YOLO), and Single Shot Detection (SSD).…”
Section: Introduction
confidence: 99%
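The layer pipeline named in the statement above (convolution filters, activation, pooling, fully connected, classification) can be sketched in plain NumPy. This is an illustrative forward pass only, not the cited papers' model; the image size, filter size, and class count are assumptions chosen for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Activation function layer."""
    return np.maximum(0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling, halving each spatial dimension."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def softmax(x):
    """Classification layer: turn scores into class probabilities."""
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))        # one grayscale input image (assumed size)
kernel = rng.standard_normal((3, 3))         # one 3x3 convolution filter

feat = max_pool(relu(conv2d(image, kernel)))  # conv -> activation -> pooling
flat = feat.flatten()                         # flatten for the fully connected layer
W = rng.standard_normal((10, flat.size)) * 0.01  # fully connected weights, 10 classes assumed
probs = softmax(W @ flat)                     # class probabilities

print(probs.shape)  # (10,)
```

In a real CNN these filters and weights would be learned by backpropagation; the sketch only shows how the layers compose.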
“…The representation of an image can be considered to comprise a vector of density values per pixel, or features such as clusters of edges and custom shapes, with some representations describing the data better than others [12]. The basic architecture in deep learning is the convolutional neural network (CNN), which consists of convolution, pooling, activation function, dropout, fully connected, and classification layers [13]. In the last few years, deep learning has become a mainstay in object detection [14], classification, and image segmentation.…”
Section: Introduction
confidence: 99%
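Both statements above list dropout among the CNN layers. A minimal sketch of that layer, using the common "inverted dropout" formulation (an assumption here, since the cited papers do not specify a variant): during training each activation is zeroed with probability p and the survivors are scaled by 1/(1-p), while at inference the layer is the identity.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero activations with probability p during training,
    scaling the rest by 1/(1-p) so the expected activation is unchanged;
    act as the identity at inference time."""
    if not training:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p   # True where the activation survives
    return x * mask / (1.0 - p)

x = np.ones((4, 4))
y = dropout(x, p=0.5, rng=np.random.default_rng(0))
print(y.shape)  # (4, 4): surviving entries are 2.0, dropped entries are 0.0
```

The 1/(1-p) rescaling is what lets the same network be used unchanged at test time, which is why most frameworks implement dropout this way.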