2018
DOI: 10.1007/978-3-319-95921-4_17
Texture Descriptors for Classifying Sparse, Irregularly Sampled Optical Endomicroscopy Images

Cited by 5 publications (6 citation statements)
References 20 publications
“…Current facial expression recognition methods fall mainly into two categories: traditional handcrafted methods and network models based on deep learning. Although traditional methods are widely used, they are very limited in practical applications [13], [14].…”
Section: Related Work (mentioning)
confidence: 99%
“…Oriented FAST and Rotated BRIEF (ORB) (Wan et al., 2015), (vi) Histogram of Oriented Gradients (HOG) (Gu et al., 2016; Vo et al., 2017), (vii) textons (Gu et al., 2016), (viii) Local Derivative Patterns (LDP) (Vo et al., 2017), as well as (ix) features extracted from Convolutional Neural Networks (CNN) prior to the fully connected layer employed for computing each class score (Gil et al., 2017; Vo et al., 2017). Leonovych et al. (2018) introduced Sparse Irregular Local Binary Patterns (SILBP), an adaptation of LBPs that takes into consideration the sparse, irregular sampling imposed by the imaging fibre bundle on FBEµ images. Feature spaces combining two or more of the above descriptors are also frequent, with descriptors customarily extracted from the whole image; in some cases, however, regular or randomly distributed sub-windows/patches have been used, either on their own or in conjunction with the whole-image feature space.…”
Section: Image Classification (mentioning)
confidence: 99%
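As a rough illustration of the idea behind SILBP-style descriptors, the sketch below (Python/NumPy) computes an LBP-like binary code for each irregularly placed fibre-core sample by thresholding its k nearest neighbouring samples against the centre value, then pools the codes into a histogram. The nearest-neighbour "ring", the choice of k = 8, and the histogram pooling are illustrative assumptions, not the exact formulation of Leonovych et al. (2018).

import numpy as np
from scipy.spatial import cKDTree

def sparse_lbp_histogram(coords, intensities, k=8):
    # coords      : (N, 2) fibre-core positions (irregular sampling grid)
    # intensities : (N,) intensity at each core
    # k           : neighbours acting as the "ring" (k=8 mimics classic LBP; assumption)
    tree = cKDTree(coords)
    _, idx = tree.query(coords, k=k + 1)            # first hit is the point itself
    neighbours = intensities[idx[:, 1:]]            # (N, k) neighbouring intensities
    bits = (neighbours >= intensities[:, None])     # threshold against the centre value
    codes = (bits * (2 ** np.arange(k))).sum(axis=1).astype(np.int64)
    hist = np.bincount(codes, minlength=2 ** k).astype(float)
    return hist / hist.sum()                        # normalised 2**k-bin descriptor

# Toy usage with random core positions and intensities (placeholder data).
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 1.0, size=(500, 2))
intensities = rng.uniform(0.0, 1.0, size=500)
print(sparse_lbp_histogram(coords, intensities).shape)   # (256,) for k=8

A histogram of this kind can then be fed to any of the classifiers surveyed in the following statement.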
“…Feature spaces combining two or more of the above descriptors are also frequent, with descriptors customarily extracted from the whole image; in some cases, however, regular or randomly distributed sub-windows/patches have been used, either on their own or in conjunction with the whole-image feature space. A number of well-established classifiers have been assessed, including (i) k-Nearest Neighbours (kNN) (André et al., 2012b; Desir et al., 2010; Hebert et al., 2012; Saint-Réquier et al., 2009; Srivastava et al., 2005; Srivastava et al., 2008), (ii) Linear and Quadratic Discriminant Analysis (LDA and QDA) (Leonovych et al., 2018; Srivastava et al., 2005; Srivastava et al., 2008), (iii) Support Vector Machines (SVM) and their adaptation with Recursive Feature Elimination (SVM-RFE) (Desir et al., 2010; Desir et al., 2012b; Jaremenko et al., 2015; Leonovych et al., 2018; Petitjean et al., 2009; Rakotomamonjy et al., 2014; Saint-Réquier et al., 2009; Vo et al., 2017; Wan et al., 2015; Zubiolo et al., 2014), (iv) Random Forests (RF) and variants such as Extremely Randomised Trees (ET) (Desir et al., 2012a; Heutte et al., 2016; Jaremenko et al., 2015; Leonovych et al., 2018; Seth et al., 2016; Vo et al., 2017), (v) Gaussian Mixture Models (GMM) (He et al., 2012; Perperidis et al., 2016), (vi) Boosted Cascades of Classifiers (Hebert et al., 2012), (vii) Neural Networks (NN) (Ştefănescu et al., 2016), (viii) Gaussian Process Classifiers (GPC), and (ix) Lasso Generalised Linear Models (GLM) (Seth et al., 2016). Most studies employed leave-k-out and k-fold cross-validation to assess the predictive capacity of the proposed methodology on limited, pre-annotated frames.…”
Section: Image Classification (mentioning)
confidence: 99%
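To make the evaluation protocol described above concrete, here is a minimal stratified k-fold cross-validation sketch (Python/scikit-learn) over pre-annotated frames, using two of the listed classifier families (SVM and Random Forests). The feature matrix, labels, fold count and hyperparameters are placeholders for illustration, not values taken from any of the cited studies.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one texture-descriptor vector per annotated frame.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 256))      # e.g. 256-bin LBP-style histograms (assumption)
y = rng.integers(0, 2, size=120)     # binary frame labels, e.g. normal vs. pathological

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
classifiers = {
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")

Swapping in a different fold count or leave-k-out splitter only changes the cv object; the rest of the loop stays the same.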
“…Handcrafted methods have been widely adopted for FER and rely on features [33], [13]. Nevertheless, they have shown their limitations in practical applications [35], [38]. Lately, deep learning methods, especially Convolutional Neural Networks (CNN), have proved competitive in many vision tasks, e.g., image classification, segmentation, emotion recognition, etc.…”
Section: Related Work (mentioning)
confidence: 99%