2013
DOI: 10.1007/978-3-642-36678-9_11

Bag-of-Colors for Biomedical Document Image Classification

Abstract: The number of biomedical publications has increased noticeably in the last 30 years. Clinicians and medical researchers regularly have unmet information needs, but finding publications relevant to a clinical situation requires more time for searching than is usually available. The techniques described in this article are used to classify images from the biomedical open access literature into categories, which can potentially reduce the search time. Only the visual information of the images is used to classify im…

Cited by 13 publications (4 citation statements); references 31 publications.
“…A set of low-level visual descriptors is selected from the descriptor bank of ParaDISE (Schaer, Markonis, & Müller; Markonis et al., 2017) and their combination is explored (García Seco de Herrera, Markonis, Schaer, et al., 2013) to optimize the outcomes. The following descriptors are chosen after performance tests on a different ImageCLEF database:

- Bag of Visual Words (BoVW) using the Scale Invariant Feature Transform (SIFT) (Lowe) with spatial pyramid matching (Lazebnik, Schmid, & Ponce) (BoVW-SPM): each image is represented by a histogram of local descriptors expressed as visual words from a previously learned vocabulary; spatial information is added to the BoVW-SIFT descriptor.
- Bag of Colors (BoC) (García Seco de Herrera, Markonis, & Müller) with an n × n spatial grid (Grid BoC): each image is represented by a histogram of the colors from a previously learned vocabulary; spatial information is added to the BoC descriptor.
- Color and Edge Directivity Descriptor (CEDD) (Chatzichristofis & Boutalis): color and texture information is captured in a 144-bin histogram; little computation is needed for its extraction.
- Tamura texture (Tamura, Mori, & Yamawaki): extracts six visual properties: coarseness, contrast, directionality, line-likeness, regularity, and roughness.…”
Section: Case-based Retrieval Techniques (mentioning, confidence: 99%)
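The statement above describes both BoVW and BoC as histograms over a previously learned vocabulary, with an optional spatial grid adding location information. A minimal sketch of a (Grid) Bag-of-Colors descriptor along those lines is given below; it assumes pixel colors are 3-vectors (e.g. CIELab or RGB) and uses k-means from scikit-learn to learn the color vocabulary. The function names, parameters, and clustering choice are illustrative, not the authors' implementation.

```python
# Minimal sketch of a Bag-of-Colors (BoC) descriptor with an optional spatial grid.
# Assumptions: images are arrays of shape (H, W, 3); k-means from scikit-learn
# stands in for whatever vocabulary-learning step the cited work actually uses.
import numpy as np
from sklearn.cluster import KMeans

def learn_color_vocabulary(images, n_words=100, sample_per_image=1000, seed=0):
    """Cluster sampled pixel colors into a vocabulary of 'color words'."""
    rng = np.random.default_rng(seed)
    samples = []
    for img in images:
        pixels = img.reshape(-1, 3).astype(float)
        idx = rng.choice(len(pixels), size=min(sample_per_image, len(pixels)), replace=False)
        samples.append(pixels[idx])
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(np.vstack(samples))

def boc_histogram(img, vocabulary, grid=1):
    """Histogram of nearest color words; grid > 1 gives an n x n Grid BoC."""
    h, w, _ = img.shape
    n_words = vocabulary.n_clusters
    hists = []
    for gy in range(grid):
        for gx in range(grid):
            cell = img[gy * h // grid:(gy + 1) * h // grid,
                       gx * w // grid:(gx + 1) * w // grid].reshape(-1, 3).astype(float)
            words = vocabulary.predict(cell)
            hist = np.bincount(words, minlength=n_words).astype(float)
            hists.append(hist / max(hist.sum(), 1.0))  # L1-normalise each cell
    return np.concatenate(hists)
```

A common reason to cluster in CIELab rather than RGB is that Euclidean distances there track perceived color differences more closely, making the learned color words more meaningful; the grid size trades spatial detail against descriptor length.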
“…The info table is then updated, including the modality information and the subfigure URLs. The method presented in [42] was used for the modality classification. The method proposed in [43] was used for the compound figure separation.…”
Section: Latefusion Seeker (mentioning, confidence: 99%)
“…For the visual indexing, BoVW and Bag-of-Colors (BoC) [42] representations were used as shape and color features of the images. E2LSH was used as an ANN indexing method.…”
Section: Latefusion Seeker (mentioning, confidence: 99%)
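The statement above pairs the BoVW and BoC vectors with E2LSH for approximate nearest-neighbour (ANN) indexing. Below is a minimal sketch of Euclidean (p-stable) LSH bucketing in that spirit: each table hashes a vector with Gaussian projections h(v) = floor((a·v + b) / w), candidates are collected from matching buckets, and are then re-ranked by exact distance. The class name, hash parameters, and the brute-force re-ranking are assumptions, not the cited system's implementation.

```python
# Minimal sketch of Euclidean (p-stable) LSH bucketing in the spirit of E2LSH,
# used only to illustrate ANN indexing over BoVW/BoC vectors; all parameters are
# illustrative defaults, not the cited system's configuration.
import numpy as np
from collections import defaultdict

class E2LSHIndex:
    def __init__(self, dim, n_tables=8, n_bits=12, bucket_width=0.5, seed=0):
        rng = np.random.default_rng(seed)
        # Each table uses n_bits Gaussian projections: h(v) = floor((a.v + b) / w)
        self.a = rng.normal(size=(n_tables, n_bits, dim))
        self.b = rng.uniform(0, bucket_width, size=(n_tables, n_bits))
        self.w = bucket_width
        self.tables = [defaultdict(list) for _ in range(n_tables)]
        self.vectors = []

    def _keys(self, v):
        h = np.floor((self.a @ v + self.b) / self.w).astype(int)
        return [tuple(row) for row in h]

    def add(self, v):
        v = np.asarray(v, dtype=float)
        idx = len(self.vectors)
        self.vectors.append(v)
        for table, key in zip(self.tables, self._keys(v)):
            table[key].append(idx)

    def query(self, v, k=5):
        v = np.asarray(v, dtype=float)
        candidates = set()
        for table, key in zip(self.tables, self._keys(v)):
            candidates.update(table[key])
        # Re-rank the candidate set by exact Euclidean distance
        return sorted(candidates, key=lambda i: np.linalg.norm(self.vectors[i] - v))[:k]
```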
“…The same features are used as in [16] to measure only the effect of semi-supervised learning on the classifiers used: color and edge directivity descriptor (CEDD) [8]; bag of visual words using scale-invariant feature transform (BoVW-SIFT) [25]; fuzzy color and texture histogram (FCTH) [9]; bag of colors (BoC) [15]; and fuzzy color histogram (FCH) [19].…”
Section: Multi-modal Features (mentioning, confidence: 99%)
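One way to realise the setting described above is to fuse the per-descriptor vectors (CEDD, BoVW-SIFT, FCTH, BoC, FCH) by normalised concatenation and then wrap a supervised classifier in a self-training loop, a standard semi-supervised baseline. The sketch below assumes scikit-learn's SelfTrainingClassifier with an SVC base learner and per-block L2 normalisation; the descriptor extractors themselves are out of scope here, and the cited work may combine features and propagate labels differently.

```python
# Minimal sketch: early fusion of several descriptor blocks plus self-training.
# Assumptions: each block is a (n_samples, dim) matrix already extracted; the
# SVC/SelfTrainingClassifier choices are illustrative, not the cited pipeline.
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

def fuse_features(descriptor_blocks):
    """Early fusion: L2-normalise each descriptor block, then concatenate."""
    return np.hstack([normalize(block) for block in descriptor_blocks])

def train_semisupervised(labeled_blocks, y_labeled, unlabeled_blocks):
    X = np.vstack([fuse_features(labeled_blocks), fuse_features(unlabeled_blocks)])
    # scikit-learn marks unlabeled samples with the label -1
    y = np.concatenate([y_labeled, -np.ones(len(unlabeled_blocks[0]), dtype=int)])
    base = SVC(kernel="rbf", probability=True)  # predict_proba needed for self-training
    return SelfTrainingClassifier(base, threshold=0.8).fit(X, y)
```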