2020
DOI: 10.1109/access.2019.2954851
A Latent Feature-Based Multimodality Fusion Method for Theme Classification on Web Map Service

Abstract: Massive numbers of maps have been shared as Web Map Services (WMS) by various providers; they can facilitate people's daily lives and support spatial analysis and management. Theme classification of maps helps users find maps efficiently and supports theme-related applications. Traditionally, metadata is used to analyze map content; few papers use the maps themselves, and fewer still use legends. In practice, people consider metadata, maps, and legends together to understand what a map conveys, yet no study …

Cited by 9 publications (5 citation statements) | References 22 publications
“…Fusing different data sources to enhance model performance has long been investigated in computer vision [43,44,45,46]. While traditional image classification models only require images as inputs, several studies have found that incorporating metadata during the training process could be helpful [47,48,49]. Integrating metadata into image classification could be as simple as concatenating metadata to the image features, training a dedicated learner with metadata and combining the probability outputs with those from image classifiers, or fusing metadata into the network architecture.…”
Section: B. Methods Utilizing Both Images and Metadata
confidence: 99%
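The excerpt above names three ways to integrate metadata into image classification: concatenating metadata onto image features (early fusion), combining probability outputs of separate classifiers (late fusion), or fusing metadata inside the network architecture. A minimal NumPy sketch of the first two follows; all shapes, feature dimensions, and random inputs are illustrative assumptions, not details from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: image features from some backbone (512-d) and a
# small metadata vector per sample (e.g. encoded title/keyword fields).
img_feats = rng.normal(size=(4, 512))   # batch of 4 image feature vectors
meta_feats = rng.normal(size=(4, 16))   # matching metadata feature vectors

# Early fusion: concatenate metadata onto the image features, then feed
# the fused vector to a single downstream classifier.
fused = np.concatenate([img_feats, meta_feats], axis=1)  # shape (4, 528)

def softmax(z):
    """Row-wise softmax, numerically stabilized."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Late fusion: average class probabilities from two separately trained
# classifiers (stand-ins here are softmaxed random scores over 5 classes).
p_img = softmax(rng.normal(size=(4, 5)))   # image-only classifier output
p_meta = softmax(rng.normal(size=(4, 5)))  # metadata-only classifier output
p_combined = (p_img + p_meta) / 2          # rows still sum to 1
```

Architecture-level fusion (the third option) would instead inject the metadata vector at an intermediate layer of the network, which requires a trainable model and is omitted here.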
“…Experimenting with an SVM classifier yielded an improvement from 81.3% to 90.1% in terms of accuracy. Yang et al [47] enhanced theme classification using images from maps and their metadata such as name, title, keywords, and abstract. Langenberg et al [48] used traffic lights' contextual metadata to assign each traffic light to its appropriate lane.…”
Section: B. Methods Utilizing Both Images and Metadata
confidence: 99%
“…[98], [100], [104], [116], [125], [127], [187], [206], [241], [326], [365], [395], [411], Image & Numerical [62], [75], [119], [126], [167], [313], [331], [353], [405], [410], Audio & Text & Sensor [384], Audio & Text [180], [282], [377], [391], [392], Text & Signal [109], Text & Numerical [304], [349], Sensor & Signal [240], [242], [258], [389], Sensor & Numerical [183], Signal & Numerical [205], [257], [260], [318]. Figure 10 displays the extracted information related to each modality and data type with the links between them.…”
Section: B. Task
confidence: 99%
“…A total of 212 articles related to fusion learning were encountered. Of 155 articles, 99 were model-agnostic, where 62 pertained to early [55], [56], [58], [59], [62], [63], [75], [76], [79], [98], [102]- [105], [111], [115], [119], [120], [133], [141], [142], [166], [173], [207], [213], [240], [242], [250], [252], [254], [258], [259], [270], [271], [280], [282], [299], [303], [305]- [307], [313], [320], [324], [326], [330], [334], [337], [347], [349], [357], [359], [364], [367], [381],…”
Section: F. Fusion
confidence: 99%
“…A cloud-based search broker, GeoSearch, integrates data visualization, interactive filtering technologies, and service quality information to help end users narrow down the retrieved candidates (Gui et al, 2013a). Image contents of WMS layers (Yang, Gui, Wu, & Li, 2019) and user relevance feedback were also used in retrieval to deal with semantic gaps in human-computer interactions (Hu, Gui, Cheng, Qi, & Wu, 2016; Li et al, 2019). However, most of these methods were limited to similarity matching, and are unable to perceive geographic semantics in map services (Yang et al, 2019).…”
Section: Geospatial Resource Discovery
confidence: 99%