2009
DOI: 10.1007/s10844-009-0103-x

Ontology-driven web-based semantic similarity

Abstract: Estimation of the degree of semantic similarity/distance between concepts is a very common problem in research areas such as natural language processing, knowledge acquisition, information retrieval or data mining. In the past, many similarity measures have been proposed, exploiting explicit knowledge (such as the structure of a taxonomy) or implicit knowledge (such as information distribution). In the former case, taxonomies and/or ontologies are used to introduce additional semantics; in the latter case, frequen…
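For orientation, a minimal sketch of the two measure families the abstract contrasts, using standard formulations from the literature (a depth-based taxonomy measure in the style of Wu & Palmer, and an information-content measure in the style of Resnik); these are illustrative only and are not the specific measure proposed in this paper:

\[
\mathrm{sim}_{WP}(c_1, c_2) = \frac{2\,\mathrm{depth}(\mathrm{LCS}(c_1, c_2))}{\mathrm{depth}(c_1) + \mathrm{depth}(c_2)},
\qquad
\mathrm{sim}_{Res}(c_1, c_2) = \mathrm{IC}(\mathrm{LCS}(c_1, c_2)) = -\log p(\mathrm{LCS}(c_1, c_2)),
\]

where LCS is the least common subsumer of the two concepts in the taxonomy and p(·) is a probability estimated from term occurrences. The first family relies on explicit taxonomic structure; the second on the implicit information distribution of a corpus (or, as in this paper, the Web).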

Cited by 71 publications (44 citation statements)
References 28 publications
“…This argument has been supported by recent works on privacy-protection (Chow, Golle, & Staddon, 2008; Sánchez et al., 2013a; Sánchez, Batet, & Viejo, 2013b), which considered the Web as a realistic proxy of social knowledge. In order to compute term probabilities from the Web in an efficient manner, several authors (Sánchez, Batet, Valls, & Gibert, 2010; Turney, 2001) have used the hit count returned by a Web Search Engine (e.g., Bing, Google) when querying the term t. In our approach, term probabilities are computed in this way:…”
Section: Step-0: Defining the User's Personal Privacy Requirements (mentioning)
confidence: 99%
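The formula referred to in the quoted passage is cut off in this excerpt and is left as-is. Purely as an illustration of the general hit-count approach the passage attributes to earlier work (Turney, 2001; Sánchez, Batet, Valls, & Gibert, 2010), here is a minimal Python sketch; the function names, the index-size constant and the example figures are assumptions for illustration, not taken from either paper:

```python
import math

# Illustrative sketch only (not the citing paper's exact formula): estimate a
# term's probability from the hit count returned by a web search engine, and
# derive its information content as IC(t) = -log p(t).

def web_probability(hit_count: int, total_indexed_pages: int) -> float:
    """Approximate p(term) as hits(term) / (assumed size of the web index)."""
    return max(hit_count, 1) / total_indexed_pages  # guard against zero hits

def information_content(hit_count: int, total_indexed_pages: int) -> float:
    """Rarer terms (fewer hits) yield higher information content."""
    return -math.log(web_probability(hit_count, total_indexed_pages))

# Hypothetical usage: hit counts would come from a search engine API
# (e.g., Bing); the figures below are placeholders, not real counts.
ic = information_content(hit_count=50_000_000, total_indexed_pages=10**12)
print(f"IC estimate: {ic:.2f}")
```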
“…According to the theories of knowledge base modeling and manipulation technology, KBS can be categorized as linguistic knowledge bases (Baker, 2014; Fellbaum, 1998; Speer & Havasi, 2012), expert knowledge bases (Driankov, Hellendoorn, & Reinfrank, 2013; Kerr-Wilson & Pedrycz, 2016; Kung & Su, 2007), ontologies (Fensel, 2004; Sanchez, Batet, Valls, & Gibert, 2010) and, most recently, cognitive knowledge bases (Wang, 2014).…”
Section: Knowledge Base Modeling Approaches (mentioning)
confidence: 99%
“…Only Nandi & Bernstein have proposed a technique based on logs from virtual shops for computing similarity between products [26]. However, a number of works have addressed semantic similarity measurement [16], [28], [30], [34], [35], and the use of WI techniques for solving computational problems [19], [36], [37] separately.…”
Section: Related Work (mentioning)
confidence: 99%