Semantic Search
Proceedings of the Twelfth International Conference on World Wide Web (WWW '03), 2003
DOI: 10.1145/775152.775250
Cited by 489 publications (73 citation statements)
References 5 publications
“… Disambiguation: Semantic search goes through disambiguation, to remove any ambiguity and multiple word meanings, to return the most probable search term meaning [44] [45].…”
Section: Semantic Search (mentioning)
confidence: 99%
“…Another goal of our work is to demonstrate the use of crowdsourcing for a large-scale evaluation campaign for a novel search task, which in our case is adhoc object retrieval over RDF. Many semantic search systems of this type, such as [5,6,19], have appeared in the past few years, but none have been evaluated against each other except on a very small scale. Semantic search systems are a subset of information retrieval systems, and thus it would be natural to apply existing IR benchmarks for their evaluation in a large-scale campaign.…”
Section: Crowdsourcing-based Evaluation (mentioning)
confidence: 99%
“…The advantage of the crowd is that it is always available, it is accessible to most people at a relatively small cost, and the workforce scales elastically with increasing evaluation demands. Further, platforms such as Amazon Mechanical Turk provide integrated frameworks for running crowdsourced tasks with minimal effort. We show how crowdsourcing can help execute an evaluation campaign for a search task that has not yet been sufficiently addressed to become part of a large evaluation effort such as TREC: ad-hoc Web object retrieval [10], for which we created a standard data set and queries for the task of object retrieval using real-world data, and the way we employed Mechanical Turk to elicit high quality judgments from the noise of unreliable workers in the crowd.…”
Section: Reliability and Repeatability of the Evaluation Framework (mentioning)
confidence: 99%
“…Another work under the same context can be found in [12] where the architecture of a pure peer-to-peer system is presented that offers a distributed, cooperative, and adaptive environment for URL sharing. Potential benefits of integrating new types of searching data (bookmarks and web history data) for personalization and improved searching accuracy are discussed in [13], [18]. The implication of social bookmarks as a way to enhance search in the web was considered in [20], [35] with mixed results.…”
(mentioning)
confidence: 99%