1998
DOI: 10.1016/s0010-4825(98)00036-5

Trends in medical information retrieval on internet

Cited by 33 publications (11 citation statements)
References 9 publications
“…We adopted an agent software strategy (Baujard, Baujard, Aurel, Boyer, & Appel, 1998; Boyer et al., 1997; Chen, Soong, Grimes, & Orthner, 2004; Gao & Wang, 2004) to search various information sources automatically. The objective was not to develop new agents but to use available ones for searching and retrieving with minimum human intervention.…”
Section: Storage and Agent Software
Mentioning confidence: 99%
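As a loose illustration of the agent strategy described in that statement, the sketch below fans a query out to several sources concurrently, with no per-source human intervention. The endpoints and query format are hypothetical placeholders, not the cited systems' interfaces.

```python
# Hedged sketch: concurrent multi-source retrieval with off-the-shelf tools.
# SOURCES is a hypothetical list of search endpoints, not real services.
from concurrent.futures import ThreadPoolExecutor
import urllib.parse
import urllib.request

SOURCES = [
    "https://medline.example/search?q={q}",      # placeholder endpoint
    "https://webindex.example/find?query={q}",   # placeholder endpoint
]

def fetch(template, query):
    """Send one query to one source and return its raw response text."""
    url = template.format(q=urllib.parse.quote(query))
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def search_all(query):
    """Query every source concurrently and collect the raw results."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda t: fetch(t, query), SOURCES))
```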
“…Web pages are then compared with a set of relevant documents, and those with a similarity score above a certain threshold are considered relevant (Baujard et al., 1998).…”
Section: Web Page Filtering
Mentioning confidence: 99%
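The filtering step quoted above is a similarity-threshold test. A minimal sketch of that general technique follows, assuming TF-IDF vectors and cosine similarity; the reference corpus and the threshold value are illustrative choices, not Baujard et al.'s exact parameters.

```python
# Hedged sketch: keep pages whose cosine similarity to any document in a
# set of known-relevant references exceeds a threshold (value assumed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_relevant(pages, reference_docs, threshold=0.3):
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on references and pages together so both share one vocabulary.
    matrix = vectorizer.fit_transform(list(reference_docs) + list(pages))
    ref_vecs = matrix[: len(reference_docs)]
    page_vecs = matrix[len(reference_docs):]
    sims = cosine_similarity(page_vecs, ref_vecs)
    return [page for page, row in zip(pages, sims) if row.max() >= threshold]
```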
“…An automatic approach was adopted to extract all the terms from a page and compare them with a domain lexicon, similar to the method used in Baujard et al. (1998). Both the number of relevant terms that appear in the page title and the TFIDF scores of the terms that appear in the body of the page were considered.…”
Section: Page Content
Mentioning confidence: 99%
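The scoring just described combines two signals: lexicon hits in the title and TF-IDF weights of lexicon terms in the body. A self-contained sketch follows; the tokenization, the lexicon, and the background corpus statistics are assumptions for illustration.

```python
# Hedged sketch: title lexicon hits plus summed TF-IDF of lexicon terms
# in the body. doc_freq/n_docs describe an assumed background corpus.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def score_page(title, body, lexicon, doc_freq, n_docs):
    """Return (title hit count, summed body TF-IDF over lexicon terms)."""
    title_hits = sum(1 for t in tokenize(title) if t in lexicon)
    tf = Counter(tokenize(body))
    body_score = sum(
        tf[t] * math.log(n_docs / (1 + doc_freq.get(t, 0)))
        for t in tf if t in lexicon
    )
    return title_hits, body_score
```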
“…In general, the filtering techniques can be classified as follows: (1) domain experts manually determine the relevance of each Web page (e.g., Yahoo); (2) the relevance of a Web page is determined by the occurrences of particular keywords (e.g., computer) [6]; (3) TFIDF (term frequency * inverse document frequency) is calculated based on a lexicon created by domain experts, and Web pages with a high similarity score to the lexicon are then considered relevant [1]; and (4) text classification techniques such as the Naive Bayesian classifier are applied [3,9]. Among these, text classification is the most promising approach.…”
Section: Related Work
Mentioning confidence: 99%
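Technique (4) is the easiest to make concrete. The sketch below trains a Naive Bayesian classifier to label pages as relevant or not; the toy training data and the scikit-learn pipeline are illustrative assumptions, not the implementations in the cited works.

```python
# Hedged sketch: Naive Bayes page classification on bag-of-words counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training set (assumed): 1 = medically relevant page, 0 = not.
train_pages = [
    "diabetes treatment guidelines for primary care",
    "cheap flights and hotel deals this weekend",
]
labels = [1, 0]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_pages, labels)

# Shares the term "diabetes" with the relevant training example, so this
# typically comes back as [1].
print(clf.predict(["insulin dosage for type 2 diabetes"]))
```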