2008
DOI: 10.1016/j.neunet.2008.05.008

Neurolinguistic approach to natural language processing with applications to medical text analysis

Abstract: Understanding written or spoken language presumably involves spreading neural activation in the brain. This process may be approximated by spreading activation in semantic networks, providing enhanced representations that involve concepts not found directly in the text. The approximation of this process is of great practical and theoretical interest. Although activations of neural circuits involved in the representation of words rapidly change in time, snapshots of these activations spreading through associative ne…
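The abstract describes approximating neural spreading activation by spreading activation over a semantic network, so that concepts not present in the text receive activation and enrich its representation. As a rough illustration only, a minimal sketch might look like the following; the toy graph, edge weights, decay factor, and step count are invented for demonstration and are not the paper's actual network or parameters.

```python
# Minimal sketch of spreading activation over a toy semantic network.
# The graph, weights, decay factor, and iteration count are invented
# for illustration; the paper's medical ontology and parameters differ.

semantic_network = {
    "fever":       {"infection": 0.8, "temperature": 0.6},
    "infection":   {"fever": 0.8, "antibiotic": 0.7},
    "antibiotic":  {"infection": 0.7},
    "temperature": {"fever": 0.6},
}

def spread_activation(seed_terms, network, decay=0.5, steps=2):
    """Spread activation from terms found in the text to related concepts."""
    activation = {term: 1.0 for term in seed_terms}
    for _ in range(steps):
        updates = {}
        for node, value in activation.items():
            for neighbor, weight in network.get(node, {}).items():
                updates[neighbor] = updates.get(neighbor, 0.0) + value * weight * decay
        for node, value in updates.items():
            activation[node] = max(activation.get(node, 0.0), value)
    return activation

# Concepts absent from the text ("antibiotic") become activated,
# yielding the enhanced representation mentioned in the abstract.
print(spread_activation({"fever"}, semantic_network))
```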

Cited by 37 publications (26 citation statements)
References 22 publications
“…A combination of all four indexes can be normalized between 0 and 4, so that 4 means perfect agreement between all the measures on the number of clusters. Prior work using an automated priming approach showed the usefulness of combined indices in the discovery of interesting clusters in patient discharge summaries [1,13,14].…”
Section: Methods
confidence: 99%
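The statement above combines four cluster-validity indexes into a score between 0 and 4, where 4 means every measure agrees on the number of clusters. The excerpt does not say how the combination is computed; one plausible sketch, under the assumption that each index is min-max normalized over the candidate cluster counts and the normalized values are summed, is shown below. The index names and values are invented for illustration.

```python
# Hypothetical sketch: each validity index is min-max normalized to [0, 1]
# over the candidate cluster counts and the four normalized values are summed,
# so the combined score lies in [0, 4]; 4 means every index peaks at the same
# number of clusters. The index values below are invented for illustration.

def normalize(scores):
    """Min-max normalize a dict {k: index_value} to the [0, 1] range."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {k: (v - lo) / span for k, v in scores.items()}

def combined_index(per_index_scores):
    """per_index_scores: {index_name: {k: value}}; returns {k: combined score}."""
    combined = {}
    for scores in per_index_scores.values():
        for k, v in normalize(scores).items():
            combined[k] = combined.get(k, 0.0) + v
    return combined

indices = {
    "silhouette":         {2: 0.40, 3: 0.62, 4: 0.55},
    "dunn":               {2: 0.10, 3: 0.25, 4: 0.20},
    "calinski_harabasz":  {2: 150.0, 3: 310.0, 4: 280.0},
    "davies_bouldin_inv": {2: 0.80, 3: 1.40, 4: 1.10},  # inverted so higher is better
}
print(combined_index(indices))  # k = 3 scores 4.0: all four measures agree
```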
“…Multiple lines of evidence for priming effects come from psychology and neuroimaging. For that reason, the information-retrieval technique presented here is neurocognitively inspired [1,13,14].…”
Section: Neurocognitive Inspirations in Information Retrieval
confidence: 99%
“…The raw features given in the dataset description are used to create a large set of enhanced or hidden features. The topic of feature generation has recently received more attention in the analysis of sequences and images, where graphical models known as Conditional Random Fields have become popular [23], sometimes generating millions of low-level features for natural-text analysis [24]. Attempts at meta-learning on the ensemble level lead to a very rough granularity of the existing models and knowledge [25], thus exploring only a small subspace of all possible models, as is done in multistrategy learning [26].…”
Section: Introduction: Neurocognitive Inspirations for Meta-learning
confidence: 99%
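The statement above mentions expanding the raw dataset features into a much larger set of enhanced or hidden features. The exact transformations are not given in the excerpt; a minimal, hypothetical sketch of such an expansion, using pairwise products and simple threshold indicators, could look like this. The transformations and threshold are assumptions for illustration only.

```python
# Hypothetical sketch of enhanced-feature generation: raw features are
# expanded with pairwise interaction products and binary threshold
# indicators. The cited work uses its own, typically much larger,
# feature-construction schemes.
from itertools import combinations

def enhance_features(raw, threshold=0.5):
    """raw: {name: value}; returns an enlarged feature dictionary."""
    enhanced = dict(raw)
    for (a, va), (b, vb) in combinations(raw.items(), 2):
        enhanced[f"{a}*{b}"] = va * vb               # interaction feature
    for name, value in raw.items():
        enhanced[f"{name}>{threshold}"] = float(value > threshold)  # indicator
    return enhanced

print(enhance_features({"x1": 0.2, "x2": 0.9, "x3": 0.4}))
```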