2020
DOI: 10.14569/ijacsa.2020.0110330

Adapted Lesk Algorithm based Word Sense Disambiguation using the Context Information

Abstract: The process of correctly identifying the meaning of a polysemous word from a given context is known as Word Sense Disambiguation (WSD) in natural language processing (NLP). An Adapted Lesk algorithm-based system is proposed, which makes use of a knowledge-based approach. This work utilizes WordNet as the knowledge source (lexical database). The proposed system has three units: input query, pre-processing, and WSD classifier. The task of the input query unit is to take the input sentence (which is an unstructured query) from t…
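The abstract describes a three-unit pipeline (input query, pre-processing, WSD classifier) built on the Adapted Lesk algorithm over WordNet. Below is a minimal sketch of such a pipeline using NLTK's WordNet interface; the preprocessing choices, function names, and simple gloss-overlap scoring are illustrative assumptions rather than the paper's actual implementation.

```python
# Minimal sketch of the three-unit pipeline described above, using NLTK's
# WordNet interface. All names and the gloss-overlap scoring are assumptions
# for illustration, not the paper's implementation.
import nltk
from nltk.corpus import stopwords, wordnet as wn
from nltk.tokenize import word_tokenize

# One-time downloads: nltk.download('punkt'), nltk.download('stopwords'),
# nltk.download('wordnet')

def preprocess(query):
    """Pre-processing unit: turn the unstructured query into content words."""
    stops = set(stopwords.words('english'))
    tokens = word_tokenize(query.lower())
    return [t for t in tokens if t.isalpha() and t not in stops]

def disambiguate(target, context_words):
    """WSD classifier unit: pick the WordNet sense of `target` whose gloss
    shares the most words with the context (Lesk-style overlap)."""
    context = set(context_words)
    best_sense, best_score = None, -1
    for sense in wn.synsets(target):
        gloss = set(word_tokenize(sense.definition().lower()))
        score = len(gloss & context)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

# Input-query unit: the raw, unstructured sentence supplied by the user.
query = "He deposited the money in the bank near the river"
sense = disambiguate("bank", preprocess(query))
print(sense, "-", sense.definition() if sense else "no sense found")
```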


Cited by 6 publications (4 citation statements) | References 16 publications
“…With three key phases, including document pre-processing and hyperplane computation, it outperforms recent systems, achieving top scores on PAN 2013 and PAN 2014 datasets. Kumar et al (2020) introduce an Adapted Lesk algorithm-based Word Sense Disambiguation (WSD) system, employing a knowledge-based approach with WordNet. The system consists of three units: Input query, Pre-Processing, and WSD classifier.…”
Section: Unsupervised Method
confidence: 99%
“…To disambiguate, the algorithm calculates the overlap between the words in the context and the words in the definitions to determine the appropriate sense. One of the major drawbacks of the Lesk algorithm is its computational complexity, which stems from the exponential growth of comparisons needed for numerous candidate senses associated with polysemous words across various lexical resources [12].…”
Section: Related Work
confidence: 99%
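To make the complexity point in the quoted passage concrete, the following rough sketch (an illustration, not drawn from the cited paper) counts how many joint sense combinations a naive Lesk-style comparison faces when all content words of a short phrase are disambiguated together; the example words are arbitrary.

```python
# Rough illustration of why naive Lesk-style joint disambiguation explodes:
# the number of sense combinations is the product of per-word sense counts.
from math import prod
from nltk.corpus import wordnet as wn

phrase_words = ["bank", "interest", "rate", "charge"]   # arbitrary example
sense_counts = {w: len(wn.synsets(w)) for w in phrase_words}
print(sense_counts)
print("joint sense combinations:", prod(sense_counts.values()))
```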
“…Their model achieved an F1 score of 66%. The study in [12] used the adapted Lesk algorithm and trained a classifier responsible for retrieving senses of the target word from WordNet, assigning a score based on the number of words common between the target gloss and the context words' glosses.…”
Section: Related Work
confidence: 99%
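A hedged sketch of the gloss-versus-gloss scoring summarised in the quoted passage: each candidate sense of the target word is scored by the number of words its WordNet gloss shares with the glosses of the context words' senses. The helper names, tokenisation, and tie-breaking details are assumptions, not the paper's code.

```python
# Illustrative adapted-Lesk-style scoring: count words shared between the
# target sense's gloss and the glosses of the context words' senses.
from nltk.corpus import wordnet as wn
from nltk.tokenize import word_tokenize

def context_gloss_bag(context_words):
    """Union of gloss tokens over all senses of all context words."""
    bag = set()
    for word in context_words:
        for sense in wn.synsets(word):
            bag |= set(word_tokenize(sense.definition().lower()))
    return bag

def score_senses(target, context_words):
    """Return (sense, overlap score) pairs for the target, best first."""
    context_gloss = context_gloss_bag(context_words)
    scored = []
    for sense in wn.synsets(target):
        target_gloss = set(word_tokenize(sense.definition().lower()))
        scored.append((sense, len(target_gloss & context_gloss)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

for sense, score in score_senses("bank", ["money", "deposit"])[:3]:
    print(score, sense.name(), "-", sense.definition())
```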
“…The authors [11] have proposed a system that consists of three units: WSD classifier, pre-processing, and input query. The input-query unit receives an unstructured query from the user, while the pre-processing unit transforms this query into a structured form, which is then passed to the WSD classifier.…”
Section: Related Work
confidence: 99%