2015
DOI: 10.1017/s1351324915000224
ISO standard modeling of a large Arabic dictionary

Abstract: In this paper, we address the problem of building large-coverage dictionaries of the Arabic language, usable both for direct human reading and for automatic Natural Language Processing. For these purposes, we propose a normalized and implemented model, based on the Lexical Markup Framework (LMF, ISO 24613) and the Data Category Registry (DCR, ISO 12620), which allows stable and well-defined interoperability of lexical resources through a unification of linguistic concepts. Starting from the features of the Arabic language, a…


Cited by 16 publications (10 citation statements)
References 20 publications
“…Experiments use, on the one hand, the LMF standardized Arabic dictionary [10] as a resource to exploit the synonymy of words and the properties of semantic arguments (semantic class and thematic role) and, on the other hand, the Stanford Parser [4] and the MADAMIRA tool [16] to reduce words to their stem or lemma by removing suffixes and prefixes. After that, they match the remaining word against verbal or nominal patterns and use the Weka software package [5] to find the optimal parameters in the learning phase.…”
Section: Experiments and Results
confidence: 99%
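The affix-stripping step described in the statement above can be sketched as follows. This is a toy illustration only, not the behavior of MADAMIRA or the Stanford Parser; the transliterated affix lists and the sample word are invented for the example.

```python
# Toy affix-stripping lemmatizer, in the spirit of the pipeline described above.
# The prefix/suffix lists below are hypothetical transliterated examples.

PREFIXES = ["al", "wa", "bi"]   # assumed example prefixes
SUFFIXES = ["un", "at", "in"]   # assumed example suffixes

def strip_affixes(word: str) -> str:
    """Remove at most one known prefix and one known suffix, if present."""
    for p in PREFIXES:
        # Only strip if enough material remains to be a plausible stem.
        if word.startswith(p) and len(word) > len(p) + 2:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) > len(s) + 2:
            word = word[:-len(s)]
            break
    return word
```

A real system would then match the stripped form against verbal or nominal patterns; here, e.g., `strip_affixes("alkitabun")` reduces the word to `"kitab"`, while a word with no listed affixes is returned unchanged.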
“…Therefore, we believe that enriching our database of Arabic sentences can significantly enhance the results. In addition, the performance of our system depends on the lemmatizer, the syntactic parser, and the synonyms and semantic predicates retrieved from the Arabic LMF dictionary [10]. According to a comparative evaluation study of Arabic stemmers and syntactic parsers, MADAMIRA [16] and the Stanford Parser [4] achieved the highest accuracy.…”
Section: Results
confidence: 99%
“…In the NLP domain, several works have focused on lexicon standardization. Among the works dealing with Arabic lexica on the basis of the LMF standard, we can mention, for example, the development of the ArabicLDB [3]. The ArabicLDB aims at the construction of a conjugation system for Arabic verbs and nouns.…”
Section: Overview on LMF
confidence: 99%
“…However, these sentence similarity methods based on semantic information alone do not directly induce a realistic similarity score. For this reason, some approaches, called hybrid methods, estimate the similarity between sentences from both syntactic and semantic information: for example, [12] and [6], which take into account the semantic information and the word-order information implied in the sentence, [19], which uses syntactic dependencies, and [21], which takes into account synonymy relations between word senses and semantic predicates based on the LMF standardized Arabic dictionary [9]. In the latter, the authors compute sentence similarity for the Arabic language.…”
Section: Introduction
confidence: 99%
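The hybrid methods mentioned above can be sketched as a linear combination of a semantic score and a word-order score. This is a minimal illustration of the general scheme, not the exact formula of any cited work; the weight `delta` and the word-order vector formulation are assumptions for the example.

```python
# Sketch of a hybrid sentence-similarity score: a weighted combination of a
# semantic component and a word-order component (weight delta is assumed).

def word_order_similarity(r1, r2):
    """Similarity of two word-order vectors (positions of shared words).

    Returns 1.0 for identical orderings, decreasing as orderings diverge.
    """
    num = sum((a - b) ** 2 for a, b in zip(r1, r2)) ** 0.5
    den = sum((a + b) ** 2 for a, b in zip(r1, r2)) ** 0.5
    return 1.0 - num / den if den else 1.0

def hybrid_similarity(semantic_sim, r1, r2, delta=0.8):
    """Combine a precomputed semantic similarity with word-order similarity.

    delta weights the semantic component against the word-order component.
    """
    return delta * semantic_sim + (1 - delta) * word_order_similarity(r1, r2)
```

With identical word orders and a semantic score of 1.0, the combined score is 1.0; swapping two words lowers only the word-order component, so `delta` controls how much reordering is penalized.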