2009
DOI: 10.1007/978-3-642-03996-6_20

Ontology Evaluation through Text Classification

Cited by 13 publications (6 citation statements)
References 9 publications

“…However, the added penalty term $\lambda\|\boldsymbol{m}\|_2^2$ in objective (8) only decreases the weights of less relevant or irrelevant neighbors; it does not select among them. To enable neighbor selection, an $l_1$-norm regularizer is added to the objective, driving the weights of the neighbors to be dropped to zero:

$$\min_{\boldsymbol{m}} f(\boldsymbol{m}) = \|\boldsymbol{B}^{T}\boldsymbol{m} - \tilde{\boldsymbol{x}}\|_2^2 + \lambda_1\|\boldsymbol{m}\|_2^2 + \lambda_2\|\boldsymbol{m}\|_1, \qquad \boldsymbol{m} \in \mathbb{R}^{k},\ \lambda_1, \lambda_2 \in \mathbb{R}_{+} \tag{10}$$

By adding both the $l_2$-norm and the $l_1$-norm regularizer, not only is a unique correlation vector obtained for each test sample, but the most relevant of the k nearest neighbors of the test sample are also selected. In statistical learning, objective function (10) is called elastic net regression.…”
Section: Classification (citation type: mentioning, confidence: 99%)
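As an illustration of the scheme this excerpt describes, here is a minimal runnable sketch that solves objective (10) with an off-the-shelf elastic-net solver to weight, and implicitly select among, the k nearest neighbors of a test sample. The data is synthetic, and the mapping of lambda1, lambda2 onto scikit-learn's (alpha, l1_ratio) parameters is our own working assumption (scikit-learn scales the squared loss by 1/(2n)); this is not code from the cited paper.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
k, d = 5, 20                                 # k nearest neighbors, d features
B = rng.normal(size=(k, d))                  # row i of B: feature vector of neighbor i
x_tilde = B[0] + 0.05 * rng.normal(size=d)   # test sample, close to neighbor 0

# Objective (10): ||B^T m - x_tilde||_2^2 + lambda1*||m||_2^2 + lambda2*||m||_1.
# scikit-learn minimizes (1/(2n))||y - Xw||^2 + alpha*l1_ratio*||w||_1
# + 0.5*alpha*(1 - l1_ratio)*||w||^2, so divide (10) by 2n (n = d) and match terms.
lambda1, lambda2 = 0.1, 0.1
alpha = (lambda2 + 2.0 * lambda1) / (2.0 * d)
l1_ratio = lambda2 / (lambda2 + 2.0 * lambda1)

model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
model.fit(B.T, x_tilde)                      # least-squares fit of B^T m to x_tilde
m = model.coef_                              # correlation/weight vector over neighbors
print("neighbor weights m:", np.round(m, 3))
print("selected neighbors (nonzero weights):", np.flatnonzero(np.abs(m) > 1e-8))
```

Because x_tilde is constructed near neighbor 0, the l1 term zeroes out most other weights, which is exactly the neighbor-selection behavior the excerpt attributes to the added regularizer.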
“…Since the least-squares loss function is convex, the $l_1$-norm and $l_2$-norm regularizers are convex, and a sum of convex functions is convex, objective function (10) is convex.…”
Section: Classification (citation type: mentioning, confidence: 99%)
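For completeness, the convexity argument in this excerpt can be written out in one line; these are standard convex-analysis facts, not taken from the cited paper:

```latex
% Each summand of objective (10) is convex, and a nonnegative weighted sum
% of convex functions is convex, so f is convex.
f(\boldsymbol{m})
  = \underbrace{\|\boldsymbol{B}^{T}\boldsymbol{m}-\tilde{\boldsymbol{x}}\|_2^2}_{\text{convex}}
  + \lambda_1 \underbrace{\|\boldsymbol{m}\|_2^2}_{\text{convex}}
  + \lambda_2 \underbrace{\|\boldsymbol{m}\|_1}_{\text{convex}},
  \qquad \lambda_1,\lambda_2 \ge 0 \;\Longrightarrow\; f \text{ is convex.}
```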
“…This approach is a hybrid of several other approaches from semantic similarity measurement and the vocabulary approach. While other works use user profiling as an intermediate data-driven approach [2] or apply a data-driven approach to text classification [3], Noy et al. [4] add markup of users' ontology-selection history on search terms to analyze ontology selection from the BioPortal ontology repository results.…”
Section: Introduction (citation type: mentioning, confidence: 99%)