2021
DOI: 10.3906/elk-2005-59
An improved version of multi-view k-nearest neighbors (MVKNN) for multiple view learning

Abstract: Multi-view learning (MVL) is a special type of machine learning that utilizes more than one view, where views are different descriptions of a given sample. Traditionally, classification algorithms such as k-nearest neighbors (KNN) are designed for learning from single-view data. However, many real-world applications involve datasets with multiple views, and each view may contain different and partly independent information, which makes traditional single-view classification approaches ineffective. Theref…
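The abstract is truncated, so the paper's exact combination rule is not visible here. As a minimal sketch of the general multi-view KNN idea it describes — one KNN classifier per view, with per-view predictions combined by majority vote (the voting scheme is an assumption, not necessarily the paper's method) — in pure Python:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k):
    """Plain single-view KNN: majority label among the k nearest training points."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

def multi_view_knn_predict(views, train_y, x_views, k=3):
    """Run one KNN per view, then combine the per-view labels by majority vote."""
    votes = Counter(knn_predict(view_X, train_y, view_x, k)
                    for view_X, view_x in zip(views, x_views))
    return votes.most_common(1)[0][0]

# Toy data: two views (different feature sets) of the same 4 samples.
view1 = [[0.0], [0.1], [1.0], [1.1]]
view2 = [[0.0, 0.2], [0.1, 0.1], [0.9, 1.0], [1.0, 1.1]]
y = ["a", "a", "b", "b"]

print(multi_view_knn_predict([view1, view2], y, [[0.05], [0.05, 0.15]], k=3))  # -> a
```

Each view votes with its own distance metric in its own feature space, which is what lets partly independent views contribute complementary evidence.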

Cited by 12 publications (5 citation statements) · References 40 publications
“…Variance filtering (VAR) is a method of filtering by the variance of the features themselves (Zhou et al., 2020). Its variance is calculated as follows, where X is the feature matrix and p is the probability of one of the classes in that feature: Var[X] = p(1 − p). Relevance filtering can filter out features that are more relevant and meaningful to the labels (Kiyak et al., 2021). This article selects chi-square filtering as one of the alternative feature selection methods.…”
Section: Validation and Results
confidence: 99%
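The variance formula quoted above is the Bernoulli case, i.e. it applies to 0/1 features. A minimal sketch of variance filtering under that assumption (the threshold value and toy matrix are illustrative, not from the citing paper; a production pipeline would typically use a library routine such as scikit-learn's `VarianceThreshold`):

```python
def bernoulli_variance(column):
    """Variance of a 0/1 feature column: Var[X] = p * (1 - p)."""
    p = sum(column) / len(column)
    return p * (1 - p)

def variance_filter(X, threshold):
    """Return the indices of feature columns whose variance exceeds the threshold."""
    n_features = len(X[0])
    return [j for j in range(n_features)
            if bernoulli_variance([row[j] for row in X]) > threshold]

# Toy boolean feature matrix: column 0 is nearly constant, column 1 is balanced.
X = [[0, 0],
     [0, 1],
     [0, 0],
     [1, 1]]

# Column 0: p = 0.25, Var = 0.1875 (filtered out); column 1: p = 0.5, Var = 0.25 (kept).
print(variance_filter(X, threshold=0.2))  # -> [1]
```

Near-constant columns carry little information for any label, which is why variance filtering is a cheap first pass before relevance-based methods like chi-square filtering.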
“…Relevance filtering can filter out features that are more relevant and meaningful to the labels (Kiyak et al., 2021). This article selects chi-square filtering as one of the alternative feature selection methods.…”
Section: Strength Level Improvement Submodel Validation
confidence: 99%
“…Conceptually, compounds are plotted in a multidimensional space, with each dimension representing a descriptor [191,193–195]. Upon introducing a new compound, KNN determines the K nearest neighbors based on their proximity in this space, using a predefined value of K often derived as the square root of the total number of compounds in the dataset. For example, with 400 compounds, K would be approximately 20.…”
Section: K-Nearest Neighbours (KNN)
confidence: 99%
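The K = √N heuristic described above can be sketched directly (the toy training set is illustrative, not the cited chemical dataset):

```python
import math
from collections import Counter

def knn_classify(train_X, train_y, x):
    """KNN using the K = round(sqrt(N)) heuristic for the neighborhood size."""
    k = round(math.sqrt(len(train_X)))
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# With 400 compounds the heuristic gives K = 20, matching the example in the text.
print(round(math.sqrt(400)))  # -> 20

# Toy 1-D descriptor space: 9 training points, so K = 3.
train_X = [[v] for v in [0, 1, 2, 3, 4, 10, 11, 12, 13]]
train_y = ["low"] * 5 + ["high"] * 4
print(knn_classify(train_X, train_y, [2.5]))  # -> low
```

The square-root rule is only a starting point; in practice K is usually tuned by cross-validation, since too small a K overfits noise and too large a K blurs class boundaries.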
“…These methods involve the development of algorithms that can analyze and interpret patterns in large datasets, automatically adjusting their parameters to improve performance over time. Machine learning encompasses various approaches, including supervised learning [16] (where models are trained on labeled data to make predictions), unsupervised learning [17] (for discovering patterns and structures in data), and reinforcement learning [18] (for decision-making in dynamic environments). These methods have wide-ranging applications, from natural language processing and image recognition to autonomous robotics and recommendation systems, and are fundamental in enabling computers to perform tasks that require learning from experience.…”
Section: Introduction
confidence: 99%
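The supervised/unsupervised distinction drawn above can be made concrete with a toy contrast (the data and the midpoint clustering rule are hypothetical illustrations, not from the cited works):

```python
# Supervised: labels are given, and the model predicts a label for new input.
labeled = [(1.0, "small"), (1.2, "small"), (8.0, "large"), (8.5, "large")]

def predict(x):
    """1-nearest-neighbour prediction from the labeled examples."""
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

print(predict(1.1))  # -> small

# Unsupervised: no labels; group the raw values by a simple midpoint threshold.
values = [v for v, _ in labeled]
midpoint = (min(values) + max(values)) / 2
clusters = [int(v > midpoint) for v in values]
print(clusters)  # -> [0, 0, 1, 1]
```

The supervised model needs the label column to learn; the clustering step recovers the same two groups from the values alone, which is the structural difference the paragraph describes.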