2022
DOI: 10.14421/jiska.2022.7.1.56-67

Algoritma K-Nearest Neighbor untuk Memprediksi Prestasi Mahasiswa Berdasarkan Latar Belakang Pendidikan dan Ekonomi (K-Nearest Neighbor Algorithm for Predicting Student Achievement Based on Educational and Economic Background)

Abstract: Student academic performance is one measure of success in higher education. Predicting student academic performance is important because it can support decision-making. The K-Nearest Neighbor (K-NN) algorithm is a method that can be used for this prediction. Normalization is needed to scale the attribute values so that the data fall within a smaller range than the original data. Feature selection is used to eliminate irrelevant features. Cleaning the dataset of outliers aims to delete data that can affect the classific…
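The pipeline the abstract outlines (min-max normalization of attribute values, then K-NN classification by majority vote) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the choice of k are assumptions.

```python
import math
from collections import Counter

def min_max_normalize(rows):
    """Scale each column to [0, 1] so no single attribute dominates the distance."""
    cols = list(zip(*rows))
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    return [
        [(v - lo) / (hi - lo) if hi != lo else 0.0
         for v, lo, hi in zip(row, mins, maxs)]
        for row in rows
    ]

def knn_predict(train_x, train_y, query, k=3):
    """Label the query by majority vote among its k nearest training samples."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train_x, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

In practice the training data and any query would be normalized with the same column minima and maxima before computing distances, since mixing scaled and unscaled values would distort the neighbor ranking.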

Cited by 3 publications (3 citation statements)
References 6 publications
“…K-Nearest Neighbors (K-NN) is a simple, easy-to-implement machine-learning algorithm that is useful for tackling classification and regression problems [17]. K-NN searches for the k objects in the training data that are most similar to the objects in the test data [18].…”
Section: 2. K-Nearest Neighbor (mentioning)
confidence: 99%
“…The data-mining learning process involves retrieving and identifying data, which is then processed into valuable knowledge and insight [11] [12]. K-Nearest Neighbor (K-NN) is an algorithm used to find cases by estimating how similar a new case is to previously existing cases [13] [14].…”
Section: Pendahuluan (Introduction) (unclassified)
“…In general, the original data contain redundant/irrelevant features [21]; these extraneous features provide misleading information, which lowers learning accuracy. For example, in K-Nearest Neighbor, irrelevant features increase the distances between samples from the same class, which makes it harder to classify data correctly [4,13].…”
Section: Introduction (mentioning)
confidence: 99%
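The effect described in the last citation statement can be shown with a toy example (all numbers are made up): one irrelevant feature with arbitrary values inflates the Euclidean distance between two otherwise-similar same-class samples.

```python
import math

a = [1.0, 1.0]          # two samples from the same class,
b = [1.1, 0.9]          # close in the two relevant features
noise_a, noise_b = 9.0, 2.0   # an irrelevant feature with unrelated values

d_relevant = math.dist(a, b)                       # distance on relevant features only
d_with_noise = math.dist(a + [noise_a], b + [noise_b])  # irrelevant feature included

print(round(d_relevant, 3))    # 0.141
print(round(d_with_noise, 3))  # 7.001
```

With the irrelevant feature included, the two same-class points appear roughly fifty times farther apart, so a nearest-neighbor vote can easily pick neighbors from the wrong class. This is why the cited works pair K-NN with feature selection.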