2023
DOI: 10.33395/sinkron.v8i3.12422
Crime of theft prediction using Machine Learning K-Nearest Neighbour Algorithm at Polresta Bandar Lampung

Abstract: The era of the Industrial Revolution 4.0 is a time when cyber and physical technologies collaborate. This study aims to predict the types of theft crimes that occur in the Bandar Lampung Police area with the K-Nearest Neighbor algorithm, to evaluate the prediction results, and to profile those results for use by Bandar Lampung Police investigators in efforts to prevent and handle criminal acts of theft within the jurisdiction of the Bandar Lampung Police. The approach was carried out using the quant…

Cited by 1 publication (1 citation statement)
References 5 publications
“…K here is a parameter that determines the number of nearest neighbors to be considered (Adjani, Fauzia, & Juliane, 2023; Arifuddin, Pinastawa, Anugraha, & Pradana, 2023). KNN is very effective for datasets with a small number of dimensions (features), but it can become less efficient as the number of dimensions increases, owing to the cost of computing distances between data points (Hermawan & Prianggono, 2023; Triani, Dar, & Yanris, 2023). KNN does not require a learning model in the initial phase, but it needs more time in the testing phase because it must calculate the distance from each new data point to every data point already in the dataset (A. W. Sari, Hermanto, & Defriani, 2023).…”
Section: Literature Review
confidence: 99%
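The properties described above can be illustrated with a minimal KNN sketch. This is not the paper's implementation; the data points and class labels below are hypothetical, chosen only to show that prediction requires no training phase but must scan every stored sample at query time.

```python
# Minimal k-nearest-neighbour sketch: no up-front learning model,
# but each prediction computes the distance from the query to every
# stored sample -- the testing-phase cost noted in the text.
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    # Euclidean distance from the query to all training points.
    dists = [(math.dist(x, query), label)
             for x, label in zip(train_X, train_y)]
    dists.sort(key=lambda pair: pair[0])
    # Majority vote among the k nearest neighbours.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors with two made-up class labels.
X = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
y = ["theft", "theft", "theft", "other", "other", "other"]
print(knn_predict(X, y, (2, 2), k=3))  # query near the first cluster
```

Because the full distance scan is repeated for every query, runtime grows with both dataset size and dimensionality, which matches the efficiency caveat cited above.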