2019
DOI: 10.1007/s11135-019-00882-w
Comparing supervised learning algorithms and artificial neural networks for conflict prediction: performance and applicability of deep learning in the field

Cited by 12 publications (5 citation statements)
References 35 publications
“…After selecting input and target variables, data cleaning was done, which involved feature selection for predictive variables and outlier detection within instance values to ensure data integrity for accurate predictions. Additionally, data transformations are essential for maintaining consistency in deep learning models, including converting categorical variables to numerical form and ensuring that input data is numeric and normalized [ 35 , 36 ]. Chollet [ 37 ] also highlights that deep learning frameworks natively handle numerical data, making the numerical assignment of categorical variables a common preprocessing step for seamless integration into the model.…”
Section: Methods
mentioning confidence: 99%
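The preprocessing steps the excerpt describes (mapping categorical variables to numerical form and normalizing numeric inputs before a deep learning model) can be sketched in plain NumPy; the feature names and values below are hypothetical, not from the cited study:

```python
import numpy as np

# Hypothetical categorical feature: accident severity labels
labels = np.array(["minor", "major", "minor", "fatal"])

# Map categories to integer codes, a common preprocessing step
# before feeding data to a deep learning model
categories, codes = np.unique(labels, return_inverse=True)

# One-hot encode the integer codes so no ordinal relation is implied
one_hot = np.eye(len(categories))[codes]

# Normalize a numeric feature to the [0, 1] range (min-max scaling)
ages = np.array([22.0, 35.0, 58.0, 41.0])
ages_scaled = (ages - ages.min()) / (ages.max() - ages.min())
```

In practice a library such as scikit-learn or a framework's own preprocessing layers would handle these transformations; the sketch only shows the arithmetic involved.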
“…This is particularly important for diverse datasets like construction accident data, where features can have significantly different measurement units. During training, features often span varying scales, leading high-range variables to dominate those with smaller ranges, adversely affecting predictions [ 35 , 36 , 40 ]. Therefore, data normalization aims to minimize bias, ensuring equitable feature contribution and enhancing pattern recognition by minimizing the influence of dominant features on the model's overall performance.…”
Section: Methods
mentioning confidence: 99%
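The scale-dominance problem the excerpt describes, where a high-range feature overwhelms a low-range one, can be illustrated with a small NumPy sketch (the two features and their values are hypothetical):

```python
import numpy as np

# Two hypothetical features with very different measurement units:
# project cost in dollars (large range) and crew size (small range)
X = np.array([[12000.0, 3.0],
              [45000.0, 8.0],
              [30500.0, 5.0]])

# Without scaling, distances and gradients are dominated
# by the cost column, whose range is orders of magnitude larger
ranges = X.max(axis=0) - X.min(axis=0)

# Z-score standardization puts both features on a comparable scale
# (zero mean, unit variance per column)
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
```

After standardization both columns contribute on equal footing, which is the "equitable feature contribution" the passage refers to.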
“…Each node in the fully connected layer produces a single output, with a learnable weight linking it to all the activations of the previous nodes [56]. Notably, before the generated feature matrices are passed to the fully connected layer, all 2D features have to be reshaped into a one-dimensional matrix (1D vector) [65][66][67]. The last layer for classification tasks in a CNN-based pipeline is the Softmax regression layer, which differentiates one class from another.…”
Section: Local Directional Number
mentioning confidence: 99%
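A minimal sketch of the flatten → fully connected → softmax sequence the excerpt describes, in plain NumPy; the shapes, weights, and class count are hypothetical stand-ins for whatever a real CNN would produce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2D feature maps from a convolutional stage:
# 4 maps of size 5x5 for one input sample
feature_maps = rng.random((4, 5, 5))

# Flatten all 2D maps into a single 1D vector before the
# fully connected layer, as the quoted passage describes
flat = feature_maps.reshape(-1)          # shape (100,)

# Fully connected layer: one learnable weight per input activation
n_classes = 3
W = rng.random((n_classes, flat.size))
b = np.zeros(n_classes)
logits = W @ flat + b

# Softmax turns logits into class probabilities for classification
exp = np.exp(logits - logits.max())      # subtract max for stability
probs = exp / exp.sum()
```

Frameworks perform the same reshape implicitly (e.g. a "flatten" layer), but the vectorization step is exactly this.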
“…This tendency proves the need for methodological research to achieve an integrated, advanced body of methodologies that could improve not only performance in processing the huge amounts of data, connections, and resources now available in both physical and virtual spaces, but, mainly, justify their effectiveness in relating political culture theory to its milieu of rather independently developed methodologies, which are now waiting to prove how and why they can contribute to improving political culture theory (Ettensberger 2019). This tendency has been induced and sustained in agent-based modelling and social simulation research by some of the most relevant attempts at comparative analysis of research methodologies (Axtell et al. 1996; Lorenz 2014).…”
Section: Comparative Analysis Testing and Evaluation Of Research Me
mentioning confidence: 99%