2017
DOI: 10.1007/s10044-017-0650-7
A new F-score gradient-based training rule for the linear model

Abstract: …even more difficult, automatic data annotation problems are often regarded as high-class-imbalance problems. In this paper we address the basic recognition model, the linear perceptron, on top of which many other, more complex solutions may be built. The presented research is carried out from the perspective of automatic data annotation. 1.1 Linear recognition models: Training of linear models has a long history. One should note the classic Fisher's Linear Discriminant Analysis (LDA, e.g., [2]). Existence of closed-fo…
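The idea named in the title — training a linear model by gradient steps on an F-score rather than on accuracy or squared error — can be sketched with a "soft" (differentiable) F1: with sigmoid outputs p_i, soft TP = Σ p_i·y_i and 2TP+FP+FN = Σ p_i + Σ y_i, so soft-F1 = 2Σp_i y_i / (Σp_i + Σy_i) is differentiable in the weights. This is an illustrative sketch under those assumptions, not the paper's exact training rule; all names and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced toy data: the positive class is rare, as in annotation tasks
n = 500
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.3).astype(float)  # ~12% positives

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def soft_f1(p, y):
    # Differentiable F1: soft TP = sum(p*y); note sum(p) + sum(y) = 2TP+FP+FN
    return 2.0 * np.sum(p * y) / (np.sum(p) + np.sum(y) + 1e-12)

def f1_ascent_step(w, b, X, y, lr=0.1):
    p = sigmoid(X @ w + b)
    s = np.sum(p) + np.sum(y) + 1e-12
    tp = np.sum(p * y)
    # dF1/dp_i = (2*y_i*s - 2*tp) / s^2, chained through the sigmoid slope
    g = ((2.0 * y * s - 2.0 * tp) / (s * s)) * p * (1.0 - p)
    # Ascend soft-F1 (equivalently, descend the loss 1 - F1)
    return w + lr * (X.T @ g), b + lr * np.sum(g)

w, b = np.zeros(2), 0.0
f1_start = soft_f1(sigmoid(X @ w + b), y)
for _ in range(1000):
    w, b = f1_ascent_step(w, b, X, y)
f1_end = soft_f1(sigmoid(X @ w + b), y)
```

At the uniform start (p = 0.5 everywhere) the soft-F1 is low because every negative contributes to Σp; gradient ascent pushes probabilities of positives up and negatives down, directly optimizing the imbalance-aware criterion instead of accuracy.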

Cited by 6 publications (6 citation statements)
References 22 publications
“…To uniformly compare models and avoid intercept biases, we compared the F-score associated with the predictor across the models and checked for meaningful variations (i.e., >10%) [23,24]. None of the models computed so far shows a confounding effect, with very limited F-score variations (~3%).…”
Section: Discussion
confidence: 89%
“…Three linear regression models have been built for each predictor: (i) the null model, with only the predictor as independent variable; (ii) a model with the predictor and a linear combination of all the other possible confounders; and (iii) a third model which also includes mutual effects among all the predictors. To uniformly compare models and avoid intercept biases, we compared the F-score associated with the predictor across the models and checked for meaningful variations (i.e., >10%) [23,24]. None of the models computed so far shows a confounding effect, with very limited F-score variations (~3%).…”
Section: Methods
confidence: 99%
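The check described in that excerpt — compare the predictor's F-score across the three nested models and flag a confounding effect only if it shifts by more than 10% — amounts to a simple relative-variation test. A minimal sketch, with hypothetical F-score values (not taken from the cited study):

```python
# Hypothetical F-scores of one predictor under the three nested models
f_scores = {
    "null": 0.72,               # predictor alone
    "with_confounders": 0.70,   # + linear combination of all confounders
    "with_interactions": 0.71,  # + mutual effects among predictors
}

baseline = f_scores["null"]
# Relative variation vs. the null model; >10% would flag confounding
variation = {k: abs(v - baseline) / baseline for k, v in f_scores.items()}
confounded = any(v > 0.10 for v in variation.values())
```

With variations of only a few percent, as in the quoted ~3% case, `confounded` stays false and the predictor's effect is judged stable across model specifications.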
“…Two works appeared in 2017: the current authors' proposal of an approximated G-mean [7], and a new F1-score-based loss function [12]. In 2019, two further variants of the F1 score were proposed: the first [13] was used to train a CNN to classify emotions in tweets, and the second [14] was a proposal specifically for linear models, applied to synthetic and image data. Also in 2019, [15] used a CNN and a multi-class variant of the F1 score to perform cell segmentation.…”
Section: Related Work
confidence: 99%
“…Each LDA classifier is tested using the validation set. The resulting arrays are then compared to the true binary seizure state [56], [57]. The validation set is used to find features that are generalizable and have the power to estimate the hidden seizure state.…”
Section: Feature Selection
confidence: 99%
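The per-feature validation scheme in that excerpt — fit an LDA classifier per candidate feature, score its predictions on a held-out set against the true binary seizure state, and keep the features that generalize — can be sketched as follows. For a single feature, Fisher LDA with equal class variances reduces to thresholding at the midpoint of the class means; the data and all names here are synthetic placeholders, not the cited pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic features: only feature 0 carries seizure information
n = 300
y = rng.random(n) < 0.2        # true binary seizure state (~20% seizures)
X = rng.normal(size=(n, 4))
X[:, 0] += 2.0 * y             # informative feature: shifted during seizures

def f1(pred, true):
    tp = np.sum(pred & true)
    fp = np.sum(pred & ~true)
    fn = np.sum(~pred & true)
    return 2 * tp / (2 * tp + fp + fn)

def lda_1d_predict(x_train, y_train, x_val):
    # One-feature Fisher LDA: threshold at the midpoint of the class means
    m0, m1 = x_train[~y_train].mean(), x_train[y_train].mean()
    thr = 0.5 * (m0 + m1)
    return (x_val > thr) if m1 > m0 else (x_val < thr)

# Score each single-feature LDA on the validation split
tr, va = slice(0, 200), slice(200, None)
scores = [f1(lda_1d_predict(X[tr, j], y[tr], X[va, j]), y[va])
          for j in range(4)]
best = int(np.argmax(scores))  # index of the most generalizable feature
```

Scoring on a held-out split, rather than on the training data, is what filters out features that only fit noise; the informative feature wins by a clear F1 margin here.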