1999
DOI: 10.1023/a:1009982220290
Cited by 1,372 publications (72 citation statements)
References 16 publications
“…Another measure known by F1 is the average of Precision (Prec) and Recall (Rec) (Yang 1999). Precision is defined as a function of correct positive predictions.…”
Section: Discussion
confidence: 99%
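The quoted statement describes F1 in terms of precision and recall; strictly, F1 is their harmonic mean rather than the arithmetic average. A minimal sketch of the computation (the `tp`, `fp`, `fn` counts below are made-up illustrative values, not from any cited study):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 as the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)  # fraction of positive predictions that are correct
    recall = tp / (tp + fn)     # fraction of actual positives that are recovered
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: precision = 0.8, recall = 2/3, so F1 = 8/11
print(f1_score(tp=8, fp=2, fn=4))  # ≈ 0.727
```

Because the harmonic mean is dominated by the smaller of the two quantities, F1 penalizes a classifier that trades one of precision or recall away entirely.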
“…A second but related possible issue with our study is associated with sample number bias [55–57]. We made corrections with weight factors [58,59] and used the multi-class macro-F1 score [60] to account for the fact that some conditions contained more samples than others, but the predictability of individual conditions nevertheless increased with the number of training samples for that particular condition (S3 Fig). Accuracy limitations could be more thoroughly evaluated through the use of learning curves to determine whether test set accuracies plateau with increasing training set size, but the class imbalance problem and fairly low number of overall samples per condition in our data make it difficult to evaluate accuracies across a broad range of training set sizes.…”
Section: Discussion
confidence: 99%
“…Instead, we assessed prediction accuracy via F1 scores, which jointly assess precision and recall. In particular, for predictions of multiple conditions at once, we scored prediction accuracy via the multi-class macro F1 score [38,60,71] that normalizes individual F1 scores over individual conditions, i.e., it gives each condition equal weight instead of each sample. There are two different macro F1 score calculations that have been proposed in the literature.…”
Section: Methods
confidence: 99%
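The statement above mentions two macro F1 variants without spelling them out. A hedged sketch of the two calculations commonly discussed: averaging the per-class F1 scores, versus taking the harmonic mean of the macro-averaged precision and recall. The per-class `(tp, fp, fn)` counts below are hypothetical, chosen only to show that the two variants can disagree:

```python
def prf(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Per-class precision, recall, and F1, with zero-denominator guards."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Hypothetical per-class confusion counts: (tp, fp, fn) for three classes
counts = [(10, 2, 3), (4, 6, 1), (2, 0, 8)]

# Variant A: mean of the per-class F1 scores (each class weighted equally)
macro_f1_a = sum(prf(*c)[2] for c in counts) / len(counts)

# Variant B: harmonic mean of macro-averaged precision and recall
mean_p = sum(prf(*c)[0] for c in counts) / len(counts)
mean_r = sum(prf(*c)[1] for c in counts) / len(counts)
macro_f1_b = 2 * mean_p * mean_r / (mean_p + mean_r)

print(macro_f1_a, macro_f1_b)  # the two variants differ on imbalanced classes
```

Because variant A applies the harmonic mean within each class before averaging, while variant B averages first, they generally give different values when per-class precision/recall are uneven, which is why authors need to state which one they report.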
“…However, a system may prefer to differentiate between spammers and manipulators based on the topic of opinions; that is, users who post political opinions are classified as manipulators and the others as spammers. This topic-based classification often utilizes machine learning techniques and has been shown to be successful in various applications [Busemann et al 2000; Yang 1999]. Among the manipulators, nearly 1.43% are incorrectly classified as nonmanipulators.…”
Section: Classification Accuracy
confidence: 99%