Companion Proceedings of the Web Conference 2022
DOI: 10.1145/3487553.3524207

Multi-task Ranking with User Behaviors for Text-video Search

Cited by 3 publications (1 citation statement)
References 10 publications
“…Alternative measures such as precision [105,106], recall [107,108], and F1-score [109,110] are used to evaluate model performance, particularly when dealing with imbalanced data. Additionally, evaluation measures like the area under the curve (AUC) [111,112] and receiver operating characteristic (ROC) [10,113] curve are frequently used to assess binary classifiers. These measures provide insights into the model's ability to differentiate between positive and negative instances, particularly when the costs associated with false positives and false negatives differ.…”
Section: Quality Models and Evaluation Measures (mentioning)
confidence: 99%
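
The citing statement above summarizes standard binary-classification metrics. As a concrete illustration (not taken from the cited paper or the citing one), the following minimal Python sketch computes precision, recall, F1-score, and ROC AUC with scikit-learn's metrics module on hypothetical labels and scores:

# Minimal sketch of the evaluation measures named in the citing statement.
# The labels, predictions, and scores below are hypothetical examples.
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                     # ground-truth labels
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]                     # thresholded predictions
y_score = [0.1, 0.6, 0.8, 0.9, 0.4, 0.2, 0.7, 0.3]    # predicted probabilities

# Precision penalizes false positives, recall penalizes false negatives;
# F1 is their harmonic mean, which is informative on imbalanced data.
print("precision:", precision_score(y_true, y_pred))  # 0.75
print("recall:   ", recall_score(y_true, y_pred))     # 0.75
print("F1:       ", f1_score(y_true, y_pred))         # 0.75

# ROC AUC is threshold-free: it ranks instances by score, so it reflects the
# model's ability to separate positives from negatives across all thresholds.
print("ROC AUC:  ", roc_auc_score(y_true, y_score))

This uses only the standard scikit-learn API; when the costs of false positives and false negatives differ, as the statement notes, the threshold applied to y_score (and hence precision and recall) can be tuned accordingly.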