Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared Task
DOI: 10.18653/v1/w18-5916

UZH@SMM4H: System Descriptions

Abstract: Our team at the University of Zürich participated in the first 3 of the 4 sub-tasks at the Social Media Mining for Health Applications (SMM4H) shared task. We experimented with different approaches for text classification, namely traditional feature-based classifiers (Logistic Regression and Support Vector Machines), shallow neural networks, RCNNs, and CNNs. This system description paper provides details regarding the different system architectures and the achieved results.
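As a rough illustration of the feature-based approach the abstract mentions, the sketch below trains a TF-IDF + Logistic Regression text classifier with scikit-learn. The data, features, and hyperparameters are placeholders, not the authors' actual SMM4H configuration.

```python
# Minimal sketch of a feature-based text classifier of the kind the
# abstract describes (TF-IDF features + Logistic Regression).
# Data, features, and hyperparameters are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy tweets and binary labels (hypothetical stand-ins for SMM4H data).
texts = ["this drug gave me a headache", "lovely weather today",
         "felt dizzy after the new medication", "watching the game tonight"]
labels = [1, 0, 1, 0]  # 1 = mentions an adverse drug reaction

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("logreg", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)
print(clf.predict(["my head hurts since starting this medication"]))
```

The same pipeline shape would accommodate the SVM variant mentioned in the abstract by swapping the final estimator.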

Cited by 3 publications (7 citation statements); references 6 publications.

“…Those scores were reported across different kinds of classification tasks, generally showing very good scores for straightforward tasks that include only document-level classification. For these binary classification tasks, F1, precision, and recall scores higher than 0.9 are becoming more common 51–55. Scores decrease for harder classification tasks such as normalisation to controlled vocabularies, commonly ranging between 0.2 and 0.6 in precision, recall or F1.…”
Section: Results
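For context on the scores quoted above, precision P, recall R, and F1 = 2PR/(P+R) can be computed as in the following sketch. scikit-learn is assumed here purely for illustration, and the label vectors are made up; this is not code from the cited studies.

```python
# Minimal sketch: computing the precision, recall, and F1 scores
# discussed above from binary gold labels and system predictions.
# The label vectors below are made-up examples.
from sklearn.metrics import precision_recall_fscore_support

gold = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

p, r, f1, _ = precision_recall_fscore_support(gold, pred, average="binary")
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```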
“…Ambiguity also increases complexity, for example when a drug name can have multiple synonyms, trade names, or multiple correct labels 39,56. Noise is a concept that generally refers to data being unreliable due to their unstructured and naturally expressed form, thus causing errors both while labelling gold-standard data and when processing and predicting on new data 31,52 …”
Section: Results
“…For this last merging step, we gave the system resulting from merging early stopping systems nine times the weight of the other system, which resulted from merging systems trained for a fixed number of epochs. For the last run (MTL+BERT), we combined predictions from all 20 BERT systems with the first system and a second MTL configuration which uses different word embeddings (Ellendorff et al., 2018) and omits lexicon features.…”
Section: Experiments and System Descriptions
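One plausible reading of the 9:1 weighting described in this excerpt is a weighted average of per-class probabilities from the two merged systems. The sketch below illustrates that interpretation; the array shapes, values, and the averaging itself are assumptions, not the cited authors' exact procedure.

```python
# Minimal sketch of merging two systems' predictions with a 9:1 weight,
# assuming each system outputs per-class probabilities per example.
# Illustrative only; not the cited authors' code.
import numpy as np

probs_early_stopping = np.array([[0.7, 0.3], [0.2, 0.8]])  # merged early-stopping systems
probs_fixed_epochs   = np.array([[0.4, 0.6], [0.3, 0.7]])  # merged fixed-epoch systems

merged = (9 * probs_early_stopping + 1 * probs_fixed_epochs) / 10
predictions = merged.argmax(axis=1)
print(merged)       # weighted class probabilities
print(predictions)  # final class per example
```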