2022
DOI: 10.1016/j.knosys.2022.108377
Not all datasets are born equal: On heterogeneous tabular data and adversarial examples

Cited by 12 publications (4 citation statements); references 13 publications.
“…However, it is constrained by its linearity and dependence on well-defined hypothesis functions. In contrast, logistic regression handles classification tasks by constraining outputs between 0 and 1, making it versatile for multi-class scenarios [46]. Naive Bayes, akin to logistic regression, predicts class labels based on joint probability calculations using Bayes' theorem, assuming feature independence, though this assumption may limit its effectiveness in complex or correlated datasets.…”
Section: Integration of AI Workflow with Organoid Systems
confidence: 99%
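The contrast drawn above, logistic regression's outputs bounded in [0, 1] versus Naive Bayes' feature-independence assumption, can be sketched with scikit-learn. The toy dataset and model settings below are illustrative assumptions, not taken from the cited work:

```python
# Minimal sketch: logistic regression vs. Gaussian Naive Bayes on a toy
# three-class problem (synthetic data; settings are illustrative only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Logistic regression maps scores to per-class probabilities in [0, 1].
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = lr.predict_proba(X_te)

# Naive Bayes applies Bayes' theorem under a feature-independence assumption.
nb = GaussianNB().fit(X_tr, y_tr)

print("logistic regression accuracy:", lr.score(X_te, y_te))
print("naive Bayes accuracy:        ", nb.score(X_te, y_te))
```

Each row of `proba` sums to 1, reflecting the bounded probabilistic outputs the excerpt describes; on correlated features, the independence assumption can cost Naive Bayes accuracy relative to logistic regression.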
“…In contrast, logistic regression handles classification tasks by constraining outputs between 0 and 1, making it versatile for multi-class scenarios [46]. Naive Bayes, akin to logistic regression, predicts class labels based on joint probability calculations using Bayes' theorem, assuming feature independence, though this assumption may limit its effectiveness in complex or correlated datasets. Support-Vector Machines (SVM) aim to maximize class separation by identifying optimal hyperplanes, with the capacity to employ non-linear kernels, but they may overfit and demand significant computation, limiting suitability for large, intricate datasets.…”
Section: Supervised Machine Learning Models
confidence: 99%
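The SVM behaviour described above, a maximum-margin hyperplane that a non-linear kernel can bend around curved class boundaries, can be sketched as follows; the two-moons data and kernel choices are illustrative assumptions:

```python
# Minimal sketch: linear vs. RBF-kernel SVM on data that is not linearly
# separable (toy two-moons dataset; settings are illustrative only).
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)  # straight-line decision boundary
rbf_svm = SVC(kernel="rbf").fit(X, y)        # curved boundary via kernel trick

print("linear kernel accuracy:", linear_svm.score(X, y))
print("RBF kernel accuracy:   ", rbf_svm.score(X, y))
```

The RBF kernel fits the interleaved moons more closely than the linear one, illustrating both the flexibility and the overfitting risk the excerpt mentions.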
“…Tree-based models have long been a favourite for making decisions in high-stakes domains such as medicine and finance, due to their interpretability and exceptional performance on structured data [1]. However, recent results have highlighted that tree-based models are, similarly to other machine learning models [2,3], also highly susceptible to adversarial examples [4][5][6], raising concerns about their use in high-stakes domains where errors can have dire consequences.…”
Section: Introduction
confidence: 99%
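The susceptibility of tree-based models noted above can be sketched in miniature: a decision tree's prediction flips whenever a feature is nudged across a learned split threshold. The toy data and perturbation below are illustrative assumptions, not the attack methods from the cited works:

```python
# Minimal sketch: a tiny perturbation across a decision tree's split
# threshold flips its prediction (toy data; illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)          # label depends only on feature 0

tree = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X, y)
thr = tree.tree_.threshold[0]          # the root node's learned split point

x = np.array([[thr + 0.01, 0.0]])      # classified one way...
x_adv = np.array([[thr - 0.01, 0.0]])  # ...flipped by a 0.02 nudge

print("original:   ", tree.predict(x))
print("perturbed:  ", tree.predict(x_adv))
```

Because a tree's decision surface is axis-aligned and piecewise constant, an input near any split threshold is one tiny perturbation away from a different leaf, which is the brittleness the adversarial-examples literature exploits.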
“…Also, counterfactual approaches are not suited to understand why entities are classified as being part of a rumour, due to their focus on features that may change the result [9]. As such, explanations shall be based on a set of related examples, which enable users to generalize their properties [10]. For a rumour represented as a graph, the explanation is then given in terms of related subgraphs of rumours detected in the past.…”
Section: Introduction
confidence: 99%