2017 20th International Conference on Information Fusion (Fusion)
DOI: 10.23919/icif.2017.8009879
Evaluation metrics for the practical application of URREF ontology: An illustration on data criteria

Cited by 21 publications (6 citation statements)
References 12 publications
“…Deep learning (DL) has changed the data processing and fusion methods in the last decade, but explainability is needed for user adoption [58]. Since DL needs large data to train, uncertainty representations are needed for data evaluation [59]. DL black-box methods have challenged understanding of what is the data association; whereas successful output explainable decisions have shown promise.…”
Section: Deep Learning (DL) - Analytics Without the User
confidence: 99%
“…In the context of target tracking, a variety of evaluation metrics with physical significance have been proposed, which can evaluate the practicability of the tracking algorithm and the consistency of the expected and assessed results. These metrics can be divided into three categories: effectiveness, timeliness, and accuracy, which can be seen in [21, 22, 23, 24]. This paper also followed this division criterion for convenience.…”
Section: A Classification of the Comprehensive Evaluation Metrics
confidence: 99%
“…URREF criteria have generic definitions and can be instantiated for applications with coarse or finer granularity levels: evaluation metrics can be defined for data analysis [31], or more particularly for specific data types [32] or attributes: reliability and credibility [33], trust and self-confidence [34] or veracity [35]. While allowing a continuous analysis of uncertainty representation, quantification and evaluation [36], URREF criteria are detailed enough to capture model-embedded uncertainties [37], their propagation in the context of the decision loop [38] and offer a basis to compare different fusion methods [39].…”
Section: B. URREF Ontology and Sources of Uncertainty
confidence: 99%