2020
DOI: 10.1109/jstars.2019.2948921
Daphne: A Virtual Assistant for Designing Earth Observation Distributed Spacecraft Missions

Abstract: This article describes Daphne, a virtual assistant for designing Earth observation distributed spacecraft missions. It is, to the best of our knowledge, the first virtual assistant for such an application. The article provides a thorough description of Daphne, including its question answering system and the main features we have implemented to help system engineers design distributed spacecraft missions. In addition, the article describes a study performed at NASA's Jet Propulsion Laboratory (JPL) to assess the u…

Cited by 14 publications (2 citation statements)
References 59 publications (66 reference statements)
“…The high-level architecture of the DEA, the requirements, and the results of a set of interviews involving ESA experts are presented in previous publications [1], [2]. To the best of our knowledge, the Daphne virtual assistant, in development at Texas A&M University and presented in [3], is the concept most similar to the DEA. However, Daphne focuses on Earth Observation missions and builds its knowledge on a manually defined ontology and structured database.…”
Section: Introduction
confidence: 99%
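The statement above notes that Daphne builds its knowledge on a manually defined ontology and structured database. The paper's actual schema and question answering pipeline are not reproduced here; purely as an illustrative sketch of that general pattern (matching a templated question against a small structured mission database, with the database contents and function names invented for this example):

```python
import re

# Hypothetical mini "mission database" -- Daphne's real ontology and
# schema are not reproduced here; these entries are for illustration.
MISSIONS = {
    "Landsat-8": {"orbit": "sun-synchronous", "instruments": ["OLI", "TIRS"]},
    "Sentinel-1": {"orbit": "sun-synchronous", "instruments": ["C-SAR"]},
}

def answer(question: str) -> str:
    """Answer a templated question by looking up the structured database."""
    m = re.match(r"what instruments does (\S+) carry\??", question.lower())
    if m:
        # Case-insensitive lookup of the mission name.
        name = next((k for k in MISSIONS if k.lower() == m.group(1)), None)
        if name:
            return ", ".join(MISSIONS[name]["instruments"])
    return "Sorry, I don't know."

print(answer("What instruments does Landsat-8 carry?"))  # OLI, TIRS
```

A production assistant would replace the regular expression with proper intent classification and entity extraction, but the separation between question templates and a structured knowledge store is the same.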
“…Moreover, several Virtual-Assistant (VA) related studies have also tried to define custom evaluation criteria. In [107], the authors compare their VA against both a simpler interactive data-exploration scheme and a non-interactive solution search. They assess the use of their assistant on three factors: performance, with task-specific metrics; human learning, using questions and tests at the end of each task; and usability, for example using the System Usability Scale [108].…”
Section: Assistant Evaluation and Development of Shared Benchmarks
confidence: 99%
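The System Usability Scale referenced in [108] is a standard ten-item questionnaire scored by a fixed rule: each odd (positively worded) item contributes its 1–5 response minus 1, each even (negatively worded) item contributes 5 minus its response, and the sum is scaled by 2.5 to a 0–100 range. A minimal sketch of that scoring:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert responses (1 = strongly disagree .. 5 = strongly agree).
    Odd-numbered items are positively worded, even-numbered negatively."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best-case answers -> 100.0
print(sus_score([3] * 10))                        # all-neutral answers -> 50.0
```

Scores above roughly 68 are conventionally read as above-average usability, which is how studies like [107] interpret the raw number.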