2022
DOI: 10.1175/bams-d-21-0020.1
NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES)

Abstract: We introduce the National Science Foundation (NSF) AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES). This AI institute was funded in 2020 as part of a new NSF initiative to advance foundational AI research across a wide variety of domains. To date, AI2ES is the only NSF AI institute focusing on environmental science applications. Our institute focuses on developing trustworthy AI methods for weather, climate, and coastal hazards. The AI methods will revo…


Cited by 7 publications (4 citation statements)
References 43 publications
“…There is no clear way to decide whether the model has been trained sufficiently, whether it performs well on a previously unseen set of inputs (generalizability) or, most importantly, whether we can meaningfully interpret its output (interpretability). Trustworthy ML is a growing area of interest as users favor models that are fair, reliable, and robust, which becomes especially important for high‐stakes decision making (e.g., McGovern et al., 2022). The precise definition of “trust” in a model depends on its application, but for integration into the model hierarchy, we aspire for a trustworthy ML model to be both generalizable and interpretable, as we do with any model.…”
Section: Data‐driven Methods: The Emergence of Machine Learning
confidence: 99%
“…Trust in AI, and the extent to which AI is deemed trustworthy, is contingent on communications processes and products in AI, such as model or XAI outputs, or interfaces for imposing constraints on AI models; the visual presence of AI tends to increase trust in AI (Glikson & Woolley, 2020). Many studies have called for or investigated explanations and XAI (McGovern, Bostrom, et al., 2022) as an approach to increasing trust (e.g., Hoffman et al., 2018; Lockey et al., 2021; Miller, 2019; Mueller et al., 2019; Tulio et al., 2007), while such explanations have often relied on visualizations (McGovern et al., 2019).…”
Section: Trust, Risk, and Scientific Uncertainty
confidence: 99%
“…The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) is a convergent center focused on AI for the Earth and environmental sciences (ES) (McGovern et al. 2022). We are developing novel AI methods for real‐world high‐impact environmental use cases that ensure that we address the entire chain of relevant issues.…”
Section: Introduction to AI2ES
confidence: 99%