2021 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC)
DOI: 10.1109/mlhpc54614.2021.00011

HPCFAIR: Enabling FAIR AI for HPC Applications

Cited by 10 publications (6 citation statements)
References 10 publications
“…The HPC ED federated catalog builds on a foundation of standardized minimal HPC training metadata for publishing and discovering training information and will merge and standardize the HPC learning metadata from among our partnering organizations into a common metadata schema [22] [15] [19] [20] [21] [9]. We propose that our effort begin with two types of metadata for elements in the federated repository: first, metadata that describes the training material, its access methods, and educational characteristics, including Title, Description, Authors, Publisher, Type, Language, Cost, Format, License, Target Group, Expertise Level, Certification details, and, very importantly, Persistent Identifiers, Tags, or Keywords; and second, metadata that identifies the publisher and source of the training material, so that when an individual selects a specific training item, they can be directed to the source catalog that published that material to browse all available information and to access it. Additionally, we will start with the Research Data Alliance (RDA) "Recommendations for a minimal metadata set to aid harmonized discovery of learning resources", which addresses many of the use cases and needs around basic training sharing and discovery and supports FAIR practices [12].…”
Section: Metadata Taxonomy and Ontologies (mentioning)
confidence: 99%
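The two metadata types described above (descriptive fields plus publisher/source provenance) can be sketched as a small record type. This is a hypothetical illustration only: the field names below are drawn from the list quoted in the citation statement, not from the actual HPC ED or RDA schema.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingMetadata:
    """Illustrative minimal metadata record for a training resource.

    Field names follow the descriptive fields listed in the text
    (Title, Description, Authors, ..., Persistent Identifier, Tags);
    they are assumptions, not the real schema.
    """
    # Type 1: descriptive / educational metadata
    title: str
    description: str
    authors: list
    publisher: str
    persistent_id: str              # e.g. a DOI or handle
    resource_type: str = "course"
    language: str = "en"
    license: str = "CC-BY-4.0"
    tags: list = field(default_factory=list)
    # Type 2: provenance, so a user can be directed back to the
    # source catalog that published the item
    source_catalog: str = ""

    def discoverable(self) -> bool:
        """A record is discoverable only if the core identifying
        fields the text calls out as essential are present."""
        return all([self.title, self.persistent_id, self.publisher])

record = TrainingMetadata(
    title="Intro to MPI",
    description="Hands-on MPI basics for HPC users.",
    authors=["A. Author"],
    publisher="Example HPC Center",
    persistent_id="doi:10.0000/example",
    tags=["MPI", "parallel computing"],
    source_catalog="https://example.org/catalog",
)
print(record.discoverable())  # True: title, PID, and publisher are all set
```

Keeping the provenance field separate from the descriptive fields mirrors the two-type split proposed in the text: discovery happens against the federated record, while access is delegated to the source catalog.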
“…Funded by DOE. This multi-institutional project aims to develop a generic High Performance Computing data management framework [9,10] to make both the training data and AI models of scientific applications FAIR. • The FAIR Surrogate Benchmarks Initiative [11].…”
Section: FAIR Initiatives (mentioning)
confidence: 99%
“…To maximize the impact and utility of such AI models, it's recommended to adopt FAIR principles, ensuring they are findable, accessible, interoperable, and reusable. These principles, initially crafted for scientific datasets [5], have been adapted for research software [6][7][8][9] and other areas, including AI tool development [10,11]. However, applying FAIR principles to AI models presents challenges due to the unique nature of AI models.…”
Section: Introduction (mentioning)
confidence: 99%
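One concrete way FAIR principles transfer from datasets to AI models, as the statement above discusses, is to pair a serialized model artifact with a small metadata sidecar: a persistent identifier makes it findable, a content hash makes reuse verifiable, and a framework tag aids interoperability. The function below is an illustrative sketch, not the HPCFAIR framework's actual API; the PID and field names are assumptions.

```python
import hashlib
import json

def describe_model(model_bytes: bytes, persistent_id: str, framework: str) -> str:
    """Return a JSON metadata sidecar for a serialized model.

    The sidecar carries the pieces FAIR asks for: a persistent
    identifier (findable/accessible), an integrity hash (reusable),
    and the framework name (interoperable). All illustrative.
    """
    sidecar = {
        "persistent_id": persistent_id,                       # hypothetical PID
        "sha256": hashlib.sha256(model_bytes).hexdigest(),    # integrity check
        "framework": framework,                               # e.g. "pytorch"
        "format": "binary",
    }
    return json.dumps(sidecar, indent=2)

# Usage with a stand-in byte string in place of a real model file
meta = describe_model(b"\x00fake-model", "doi:10.0000/model-example", "pytorch")
print(meta)
```

The challenge the citing paper points to is visible even in this sketch: unlike a dataset, a model's reusability also depends on training provenance and runtime environment, which a hash and PID alone do not capture.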