2023
DOI: 10.48550/arxiv.2302.04844
Preprint

The Gradient of Generative AI Release: Methods and Considerations

Abstract: As increasingly powerful generative AI systems are developed, release methods vary greatly. We propose a framework to assess six levels of access to generative AI systems: fully closed; gradual or staged access; hosted access; cloud-based or API access; downloadable access; and fully open. Each level, from fully closed to fully open, can be viewed as an option along a gradient. We outline key considerations across this gradient: release methods come with tradeoffs, especially around the tension between co…
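The six access levels form an ordered gradient, which can be sketched as an ordered enum. This is an illustrative sketch only; the identifier names and the comparison helper are ours, not from the paper:

```python
from enum import IntEnum

class ReleaseLevel(IntEnum):
    """Six release levels from the proposed framework, ordered closed -> open."""
    FULLY_CLOSED = 0
    GRADUAL_STAGED = 1
    HOSTED = 2
    CLOUD_API = 3
    DOWNLOADABLE = 4
    FULLY_OPEN = 5

def is_more_open(a: ReleaseLevel, b: ReleaseLevel) -> bool:
    """True if level `a` grants wider access than level `b`."""
    return a > b
```

Because the levels sit on a single gradient, an ordered integer enum captures the framework's key property: any two release methods can be compared by how much access they grant, e.g. `is_more_open(ReleaseLevel.DOWNLOADABLE, ReleaseLevel.CLOUD_API)` is `True`.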

Cited by 8 publications (12 citation statements)
References 48 publications
“…Following years of debate about the safety of openly releasing AI models [17,18,36,37], recent years have seen the emergence and proliferation of "open" models, which individuals and organisations have shared on an open access basis on platforms such as the HF Hub [4]. Prior to this, AI models, in particular large language models (LLMs), were principally developed and maintained behind closed doors, albeit with open science practices, such as the sharing of publications on arXiv and code on platforms like GitHub.…”
Section: Related Work ("We Have No Moat": The Emergence of Open Models)
Citation type: mentioning (confidence: 99%)
“…The proliferation of open models, especially foundation models, has ignited heated debate about their potential benefits and risks [16][17][18][19][20][43]. On the one hand, open models are said to promise benefits for research, innovation, and competition by lowering entry barriers and widening access to state-of-the-art AI [44].…”
Section: Related Work ("We Have No Moat": The Emergence of Open Models)
Citation type: mentioning (confidence: 99%)
“…We observe that overall 4% (5 out of 104) of primary studies belong to this theme, spanning the years 2015-2022, and we anticipate this trend to accelerate. Factors like generative AI [37] will certainly be at the forefront of discovering new microarchitecture vulnerabilities, and perhaps of generating after-fixes on-the-fly with automated reasoning. With sustained miniaturization in chip fabrication and advances in lithographic techniques, we anticipate newer generations of chips with affordable onboard AI hardware being readily available, which could be programmed for any application-specific purpose, including security.…”
Section: Further Discussion
Citation type: mentioning (confidence: 99%)
“…Despite the success of the scaling paradigm, significant challenges still exist, especially when the many practical constraints of real-world scenarios have to be met: labeled data can be severely limited (i.e., the few-shot scenario (Song et al, 2022; Ye et al, 2021)), data privacy is critical for many industries and has become the subject of increasingly many regulatory pieces (Commission, 2016, 2020), and compute costs need to be optimized (Strubell et al, 2019). Furthermore, these challenges are made even more complex as stronger foundation models are now available only through APIs (e.g., OpenAI's GPT-3, GPT-4 or ChatGPT, Anthropic's Claude or Google's PaLM (Chowdhery et al, 2022)), which has led to some of their parameters being concealed, presenting new challenges for model adaptation (Solaiman, 2023). This paper is centered on the fundamental task of few-shot text classification, specifically focusing on cloud-based/API access.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)