“…Prior work analyzed these broader impacts statements, finding convergence around a set of topics such as risks to privacy and bias, but often lacking concrete specifics or strategies for mitigation [8,99,127,167]. However, prior work also suggests that many CS researchers may not have the training, resources, or inclination to engage in this type of anticipatory work [45,175], indicating that new tools, training, and processes are needed to support researchers and developers in integrating anticipatory work into their research practices. More recently, researchers have proposed a framework that uses LLMs to anticipate harms for classifiers by generating stakeholders and vignettes for a given scenario [24], evaluating this framework through interviews with responsible AI researchers.…”