As the field of digital preservation (DP) matures, there is an increasing need to systematically assess an organization's abilities to achieve its digital preservation goals, and a wide variety of assessment tools have been created for this purpose. To map the landscape of research in this area, evaluate the current maturity of knowledge on this central question in DP, and provide direction for future research, this paper reviews assessment frameworks in digital preservation through a systematic literature search and categorizes the literature by type of research. The analysis shows that publication output around assessment in digital preservation has increased markedly over time, but most existing work focuses on developing new models rather than on rigorous evaluation and validation of existing frameworks. Significant gaps are present in the application of robust conceptual foundations and design methods, and in the level of empirical evidence available to enable the evaluation and validation of assessment models. The analysis and comparison with other fields suggest that the design of assessment models in DP should be studied rigorously in both theory and practice, and that the development of future models will benefit from applying existing methods, processes, and principles for model design.
The increasing use and prominence of web archives raises the urgency of establishing mechanisms for transparency in the making of web archives to facilitate the process of evaluating a web archive's provenance, scoping, and absences. Some choices and process events are captured automatically, but their interactions are not currently well understood or documented. This study examines the decision space of web archives and its role in shaping what is and what is not captured in the web archiving process. By comparing how three different web archives collections were created and documented, we investigate how curatorial decisions interact with technical and external factors, and we compare commonalities and differences. The findings reveal the need to understand both the social and technical context that shapes those decisions and the ways in which these individual decisions interact. Based on the study, we propose a framework for documenting key dimensions of a collection that addresses the situated nature of the organizational context, technical specificities, and unique characteristics of web materials that are the focus of a collection. The framework enables future researchers to undertake empirical work studying the process of creating web archives collections in different contexts.
Our work considers the sociotechnical and organisational constraints of web archiving in order to understand how these factors and contingencies influence research engagement with national web collections. In this article, we compare and contrast our experiences of undertaking web archival research at two national web archives: the UK Web Archive located at the British Library and the Netarchive at the Royal Danish Library. Based on personal interactions with the collections, interviews with library staff, and observations of web archiving activities, we invoke three conceptual devices (orientating, auditing, and constructing) to describe common research practices and associated challenges in the context of each national web archive. Through this framework we centre the early stages of the research process, which are often given only cursory attention in methodological descriptions of web archival research, to discuss the epistemological entanglements of researcher practices, instruments, tools, and methods that create the conditions of possibility for new knowledge and scholarship in this space. In this analysis, we highlight the significant time and energy required on the part of researchers to begin using national web archives, as well as the value of engaging with the curatorial infrastructure that enables web archiving in practice. Focusing an analysis on these research infrastructures facilitates a discussion of how these web archival interfaces both enable and foreclose particular forms of researcher engagement with the past Web, and in turn contributes to critical ongoing debates surrounding the opportunities and constraints of digital sources, methodologies, and claims within the Digital Humanities.
Use of computational methods for the exploration and analysis of web archive sources is emerging in new disciplines such as digital humanities. This raises urgent questions about how such research projects process web archival material using computational methods to construct their findings. This paper aims to enable web archives scholars to document their practices systematically in order to improve the transparency of their methods. We adopt the Research Object framework to characterize three case studies that use computational methods to analyze web archives within digital history research. We then discuss how the framework can support the characterization of research methods and serve as a basis for discussions of methods and of issues such as reuse and provenance. The results suggest that the framework provides an effective conceptual perspective to describe and analyze, at a high level, the computational methods used in web archive research and to make transparent the choices made in the process. The documentation of the research process contributes to a better understanding of the findings and their provenance, and to the possible reuse of data, methods, and workflows.
To understand and improve their current abilities and maturity, organizations use diagnostic instruments such as maturity models and other assessment frameworks. Increasing numbers of these are being developed in digital curation. Their central role in strategic decision making raises the need to evaluate their fitness for this purpose and develop guidelines for their design and evaluation. A comprehensive review of assessment frameworks, however, found little evidence that existing assessment frameworks have been evaluated systematically, and no methods for their evaluation. This article proposes a new methodology for evaluating the design and use of assessment frameworks. It builds on prior research on maturity models and combines analytic and empirical evaluation methods to explain how the design of assessment frameworks influences their application in practice, and how the design process can effectively take this into account. We present the evaluation methodology and its application to two frameworks. The evaluation results lead to guidelines for the design process of assessment frameworks in digital curation. The methodology provides insights to the designers of the evaluated frameworks that they can consider in future revisions; methodical guidance for researchers in the field; and practical insights and words of caution to organizations keen on diagnosing their abilities.