Purpose
This article describes the knowledge-mapping framework the authors designed based on their theoretical and practical research on knowledge mapping. It also shows the practical use of the framework for companies interested in knowledge-mapping tools.

Design/methodology/approach
The authors first position their research in the context of knowledge management and knowledge-mapping research and practice. An example of their practical research on knowledge mapping is given as a preliminary step to describing their knowledge-mapping framework, and the use of this framework is illustrated. Finally, the authors validate their framework against a number of commercially available tools with knowledge-mapping functionality.

Findings
The authors found that their framework is useful, insightful and robust when applied to new knowledge-mapping tools and functionality.

Research limitations/implications
The important issue of how to embed knowledge-mapping tools in organizations is outside the scope of this article.

Practical implications
Using concrete examples, the authors illustrate the practical implications of their knowledge-mapping framework for companies. The framework can be used to define knowledge-mapping tool requirements, to assess and compare commercial tools, and to assess the knowledge available in an organization.

Originality/value
Knowledge mapping and its use have been a research issue for some time. Companies have also adopted knowledge-mapping tools to support and stimulate knowledge sharing in their organizations and to help employees find the expertise they are looking for. However, no research has been done on how to help companies decide what kind of knowledge-mapping tool they need, or how tools they already have can be combined into a knowledge-mapping tool. This article describes a new framework the authors devised to help companies do just that.
In this paper, we report on a study that explores the contribution of social tags, professional metadata and automatically generated metadata to the retrieval process. In this study, 194 participants tagged a total of 115 videos, while another 140 participants searched the video collection for answers to eight questions. The results show that in the current context, social tags yield an effective retrieval process, whereas automatically generated metadata do not. To put this result in perspective, participants' search strategies were primarily guided by the search tasks, instead of metadata elements. We have found some evidence that social tagging is effective, as the same terminology was used in the retrieval process as in the process of assigning metadata.
At the end of 2011, a Data Intelligence 4 Librarians course was developed to provide online resources and training for digital preservation practitioners, specifically library staff. Lessons learned during the first rounds of the course, together with developments in the Research Data Management landscape, led to a revision of the positioning, structure and content of the course. This paper describes the three main drivers for the revision, the changes themselves, and the lessons that can be drawn from them after three training rounds in 2014 in the revised format, under the new title of Essentials 4 Data Support.
Peer review of publications is at the core of science and is primarily seen as an instrument for ensuring research quality. However, it is less common to independently assess the quality of the underlying data as well. In the light of the 'data deluge', it makes sense to extend peer review to the data itself and in this way evaluate the degree to which the data are fit for re-use. This paper describes a pilot study at EASY, the electronic archive for (open) research data at our institution. In EASY, researchers can archive their data and add metadata themselves. Committed to open access and data sharing, we are interested in further enriching these metadata with peer reviews. As a pilot, we established a workflow in which researchers who had downloaded data sets from the archive were asked to review the downloaded data set. This paper describes the details of the pilot, including both the quantitative and the qualitative findings. Finally, we discuss issues that need to be solved when such a pilot is turned into a structural peer-review functionality for the archiving system.