Collecting, integrating, storing, and analyzing data in a database system is nothing new in itself. Introducing a current research information system (CRIS), however, means that scientific institutions must provide information on their research activities and research results at a high level of quality. A one-time cleanup is not sufficient; data must be continuously curated and maintained. Some data errors (missing values, spelling errors, inaccurate data, incorrect formatting, inconsistencies, and the like) spread across different data sources and are therefore difficult to find. Small mistakes can render data unusable, and corrupted data can have serious consequences. The sooner quality issues are identified and remedied, the better. For this reason, new techniques and methods of data cleansing and data monitoring are required to ensure data quality and keep it measurable in the long term. This paper examines data quality issues in current research information systems and introduces new techniques and methods of data cleansing and data monitoring with which organizations can guarantee the quality of their data.
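As a minimal sketch of what such continuous monitoring could look like, the following Python example computes a small, repeatable quality report over tabular publication records; the column names, sample data, and chosen checks (completeness, uniqueness, validity) are illustrative assumptions, not part of the paper.

```python
import re
import pandas as pd

# Hypothetical sample of publication records as they might arrive in a CRIS.
records = pd.DataFrame({
    "title": ["Data Quality in CRIS", "Data Quality in CRIS", None],
    "year":  ["2019", "2019", "19"],
    "doi":   ["10.1234/abc.5678", "10.1234/abc.5678", "not-a-doi"],
})

DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def quality_report(df: pd.DataFrame) -> dict:
    """Compute simple, repeatable quality metrics for continuous monitoring."""
    return {
        # Completeness: share of missing values per column.
        "missing_ratio": df.isna().mean().to_dict(),
        # Uniqueness: exact duplicate rows (candidate records to merge).
        "duplicate_rows": int(df.duplicated().sum()),
        # Validity: DOIs that do not match the expected format.
        "invalid_dois": int((~df["doi"].fillna("").str.match(DOI_PATTERN)).sum()),
        # Consistency: years that are not plausible four-digit values.
        "invalid_years": int((~df["year"].fillna("").str.fullmatch(r"\d{4}")).sum()),
    }

if __name__ == "__main__":
    # Running the same report after every load makes quality measurable over time.
    print(quality_report(records))
```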
The variety and diversity of published content are currently expanding in all fields of scholarly communication. Yet scientific knowledge graphs (SKG) provide only poor images of the varied directions of alternative scientific choices, in particular scientific controversies, which are currently neither identified nor interpreted. We propose to use the rich variety of knowledge present in search histories to represent cliques modeling the main interpretable practices of information retrieval that originate from the same “cognitive community”, identified by its use of keywords and by the search experience of the users sharing the same research question. Modeling typical cliques belonging to the same cognitive community is achieved through a new conceptual framework based on user profiles: a bipartite geometric scientific knowledge graph, SKG GRAPHYP. Further interpretive studies will test differences between documentary profiles and their meaning in the various contexts that studies on “disagreements in scientific literature” have outlined. This final adjusted version of GRAPHYP optimizes the modeling of “Manifold Subnetworks of Cliques in Cognitive Communities” (MSCCC), captured from previous user experience in the same search domain. Cliques are built from graph grids of three parameters outlining the manifold of search experiences: mass of users; intensity of uses of items; and attention, identified in the information retrieval literature as a ratio of “feature augmentation”. The mean value of attention allows calculation of an observed “steady” value of the user/item ratio or, conversely, identification of a documentary behavior “deviating” from this mean value. An illustration of our approach is supplied in a positive first test, which motivates further work on modeling subnetworks of users in search experience; such work could help identify the varied alternative documentary sources of information retrieval, and in particular scientific controversies and scholarly disputes.
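To make the grid of three parameters concrete, here is a hypothetical sketch (not the paper’s implementation) that builds bipartite user/item subnetworks per assumed cognitive community with networkx and flags communities whose attention (user/item ratio) deviates from the observed mean; community labels, edge data, and the deviation threshold are all invented for illustration.

```python
from statistics import mean
import networkx as nx

# Hypothetical search-history data: which users retrieved which documents.
# Communities group users who share the same research question (assumed labels).
communities = {
    "community_A": [("u1", "d1"), ("u1", "d2"), ("u2", "d1"), ("u3", "d1")],
    "community_B": [("u4", "d3"), ("u5", "d3"), ("u5", "d4"), ("u5", "d5")],
    "community_C": [("u6", "d6"), ("u7", "d7")],
}

def community_profile(edges):
    """Derive the three grid parameters sketched in the paper:
    mass of users, intensity of uses, and attention (user/item ratio)."""
    g = nx.Graph()
    users = {u for u, _ in edges}
    items = {d for _, d in edges}
    g.add_nodes_from(users, bipartite=0)  # user side of the bipartite graph
    g.add_nodes_from(items, bipartite=1)  # item (document) side
    g.add_edges_from(edges)
    return {
        "mass": len(users),                    # how many users searched
        "intensity": g.number_of_edges(),      # how many uses of items
        "attention": len(users) / len(items),  # user/item ratio
    }

profiles = {name: community_profile(e) for name, e in communities.items()}
steady = mean(p["attention"] for p in profiles.values())

for name, p in profiles.items():
    # A community whose attention deviates from the mean (threshold is an
    # arbitrary choice here) signals an alternative documentary behavior.
    label = "steady" if abs(p["attention"] - steady) < 0.25 else "deviating"
    print(name, p, label)
```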
In our present paper, the influence of data quality on the success of user acceptance of research information systems (RIS) is investigated and determined. To date, little research has been done on this topic and no empirical studies have been carried out. Previous work has focused only on the importance of data quality in RIS, the investigation of its dimensions, and techniques for measuring and improving data quality in RIS (such as data profiling, data cleansing, data wrangling, and text data mining). With this work, we try to answer the question of how data quality affects the success of RIS user acceptance. A lack of user acceptance becomes evident when a research institution decides to abandon its RIS and replace it with a new one. The result is a statement about the extent to which data quality influences the success of users’ acceptance of RIS.
Researchers need to be able to integrate ever-increasing amounts of data into their institutional databases, regardless of the source, format, or size of the data. They must then use this growing diversity of data to derive greater value for their organization. The processing of electronic data plays a central role in modern society. Data constitute a fundamental part of operational processes in companies and scientific organizations, and they form the basis for decisions. Poor data quality can distort decisions and negatively affect results; the quality of the data is therefore crucial. This is where data wrangling, sometimes referred to as data munging or data crunching, comes in: finding dirty data and transforming and cleaning them. The aim of data wrangling is to prepare large amounts of raw data in their original state so that they can be used for further analysis steps. Only then can knowledge be obtained that may bring added value. This paper shows how the data wrangling process works and how it can be used in database systems to clean up data from heterogeneous data sources during their acquisition and integration.
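As an illustration of typical wrangling steps (structuring, cleaning, validating, deduplicating), the following sketch uses pandas on an invented raw export; the column names and rules are assumptions rather than a prescribed pipeline.

```python
import pandas as pd

# Hypothetical raw export from a heterogeneous source: mixed formats, dirty values.
raw = pd.DataFrame({
    "Author ": ["  Smith, J.", "smith, j.", "Doe, A.", None],
    "Year":    ["2020", "2020", "20xx", "2021"],
    "Title":   ["A Study", "A Study", "Another Study", "Third Study"],
})

# 1. Structure: normalize column names so downstream steps are predictable.
df = raw.rename(columns=lambda c: c.strip().lower())

# 2. Clean: trim whitespace, unify casing, drop rows missing key fields.
df["author"] = df["author"].str.strip().str.title()
df = df.dropna(subset=["author"])

# 3. Validate: coerce years to numbers, flagging unparseable values as missing.
df["year"] = pd.to_numeric(df["year"], errors="coerce").astype("Int64")

# 4. Deduplicate: identical author/title/year triples collapse to one record.
df = df.drop_duplicates(subset=["author", "title", "year"])

# The wrangled frame is now ready for analysis or loading into the database.
print(df)
```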
The topic of data integration from external data sources or independent IT systems has recently received increasing attention in IT departments as well as at management level, in particular concerning data integration in federated database systems. An example of the latter are commercial research information systems (RIS), which regularly import, cleanse, transform, and prepare an institution’s research information from a variety of databases for analysis. All of these steps must be carried out at an assured level of quality. As several internal and external data sources are loaded for integration into the RIS, ensuring information quality is becoming increasingly challenging for research institutions. Before research information is transferred to a RIS, it must be checked and cleaned up. An important factor for successful data integration is therefore always data quality. The removal of data errors (such as duplicates, inconsistent data, and outdated data) and the harmonization of the data structure are essential tasks of data integration using extract, transform, and load (ETL) processes: data are extracted from the source systems, transformed, and loaded into the RIS. At this point, conflicts between different data sources are detected and resolved, and data quality issues arising during integration are eliminated. Against this background, our paper presents the process of data transformation in the context of RIS, which provides an overview of the quality of research information in an institution’s internal and external data sources during its integration into the RIS. In addition, the question of how to control and improve quality issues during the integration process in RIS is addressed.
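A minimal ETL sketch in Python, assuming pandas frames as staged extracts and an SQLite table as a stand-in for the RIS target; source contents, column names, and the deduplication key (DOI) are illustrative assumptions.

```python
import sqlite3
import pandas as pd

def extract(sources: list[pd.DataFrame]) -> pd.DataFrame:
    """Extract: pull records from several source systems into one staging frame."""
    return pd.concat(sources, ignore_index=True)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Transform: harmonize structure and remove data errors before loading."""
    df = df.rename(columns=str.lower)        # harmonize the data structure
    df["title"] = df["title"].str.strip()    # normalize values
    df = df.drop_duplicates(subset=["doi"])  # resolve duplicates across sources
    df = df.dropna(subset=["doi"])           # reject records failing key checks
    return df

def load(df: pd.DataFrame, conn: sqlite3.Connection) -> None:
    """Load: write the cleansed records into the RIS target table."""
    df.to_sql("publications", conn, if_exists="replace", index=False)

if __name__ == "__main__":
    # Hypothetical extracts from two external databases with overlapping content.
    source_a = pd.DataFrame({"DOI": ["10.1/x", "10.1/y"], "Title": [" A ", "B"]})
    source_b = pd.DataFrame({"DOI": ["10.1/y", None], "Title": ["B", "C"]})
    with sqlite3.connect(":memory:") as conn:
        load(transform(extract([source_a, source_b])), conn)
        print(pd.read_sql("SELECT * FROM publications", conn))
```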
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.