Our aim is to compare the coverage of the Scopus database with that of Ulrich's Directory, to determine how evenly Scopus covers the academic world. The variables taken into account were subject distribution, geographical distribution, distribution by publisher, and language of publication. The coverage of a product of this nature should be analyzed in relation to an accepted model; the optimal choice is Ulrich's Directory, considered the international point of reference for the most comprehensive information on journals published throughout the world. The results described here allow us to draw a profile of Scopus in terms of its coverage by area (geographic and thematic) and the significance of peer review in its publications. Both aspects are highly pragmatic considerations for information retrieval, the evaluation of research, and the design of policies for the use of scientific databases in scientific promotion.
Our objective is the generation of schematic visualizations as interfaces for scientific domain analysis. We propose a new technique that uses thematic classification (classes and categories) as the entities of cocitation and the units of measure, and demonstrate the viability of this methodology through the representation and analysis of a very large domain. The main features of the maps obtained are discussed, and proposals are made for future improvements and applications.
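The core of the counting step, using thematic categories rather than individual documents as the cocitation units, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name, the per-document reference lists, and the item-to-category mapping are all hypothetical.

```python
from collections import Counter
from itertools import combinations

def category_cocitation(reference_lists, category_of):
    """Count cocitations at the level of thematic categories:
    two categories are cocited whenever references belonging to them
    appear together in the reference list of the same citing document.

    reference_lists -- one list of cited-item identifiers per citing document
    category_of     -- mapping from cited-item identifier to its category
    """
    counts = Counter()
    for refs in reference_lists:
        # Map each cited item to its category and deduplicate per document,
        # so a document contributes at most one cocitation per category pair.
        cats = sorted({category_of[r] for r in refs if r in category_of})
        for a, b in combinations(cats, 2):
            counts[(a, b)] += 1
    return counts
```

The resulting pair counts form the symmetric cocitation matrix from which a map can then be laid out and pruned.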
In this study, visual representations are created in order to analyze different aspects of scientific collaboration at the international level. The main objective is to identify the international facet of research by following the flow of knowledge as expressed by the number of scientific publications, and then to establish the main geographical axes of output, showing the interrelationships of the domain, the intensity of these relations, and how the different types of collaboration are reflected in terms of visibility. The methodology thus has a twofold application: it allows us to detect significant differences that help characterize patterns of behaviour of a geographical system of output, and it generates representations that serve as interfaces for domain analysis and information retrieval.
Background: In the greater framework of the essential functions of Public Health, our focus is on a systematic, objective, external evaluation of Latin American scientific output, to compare its publications in the area of Public Health with those of other major geographic zones. We aim to describe the regional distribution of output in Public Health, and the level of visibility and specialization, for Latin America, so that it can be characterized and compared in the international context.
Methods: The primary source of information was the Scopus database, using the category “Public Health, Environmental and Occupational Health”, for the period 1996–2011. Data were obtained through the portal of SCImago Journal and Country Rank. Using a set of qualitative (citation-based), quantitative (document count) and collaborative (authors from more than one country) indicators, we derived complementary data. The methodology serves as an analytical tool for researchers and scientific policy-makers.
Results: The contribution of Latin America to world science lies more or less midway on the international scale in terms of output and visibility. Its greatest strengths are a high level of specialization in Public Health and the sustained growth of output. The main limitations identified were a relative decrease in collaboration and low visibility.
Conclusions: Collaboration is a key factor behind the development of scientific activity in Latin America. Although this finding can be useful for formulating research policy in Latin American countries, it also underlines the need for further research into patterns of scientific communication in this region, to arrive at more specific recommendations.
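The collaborative indicator mentioned above (documents with authors from more than one country) reduces to a simple share. The sketch below is a hypothetical illustration of that computation, not the SCImago implementation; the function name and input representation are assumptions.

```python
def collaboration_rate(documents):
    """Share of documents involving international collaboration,
    i.e. authors affiliated with more than one country.

    documents -- one collection of author-affiliation country codes
                 per document (duplicates allowed; they are collapsed)
    """
    if not documents:
        return 0.0
    collaborative = sum(1 for countries in documents
                        if len(set(countries)) > 1)
    return collaborative / len(documents)
```

For example, a set of four documents of which two list affiliations from two countries yields a rate of 0.5.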
Proper field delineation plays an important role in scientometric studies, although it is a difficult task. Taking an emerging, interdisciplinary field (nanoscience and nanotechnology, NST) as its case, this paper highlights the problem of field delineation. First we review the related literature. Then we apply three different approaches to delineating a field of knowledge at three different levels of aggregation: subject category, publication, and journal. Expert opinion interviews served to assess the data, and the precision and recall of each approach were calculated for comparison. Our findings confirm that field delineation is a complicated issue at both the quantitative and the qualitative level, even when experts validate the results.
Highlights: This study offers an updated literature review of NST as a delineated field. We provide an extended and unified NST search strategy suitable for two databases, Scopus and Web of Science. After formulating a conceptual framework for employing the different approaches described in the literature, we delineate the NST field at the journal level through a seven-step procedure. Three approaches (at the subject-category, publication, and journal levels) were adopted to explore and analyze these two databases. The results are compared, and we offer a detailed explanation of the topics related to the journals included at each level. At the publication level, we compare the potential of the micro-field classification system developed by Waltman and van Eck (2012) with the other two approaches. Finally, we examine our findings in the light of NST expert opinions to assess the reliability of the results. The findings of this survey confirm certain problems inherent to field delineation at the quantitative and qualitative levels, especially when dealing with interdisciplinary fields (Huang, Notten, & Rasters, 2011).
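The precision and recall comparison described above can be computed as follows. This is a generic sketch, not the authors' code: it assumes each delineation approach yields a retrieved set of journals or publications, evaluated against an expert-validated reference set.

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of a delineation approach against an
    expert-validated reference set.

    retrieved -- items (journals/publications) selected by the approach
    relevant  -- items the experts consider to belong to the field
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall
```

High precision with low recall suggests an approach that is too narrow; the reverse suggests one that pulls in material from neighbouring fields, a typical difficulty with interdisciplinary domains such as NST.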
Network scaling algorithms such as the Pathfinder algorithm are used to prune many different kinds of networks, including citation networks, random networks, and social networks. However, the Pathfinder algorithm suffers from run-time problems for large networks and online processing because of its O(n^4) time complexity. In this article, we introduce a new alternative, the MST-Pathfinder algorithm, which prunes the original network to obtain its PFNET(∞, n − 1) in just O(n^2 · log n) time. The underlying idea comes from the fact that the union (superposition) of all the Minimum Spanning Trees extracted from a given network is equivalent to the PFNET resulting from the Pathfinder algorithm parameterized by a specific set of values (r = ∞ and q = n − 1), the values usually considered in many different applications. Although this property is well known in the literature, it seems that no algorithm based on it has been proposed, up to now, to reduce the high computational cost of the original Pathfinder algorithm. We also present a mathematical proof of the correctness of this new alternative and test its efficiency in two case studies: one dedicated to the post-processing of large random graphs, and the other to a real-world case in which medium-sized networks, obtained by cocitation analysis of the scientific domains of different countries, are pruned.
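The MST-union property above can be sketched with a Kruskal-style pass: an edge survives the PFNET(∞, n − 1) pruning exactly when it belongs to at least one MST, i.e. when no path of strictly lighter edges already connects its endpoints. The code below is a minimal illustration of this idea (treating weights as distances), not the authors' published implementation; function and variable names are assumptions.

```python
from collections import defaultdict

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def mst_pathfinder(nodes, edges):
    """Prune a weighted network to its PFNET(inf, n-1) as the union of
    all Minimum Spanning Trees.  `edges` is a list of (u, v, weight)
    triples where lower weight means a stronger (closer) link.
    Returns the list of surviving edges."""
    parent = {v: v for v in nodes}
    kept = []
    # Group edges by weight and process the groups in increasing order.
    by_weight = defaultdict(list)
    for u, v, w in edges:
        by_weight[w].append((u, v, w))
    for w in sorted(by_weight):
        # An edge survives iff its endpoints lie in different components
        # of the graph built from strictly lighter edges; checking the
        # whole group *before* merging handles ties correctly, so every
        # edge that belongs to some MST is kept.
        group = [(u, v, w2) for (u, v, w2) in by_weight[w]
                 if find(parent, u) != find(parent, v)]
        kept.extend(group)
        for u, v, _ in group:
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:
                parent[ru] = rv
    return kept
```

Note the tie handling: in a triangle whose three edges all have equal weight, each edge belongs to some MST, so all three survive, whereas a strictly heavier third edge would be pruned.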