A growing client population, ever-increasing service demand, and the complexity of services are driving mobile operators toward a paradigm shift in their core and radio access networks. The 5G mobile network is the result of this paradigm shift and is currently being deployed in many developed countries, such as the United States, the United Kingdom, South Korea, Japan, and China, to name a few. However, most Least Developed Countries (LDCs) have only very recently implemented 4G mobile networks, for which the overall rollout phase is still not complete. In this paper, we investigate how feasible it is for LDCs to pursue deployment of 5G networks at the moment. First, we take a holistic approach to show the major technical challenges LDCs are likely to face while deploying 5G mobile networks. Then we argue that various security aspects of 5G networks remain an open issue and that LDCs are not technologically equipped to handle many of the security flaws of 5G networks. At the same time, we show that most of the use cases of 5G networks are not applicable in the context of many LDCs, at least at present. Finally, this paper concludes that the start of 5G network deployment in LDCs will take much longer than expected.
Adoption of preprints dramatically expanded during the COVID-19 pandemic. Many have expressed concern that the risk of flawed decision-making is increased by relying on preprint data that would not survive peer review. We therefore asked how much the information presented in preprints is expected to change after review. We quantify attrition dynamics of over 1,000 epidemiological estimates first reported in 100 matched preprints studying COVID-19. We find that 89% of point estimates persist through peer review. Of these, the correlation between preprint and published estimate values is extremely high at 0.99, and there is no systematic trend toward estimate inflation or deflation during review. A higher degree of data alteration during peer review, either in terms of magnitude or deletion, might be expected in papers never published because of their lower quality, which could limit the generalizability of our results. Importantly, we find that expert peer review scores of preprint quality are not related to eventual publication in a peer-reviewed journal, mitigating this concern. Uncertainty is reduced somewhat, however, as authors add another 18% of data points compared to the preprint version. Confidence interval ranges also decrease by a small but statistically significant 7%. Therefore, the evidence base presented in preprints is highly stable, and where data change during review, uncertainty is expected to decrease by a small amount on average. These results lend credence to the use of preprints, as one component of the biomedical research literature, in decision-making. These results can help inform the use of preprints during the ongoing pandemic as well as future disease outbreaks.
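The headline 0.99 figure is a standard Pearson correlation over matched preprint–published estimate pairs. As a minimal sketch of how such a comparison is computed (the values below are invented for illustration, not the study's data):

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

# Hypothetical matched estimates (e.g. reproduction numbers) from a
# preprint and its peer-reviewed version; purely illustrative numbers.
preprint  = [2.5, 1.8, 3.1, 0.9, 2.2]
published = [2.4, 1.8, 3.0, 1.0, 2.2]

r = pearson_r(preprint, published)
```

A correlation this close to 1 on matched pairs, with no consistent sign to the differences, is what the abstract describes as the absence of systematic inflation or deflation during review.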
Insights from biomedical citation networks can be used to identify promising avenues for accelerating research and its downstream bench-to-bedside translation. Citation analysis generally assumes that each citation documents substantive knowledge transfer that informed the conception, design, or execution of the main experiments. Citations may, however, exist for other reasons. In this paper, we take advantage of late-stage citations added during peer review because these are less likely to represent substantive knowledge flow. Using a large, comprehensive feature set of open access data, we train a predictive model to identify late-stage citations. The model relies only on the title, abstract, and citations to previous articles, but not the full text or future citation patterns, making it suitable for publications as soon as they are released, or those behind a paywall (the vast majority). We find that high prediction scores identify late-stage citations that were likely added during the peer review process as well as those more likely to be rhetorical, such as journal self-citations added during review. Our model conversely gives low prediction scores to early-stage citations and citation classes that are known to represent substantive knowledge transfer. Using this model, we find that US federally funded biomedical research publications represent 30% of the predicted early-stage (and more likely to be substantive) knowledge transfer from basic studies to clinical research, even though these comprise only 10% of the literature. This is a threefold overrepresentation in this important type of knowledge flow.
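The paper's actual pipeline uses a large open-access feature set, but the core idea of scoring citations from text features alone can be sketched as a toy bag-of-words logistic regression. Every snippet, label, and word below is invented for illustration and is not drawn from the study:

```python
import math
import re
from collections import Counter

def featurize(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    return [counts[w] for w in vocab]

def train_logreg(X, y, lr=0.5, epochs=200):
    """Plain stochastic-gradient logistic regression (no regularization)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(text, vocab, w, b):
    """Probability that a citation context is late-stage (label 1)."""
    x = featurize(text, vocab)
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy, entirely hypothetical citation-context snippets: label 1 marks
# citations added late (e.g. during review), label 0 early-stage ones.
docs = [
    ("as suggested by the reviewer we cite", 1),
    ("added in revision per journal request", 1),
    ("our assay design follows the protocol of", 0),
    ("the hypothesis builds on earlier findings of", 0),
]
vocab = sorted({w for t, _ in docs for w in re.findall(r"[a-z]+", t)})
X = [featurize(t, vocab) for t, _ in docs]
y = [lab for _, lab in docs]
w, b = train_logreg(X, y)
```

High scores flag likely late-stage citations; low scores flag likely early-stage, substantive ones, mirroring how the abstract describes the model's behavior at the two ends of its score range.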
Citation analysis generally assumes that each citation documents causal knowledge transfer that informed the conception, design, or execution of the main experiments. Citations may, however, exist for other reasons. In this paper, we identify a subset of citations that are unlikely to represent causal knowledge flow. Using a large, comprehensive feature set of open access data, we train a predictive model to identify such citations. The model relies only on the title, abstract, and reference set, and not the full text or future citation patterns, making it suitable for publications as soon as they are released, or those behind a paywall. We find that the model identifies, with high prediction scores, citations that were likely added during the peer review process, and conversely identifies with low prediction scores citations that are known to represent causal knowledge transfer. Using the model, we find that federally funded biomedical research publications represent 30% of the estimated causal knowledge transfer from basic studies to clinical research, even though these comprise only 10% of the literature, a threefold overrepresentation in this important type of knowledge transfer. This finding underscores the importance of federal funding as a policy lever to improve human health.