Flow equations are essential for estimating flow rate in pipelines. Several flow equations exist, along with conventions for their application; their ranges of applicability are delineated in the literature, such as D. W. Schroeder, Jr. and GL Noble Denton (2010). These works record the limited range of applicability of the Weymouth Equation relative to other flow equations, a limitation most prominent in large-diameter pipelines. Fully turbulent flow predominates in large-diameter pipelines such as trunk lines, and applying the Weymouth Equation to calculate flow rate in such scenarios yields significant discrepancies. This work extends the range of applicability of the Weymouth Equation to fully turbulent flow regimes in large-diameter pipelines. In particular, the Weymouth friction factor is corrected by introducing a term that accounts for internal pipe roughness. Friction factors for different flow scenarios were calculated and plotted against the Reynolds number and the flow rate to reveal the transition to the fully turbulent flow regime. The Python programming language was used to compute a table of friction data using both the Colebrook-White Equation and the Weymouth friction factor equation. A correction factor accounting for the variation of pipe roughness was introduced into the Weymouth friction factor, and the new friction factor relationship was then used to modify the existing Weymouth Equation. The resulting Modified Weymouth Equation predicted well for fully turbulent flow scenarios and, compared against the original Weymouth Equation, maintained appreciable accuracy as pipe roughness was varied from the standard values obtainable with new pipelines. This study achieved its objective of improving the accuracy of the Weymouth Equation for large-diameter pipelines. It will find application in the accurate estimation of flow rate as technologies evolve for the non-intrusive determination of internal pipe roughness.
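The friction-factor comparison described in this abstract can be sketched in Python. The sketch below uses the standard textbook forms of the two relationships — the implicit Colebrook-White equation, solved here by fixed-point iteration, and the Weymouth friction factor f = 0.032 / D^(1/3) with D in inches — not the paper's modified equation. The function names and the example Reynolds number, roughness, and diameter values are illustrative assumptions.

```python
import math

def colebrook_white(reynolds, rel_roughness, tol=1e-12, max_iter=100):
    """Darcy friction factor from the implicit Colebrook-White equation:
        1/sqrt(f) = -2 log10( (eps/D)/3.7 + 2.51/(Re sqrt(f)) )
    solved by fixed-point iteration starting from a typical turbulent guess."""
    f = 0.02  # initial guess, reasonable for turbulent pipe flow
    for _ in range(max_iter):
        rhs = -2.0 * math.log10(rel_roughness / 3.7
                                + 2.51 / (reynolds * math.sqrt(f)))
        f_new = (1.0 / rhs) ** 2
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f

def weymouth_friction_factor(diameter_in):
    """Weymouth friction factor, diameter in inches: f = 0.032 / D^(1/3)."""
    return 0.032 / diameter_in ** (1.0 / 3.0)

# Illustrative comparison for a 24-inch line at several Reynolds numbers,
# with an assumed relative roughness of 1e-4:
for re in (1e5, 1e6, 1e7):
    f_cw = colebrook_white(re, 1e-4)
    f_w = weymouth_friction_factor(24.0)
    print(f"Re = {re:.0e}: Colebrook-White f = {f_cw:.5f}, Weymouth f = {f_w:.5f}")
```

Because the Weymouth friction factor depends only on diameter, it stays constant as Re grows, while Colebrook-White flattens toward its fully rough (roughness-controlled) limit — the discrepancy the abstract's correction term is meant to close.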
Pretrained language models represent the state of the art in NLP, but the successful construction of such models often requires large amounts of data and computational resources. Thus, the paucity of data for low-resource languages impedes the development of robust NLP capabilities for these languages. There has been some recent success in pretraining encoder-only models solely on a combination of low-resource African languages, exemplified by AfriBERTa. In this work, we extend the approach of "small data" pretraining to encoder-decoder models. We introduce AfriTeVa, a family of sequence-to-sequence models derived from T5 that are pretrained on 10 African languages from scratch. With a pretraining corpus of only around 1GB, we show that it is possible to achieve competitive downstream effectiveness for machine translation and text classification, compared to larger models trained on much more data. All the code and model checkpoints described in this work are publicly available at https://github.com/castorini/afriteva.