Jet injection into a supersonic cross-flow is a challenging fluid dynamics problem in aerospace engineering, with applications in rocket thrust vector control, noise control in cavities, and fuel injection in scramjet combustion chambers. Several experimental and theoretical/numerical studies have explored this flow; however, there is a dearth of literature detailing the instantaneous flow field, which is vital for improving the efficiency of fluid mixing. In this paper, a sonic jet in a Mach 1.6 free stream is studied using a finite-volume Godunov-type implicit large eddy simulation (ILES) technique, which employs a fifth-order accurate MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) scheme with modified variable extrapolation and a three-stage, second-order strong-stability-preserving Runge–Kutta scheme for temporal advancement. A digital-filter-based turbulent inflow data generation method is implemented to capture the physics of the supersonic turbulent boundary layer. This paper details the averaged and instantaneous flow features, including vortex structures downstream of the jet injection, along with jet penetration, jet mixing, pressure distributions, turbulent kinetic energy, and Reynolds stresses in the downstream flow. It demonstrates that Kelvin–Helmholtz-type instabilities in the upper jet shear layer are primarily responsible for mixing the two fluids. The results are compared with experimental data and recently performed classical large eddy simulations (LES) with the same initial conditions to demonstrate the accuracy of the numerical methods and the utility of the inflow generation method. The results show equivalent accuracy using 1/45th of the computational resources of the classical LES study.
We present a convolutional neural network (CNN) that identifies drone models in real-life videos. The network is trained on synthetic images and tested on a real-life dataset of drone videos. To create the training and validation datasets, we present a method for generating synthetic drone images. Domain randomization is used to vary simulation parameters such as model textures, background images, and orientation. Three common drone models are classified: DJI Phantom, DJI Mavic, and DJI Inspire. To test the performance of the network, Anti-UAV, a real-life dataset of flying drones, is used. The proposed method reduces the time cost of manually labelling drones, and we show that it transfers to real-life videos. The CNN achieves an overall accuracy of 92.4%, a precision of 88.8%, a recall of 88.6%, and an F1 score of 88.7% when tested on the real-life dataset.
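The domain randomization described above can be sketched as a parameter sampler: each synthetic render draws its texture, background, and orientation at random so that the classifier cannot overfit to any single rendering condition. The parameter names and ranges below are illustrative assumptions, not the paper's actual configuration.

```python
import random

# Class labels from the abstract; all other values here are assumed.
DRONE_CLASSES = ["DJI Phantom", "DJI Mavic", "DJI Inspire"]

def sample_render_params(rng=random):
    """Draw one randomized configuration for a synthetic render.
    Ranges are hypothetical placeholders for a real renderer's inputs."""
    return {
        "drone_class": rng.choice(DRONE_CLASSES),
        "texture_id": rng.randrange(100),      # randomized model texture
        "background_id": rng.randrange(500),   # randomized background image
        "yaw_deg": rng.uniform(0.0, 360.0),    # randomized orientation
        "pitch_deg": rng.uniform(-30.0, 30.0),
        "distance_m": rng.uniform(5.0, 50.0),  # apparent scale in frame
    }

params = sample_render_params(random.Random(0))
```

Sampling every nuisance parameter independently per image is the core of domain randomization: the real-world appearance then falls within the distribution the network saw during training, which is what makes the synthetic-to-real transfer plausible.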
Turbulence is generally characterized by random, chaotic motion and has long been a challenging problem for fluid dynamicists. Several methods for generating turbulent boundary conditions for computational fluid dynamics are in use today. One of the simplest and least expensive is to add random white noise to the averaged velocity profiles to generate turbulence in the flow field. More recently, a method based on a digital filter has been introduced to generate turbulent inflow data. This paper compares the digital-filter-based turbulent inflow data generation technique with three other methods within the framework of large eddy simulation, using schemes that are fifth- and second-order accurate in space and time, respectively. The case used for comparison is a sonic jet of air injected transversely into a supersonic (Mach 1.6) stream of air, for which experimental and classical large eddy simulation data are available. It is demonstrated that random white-noise-based turbulent inflow data dissipate immediately in the computational domain, giving incorrect velocity and pressure profiles. The importance of the two-point exponential correlation used in the digital-filter-based technique is demonstrated by scaling the random white noise with the Reynolds stress tensor while ignoring the correlation; this improves the results compared with pure white noise, but still exhibits a high initial dissipation rate because the energy is not distributed over the required range of wavenumbers. It is demonstrated that the digital-filter-based turbulent inflow data generation technique provides a reliable, accurate, and consistent turbulent boundary layer in the flow field, which is required to capture the correct flow physics, and that the computational cost of all the methods presented is almost identical.
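The key ingredient distinguishing the digital-filter approach from scaled white noise is the convolution filter that imposes a two-point exponential correlation on the random field. A minimal one-dimensional sketch of that filtering step follows; the coefficient form and filter half-width are illustrative (Xie-and-Castro-style exponential coefficients), not the exact constants used in the paper.

```python
import numpy as np

def exponential_filter_coeffs(n_half):
    """Filter coefficients b_k ~ exp(-pi*|k|/n_half), which yield an
    approximately exponential two-point correlation, normalised so the
    filtered signal retains unit variance. Constants are illustrative."""
    k = np.arange(-2 * n_half, 2 * n_half + 1)
    b = np.exp(-np.pi * np.abs(k) / n_half)
    return b / np.sqrt(np.sum(b**2))

def filtered_noise(n_points, n_half, rng):
    """Convolve white noise with the filter: unlike raw white noise,
    the output carries energy over a controlled range of wavenumbers,
    so it is not dissipated immediately by the solver."""
    b = exponential_filter_coeffs(n_half)
    white = rng.standard_normal(n_points + b.size - 1)
    return np.convolve(white, b, mode="valid")

rng = np.random.default_rng(0)
signal = filtered_noise(256, n_half=8, rng=rng)
```

In a full inflow generator, one such correlated random field is built per velocity component and time step, then scaled by a Cholesky factor of the target Reynolds stress tensor; the sketch above isolates only the correlation step that the white-noise variants omit.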
Visual navigation is an essential part of planetary rover autonomy, and rock segmentation has emerged as an important interdisciplinary topic spanning image processing, robotics, and mathematical modeling. Rock segmentation is challenging for rover autonomy because of its high computational cost, real-time requirements, and annotation difficulty. This research proposes a rock segmentation framework and a rock segmentation network (NI-U-Net++) to aid the visual navigation of rovers. The framework consists of two stages: a pre-training process and a transfer-training process. The pre-training process applies a synthetic algorithm to generate synthetic images, which are then used to pre-train NI-U-Net++. The synthetic algorithm increases the size of the image dataset and provides pixel-level masks, both of which are challenges in machine learning tasks. The pre-training process achieves state-of-the-art performance compared with related studies, with an accuracy, intersection over union (IoU), Dice score, and root mean squared error (RMSE) of 99.41%, 0.8991, 0.9459, and 0.0775, respectively. The transfer-training process fine-tunes the pre-trained NI-U-Net++ on real-life images, achieving an accuracy, IoU, Dice score, and RMSE of 99.58%, 0.7476, 0.8556, and 0.0557, respectively. Finally, the transfer-trained NI-U-Net++ is integrated into the planetary rover navigation vision and achieves real-time performance of 32.57 frames per second (an inference time of 0.0307 s per frame). The framework requires manual annotation of only about 8% (183 images) of the 2250 images in the navigation vision, making it a labor-saving solution for rock segmentation tasks. The proposed framework and NI-U-Net++ improve on the performance of state-of-the-art models, and the synthetic algorithm improves the process of creating valid data for the challenge of rock segmentation.
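The four evaluation metrics reported above (accuracy, IoU, Dice score, RMSE) have standard pixel-level definitions for binary masks, sketched below. This shows only the conventional formulas; the paper's exact evaluation protocol may differ in details such as thresholding or averaging.

```python
import numpy as np

def segmentation_metrics(pred, mask):
    """Pixel-level metrics for binary masks (values in {0, 1}):
    accuracy = fraction of matching pixels,
    IoU      = |intersection| / |union|,
    Dice     = 2*|intersection| / (|pred| + |mask|),
    RMSE     = root mean squared pixel difference."""
    pred, mask = pred.astype(bool), mask.astype(bool)
    tp = np.logical_and(pred, mask).sum()
    union = np.logical_or(pred, mask).sum()
    acc = (pred == mask).mean()
    iou = tp / union if union else 1.0
    denom = pred.sum() + mask.sum()
    dice = 2.0 * tp / denom if denom else 1.0
    rmse = np.sqrt(((pred.astype(float) - mask.astype(float)) ** 2).mean())
    return acc, iou, dice, rmse

# Tiny worked example: one false-positive pixel out of four.
pred = np.array([[1, 1, 0, 0]])
mask = np.array([[1, 0, 0, 0]])
acc, iou, dice, rmse = segmentation_metrics(pred, mask)
# acc = 0.75, iou = 0.5, dice = 2/3, rmse = 0.5
```

Note why the abstract reports both accuracy and IoU: with rocks occupying a small fraction of pixels, accuracy can be high (99%+) even when overlap-based scores such as IoU and Dice are much lower, so the overlap metrics are the more discriminating measure of segmentation quality.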
All source codes, datasets, and trained models of this research are openly available in Cranfield Online Research Data (CORD).