The increasing need for cleaner and more efficient combustion systems has prompted a paradigm shift in the automotive industry. Virtual hardware and engine calibration screening at the early development stage has become the most effective way to reduce the time necessary to bring new products to market. Virtual engine development processes need to provide realistic engine combustion rate responses for the entire engine map and for different engine calibrations. Quasi-Dimensional (Q-D) combustion models have increasingly been used to predict engine performance at multiple operating conditions. The physics-based Q-D turbulence models necessary to correctly model the engine combustion rate within the Q-D combustion model framework are a computationally efficient means of capturing the effect of port and combustion chamber geometry on performance. A rigorous method of correlating the effect of air motion on combustion parameters such as heat release is required to enable novel geometric architectures to be assessed to deliver future improvements in engine performance. A previously assessed process is used that combines a 0-D combustion Stochastic Reactor Model (SRM), provided by LOGESoft, a 1-D engine system model and non-combusting, 'cold' CFD. The approach uses a single baseline CFD run and a user-developed scalar mixing time (τSRM) response to quickly predict the Rate of Heat Release (RoHR). In this work, the physically based response for τSRM has been further developed to consider the effect of Variable Valve Timing (VVT) over a variety of engine operating conditions. Cold CFD and 1-D engine simulations were initially carried out to investigate changes in Turbulent Kinetic Energy (k) and its dissipation (ε) caused by VVT changes, allowing the engine RoHR to be predicted. The change in intake flow velocity was correlated to the scalar mixing time τSRM, resulting in good engine RoHR predictions at the explored conditions.
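The correlation described above, between a change in intake flow velocity and the scalar mixing time τSRM, can be sketched as a simple scaling response. This is a hypothetical power-law form for illustration only; the baseline values, the exponent, and the function shape are assumptions, not the response developed in the paper:

```python
def tau_srm(tau_baseline, v_intake, v_intake_baseline, exponent=1.0):
    """Scale the SRM mixing time by the relative change in mean
    intake flow velocity.

    Assumes the mixing time shortens as intake velocity (and hence
    in-cylinder turbulence) rises -- a hypothetical power-law form,
    not the paper's actual response function.
    """
    if v_intake <= 0 or v_intake_baseline <= 0:
        raise ValueError("velocities must be positive")
    return tau_baseline * (v_intake_baseline / v_intake) ** exponent

# Illustrative use: a VVT change raising intake velocity from 50 to
# 60 m/s shortens the mixing time relative to the baseline run.
tau = tau_srm(tau_baseline=2.0e-3, v_intake=60.0, v_intake_baseline=50.0)
```

In this sketch a single baseline CFD run fixes `tau_baseline` and `v_intake_baseline`, and off-baseline VVT settings only require the (cheap) updated intake velocity.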
Tightening emission regulations and accelerating production cycles force engine developers to shift their attention towards virtual engineering tools. When simulating in-cylinder processes in commercial LDD DI engine development, the trade-off between run time and accuracy is typically tipped towards the former. High-fidelity simulation approaches that require little tuning would be desirable, but they demand excessive computing resources. For this reason, industry still favors low-fidelity simulation approaches and bridges the remaining uncertainties with prototyping and testing. The problem with low-fidelity simulations is that simplifications in the form of sub-models introduce multi-variable tuning parameter dependencies which, if not understood, impair the predictive nature of CFD simulations. In previous work, the authors successfully developed a boundary-condition-dependent input parameter table. This parameter table showed outstanding results for lab-scale experiments across more than 40 operating conditions. The objective of this paper is first to identify the considerations necessary to adjust for the inherent differences between lab-scale and real engine conditions, and then to apply this parameter table to industry-relevant conditions. With this approach, the appropriate simulation setup for a real EU6 diesel engine can be predefined by the boundary conditions without prior tuning iterations. The performance of the simulation is assessed by its capability to match experimental heat release and chamber pressure data. The approach shown here has the potential to remove the need for lengthy tuning iterations and lays the groundwork for novel auto-tuned and predictive in-cylinder simulations.
Producing reliable in-cylinder simulations for quick-turnaround engine development for industrial purposes is a challenging task for modern computational fluid dynamics, mostly because of the tuning effort required for the sub-models used in the various frameworks (Reynolds-averaged Navier–Stokes and large eddy simulation). Tuning is required because modern engines must operate under an ever wider range of conditions and fuels. In this article, we suggest a novel methodology based on automated simulation parameter optimisation that is capable of delivering a priori a coefficient matrix for each operating condition. This approach produces excellent results for multiple comparison metrics such as liquid and vapour penetration lengths, radial and axial mass fraction, and temperature distributions. In this article, we also show for the first time that input model coefficients can potentially be linked to ambient boundary conditions in a physically consistent manner. Changes in injection pressure, charge pressure and charge density are considered. This paves the way for the tabulation of the constants in order to eliminate lengthy tuning iterations between operating conditions and to move towards adaptive simulations in which the coefficients follow the in-cylinder conditions as the piston moves. The validity range of existing models is also discussed, given that in recent years there has been a shift towards more extreme thermodynamic conditions in the injection stage (reaching the limits of transcritical flows). Although in this work the framework was implemented in the Reynolds-averaged Navier–Stokes context, because this is currently the tool of preference for digital engineering in the automotive industry, the approach can easily be extended to large eddy simulation.
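The tabulation idea described above, mapping ambient boundary conditions to pre-optimised model coefficients, can be sketched as interpolation over an operating-condition table. The table entries, the single-variable (injection pressure) dependence, and the coefficient values below are illustrative assumptions, not data from the article:

```python
import bisect

# Hypothetical lookup: a spray-model coefficient tabulated against
# injection pressure [bar]. A real table would also span charge
# pressure and charge density; the values here are illustrative only.
TABLE = [
    (500.0, 0.45),
    (1000.0, 0.52),
    (1500.0, 0.58),
    (2000.0, 0.61),
]

def coefficient(p_inj):
    """Linearly interpolate the tabulated coefficient for a given
    injection pressure, clamping outside the tabulated range."""
    pressures = [p for p, _ in TABLE]
    if p_inj <= pressures[0]:
        return TABLE[0][1]
    if p_inj >= pressures[-1]:
        return TABLE[-1][1]
    i = bisect.bisect_right(pressures, p_inj)
    (p0, c0), (p1, c1) = TABLE[i - 1], TABLE[i]
    return c0 + (c1 - c0) * (p_inj - p0) / (p1 - p0)
```

With such a table, each new operating condition reads its coefficients directly instead of triggering a fresh tuning loop, which is the prerequisite for the adaptive, a priori setup the article proposes.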
<div class="section abstract"><div class="htmlview paragraph">OTA (over the air) updates help automotive manufacturers to reduce vehicle warranty and recall costs. Vehicle recall is expensive, and many automotive manufacturers have implemented OTA updates. Updating parameters for connected vehicles can be challenging when dealing with thousands of vehicles across different regions. For example, how does the manufacturer prioritise which vehicles need updating? Environmental and geographical factors affect degradation rates and vehicles in hotter regions or congested cities may degrade faster.</div><div class="htmlview paragraph">For EVs, updating the BMS (battery management system) parameters requires careful analysis prior to the update being deployed, to maximise impact and reduce the likelihood of adverse behaviour being introduced. The analysis overhead increases with the number of vehicles. This is because it requires simulation and optimisation of the fleet BMS calibration in a digital twin environment. A targeted approach is the best option to prioritise vehicles for software updates.</div><div class="htmlview paragraph">Smart OTA scheduling makes use of predictive analytics for battery health prediction together with prescriptive analytics in a smart decision engine. The smart scheduling system uses a deep reinforcement learning (DRL) agent in a digital twin environment. The DRL agent can learn and simulate different scenarios and identify the best update sequence depending on monthly temperature profile, traffic congestion, and many other factors to slow down the degradation of fleet health. Whenever there is an update, the DRL agent assesses the situation and recommends the appropriate action to minimise vehicle failures and maintain fleet health. In a large-scale simulation, this approach improved average fleet battery life by 8 to 13% and increased average vehicle range by 2%.</div></div>
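The targeted prioritisation described above, ranking vehicles for OTA updates by predicted degradation drivers, can be sketched as a simple scoring heuristic. This is a stand-in for the DRL agent in the abstract; the feature names, weights, and normalisation are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vin: str
    avg_temp_c: float        # monthly mean ambient temperature [C]
    congestion_index: float  # 0 (free-flowing) .. 1 (heavily congested)
    soh: float               # battery state of health, 0..1

def update_priority(v: Vehicle) -> float:
    """Heuristic degradation-risk score: hotter climates, heavier
    congestion, and lower state of health all raise update priority.
    The weights are illustrative, not from the paper."""
    heat = max(0.0, v.avg_temp_c - 20.0) / 25.0  # normalise above 20 C
    return 0.5 * heat + 0.3 * v.congestion_index + 0.2 * (1.0 - v.soh)

# Illustrative fleet: the hot-climate, congested-city vehicle is
# scheduled for its BMS parameter update first.
fleet = [
    Vehicle("VIN-A", avg_temp_c=35.0, congestion_index=0.8, soh=0.90),
    Vehicle("VIN-B", avg_temp_c=15.0, congestion_index=0.2, soh=0.98),
]
schedule = sorted(fleet, key=update_priority, reverse=True)
```

A learned policy, as in the paper, would replace this fixed scoring with actions chosen from simulated rollouts in the digital twin, but the interface is the same: fleet state in, update ordering out.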
In recent years, the exploration of new combustion technologies has accelerated in response to increasingly stringent emissions regulations and fuel economy demands. Virtual engineering tools, which enable the screening of novel hardware and engine calibrations at the early stage of engine development, have become imperative to meet new emission regulations. One-dimensional engine simulations are used at the start of the design of a new engine to define the overall combustion system geometries. Later, more complex three-dimensional computational fluid dynamics calculations are coupled to one-dimensional engine system codes to optimise initial concept geometries and define a system design ready for prototyping. To provide meaningful results, one-dimensional engine system codes often use empirically based combustion models to calculate the engine burn rate. Moreover, realistic engine burn rate responses, for the entire engine map and for different calibrations, are required to provide three-dimensional computational fluid dynamics codes with correct boundary conditions during the design optimisation phase. Thus, the burn characteristics of new, non-traditional combustion solutions, for which little experimental data are available, need to be initially assumed. To improve virtual development and reduce this uncertainty, the industry’s attention has shifted towards quasi-dimensional combustion models capable of providing engine burn rate predictions. Within the quasi-dimensional modelling framework, turbulence models, which add extra user-input variables, are required to capture the effect of different combustion chamber geometries on the engine combustion rate. Rigorous validation of zero-dimensional turbulence models for different engine concepts and calibrations is therefore needed to enable quasi-dimensional combustion models to predict the engine burn rate.
An alternative methodology, with limited dependency on previous test data, is required to enhance the exploration of novel combustion strategies and geometric architectures. An available process, based on a quasi-dimensional combustion stochastic reactor model, a one-dimensional engine system model and non-combusting three-dimensional computational fluid dynamics calculations, was used for this work. The approach uses limited non-combusting computational fluid dynamics calculations and a previously developed scaling factor response for the stochastic reactor model turbulence input (τSRM) to quickly predict the engine rate of heat release. In this work, the scaling factor response was assessed against two different engine variants over a variety of engine operating conditions. Moreover, the same response was used to predict the effect of different bore-to-stroke ratios on the engine combustion rate and knock tolerance. Non-combusting computational fluid dynamics and one-dimensional engine system simulations were carried out to investigate changes in turbulence characteristics due to different engine variants and bore-to-stroke ratios. It was shown that only a limited number of non-combusting computational fluid dynamics runs is required to characterise the in-cylinder turbulence for each explored engine variant. The scaling factor response was used to manipulate the turbulence input (τSRM), resulting in good engine burn rate predictions for the explored engine variants and bore-to-stroke ratios. The presented methodology demonstrated improved predictive capability and has the potential to move engine development towards a less hardware-dependent virtual approach, offering a practical solution for the exploration of new engine concepts.