Abstract. Prevailing Modeling and Simulation (M&S) techniques have struggled to provide meaningful quantitative results for complex Systems of Systems (SoSs) in an environment filled with complex interacting uncertainties. This paper reports on systems thinking applied to how M&S techniques should shift to enable a next generation of quantitative tools and techniques. The imperative is to provide quantitative performance results across the constituent interfaces in a modeled architecture. A five-step statistical and parametric algorithmic tool that addresses Uncertainty Quantification (UQ) is presented. A quantitative approach to managing complex uncertainties across modeled interfaces using graph theory is proposed. Finally, a future vision for SoS Engineering (SoSE) that uses graph-theory-based modeling to improve the utility of tools such as UQ is suggested.
A key requirement for using a simulation model to assess a highly complex system is the ability to characterize and quantify the uncertainty in the simulation results with respect to a typically immense set of possible combinations of values of the model's input parameters. Some of these inputs may be sampled from a known or assumed probability distribution, but others are known only possibilistically. A biologically-inspired exploited search model is proposed to assess issues such as hazard, risk, and sensitivity analysis when possibilistic and probabilistic uncertainties interact. Finally, a method for holistic quantification of total uncertainty is presented.
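The propagation of sampled input uncertainty to output uncertainty described above can be sketched with a minimal Monte Carlo loop. The model function, distributions, and parameter names here are illustrative assumptions, not the paper's method; in particular, sampling the possibilistic (interval-valued) input uniformly is a simplifying stand-in for a genuine possibilistic treatment.

```python
import random
import statistics

def simulate(x, y):
    """Hypothetical stand-in for a simulation model with two inputs."""
    return 3.0 * x + x * y

def monte_carlo_uq(model, n_samples=10_000, seed=42):
    """Propagate input uncertainty by repeatedly sampling the inputs
    and collecting summary statistics of the model output."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        x = rng.gauss(1.0, 0.1)    # input with a known probability distribution
        y = rng.uniform(0.5, 1.5)  # interval-valued input, sampled uniformly
                                   # (a simplification of possibilistic knowledge)
        outputs.append(model(x, y))
    return statistics.mean(outputs), statistics.stdev(outputs)

mean, spread = monte_carlo_uq(simulate)
```

The spread of the output sample is one crude quantification of total uncertainty; the holistic method the abstract refers to would treat the probabilistic and possibilistic contributions separately rather than folding both into one distribution.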
Preparing a dataset is a very important step in data mining. If the input to the process contains problems, noise, or errors, then the results will reflect this, as well. Not all possible combinations of the data should exist, as the data represent real-world observations. Correlation is expected among the variables. If all possible combinations were represented, then there would be no knowledge to be gained from the mining process.
In building a decision support system (DSS), an important component is the modeling of each potential alternative action to predict its consequence. Decision makers and automated decision systems (i.e., model-based DSSs) depend upon quality forecasts to assist in the decision process. The more accurate the forecast, the better the DSS is at helping the decision maker select the best solution. Forecasting is an important contributor to quality decision making, both in the business world and in engineering problems. Retail stores and wholesale distributors must predict sales in order to know how much inventory to have on hand. Too little can cause lost sales and customer dissatisfaction; too much can cause other inventory problems (e.g., cash flow, ad valorem tax). If the goods are perishable, excess stock almost certainly means a financial loss. Values that occur over time, such as the number of cars sold per day, the position of an airplane, or the price of a certain stock, are called "time series." When these values are forecast, the accuracy can vary depending on the data set and the method. This subject has been discussed at length in the literature, and many methods have been presented. Artificial neural networks (ANNs) have been shown to be very effective at prediction. Time series forecasting is based upon the assumption that the underlying causal factors are reflected in the lagged data values. Often, a complete set of the causal factors either is not known or is not available. Predictions are made on the theory that whatever has occurred in the near past will continue into the near future. Time series forecasting uses past values to try to predict the future. A slight modification of this concept is the application of recency: what happened more recently is closer to the current situation than the more distant past. The older data still contain knowledge; they are just not as important (or as correct) as the newest information.
Things change, life is dynamic, and what used to be may be no more, or may be so to a different extent. Modifying the training algorithm of a neural network forecaster to consider recency has been shown on real economic data sets to reduce residual error by as much as 50%, creating a more accurate model and thereby allowing better decision making.
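The recency idea above can be sketched as exponentially decaying sample weights in a forecaster's fit. This is a minimal stand-in, not the authors' neural-network modification: the AR(1) model, the closed-form weighted least-squares fit, the decay factor, and the toy series are all illustrative assumptions.

```python
def recency_weights(n, decay=0.9):
    """Exponentially decaying weights: the newest observation gets
    weight 1, each older one is discounted by `decay`."""
    return [decay ** (n - 1 - i) for i in range(n)]

def fit_weighted_ar1(series, decay=0.9):
    """Weighted least squares for y_t = a * y_{t-1} + b, where recent
    errors count more -- a simple proxy for recency-aware training."""
    xs, ys = series[:-1], series[1:]
    w = recency_weights(len(xs), decay)
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    cov = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys)) / sw
    var = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs)) / sw
    a = cov / var
    b = my - a * mx
    return a, b

series = [1.0, 1.1, 1.2, 1.35, 1.5, 1.7]  # toy upward-trending series
a, b = fit_weighted_ar1(series)
next_value = a * series[-1] + b  # one-step-ahead forecast
```

With `decay=1.0` this collapses to an ordinary unweighted fit; smaller decay values let the model track a drifting process at the cost of effectively shrinking the training set.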