Named Data Networking (NDN) is an emerging Internet architecture that addresses weaknesses of the Internet Protocol (IP). Since Internet users and applications have demonstrated an ever-increasing need for high-speed packet forwarding, research groups have investigated different designs and implementations of fast NDN data-plane forwarders and claimed that they were capable of achieving high throughput rates. However, the correctness of these claims is not supported by any verification technique or formal proof. In this paper, we propose a formal model-based approach to overcome this issue. We consider the NDN-DPDK prototype forwarder developed at the National Institute of Standards and Technology (NIST), which leverages concurrency to enhance overall quality of service. We use our approach to improve its design and to formally demonstrate that it can achieve high throughput rates.
Probability distribution fitting of an unknown stochastic process is an important preliminary step for any further analysis in science or engineering. However, it requires some background in statistics, prior consideration of the process or phenomenon under study, and familiarity with several distributions. This paper presents an alternative approach that requires neither prior knowledge of statistical methods nor prior assumptions about the available data. Instead, using deep learning, the best candidate distribution is extracted from the output of a neural network that was previously trained on a large suitable database to classify an array of observations into a matching distributional model. We find that our classifier performs this task comparably to maximum likelihood estimation combined with an Anderson-Darling goodness-of-fit test.
Statistical model checking (SMC) is a formal verification method that combines simulations with statistical techniques to provide quantitative answers, with controllable accuracy, on whether a stochastic system satisfies some requirements. SMC takes three inputs: a stochastic model, a linear-time/Metric Temporal Logic property to verify, and a set of required confidence parameters. The stochastic model is generally obtained by modeling the functional behavior of a system and then adding probabilistic variables to it, which are updated via probability distributions (PDs). The latter are typically obtained by analyzing measurements from the system's execution, using statistical tests to select the best-fit distribution. However, this task requires a good statistical background and familiarity with several distributions, which is beyond the expertise of some analysts. Hence, in the case of SMC, assuming an incorrect distributional model for the data can lead to inappropriate statistical analysis as well as inaccurate verification of the system under study. This paper therefore presents DeepFit, a tool that uses deep learning in addition to traditional statistics to automate the distributional modeling process. DeepFit was evaluated against synthetic data and real-world data, and it performs comparably to maximum likelihood estimation combined with Anderson-Darling, Kolmogorov-Smirnov, and probability plot correlation coefficient goodness-of-fit tests.
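The traditional baseline that both abstracts compare against (maximum likelihood fitting followed by a goodness-of-fit test) can be sketched as follows. This is a minimal illustration, not the papers' actual pipeline: the candidate families, sample data, and use of the Kolmogorov-Smirnov statistic as the ranking criterion are assumptions for the example.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: fit several candidate distributions to observed data
# by maximum likelihood, then rank them with a goodness-of-fit statistic.
rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.5, size=1000)  # stand-in observations

# Hypothetical candidate families; a real tool would consider many more.
candidates = {
    "norm": stats.norm,
    "expon": stats.expon,
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
}

scores = {}
for name, dist in candidates.items():
    params = dist.fit(data)  # maximum likelihood estimation
    # Kolmogorov-Smirnov statistic against the fitted CDF (lower = better fit)
    ks_stat, _ = stats.kstest(data, name, args=params)
    scores[name] = ks_stat

best = min(scores, key=scores.get)
print(best)
```

Note that re-using the same data for fitting and testing biases the test's p-values toward acceptance; in practice the statistic is used here only to rank candidates, which is also why a classifier trained offline (as in the abstracts above) is an attractive alternative.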