In this paper, we present concepts, theories, and an overview of knowledge management (KM) in autonomous optical networks, and of the Lambda Architecture in cloud-related environments. This study presents illustrative cases that have been used to demonstrate the potential application of a KM architecture and to evaluate various policies for the knowledge sharing and integration algorithm. Here, knowledge is used at the optical transponder system level, while sharing and integration are implemented at the node level and at the supervising and data analytics (SDA) controller level. The KM process has been evaluated by integrating it into a metro-network scenario, in terms of model error convergence time and the amount of data shared among agents. The propagation and reinforcement actions achieved convergence times similar to those of data-based policies at various phases of the network learning process, without compromising the convergence accuracy of the model prediction.

The Lambda Architecture is a new model and a focus of Big Data and database research that supports data processing with a balance among throughput, latency, and fault tolerance. No single tool provides a complete solution with good accuracy, low latency, and high throughput; this motivated the idea of combining a set of tools and methods to build a comprehensive Big Data approach. Although this paper does not provide a fully developed working tool, it outlines the methods researchers have used to overcome some of the shortcomings of the Lambda Architecture. The Lambda Architecture defines a set of layers into which tools and methods can be fitted to construct a comprehensive Big Data scheme: the Speed Layer, the Serving Layer, and the Batch Layer. Each layer satisfies a set of requirements and builds upon the functionality delivered by the layers beneath it. The Batch Layer is where the master dataset is stored: an immutable, append-only set of raw data.
The batch layer also precomputes results using a distributed processing system, such as Hadoop or Apache Spark, that can manage large amounts of data. The Speed Layer ingests and processes new data as it arrives in real time. The Serving Layer comprises a parallel query engine that takes results from both the Batch and Speed Layers and responds to queries in real time with low latency. Stack Overflow is a question-and-answer forum with an enormous user community and millions of posts, growing rapidly over the years. This paper demonstrates the Lambda Architecture by constructing a data pipeline that adds a new "Recommended Questions" section to the Stack Overflow user profile and updates the suggested questions in real time. Additionally, indicators such as trending tags and user performance metrics are shown in the user dashboard by querying the batch processing layer. Finally, this paper provides an overview of the various methods and techniques used to solve complex database problems on the Stack Overflow platform infrastructure.
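The three-layer split described above can be sketched with hypothetical in-memory stand-ins (the class names, the event format, and the trending-tags query are illustrative assumptions, not the paper's implementation; a real deployment would back the batch layer with Hadoop or Spark and the speed layer with a stream processor):

```python
from collections import Counter

class BatchLayer:
    """Immutable, append-only master dataset plus periodic full recomputation."""
    def __init__(self):
        self.master_dataset = []          # raw events; appended to, never mutated

    def append(self, event):
        self.master_dataset.append(event)

    def recompute_view(self):
        # Batch view: recomputed from scratch over the whole history
        # (a Hadoop/Spark job in a real deployment).
        return Counter(e["tag"] for e in self.master_dataset)

class SpeedLayer:
    """Incremental view over events that arrived since the last batch run."""
    def __init__(self):
        self.realtime_view = Counter()

    def process(self, event):
        self.realtime_view[event["tag"]] += 1

class ServingLayer:
    """Answers low-latency queries by merging the batch and real-time views."""
    def __init__(self, batch_view, realtime_view):
        self.batch_view = batch_view
        self.realtime_view = realtime_view

    def trending_tags(self, k):
        merged = self.batch_view + self.realtime_view   # Counter addition
        return [tag for tag, _ in merged.most_common(k)]

# Usage: historical events flow through the batch layer, fresh ones through speed.
batch = BatchLayer()
for tag in ["python", "python", "java"]:
    batch.append({"tag": tag})
batch_view = batch.recompute_view()

speed = SpeedLayer()
speed.process({"tag": "java"})
speed.process({"tag": "java"})

serving = ServingLayer(batch_view, speed.realtime_view)
print(serving.trending_tags(1))   # java: 1 batch + 2 real-time = 3 > python's 2
```

Because the batch view is rebuilt from the immutable master dataset on every run, any error in the speed layer is eventually corrected, which is the fault-tolerance argument behind the architecture.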
A theoretical concept of the GMDH (Group Method of Data Handling) technique, using non-linear regression models, multilayered neural networks, and model assessment and selection to determine prediction error as a function of the selected model's complexity, was reviewed and evaluated. The model selection was implemented and evaluated in MATLAB.
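The core of the model-selection idea, prediction error versus model complexity, can be sketched in Python rather than MATLAB (the synthetic data, the polynomial model family, and the holdout split are illustrative assumptions; full GMDH builds layered networks of partial polynomial descriptions, which this sketch reduces to choosing a polynomial degree by external validation error):

```python
import random

def design_matrix(xs, degree):
    return [[x ** d for d in range(degree + 1)] for x in xs]

def solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_poly(xs, ys, degree):
    # Least squares via the normal equations X^T X w = X^T y.
    X = design_matrix(xs, degree)
    n = degree + 1
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    Xty = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(n)]
    return solve(XtX, Xty)

def mse(w, xs, ys):
    return sum((sum(wi * x ** i for i, wi in enumerate(w)) - y) ** 2
               for x, y in zip(xs, ys)) / len(xs)

# Synthetic quadratic data with noise (illustrative assumption).
random.seed(0)
xs = [i / 10 for i in range(40)]
ys = [2 + 1.5 * x - 0.3 * x * x + random.gauss(0, 0.2) for x in xs]
train_x, val_x = xs[::2], xs[1::2]
train_y, val_y = ys[::2], ys[1::2]

# GMDH-style external criterion: error on data not used for fitting.
errors = {d: mse(fit_poly(train_x, train_y, d), val_x, val_y) for d in range(1, 6)}
best = min(errors, key=errors.get)
print(best, {d: round(e, 4) for d, e in errors.items()})
```

Validation error typically falls and then flattens or rises as degree grows, reproducing the prediction-error-versus-complexity curve the review examines; an under-complex (linear) model is penalized by systematic misfit rather than noise.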