Today, SDL and SystemC are two popular languages for embedded-systems modeling. SDL offers advanced features that make it well suited to representing multi-object systems and the interactions between modules, and it is also convenient for system model validation. SystemC models, in turn, are better suited to tracing the internal functions of the modeled modules. Combining the two languages therefore promises a number of benefits for researchers. This article discusses the integration of the SDL and SystemC modeling environments, the exchange of data and control information between SDL and SystemC sub-modules, and the real-time co-modeling aspects of the integrated SDL/SystemC system. The mechanisms of SDL/SystemC co-modeling are presented and illustrated with a case study of co-modeling embedded network protocols. The article gives an overview and description of a co-modeling solution for embedded network protocol simulation, based on the authors' experience and previous publications and research.
An algorithm has been developed for selecting n-word terms and forming a description of a subject area as a fuzzy ontology, based on the hybridization of linguistic and statistical methods of text analysis. The input of the algorithm is a set of text sequences in machine-readable form; its output is a description of the subject area in the form of a fuzzy ontology. Fuzzy ontology classes are assigned to the source text using several machine learning algorithms that combine data-preparation and prediction methods; bootstrap, bagging, and random forest methods were used to classify the text sequences. A distinctive feature of the developed algorithm is that ontology objects are represented mainly as single-word terms while maximizing the number of relations between objects.
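The ensemble step named above (bootstrap sampling plus bagging by majority vote) can be sketched in plain Python. This is a minimal illustration, not the article's algorithm: the toy labeled token sequences, the weak "stump" classifier, and all names are hypothetical.

```python
import random
from collections import Counter

# Illustrative toy data (not from the article): token sequences labeled
# with a hypothetical ontology class.
DATA = [
    (["fuzzy", "ontology", "class"], "ontology"),
    (["ontology", "relation", "object"], "ontology"),
    (["bootstrap", "sample", "vote"], "ensemble"),
    (["bagging", "vote", "sample"], "ensemble"),
]

def train_stump(sample):
    """For each class in a bootstrap sample, remember its most frequent token."""
    per_class = {}
    for tokens, label in sample:
        per_class.setdefault(label, Counter()).update(tokens)
    return {label: counts.most_common(1)[0][0] for label, counts in per_class.items()}

def predict_stump(stump, tokens):
    """Vote for the first class whose indicative token occurs in the sequence."""
    for label, token in stump.items():
        if token in tokens:
            return label
    return next(iter(stump))  # weak fallback when no indicative token matches

def bagging_predict(data, tokens, n_estimators=15, seed=1):
    """Bagging: train each weak stump on a bootstrap sample, take the majority vote."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_estimators):
        sample = [rng.choice(data) for _ in data]  # bootstrap: sampling with replacement
        votes[predict_stump(train_stump(sample), tokens)] += 1
    return votes.most_common(1)[0][0]
```

A random forest follows the same bagging scheme but additionally restricts each weak learner to a random subset of features; here a single bagged stump ensemble stands in for both.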
The article describes the tasks of data mining and data modeling, presents a classification of data models, and develops a formal model for representing various types of weakly structured and unstructured information resources. Within the software environment, this formal model provides the theoretical basis for building tools that solve the applied problem of automatically analyzing documentation on paper and digital media and subsequently cataloging poorly structured information.
The article gives a formal description of the four ontologies in the knowledge bases of the software environment: the ontology of the problem area, the linguistic ontology, the ontology of precedents, and the ontology of rules for analyzing natural-language texts. When developing the knowledge base of the software environment, it is necessary to formulate requirements for the knowledge representation model, i.e., the ontology. A unified environment for the semantic analysis of flows of weakly structured information, implementing modern intelligent algorithms for processing textual information, will greatly facilitate decision making by a specialist working under time constraints, because the question-answer system can draw on a single unified bank of expert knowledge; it will also enable automated semantic verification of information flows in order to ensure the information security of an organization.
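How the four knowledge-base ontologies might be held together in code can be sketched as follows; all class names, field names, and concepts here are hypothetical illustrations, not the article's formal model.

```python
from dataclasses import dataclass, field

@dataclass
class Ontology:
    """A named ontology: concepts mapped to the concepts they relate to."""
    name: str
    concepts: dict = field(default_factory=dict)

    def add(self, concept, relations=()):
        self.concepts[concept] = list(relations)

@dataclass
class KnowledgeBase:
    """Container for the four ontologies named in the abstract."""
    problem_area: Ontology
    linguistic: Ontology
    precedents: Ontology
    text_rules: Ontology

kb = KnowledgeBase(
    problem_area=Ontology("problem area"),
    linguistic=Ontology("linguistic"),
    precedents=Ontology("precedents"),
    text_rules=Ontology("text analysis rules"),
)
# Example entry: a linguistic-ontology term related to its synonym and hypernym.
kb.linguistic.add("term", ["synonym", "hypernym"])
```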
The article describes the tasks of an optical recognition system for tabular documents and develops a formal model of the optical recognition mechanism for such documents. The aim of the study is to use the developed system for automatic analysis of platform-type documentation as an expert and training tool supporting decision making, semantic analysis, or audits of any content, including in other industries, based on the principle of automatically filling ontologies. The proposed system overcomes this barrier in a fundamental way: the database is created automatically. The user's role is reduced to choosing the sources for filling the ontologies (thesauri, corpora of texts in machine-readable or natural language, etc.) and configuring the inference rules, i.e., essentially to configuration operations.
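The principle of automatically filling an ontology from a recognized table can be illustrated with a small sketch: each recognized row becomes a set of (subject, attribute, value) facts. The header, rows, and function below are purely hypothetical examples, not the article's model.

```python
# Hypothetical output of table recognition: a header row and data rows.
HEADER = ["device", "manufacturer", "protocol"]
ROWS = [
    ["sensor-01", "AcmeCorp", "Modbus"],
    ["gateway-02", "AcmeCorp", "MQTT"],
]

def rows_to_triples(header, rows):
    """Turn recognized table rows into ontology facts (subject, predicate, object).
    The first column is taken as the subject identifier; the remaining column
    names become predicates."""
    triples = []
    for row in rows:
        subject = row[0]
        for attribute, value in zip(header[1:], row[1:]):
            triples.append((subject, attribute, value))
    return triples
```

Under this sketch the two rows above yield four facts, e.g. `("sensor-01", "manufacturer", "AcmeCorp")`, which could then be loaded into an ontology store and checked by configured inference rules.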