Today's sensors and analysis systems produce huge amounts of data, and one of the main challenges is enabling users to find relevant data in time. Software systems supporting queries must provide suitable filter mechanisms through which the full dataset can be reduced to a manageable amount of relevant data. The corresponding filter criteria can be chained together into Boolean expressions using disjunctions and conjunctions, and the resulting hierarchical structures can be further grouped using parentheses.

The practical use case presented in this publication is a web application for accessing potentially large data sets via a defined set of metadata-based catalogue entries. The application currently supports the specification of filter criteria, which the user can concatenate through a single global conjunction in a flat hierarchy. The underlying query language, however, supports more complex queries using any combination of conjunctions, disjunctions and brackets. There is a user requirement to extend the expressiveness of client search queries so that the full scope of the query language can be leveraged meaningfully. One open problem is how such complex search queries can be structured and visualized graphically in a clear and comprehensible way.

Different approaches exist for the graphical visualization of program code and query languages, and some of them also support graphical editing of these representations by the user. Examples of such frameworks and tools are Blockly, Scratch and Node-RED. In this publication, we present an analysis of the applicability of such frameworks in the existing web application. For this purpose, we identify operational constraints and exclusion criteria that must be fulfilled for use in our application. This results in the selection of a framework for a future implementation.
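The chaining of filter criteria described above can be illustrated as a small expression tree. The following Python sketch is a minimal illustration under assumed names; the field names, operators and evaluation logic are hypothetical and not taken from the application or its query language. It shows how conjunctions, disjunctions and parenthesized groups map onto nested nodes:

```python
# Illustrative sketch: filter criteria combined into a Boolean expression tree.
# All field names and example values are hypothetical.

from dataclasses import dataclass
from typing import Any, Callable, Dict, List

Record = Dict[str, Any]

@dataclass
class Criterion:
    """A single filter criterion, e.g. type == "image"."""
    field: str
    predicate: Callable[[Any], bool]

    def matches(self, record: Record) -> bool:
        return self.predicate(record.get(self.field))

@dataclass
class And:
    """Conjunction of sub-expressions."""
    children: List[Any]

    def matches(self, record: Record) -> bool:
        return all(c.matches(record) for c in self.children)

@dataclass
class Or:
    """Disjunction of sub-expressions; parentheses in the textual
    query correspond to nesting And/Or nodes inside each other."""
    children: List[Any]

    def matches(self, record: Record) -> bool:
        return any(c.matches(record) for c in self.children)

# Hypothetical query: (type == "image" AND size > 100) OR type == "track"
query = Or([
    And([Criterion("type", lambda v: v == "image"),
         Criterion("size", lambda v: v is not None and v > 100)]),
    Criterion("type", lambda v: v == "track"),
])

records = [{"type": "image", "size": 250},
           {"type": "image", "size": 50},
           {"type": "track", "size": 10}]
hits = [r for r in records if query.matches(r)]
print(len(hits))  # 2 records match
```

A flat hierarchy with one global conjunction, as in the current application, corresponds to a single `And` node over plain criteria; the full query language additionally allows arbitrary nesting, which is exactly the structure a block-based editor would have to visualize.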
Today's ISR (Intelligence, Surveillance and Reconnaissance) defense coalitions require storage and dissemination mechanisms that can cope with emerging changes to requirements and new features. Previous system-of-systems (SoS) architectures used to be built with years of planning, development, testing and deployment, usually in the form of distributed monoliths. New requirements in ISR, however, demand shorter response cycles. To reach this goal, new approaches to the architectural style and to workload sharing within the development team are of interest, resulting in the ability to better maintain and change existing software solutions. Ideally, such a shift improves scalability, replaceability, modularity and resilience.

In this context we examined our existing software, which provides and also internally uses legacy middleware such as the "Common Object Request Broker Architecture" (CORBA), among others. The overall codebase was written in a manner that was easy to produce, i.e., technically motivated. The development team is rather small, so efficiency and the ability to share developer knowledge are important.

Our goal was to evaluate the state of the art in order to reasonably apply modern software development approaches while retaining mandatory legacy support. We attempted a restructuring of the codebase applying the principles of "Domain-Driven Design" with its "bounded contexts", resulting in domain-oriented source code that is easy to verify and maintain. Keeping our small development team in mind, we aimed for shared responsibility, giving us the necessary resilience against unplanned staff absence.

In this publication, we present a possible migration path, with its operational constraints (e.g., legacy interfaces), towards a more suitable software solution, along with the lessons learned during the process. In addition, we outline how this was achieved with a small headcount.
Nowadays, ever larger amounts of data are being generated, processed and linked. This makes it possible to share data with other people or communities, to work collaboratively and to evaluate data jointly. Depending on the use case, environment and domain, different aspects must be considered regarding data security, availability, data protection, etc.

In the military environment, a concept and derived specifications for data distribution were standardized as STANAG 4559 and are already in operational use. The advantages of such a solution can also be of interest to other domains with similar needs.

A possible use case lies in the context of research data. Especially in areas where huge amounts of data with specific features are needed, it is often difficult to access (enough) research data, and as a result the outcome of the research is of limited quality. Since every research institution creates its own data, it would be helpful to have a way to share data and information in a standardized manner. Aggregating the data of individual institutions results in a joint data pool. Research based on such a data pool can be more (cost-)efficient, its quality increases due to the broader data sets, and techniques such as anomaly detection could be strengthened. We present a concept for using a military data distribution standard for civil applications by defining data model extensions and considering security aspects, as well as obstacles that may arise from the military character and inflexibility of the standard and its data model.