In today's operations, interoperability is the key to achieving knowledge superiority. Budget restrictions, together with the complexity and multiplicity of threats and the fact that not single nations but whole regions are subject to attacks, force nations to collaborate and share information as appropriate. Multiple data and information sources produce different kinds of data, real-time and non-real-time, in different formats that are disseminated to the respective command and control level for further distribution. This data is usually highly sensitive and restricted in terms of sharing. The question is how to make it available to the right people at the right time with the right granularity. The Coalition Shared Data concept aims to provide a solution to these questions. It has been developed within several multinational projects and has evolved over time. A continuous improvement process was established and resulted in adaptations of the architecture as well as of the technical solution and the processes it supports. Starting from the idea of using existing standards, sharing data through standardized interfaces and formats, and enabling metadata-based queries, the concept was merged with a more sophisticated service-based approach. This paper addresses concepts for information sharing that facilitate interoperability between heterogeneous distributed systems. It introduces the methods that were used and the challenges that had to be overcome. Furthermore, it gives a perspective on how the concept could be used in the future and what measures have to be taken to successfully bring it into operations.
Today's ISR (Intelligence, Surveillance and Reconnaissance) defense coalitions require storage and dissemination mechanisms that can cope with emerging changes to requirements and with new features. Previous system-of-systems (SoS) architectures used to be built with years of planning, development, testing, and deployment, usually in the form of distributed monoliths. New requirements in ISR demand shorter response cycles. To reach this goal, new approaches to architectural style and to workload sharing within the development team are of interest, resulting in the ability to better maintain and change existing software solutions. Ideally, such a shift results in improved scalability, replaceability, modularity, and resilience. In this context, we examined our existing software, which provides and also internally uses legacy middleware such as the Common Object Request Broker Architecture (CORBA), among others. The overall codebase was written in a manner that was easy to produce, i.e., technically motivated. The development team is rather small, so efficiency and the ability to share developer knowledge are important. Our goal was to evaluate the state of the art in order to reasonably apply modern software development approaches while retaining mandatory legacy support. We attempted a restructuring of the codebase applying the principles of Domain-Driven Design with its bounded contexts, resulting in domain-oriented source code that is easy to verify and maintain. Keeping our small development team in mind, we aimed for shared responsibility, giving us the necessary resilience for unplanned staff absence. In this publication, we present a possible migration path, with its operational constraints (e.g., legacy interfaces), towards a more suitable software solution, and the lessons learned during the process. In addition, we outline how this was achieved with a small headcount.
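The combination described above, a bounded context with its own domain model shielded from legacy middleware, is commonly realized with an anti-corruption layer. The following minimal Python sketch illustrates that pattern in general terms; the record layout, field names, and `Track` domain type are illustrative assumptions, not the project's actual CORBA interfaces or data model.

```python
from dataclasses import dataclass

# Hypothetical flat record as it might arrive over a legacy (e.g. CORBA-style)
# interface: untyped, keyed by wire-format field names.
legacy_record = {"trk_id": "T-042", "lat_deg": 52.52, "lon_deg": 13.40}

# --- Domain model owned by one bounded context (e.g. "Tracking") ---
@dataclass(frozen=True)
class Position:
    latitude: float
    longitude: float

@dataclass(frozen=True)
class Track:
    track_id: str
    position: Position

# --- Anti-corruption layer: translates the legacy wire format into the
# context's own model, so legacy naming and structure never leak inward. ---
def track_from_legacy(record: dict) -> Track:
    return Track(
        track_id=record["trk_id"],
        position=Position(latitude=record["lat_deg"],
                          longitude=record["lon_deg"]),
    )

track = track_from_legacy(legacy_record)
```

The point of the translation function is that the bounded context can evolve its model independently while the legacy interface remains supported at the boundary, which matches the migration constraint described in the abstract.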