ESA's interplanetary missions Rosetta, Mars Express and Venus Express share common on-board software, which includes an On-board Control Procedure (OBCP) execution system. These OBCPs are managed on-board as binary files stored in the Mass Memory and in the central computer RAM, possibly in several copies for redundancy. On ground, the source files and compiled output have historically been stored in a variety of places, from version control tools to shared network drives. The missions jointly identified a need to consolidate into a unified system, capable of supporting the different approaches to OBCPs adopted by each mission. In parallel to the configuration management aspects, the development of new OBCPs has migrated to a procedure-based methodology, which allows OBCPs to be created through an interface consistent with the procedure development tools already familiar to ground controllers. The intention is for spacecraft operations engineers to be able to generate new OBCPs without any particular programming skill. Although developable without a software background, OBCPs modify the spacecraft's on-board autonomy, with major functional and potentially critical impact. The new configuration management system therefore needs to integrate with the procedure generation tool and enable rapid end-to-end development while maintaining the strict version control required of on-board software. This paper presents the requirements derived for such a multi-mission system, the implemented solution common to the ESA planetary missions, and the lessons learned.
Modern space missions have increasingly complex operations, both on-board and on ground. Mars Express is no exception, with frequent special science operations and a bespoke plan for each individual orbit. While well-validated and capable planning systems are the basis on which such operations are built, routinely validating the output of those systems is an important component of safe and successful operations. In this sense we never stop validating on Mars Express, and by doing so we are able to ensure a high level of success and to minimise and catch errors long before they reach the spacecraft or the ground station. To do this for complex and varied operations we employ a multi-layered approach featuring both manual and automatic checks. This paper describes what these different levels of checking entail and how decisions are made on whether or not to automate. This is a dynamic process, with automation consistently introduced wherever it proves more efficient and cost-effective than manual checking. One of the major new introductions on Mars Express is a highly configurable state-machine engine that is flexible enough to perform a wide range of checks that have traditionally been manual. The design of this tool and its potential are explored in the paper. As well as the methods and types of checks employed on Mars Express, the paper goes into some detail on the underlying reasons why we consider careful validation of routine operations products to be key to a safe and successful mission. The paper details the importance of independent checking: validation must be separate from generation, whether manual or automated. We also discuss the balance between the delivery validation of a system that will produce ground and space commanding products and the continued validation of the products that system produces even after acceptance.
Not only is this continued verification valuable in a complex operations environment, it is also important for eliminating sources of error not covered by delivery validation, including human error and the use of the system beyond its original design purpose. Through all of these factors, this paper presents the routine operations practice on Mars Express of constant validation of products and how it has ensured that such a complex mission can be conducted safely and efficiently.
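The configurable state-machine engine described above is not publicly specified, but the idea of a rule-driven checker can be sketched as follows. All class, state and event names here are hypothetical illustrations, not the actual Mars Express configuration:

```python
# Hypothetical sketch of a configurable state-machine checker: a table
# of (state, event) -> next_state transitions is loaded as configuration;
# any event with no defined transition from the current state is flagged
# as a violation instead of raising, so a whole product can be scanned.

class StateMachineChecker:
    def __init__(self, transitions, initial):
        self.transitions = transitions   # {(state, event): next_state}
        self.state = initial
        self.violations = []

    def feed(self, event):
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        else:
            self.violations.append((self.state, event))

# Example rule set: the transmitter must be ON before a downlink starts.
rules = {
    ("TX_OFF", "TX_ON_CMD"): "TX_ON",
    ("TX_ON", "DOWNLINK_START"): "DOWNLINKING",
    ("DOWNLINKING", "DOWNLINK_STOP"): "TX_ON",
    ("TX_ON", "TX_OFF_CMD"): "TX_OFF",
}

checker = StateMachineChecker(rules, "TX_OFF")
for ev in ["TX_ON_CMD", "DOWNLINK_START", "DOWNLINK_STOP", "TX_OFF_CMD"]:
    checker.feed(ev)
print(checker.violations)  # [] - this command sequence is valid
```

Because the transition table is pure configuration, new checks of this kind can be added without changing the engine, which is what makes such a tool attractive for replacing manual inspections.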
Since 2011 Mars Express has been flying a truly "file-based" concept for commanding the spacecraft and science operations, after a hardware anomaly forced a change from the previous "on-board schedule of commands" approach. As a consequence of the anomaly, new concepts had to be developed and applied, not just to the technologies of file transfer and file management, which have received much attention over the years, but also to answer the questions: What are file-based operations? What goes in the files? How are they practically used, both on ground and on board? ESA's first generation of deep space missions implement a packet-based large file transfer protocol that enables "files" to be transmitted from ground with guaranteed completeness on-board. However, this is a transport-layer protocol and does not address how the files are used operationally. Similar proposed standards (such as the CCSDS File Delivery Protocol) also do not address the functionality of the files, and so by themselves do not necessarily lead to file-based operations. The Mars Express approach relies on seeing spacecraft operations not as a stream of commands, relayed from a mission planning system to the spacecraft via files, but as a collection of discrete activities, with one commanding file per activity. The contents of each file are the responsibility of the science planners, who combine elementary sequences, following rules defined by the spacecraft/payload operations engineers, to construct self-contained, fail-safe sets of commands. For example, one science observation activity includes power, thermal, data-link and instrument configuration changes that can be combined into one file (of telecommands, as an On-board Control Procedure, or a combination of both). Rather than individually scheduling the low-level commands, the activity file is simply executed at the correct time. If the observation needs to be modified or cancelled, the file is replaced or deleted.
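The "one commanding file per activity" idea can be illustrated with a minimal sketch. Names and command mnemonics below are hypothetical, and the store stands in for the on-board mass memory:

```python
# Hypothetical illustration of file-based operations: each activity is a
# self-contained file of commands, managed (uploaded, replaced, deleted,
# executed) as a whole rather than as individually scheduled commands.

from dataclasses import dataclass, field

@dataclass
class ActivityFile:
    name: str
    commands: list = field(default_factory=list)  # ordered telecommands

class OnboardStore:
    """Stand-in for the mass-memory file store."""
    def __init__(self):
        self.files = {}

    def upload(self, activity):
        # A replacement observation simply overwrites the old file.
        self.files[activity.name] = activity

    def delete(self, name):
        # Cancelling an observation is just deleting its file.
        self.files.pop(name, None)

    def execute(self, name):
        # One scheduled "execute file" command releases the whole activity.
        return list(self.files[name].commands)

store = OnboardStore()
obs = ActivityFile("OBS_001",
                   ["PWR_ON", "HTR_SET", "INSTR_CFG", "INSTR_OBS", "PWR_OFF"])
store.upload(obs)
print(store.execute("OBS_001"))
```

Note how the activity file starts and ends in a safe configuration (power on, observe, power off), which is what makes replacing or deleting it at any time fail-safe.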
Spacecraft operations can conveniently be abstracted to the simpler paradigm "which file to execute, and when?", the low-level constraint and resource checking having already been performed at mission-planning level when the activity was planned. The ratio of commands-in-files to scheduled "execute" commands (the amount by which operations are "compressed" by considering the higher-level "activity" file rather than the low-level commands) is about 30:1, but is limited by available memory resources. Future missions with larger memories should do better, with fewer files containing more complex operations, resulting in even more abstracted operations overall. Mars Express has demonstrated a use case and implementation of file-based operations with regard to spacecraft commanding, but this is only half of the picture. The return of science data and housekeeping telemetry remains based on "packet stores". Packet stores are akin to looped magnetic tapes, in that the data is stored in the order in which it was written. For Mars Express this is not usually a pro...
Mars Express has been in orbit around Mars since 2003, relying on the Solid State Mass Memory (SSMM) to hold a "mission timeline" (MTL) of 3000 commands, refreshed daily, to execute the mission. The MTL schedules transmitter switching, spacecraft pointing and instrument operations. The original operations concept called for the MTL to be kept as fully loaded as possible, as early as possible. In 2011, Mars Express suffered five SSMM-related anomalies, three of which put the spacecraft into safe mode. The safe modes were caused by the MTL's inability to refill its cache of commands, due to the SSMM anomalies. As each safe mode expends roughly six months' worth of fuel, the decision was taken to halt science and non-Earth-pointing operations. Another MTL, independent of the SSMM, exists in processor RAM. This "short MTL" has space for only 117 commands. A new concept, File-based Activities with Short Timeline (FAST), was devised to restart science operations as soon as possible. The core of the concept relies on storing commands in the SSMM in discrete files that each contain an entire "activity". Critically, an activity always starts and ends with the spacecraft in a safe configuration. These command files, always fewer than 100 commands, are loaded into the short MTL in a just-in-time scheme via "trigger" commands in the MTL, with the activity commands activated only if the file loads completely. This all-or-nothing approach provides robustness against possible further SSMM anomalies by preventing them from causing a safe mode. This paper presents the new operations concept, the additional safety mechanisms, the implementation approach on-board and on-ground, the challenges for team and knowledge management, and the achieved performance of the so-rehabilitated mission.
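The all-or-nothing load at the heart of FAST can be sketched as a transaction: the short MTL is modified only if the whole activity file arrives intact and fits. This is an illustrative model, not the actual on-board implementation, and the capacity figure is the only number taken from the text:

```python
# Hypothetical sketch of FAST's transaction-like file load: activity
# commands reach the 117-slot short MTL only if the complete file reads
# back intact from the SSMM; a partial or failed load changes nothing,
# so a transient SSMM error cannot corrupt the timeline or trigger a
# safe mode - the affected activity is simply skipped.

SHORT_MTL_CAPACITY = 117

def load_activity(short_mtl, file_commands, checksum_ok):
    """Append an activity's commands atomically; return True on commit."""
    if not checksum_ok:
        return False  # SSMM read error: abort, MTL untouched
    if len(short_mtl) + len(file_commands) > SHORT_MTL_CAPACITY:
        return False  # would overflow the short MTL: abort
    short_mtl.extend(file_commands)  # commit only after full validation
    return True

mtl = []
ok = load_activity(mtl, ["CMD_%d" % i for i in range(40)], checksum_ok=True)
bad = load_activity(mtl, ["CMD_X"] * 90, checksum_ok=True)  # would overflow
print(ok, bad, len(mtl))  # True False 40 - the failed load changed nothing
```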
Mars Express has been orbiting Mars since December 2003, utilising the Solid State Mass Memory (SSMM) to store commanding, telemetry and science data. Following an anomaly with the SSMM in August 2011, the Mars Express team developed a new operations concept to work around the problem, using a smaller in-memory Mission Timeline (MTL) that is unaffected by transient SSMM errors, whilst using the SSMM to store command files that refill the in-memory MTL in a just-in-time manner, with a transaction-like approach that ensures operational safety. The "File-based Activity with Short Timeline" (FAST) concept enabled Mars Express to return to full operations within a matter of months. The technique of using command files to refill the MTL required the scheduling of "Execute TC File" commands ("triggers") within the MTL itself, blocking around 30 to 40 of the 117 command slots available. Furthermore, the FAST concept meant that the MTL only contained the bare minimum number of commands for the upcoming operations and gave no visibility over the longer weekly "commanding period" (CP), which is the basis of the mission planning concept. Mars Express is an "offline" mission with no real-time commanded operations in routine flight, and while FAST allowed the whole CP to be loaded in one go, the upload of triggers remained a daily operation, requiring a delicate balance between "not too many" and "not too few" loaded in advance, sensitive to the schedule of ground station passes during the day. Managing the uplink of these trigger commands created an operations overhead to ensure sufficient margin to deal with anomalies during station passes and various on-board constraints. An On-board Control Procedure (OBCP) was therefore written to act as an additional MTL, tasked with the scheduling of the triggers.
It provides a File Execution Scheduler with which a CP's worth of files can be managed, freeing up the reserved space in the MTL and separating the CP schedule from the low-level contents of the activities: the actual unit commands themselves. This requires an OBCP to be running all the time, and one of greater complexity than had been attempted before, with tight performance constraints determined by its role as the "mission activities scheduler". Implementation required four distinct strands of development: mission planning; mission control system; on-board software; and flight control procedures and rules. The mission planning system was required to plan at two distinct levels: the spacecraft activities and the schedule for executing the files. Changes to the MCS were required to implement a parallel On-board Queue Model and to trap and process the specific OBCP management commands. The biggest challenge was the OBCP itself and the management of the associated changes to telemetry modes and system resources that accompanied it. Finally, new flight control procedures had to be written and validated for the installation and use of the new OBCP, and routine operational rules were redefined to enforce the revised conc...
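The File Execution Scheduler concept described above can be sketched as a time-ordered queue of (execution time, file) entries that replaces the per-file trigger commands in the MTL. This is a ground-side illustration with hypothetical names, not the real OBCP, which runs on-board under its own command interface and performance constraints:

```python
# Hypothetical sketch of a File Execution Scheduler: a whole commanding
# period's schedule is held as (execution_time, file_name) entries in a
# min-heap, so the trigger slots previously reserved in the MTL are freed
# and an activity can be added or cancelled without touching its contents.

import heapq

class FileExecutionScheduler:
    def __init__(self):
        self.queue = []  # min-heap ordered by execution time

    def add(self, exec_time, file_name):
        # Load one entry of a CP's worth of scheduled activities.
        heapq.heappush(self.queue, (exec_time, file_name))

    def remove(self, file_name):
        # Cancel a planned activity without disturbing the rest.
        self.queue = [e for e in self.queue if e[1] != file_name]
        heapq.heapify(self.queue)

    def due(self, now):
        """Pop and return the files whose execution time has arrived."""
        released = []
        while self.queue and self.queue[0][0] <= now:
            released.append(heapq.heappop(self.queue)[1])
        return released

sched = FileExecutionScheduler()
sched.add(100, "OBS_A")
sched.add(200, "OBS_B")
sched.add(150, "DUMP_1")
print(sched.due(160))  # ['OBS_A', 'DUMP_1']
```

The separation matters operationally: the schedule (this queue) and the activity contents (the files) can now be updated independently.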
ESA's Solar Orbiter mission, scheduled for launch in 2017, will enter an elliptical orbit around the Sun with a perihelion of 0.3 AU and an inclination increasing to up to 35°. Three ten-day "remote sensing windows" will be centred on the closest, most northern and most southern points of each 160-day orbit. During these windows, remote sensing instruments will peer through slots in the spacecraft's heat shield to observe the evolution of solar features, seeing only a small fraction of the solar disk. Because the movement of these features is difficult to model, they can migrate out of an instrument's field-of-view within about three days. It is therefore mandatory for Solar Orbiter to implement ground-based feature tracking as part of the science planning process. With ground- and Earth-orbit-based observatories not always able to observe the same part of the Sun as Solar Orbiter, the instruments themselves will need to provide data that the science planning team can use to select and track the path of features across the solar surface. A subset of "quick look" data will need to be defined which is sufficiently detailed to enable analysis of the movement, yet small enough to be downlinked completely in one ground station pass. Rapid processing by the science ground segment, located at ESAC in Spain, will be critical to turn this "quick look" data into data sets that can be analysed and form the basis of spacecraft pointing requests. Flight Dynamics will be required to check and convert these pointing requests into spacecraft commands forming a complete chain of attitude segments, one for each day of the science window, such that a complete and coherent guidance profile is always available to the spacecraft. Finally, the uplink to the spacecraft must be performed on a daily basis, in such a way as to minimise disruption to on-going science observations.
Furthermore, there is the question of how this can actually be achieved operationally in a safe manner. What if a ground-station pass is lost? How do we prioritise the downlink of the quick-look data? How do we uplink the new guidance profile safely? How do we transition from one guidance-profile segment to the next smoothly, without interrupting on-going observations? The remote sensing
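The sizing constraint on the quick-look subset, "detailed enough to analyse, small enough for one pass", reduces to simple link-budget arithmetic. Every number in this sketch is an illustrative placeholder, not an actual Solar Orbiter figure:

```python
# Illustrative downlink-budget check for a "quick look" data subset:
# the product must fit within a single ground-station pass, with some
# margin held back for protocol overhead and retransmissions. All
# figures are hypothetical, not actual Solar Orbiter values.

def fits_in_pass(volume_mbit, rate_kbps, pass_hours, overhead=0.2):
    """Return True if volume_mbit fits in one pass at rate_kbps."""
    usable_seconds = pass_hours * 3600 * (1 - overhead)
    capacity_mbit = rate_kbps * usable_seconds / 1000
    return volume_mbit <= capacity_mbit

# e.g. 400 Mbit of quick-look imagery over an assumed 150 kbit/s,
# 8-hour pass: capacity is 150 * 23040 / 1000 = 3456 Mbit, so it fits.
print(fits_in_pass(400, 150, 8))  # True under these assumed numbers
```

In practice this budget is what drives the definition of the quick-look subset: image cadence, resolution and compression must be chosen so the volume stays on the right side of this inequality even for a degraded pass.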