The process of implementation and upgrading has been more evolutionary than revolutionary, but the advances within each generation and from one generation to the next have nevertheless been considerable.
A spacecraft control system is used to operate a spacecraft from the ground. The more general term 'Mission Control System' (MCS) is more common these days and will be used throughout this paper. The MCS covers the needs of the whole mission, including support for preparing operations in addition to the spacecraft operations themselves; it can also cover the ground-system operations.
The MCS consists of a computer system connected to one or more ground stations, which are responsible for communication with the spacecraft. Via these ground stations, the MCS receives telemetry data from the spacecraft, which it uses to monitor the spacecraft's health. The MCS controls the spacecraft by sending it telecommands, which are in effect instructions to the spacecraft. An MCS thus operates on the same principles as a process control system, in which the process is monitored via readouts from sensors and controlled via commands to the process. The telemetry data contain so-called 'house-keeping' parameters; typically these are regularly sampled onboard the spacecraft to provide information about its subsystems. These parameters can contain analogue values, e.g. battery charges and currents, temperatures of particular components, or binary values, e.g. an on/off indication for an onboard experiment.
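The extraction of house-keeping parameters can be illustrated with a minimal sketch. The frame layout, parameter names, and calibration factors below are invented for illustration only; in a real MCS these definitions come from the mission database, not from hard-coded values.

```python
# Illustrative sketch only: decoding a few house-keeping parameters from
# a (hypothetical) telemetry frame. Layout and calibrations are assumptions.
import struct

def decode_housekeeping(frame: bytes) -> dict:
    """Extract sample analogue and binary parameters from a frame."""
    # Two 16-bit raw analogue counts followed by a status byte (big-endian).
    battery_raw, temp_raw, status = struct.unpack_from(">HHB", frame, 0)
    return {
        # Analogue values: raw counts converted by a linear calibration.
        "battery_voltage_V": battery_raw * 0.001,   # assumed 1 mV/count
        "temperature_C": temp_raw * 0.1 - 273.2,    # assumed calibration
        # Binary value: a single on/off status bit for an experiment.
        "experiment_on": bool(status & 0x01),
    }

frame = struct.pack(">HHB", 28000, 2982, 0x01)
print(decode_housekeeping(frame))
```

The same principle scales up: the monitoring function simply applies database-defined extraction and calibration rules to every parameter in every incoming frame.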
The classical core functions of a Mission Control System are:
The above description is somewhat simplified, but covers the basic principles.
One of the challenges of building a Mission Control System is to provide the above functions in a user-friendly and easily configurable way. In addition, performance and reliability are major challenges:
MSSS first generation
The very first reusable MCS was the first-generation MSSS, put into service in 1976 for the Geos-1 scientific satellite mission.
With the processing power of conventional computers at that time, the required functions and performance could not be provided by a single unit but had to be distributed over several computers. A combination of Siemens-330 minicomputers (front-ends) and CII-10070 mainframes (back-ends) was employed, using the architecture shown in Figure 1.
Figure 1. The first-generation MSSS
This network of computers was expensive and, to be cost-effective, had to be able to support several missions in parallel. To achieve this, it was decided that the software should be data-driven, configured by files describing the spacecraft characteristics and the configuration of the control system itself.
The first MSSS was made up of the following computers:
Because of the limitations of the computers of that time (the Siemens 330 had only 64 kbytes of main memory!), the system had to be written in assembly language to achieve efficiency and compactness. The communication techniques were engineered in-house. Consequently, the system was expensive in terms of testing and maintenance effort. The system was able to support a total telemetry rate of 60 kbit/s for three to four spacecraft concurrently, with up to 15 single-screen displays.
Given the complexity of the above system, and the advent of much more powerful computers, it was decided in the late 1970s to use a single computer to host all of the MSSS software. This had the advantage of simpler software and a simpler backup procedure. Because missions were still being operated using the existing system, however, the move to the new system was made in two steps:
Figure 2. MSSS-A
At the completion of each step, the old system and the new system were run in parallel on currently operational missions, thereby allowing extensive testing of the new system and building user confidence.
The final MSSS-A shown in Figure 2 heralded another interesting advance in that it used a three-screen work station based on an Intel microprocessor, replacing the 'dumb' terminals commonly in use at that time. Much of the screen data presentation processing was performed on these work stations, thereby relieving the host application of the burden of formatting the screens. These Intel work stations were connected to the host with a serial V24 interface via a switch panel, thus allowing reconfiguration of work stations between host computers.
The performance of MSSS-A was much better than that of the first-generation MSSS. In particular, it was able to support simultaneous operation of up to 14 three-screen work stations (roughly a three-fold improvement). The overall telemetry rate supported was still about 64 kbit/s.
Advances of MSSS
The spacecraft database
Undoubtedly, the towering achievement of MSSS was its development of the concept of a spacecraft database (although it was not called that at the time), whereas earlier ESOC control systems had all been mission-specific. MSSS made extensive use of 'table-driven' techniques to define the telemetry and telecommand characteristics of the missions. This included the specification of length and type of parameters, essential for an infrastructure usable for different spacecraft. In effect, it used data-description techniques when such approaches were little known. The contents of the various displays and other system parameters were also defined in the database, laying the foundations for the concept of a Mission Information Base containing all static mission data. In practice, the first MSSS 'database' was a simple sequential text file (it was originally a card file!) which was then converted into tables that could be used efficiently at run-time.
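The conversion of a sequential text description into efficient run-time tables can be sketched as follows. The record format and field names here are invented for illustration; the actual MSSS database format is not specified in this article.

```python
# Hedged sketch of the 'table-driven' idea: a sequential text description
# of telemetry parameters is converted into tables usable at run-time.
# The record layout and field names below are assumptions, not the real format.

RAW_DB = """\
BATT_V  analogue  offset=0  length=16  factor=0.001
HTR_ON  binary    offset=32 length=1
"""

def build_tables(text: str) -> dict:
    """Convert the sequential description into a lookup table keyed by name."""
    tables = {}
    for line in text.splitlines():
        name, ptype, *fields = line.split()
        entry = {"type": ptype}
        for field in fields:
            key, value = field.split("=")
            # Store numeric fields as numbers for efficient run-time use.
            entry[key] = float(value) if "." in value else int(value)
        tables[name] = entry
    return tables

db = build_tables(RAW_DB)
print(db["BATT_V"])
```

The pay-off of this separation is exactly the one the article describes: supporting a new spacecraft means preparing a new description file, not rewriting the control software.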
Other MSSS advances
The advances of the second generation, MSSS-A, were:
The first two points made the system easier to maintain. The simpler configuration and use of the switch panel to interconnect work stations made recovery from host-computer failures faster and easier.
In 1984, ESOC had to decide which MCS infrastructure should be used for the Hipparcos, Eureca, and ERS-1 missions. The needs of these missions, with their demanding mission-specific tasks and higher telemetry rates, could not be accommodated with an MSSS-A configuration, even by running it in a single-spacecraft mode. In addition, Eureca had adopted new packet-telemetry and telecommand standards which were not supported by MSSS.
The concept of a single hardware configuration to be shared between different missions (like MSSS) was abandoned because of the sharp fall in the cost of computer hardware. In addition, there would have been difficulties in running such different missions on a single computer configuration. This led to the idea of using a dedicated hardware configuration for each mission. Each such dedicated hardware configuration took the form of a redundant pair of DEC/VAX computers - a real-time (RT) computer and a backup and development (DV) machine. The power of the VAX was dependent on the load profile of the particular mission. The approach was thus a centralised one and a new infrastructure called the 'Spacecraft Control and Operation System' (SCOS) was implemented to support this new approach.
The initial implementation of SCOS - the DEC/VAX resident part eventually became known as SCOS-A - re-used the Intel-based work stations, since these were a relatively recent investment at the time SCOS-A development began (1984). SCOS inherited the user requirements of MSSS. However, for financial reasons no generic telecommand system was developed, although care was taken to allow for easy interfacing of telecommand-related applications. Schematically, the hardware configuration for each mission (Hipparcos, ERS-1 and Eureca) was similar to that depicted in Figure 2, with the SEL/Gould machines replaced by machines of the DEC/VAX series. Of course, the SCOS-A software system running on each such configuration was completely different.
Between 1989 and 1991 the (by then) obsolete Intel work stations were replaced with off-the-shelf SUN work stations. ESOC developed Intel emulation software to retain full compatibility with the existing host-resident applications to allow 'plug-compatible' replacement of the Intel work stations. The software on the SUN was referred to as SCOS-B. The implementation on the SUN side was done in Ada and C under SunOS, the standard SUN operating system at that time. SCOS-A (DEC/VAX part) and SCOS-B (SUN part) together form what is known today as SCOS-I.
A later development was the use of the TCP/IP network protocol to replace the serial V.24 interface connecting the work stations to the host computer, which also involved modifications on the host (SCOS-A) side. This permitted interconnection of the VAX and the SUN work stations using a Local Area Network (LAN), thus making the switch panel used on MSSS-A and the initial SCOS implementation obsolete. The result was even easier switching of the work stations between host computers. Figure 3 shows a complete SCOS-I set-up for several missions, including the common operations LAN (OPSLAN). The OPSLAN is itself redundant (although this is not shown in Fig. 3).
Figure 3. SCOS-I
SCOS-B was used for ERS-1, ERS-2 and ISO, and was also prepared and validated for the Cluster mission. It is also planned to be used for the Envisat and XMM missions.
Advances of SCOS-I
SCOS-A advances were:
SCOS-B made the key advance of using the X11 protocol, which enables applications to make use of modern WIMPS (Windows, Icons, Mouse, Pop-up menus) and Graphical User Interface (GUI) techniques. This in turn permitted: many more displays than the physical number of screens on a work station, via for example overlapping windows; use of the mouse as an alternative to the keyboard; and the use of menus to start/stop/control applications. It also permitted a new kind of 'mimic' display, consisting of a schematic representation of a spacecraft subsystem that can be driven by the actual values of telemetry parameters, e.g. switches can be shown whose settings react to on/off parameters in the telemetry. The example shown in Figure 4 is, in fact, taken from a later infrastructure, SCOS-II, but the mimics on both systems are very similar.
Figure 4. Example of a mimic display
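The binding of a mimic element to a telemetry parameter can be sketched as follows. The class and parameter names are invented for illustration; a real mimic system renders graphical symbols rather than text.

```python
# Minimal sketch of the 'mimic' idea: a display element bound to a
# telemetry parameter, redrawn whenever a new value arrives.
# Names (SwitchSymbol, HTR_ON) are illustrative assumptions only.

class SwitchSymbol:
    """A schematic switch whose drawn state follows an on/off parameter."""

    def __init__(self, parameter: str):
        self.parameter = parameter
        self.closed = False

    def update(self, telemetry: dict):
        # Refresh the symbol's state from the latest telemetry values.
        self.closed = bool(telemetry.get(self.parameter, 0))

    def render(self) -> str:
        # A real system would redraw a graphic; here we emit text.
        return f"{self.parameter}: {'CLOSED' if self.closed else 'OPEN'}"

switch = SwitchSymbol("HTR_ON")
switch.update({"HTR_ON": 1})
print(switch.render())
```

A full mimic display is then just a collection of such bound symbols, all refreshed from the same incoming telemetry stream.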
In conclusion, the major advances of SCOS-I were both functional and technological: functional in terms of the handling of packet telemetry, and technological through the use of more powerful computers, the use of a commercial database system for maintenance of the mission database, the introduction (in SCOS-B) of modern work stations and the X11 protocol to provide modern WIMPS/GUI interfaces, and finally the introduction of the TCP/IP protocol and LAN technology.
The development of SCOS-II began in 1992; it is intended as a complete replacement for SCOS-I, which was foreseen to become obsolete in the late 1990s. Its aims are to:
SCOS-II addresses these points as follows:
SCOS-II achieves scalability by devolving processing to the user work stations as far as possible. This is achieved by: (a) broadcasting telemetry data to all user work stations (rather than making point-to-point transfers to each work station, i.e. one transmission on the network replaces n, where n is the number of work stations); and (b) providing data caches at each work station, so that the user work station can normally get data from its local cache, thereby reducing network loading.
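The broadcast-plus-cache principle can be sketched in a few lines. This is not the actual SCOS-II code; the class and method names are invented, and real broadcasts travel over the network rather than as in-process calls.

```python
# Hedged sketch of the broadcast-plus-cache idea: one transmission serves
# all work stations, and each work station answers display queries from
# its local cache, only falling back to the server on a cache miss.

class WorkstationCache:
    def __init__(self):
        self._cache = {}          # parameter name -> latest value
        self.network_fetches = 0  # point-to-point requests actually made

    def on_broadcast(self, update: dict):
        """Handle a broadcast telemetry update (one send serves all stations)."""
        self._cache.update(update)

    def get(self, name, server_fetch):
        """Serve a display query locally; go to the server only on a miss."""
        if name in self._cache:
            return self._cache[name]
        self.network_fetches += 1
        value = server_fetch(name)
        self._cache[name] = value
        return value

ws = WorkstationCache()
ws.on_broadcast({"BATT_V": 28.0})
print(ws.get("BATT_V", server_fetch=lambda n: None))  # served from cache
print(ws.network_fetches)
```

With n work stations all receiving the same broadcast, the network carries one transmission instead of n, and most display queries never touch the network at all.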
SCOS-II is based upon the client-server paradigm, allowing client and server tasks to be distributed flexibly according to mission needs. With SCOS-II, each user work station will typically run one or more telemetry-processing tasks and the associated displays, so there is little or no interference between clients. Tasks such as commanding, retrieval, archiving, etc. can be similarly distributed. The client mission examples shown in Figures 7 and 8 represent two extreme cases, an extended distributed configuration and one on a single work station.
Advances of SCOS-II
In its present implementation, SCOS-II has monitoring facilities equivalent to or better than those of SCOS-I (see below for the advanced features). It has an incomplete telecommand subsystem; despite this, elements of its telecommand facilities have been successfully customised and extended for its three client missions.
A selection of SCOS-II's advances are:
Figure 5. The SCOS-II 'display container' concept
Figure 6. The SCOS-II telemetry query display
The performances achieved using recent releases of SCOS-II are good. For the Huygens mission, for example, playback data rates of 120 packets/second have been achieved on a SUN Sparc 20 work station (about double that for the Eureca SCOS-A system), this enhanced performance resulting from the combination of the new hardware and the SCOS-II approach. At the same time, SCOS-II is still a young system and there are areas where further optimisation is needed.
SCOS-II client missions
The current client missions are SOHO, Huygens and the LEOP (Launch and Early Orbit Phase) of the Meteosat Transitional Phase (MTP) spacecraft. Also, the Envisat OBSM system will be based on the SCOS-II OBSM, while its spacecraft control system is based on SCOS-I.
For the SOHO project, SCOS-II has been used since the satellite's launch (in November 1995) to complement the nominal control system, primarily in the area of historical telemetry data retrieval and monitoring. This system was set up at the control centre (at a non-ESA site, namely Goddard Space Flight Center, Maryland, USA) at very low cost and at short notice (about two months).
The MTP LEOP, scheduled for September 1997, will involve a total of 11 client work stations and two servers, one dealing with centralised-type tasks (limits checking) and the other handling retrievals (so that heavy retrievals use will not affect the rest of the system). This fully distributed configuration is shown in Figure 7.
Figure 7. The MTP SCOS-II configuration
SCOS-II will be used to support the Huygens Probe for the joint NASA/ESA Huygens/Cassini mission. This deep-space mission to study Saturn's moon Titan is due for launch in late 1997. The control system has recently been successfully tested in an on- ground System Validation Test with the Probe (Fig. 8).
Figure 8. The Huygens SCOS-II configuration
This brief summary has shown that, over more than 20 years, ESOC has undertaken six major implementations or upgrades of its MCS infrastructure. The MCS architectural concept has changed considerably during this period, starting with a distributed architecture (MSSS first generation) and then moving to an essentially centralised architecture (MSSS-A, SCOS-A/B). SCOS-II, the successor to SCOS-A/B, returned to a basically distributed approach, although not the rigid one of the MSSS first generation. In fact, it permits any chosen degree of distribution, since the mapping of tasks on the network is flexible, varying from a non-distributed 'SCOS-in-a-box' approach to the highly distributed approach of the MTP LEOP configuration. It has come to be recognised, however, that even in a distributed implementation, certain tasks have still to be centralised (e.g. limits checking, telecommand transmission).
It is interesting to note that the overall process of implementation and upgrading has been more evolutionary than revolutionary. For example, SCOS-I took over the centralised architectural concept of MSSS-A and initially it even used the same work stations. SCOS-I also re-implemented the MSSS spacecraft database concept using modern database technology, and implemented packet telemetry. SCOS-B pioneered the use of SUN work stations including the TCP/IP and X11 protocols, and introduced modern WIMPS techniques into the ESOC operations community. In this way, SCOS-I helped prepare for SCOS-II. We believe that today's SCOS-II will help keep ESA at the forefront of the technology that it led with the initial MSSS implementation.