European Space Agency

The Cluster Data-Processing System

E.M. Sorensen

Mission Operations Department, ESA Technical and Operational Support Directorate, ESOC, Darmstadt, Germany

G. Di Girolamo & M. Merri

Ground Segment Engineering Department, ESOC, Darmstadt, Germany

The Cluster mission is designed to study the small-scale structures that are believed to be fundamental in determining the key interaction processes in cosmic plasma. The mission will be controlled from ESOC, which will also be responsible for commanding the scientific payloads of the four spacecraft, in collaboration with the Cluster Principal Investigators (PIs), and for collecting and distributing the mission results to the scientific community. To support the Cluster mission operations, ESOC has developed the Cluster Data Processing System (CDPS), the architecture of which is based on three main components:
- the Cluster Mission Planning System (CMPS)
- the Cluster Mission Control System (CMCS)
- the Cluster Data Disposition System (CDDS).

This article reflects the ground-station scenario for the original Cluster mission, insofar as the station complement for the re-flight mission is still under discussion.

Introduction

The definition, design and implementation of the Cluster ground segment is the responsibility of the European Space Operations Centre (ESOC) in Darmstadt. The Cluster Data Processing System (CDPS) is an important part of that ground segment and one which is crucial to achieving the complex scientific objectives of the mission. Its definition was started at ESOC in late 1990 with a careful analysis of these objectives to map out an operational scenario, and hence define a system that would best fulfil them.

The CDPS is part of the ESOC Operations Control Centre (OCC), which is the central facility responsible for operating the four Cluster spacecraft, and has been developed by ESOC's Data Processing Division. It is a distributed system, the main components of which (see Fig.1) are:
- the Cluster Mission Planning System (CMPS)
- the Cluster Mission Control System (CMCS)
- the Cluster Data Disposition System (CDDS)
- the Flight Dynamics System (FDS).

CDPS overall architecture
Figure 1. Overall architecture of the Cluster Data Processing System (CDPS)

In addition, there is an offline Flight Operations Procedure System (FOP) that is used by the Mission Operations Team to prepare Cluster operations procedures and command sequences, which can then be imported into the operational database.

These last two systems are not addressed in this article as they form part of the generic ESOC infrastructure, i.e. they can be reused with minor modifications in support of other missions.

In designing this distributed system, great attention has been paid to the concepts of back-up and redundancy in order to cope effectively with emergency situations. The CMCS, for example, is a fully redundant system with no single point of failure.

The Cluster Mission Planning System

The Cluster mission planning is a complex task that involves four spacecraft each carrying eleven identical and independent payloads and all of the traditional platform subsystems. The main role of the CMPS is to schedule the onboard and ground operations so as to maximise the scientific return from the mission within the prevailing constraints, e.g. the onboard storage and power capacities, and ground-station visibility and availability.

The routine mission-control concept is based on the use of a single control centre in conjunction with the two dedicated ESA ground stations, at Redu in Belgium and in the Odenwald in Germany. All payload operations will be conducted according to an agreed plan produced in advance via an iterative mission-planning process. A slightly different approach is necessary for Cluster's WBD payload, as it requires coverage from the NASA Deep Space Network (DSN). The CMPS partially supports coordination with the DSN authorities for the allocation of Cluster coverage periods and, once these are confirmed, allows WBD operations to be scheduled. Real-time payload operations are not foreseen during the routine phase, except for special activities such as payload software maintenance. In addition, most of the platform operations, including dumps from the onboard recorders, are also included in the planning exercise.

To coordinate and consolidate the requests for scientific observations by the Cluster scientific community, a Joint Science Operations Centre (JSOC) has been established in the United Kingdom. The process of planning the Cluster operations (Fig.2) involves several iterations between ESOC and JSOC. These have been formalised into four planning levels:

Overview CMPS
Figure 2. Overview of the Cluster Mission Planning System (CMPS)

  1. The long-term plan, which initialises the planning process and covers a period of about six months. It has to be finalised three months before the start of the period itself.
  2. The medium-term plan, which has a duration of six orbits (the average period of the nominal Cluster orbit is about 57 hours).
  3. The short-term plan, which covers a period of three orbits.
  4. The operational plan, also covering three orbits, which is completely frozen and is used only to generate the six Detailed Schedule Files (DSFs): one for each of the four spacecraft and one for each of the two ground stations.
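Taking the quoted average orbital period of about 57 hours at face value, the planning horizons above translate into rough calendar durations, as the following short sketch shows (the figures are illustrative only, since the actual orbit period varies):

```python
# Rough planning-horizon durations, assuming the ~57-hour average
# orbital period quoted in the text (illustrative figures only).
ORBIT_HOURS = 57

plans = {
    "medium-term (6 orbits)": 6 * ORBIT_HOURS,
    "short-term (3 orbits)": 3 * ORBIT_HOURS,
    "operational (3 orbits)": 3 * ORBIT_HOURS,
}

for name, hours in plans.items():
    print(f"{name}: {hours} h = {hours / 24:.1f} days")
```

So a medium-term plan spans roughly two weeks, and the short-term and operational plans each cover about one week.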

The Cluster Mission Control System

The CMCS provides the functionalities needed to support the real-time and near-real-time data-processing tasks that are essential to control the mission (Fig.3). One of the main drivers for its design is the fact that Cluster is the first multiple-spacecraft mission to be controlled by ESOC. In particular, the CMCS must support four spacecraft simultaneously during the Launch and Early Orbit and Transfer Orbit Phases (LEOP/TOP) and two during the subsequent Mission Operations Phase (MOP). This is achieved by exploiting the remotely controlled Redu and Odenwald ground stations. In addition, the CMCS receives and distributes the entire Cluster telemetry data stream.

Cluster Mission Control System
Figure 3. The Cluster Mission Control System (CMCS) architecture

Most spacecraft and payload operations are pre-planned and executed from the on-board time-tagged queue. Nominal real-time operations are therefore limited to two tasks: the acquisition of on-line and stored telemetry of all types (housekeeping, memory dumps and science), the former generated at fixed time intervals and the latter recorded during the non-coverage periods and dumped to the ground station during the real-time contact periods; and the uplinking of time-tagged commands to the onboard memory for the execution of all pre-planned platform and payload operations. In addition, real-time operations are used in special situations such as complex payload software maintenance and contingencies.
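The on-board time-tagged queue can be pictured as a priority queue ordered by execution time: commands are uplinked with a time tag and released when the onboard clock passes that tag. A minimal sketch (the class, command names and time representation here are hypothetical, not the actual Cluster on-board design):

```python
import heapq

class TimeTaggedQueue:
    """Sketch of an on-board time-tagged command queue (hypothetical)."""

    def __init__(self):
        self._heap = []  # min-heap ordered by execution time

    def uplink(self, execute_at, command):
        """Store a command with its execution time tag."""
        heapq.heappush(self._heap, (execute_at, command))

    def release_due(self, onboard_time):
        """Pop and return every command whose tag has come due."""
        due = []
        while self._heap and self._heap[0][0] <= onboard_time:
            due.append(heapq.heappop(self._heap)[1])
        return due

q = TimeTaggedQueue()
q.uplink(100, "SWITCH_ON_PAYLOAD")   # hypothetical command names
q.uplink(50, "START_TAPE_DUMP")
print(q.release_due(60))   # ['START_TAPE_DUMP']
print(q.release_due(200))  # ['SWITCH_ON_PAYLOAD']
```

The key property is that commands execute in time-tag order regardless of the order in which they were uplinked, which is what allows whole contact periods' worth of operations to be loaded in advance.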

The main CMCS functionalities, all of which are supported by a sophisticated and homogeneous graphical user interface, are as follows:

Ground-station interfacing and control
The Cluster Network Controller and Telemetry Receiver System (CNCTRS) performs the ground-station interface and control functionalities. It includes facilities for telemetry reception, telecommand transmission, range and range-rate measurements, and ground-station scheduling for the two unmanned stations. The CNCTRS receives the telemetry as delivered by the ground-station equipment and, after having run some basic checks, 'stamps' it with the Earth Reception Time (ERT). It then passes the housekeeping telemetry to the relevant Cluster Spacecraft Dedicated Control System (CDCS) for more specific processing, whilst the science telemetry data are delivered to the Cluster Data Disposition System.

Furthermore, the CNCTRS gathers all the telecommands sent from the CDCSs and forwards them to the relevant ground station. The CNCTRS is also used for acquiring and locally storing range and range-rate measurements made at the ground station, which are made available to the Flight Dynamics System.
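The stamp-and-route step described above can be sketched as follows. All types and names here are hypothetical illustrations; the real CNCTRS implementation and Cluster telemetry formats are not reproduced:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical frame categories; real Cluster telemetry differs.
HOUSEKEEPING, SCIENCE = "housekeeping", "science"

@dataclass
class Frame:
    kind: str                       # HOUSEKEEPING or SCIENCE
    payload: bytes
    ert: Optional[datetime] = None  # Earth Reception Time, set on arrival

def stamp_and_route(frame, cdcs_queue, cdds_queue):
    """Stamp a frame with its Earth Reception Time, then route
    housekeeping frames to a CDCS and science frames to the CDDS."""
    frame.ert = datetime.now(timezone.utc)
    (cdcs_queue if frame.kind == HOUSEKEEPING else cdds_queue).append(frame)

cdcs, cdds = [], []
stamp_and_route(Frame(HOUSEKEEPING, b"\x01"), cdcs, cdds)
stamp_and_route(Frame(SCIENCE, b"\x02"), cdcs, cdds)
print(len(cdcs), len(cdds))  # 1 1
```

The essential point is the separation of concerns: the front end stamps and dispatches, while all substantive processing happens downstream in the CDCS or CDDS.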

Housekeeping telemetry processing
Each of the four Cluster Spacecraft Dedicated Control Systems (CDCSs) performs the spacecraft monitoring and control functions and onboard-software maintenance for a specific spacecraft. Each CDCS handles housekeeping data, special Cluster event messages and memory-dump data. The spacecraft housekeeping and science telemetry are time-stamped in UTC with an error of less than 2 ms. Parameters can also be processed using calibration curves to convert them into engineering and/or functional parameter values. The housekeeping telemetry is subsequently made available to the operations staff via the On-Line Monitoring System. It is also filed in so-called 'Short-Term History Files' covering the last 10 days of the mission, which are used for remote access and offline investigations by the flight-dynamics control team. The telemetry is also regularly transferred to the CLUSPEVAL system for long-term archiving.
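Calibration curves of the kind mentioned above are typically piecewise-linear mappings from raw telemetry counts to engineering units. A minimal sketch of such a conversion, using an entirely hypothetical temperature curve (not an actual Cluster calibration):

```python
import bisect

def calibrate(raw, curve):
    """Linearly interpolate a raw telemetry count onto a calibration
    curve given as a sorted list of (raw, engineering) points."""
    xs = [p[0] for p in curve]
    ys = [p[1] for p in curve]
    if raw <= xs[0]:
        return ys[0]          # clamp below the curve
    if raw >= xs[-1]:
        return ys[-1]         # clamp above the curve
    i = bisect.bisect_right(xs, raw)
    frac = (raw - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])

# Hypothetical temperature calibration: counts -> degrees Celsius.
TEMP_CURVE = [(0, -50.0), (128, 0.0), (255, 50.0)]
print(calibrate(64, TEMP_CURVE))   # halfway between -50 and 0 -> -25.0
```

In an operational system the curve for each parameter would come from the mission database rather than being hard-coded.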

Telecommand uplinking and verification
The CDCSs also support the telecommand chain, providing the spacecraft operators both with interactive manual commanding and automatic schedule uplinking capabilities. They are also responsible for performing the telecommand uplinking and on-board execution verification via returned telemetry.

On-board software maintenance
The On-Board Software Maintenance (OBSM) facility provides management and configuration control of all changes performed to the on-board software. It is hosted on the CDCS.

Database management
The Cluster Database System (CDBS) is an offline system that supports all database functions. It is a single database in which the telemetry/telecommand characteristics for all four Cluster spacecraft are defined and maintained for the entire duration of the mission. It also allows the importation of the spacecraft manufacturer's assembly, integration and test database.

The CMCS is based on a distributed system architecture with six primary computers, each with a dedicated standby machine (yellow boxes in Fig.3), linked by the operations local area network (LAN) and accessible only to the operations staff at ESOC. The CDDS provides the necessary secure connection between this operational LAN and the external world, including the Cluster scientific community, in order to support, for instance, payload command requests.

For monitoring and control purposes, telemetry data from within the CMCS database can be displayed in alphanumeric form, graphical form and as synoptic mimics, two examples of which are shown in Figures 4a,b.

Mimic display solid-state recorder
Figure 4a. Mimic display of the solid-state recorder

Mimic display part Cluster's AOCS
Figure 4b. Mimic display of part of Cluster's attitude and orbit control subsystem (AOCS)

The Cluster Data Disposition System

The main role of the CDDS is to deliver the mission data to the Cluster scientific community. This is performed via two distinct services provided by the system: the On-Line Data Delivery Service, which allows mission data to be delivered on request via an electronic network for quick-look purposes; and the Off-Line Data Delivery Service, which allows the offline production of recordable compact disks (CD-Rs), to be used as masters for the mass production of CD-ROMs containing all Cluster data.

The types of mission data that are made available by these two services are basically identical, and include housekeeping and science telemetry from all four Cluster spacecraft and all auxiliary data required to correctly process and interpret the telemetry.

Figure 5 is a schematic of the CDDS's interactions with the other Cluster systems and its main products. An important feature of the CDDS is that it provides a safe gateway for secure communication between the highly protected operations environment in ESOC and the external world.

CDDS and its environment
Figure 5. Overview of the Cluster Data Disposition System (CDDS) and its environment

On-line data delivery service
As shown in Figure 5, the CDDS receives housekeeping and science telemetry data in real time from the ground stations via the front-end computer of the CMCS, whilst the auxiliary data come from both the CMCS and the Flight Dynamics System (FDS). External users have no direct access to either of these systems, as they are protected by the ESOC network security system (firewall). The CDDS stores the data upon receipt and makes them available to users just a few seconds later. Only the last ten days of data are kept on-line.

The on-line data-delivery service follows a client-server model with no interactive user interface: the user simply exchanges request files and receives the resultant data transfers. The user copies a request file from their own network node to their CDDS account, and the ensuing transfers are effected using FTP-based protocols. The requested data can be of three types: telemetry data, auxiliary data or master-catalogue data.
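A request-driven client of this kind might look roughly as follows. The request syntax, file names and host details shown here are purely hypothetical; the actual CDDS request-file format is defined in its interface documentation, which is not reproduced in this article:

```python
import ftplib
import io

# Hypothetical request: ask for telemetry over a time span.
# The real CDDS request-file syntax is NOT reproduced here.
request = "\n".join([
    "TYPE: TELEMETRY",
    "SPACECRAFT: 1",
    "START: 1997-08-01T00:00:00",
    "STOP: 1997-08-01T01:00:00",
])

def submit_request(host, user, password, text):
    """Upload a request file to the user's account via FTP; in this
    model the server later delivers the requested data files."""
    with ftplib.FTP(host) as ftp:
        ftp.login(user, password)
        ftp.storbinary("STOR request.txt", io.BytesIO(text.encode()))
```

The design choice worth noting is that file exchange over a standard protocol, rather than a bespoke interactive interface, keeps the service simple to automate and to secure at the firewall.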

Off-line data delivery service
The off-line data-delivery service is totally separate from, and asynchronous with, the on-line service, although both draw on the same data pool held on the CDDS. This service is wholly controlled by ESOC and requires no input from external users. As shown schematically in Figure 5, the CDDS stores all housekeeping, science-telemetry and auxiliary data on Recordable Compact Disks (CD-Rs), which are shipped by courier to an external manufacturer for duplication (in about 100 copies) onto Read-Only Memory CDs (CD-ROMs). The latter are then packaged and shipped to authorised members of the Cluster scientific community around the world.

Production of an average of two CD-ROMs per day is foreseen during the Mission Operations Phase, although the CDDS can support the production of up to four CD-ROMs per day on an exceptional basis. The set of CD-ROMs for any given day must be delivered to the scientific community within three weeks of the data's generation. This period includes the consolidation of orbit and attitude data (6 days), the production of the CD-Rs at ESOC (1 day), the shipment to the CD-ROM manufacturer (2 days), the CD-ROM replication (5 days) and, finally, the delivery of the CD-ROMs directly to the science community members (7 days).
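The three-week delivery budget quoted above can be checked by simple addition of the stated steps:

```python
# Delivery-timeline breakdown quoted in the text, in days.
steps = {
    "orbit/attitude data consolidation": 6,
    "CD-R production at ESOC": 1,
    "shipment to CD-ROM manufacturer": 2,
    "CD-ROM replication": 5,
    "delivery to science community": 7,
}
total = sum(steps.values())
print(total, "days =", total / 7, "weeks")  # 21 days = 3.0 weeks
```

The steps sum to exactly 21 days, i.e. the three-week commitment leaves no slack in the nominal schedule.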

A format compliant with the ISO 9660 level 1 standard is used for the CD-ROMs, so that the disks are readable on all common platforms, e.g. UNIX, VMS, PC and Macintosh.

Conclusions

The multi-spacecraft Cluster mission represents a considerable operational challenge, and all of the complex requirements associated with its novel characteristics and user needs have been carefully analysed by ESOC. A distributed computer system was chosen for several reasons:

  1. The distributed architecture reduces the cost of both hardware and software.

  2. The cost of procuring and maintaining several smaller computers (VAX stations) is significantly lower than that of a large computer capable of hosting the whole system.

  3. Software licences are also usually cheaper for small computers. Furthermore, the distributed architecture allows the optimum use of licences across the system.

  4. In the distributed architecture, each element is independent, and failure of one does not generally affect the others. Furthermore, the global system performance is not compromised by heavy resource usage by one element. This can be of particular importance for time-critical tasks, such as real-time commanding.

  5. In the distributed architecture, off-line tasks are hosted on a dedicated computer. Important resource-hungry off-line tasks (e.g. database maintenance) are thereby totally decoupled from the actual spacecraft operations and other time-critical tasks.

The experience gained in the Cluster preparation activities has shown that this distributed architecture for science operations provides greater flexibility and allows trade-offs to be made between the various elements of the overall system. This in turn allows the end-to-end data return to be optimised, as well as leading to an overall reduction in project costs.

On the other hand, distributed systems call for clear definitions of the interfaces between the elements. Great effort has therefore been devoted to the definition, review and configuration control of these interfaces for Cluster. The science community has been encouraged to take an active part in their definition, review and testing.

On the assumption that the Cluster data-processing system will perform as well throughout the mission as it has during testing, ESOC's challenge for the future is to build on the novel experience gained to implement systems capable of meeting the needs of future, ever more challenging science missions.


ESA Bulletin Nr. 91
Published August 1997.