European Space Agency


Implementation of a Communications Infrastructure for Remote Operations

U. Christ, W. Frank & M. Bertelsmeier

ESA Directorate for Operations, ESOC, Darmstadt, Germany

R. Jönsson

ESA Directorate for Manned Spaceflight and Microgravity, ESTEC, Noordwijk, The Netherlands

The communications scenario for support of Columbus Orbital Facility (COF) operations led to the definition of an Interconnection Ground Subnetwork (IGS) to serve as the baseline communications infrastructure for flight operations for all manned spaceflight elements. This IGS communications support concept has been re-examined in the light of new requirements associated with the Automated Transfer Vehicle (ATV), and the possibilities for optimisation have been re-investigated. The implementation as now planned will combine existing common network resources, services and management functions in a way that minimises hardware investment, link costs, and the number of staff needed for communications operations.

The concept for this revised communications network supporting remote decentralised payload operations has already been successfully demonstrated during the Atlas-2, IML-2 and Atlas-3 Spacelab missions. This article describes the infrastructure itself and reports on the results achieved during those Spacelab missions as a step towards the efficient operation of experiments aboard International Space Station Alpha (ISSA).

COF communications for operations

In defining the communications support needed for operations purposes when the Columbus Orbital Facility (COF) is attached to International Space Station Alpha, a number of distinct communications scenarios need to be considered (Fig. 1):

Further scenarios relate to the cases of:

both of which are to be provided via ESA's Data Relay System.

In the nominal end-to-end communications scenario shown in Figure 1, all data from experiments accommodated in the Columbus laboratory module are to be relayed via the DRSS inter-orbit link to the Central Earth Terminal (CET). The CET represents one of the entry points for space-to-ground data into the IGS, which then disseminates them directly to the remote sites according to each site's individual service requirements (data, command, voice, video, etc.).

Data from those European experiments that may be accommodated in the US part of the Station are routed to the COF multiplexer. The ESA Relay at Marshall Space Flight Center (MSFC) selects, configures and multiplexes the various services and transmits them via a trans-Atlantic trunk (TAT) to the IGS central node at ESOC in Darmstadt (D), from where they are disseminated to the remote sites as required. Forward inputs from Europe to the Station are transmitted via the ESA Data Relay System for uplinking by NASA facilities.
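
The ESA Relay's select/configure/multiplex step can be pictured, in a very simplified way, as interleaving fixed-size slots from several service streams into one trunk frame. The following Python sketch is purely conceptual: the slot size, the alphabetical slot ordering and the zero padding are illustrative assumptions, not the actual ESA Relay format.

```python
from typing import Dict

def multiplex_frame(services: Dict[str, bytes], slot_size: int = 8) -> bytes:
    """Build one trunk frame by interleaving a fixed-size slot from each
    selected service stream, in a fixed (alphabetical) slot order.
    A conceptual stand-in for the ESA Relay's select/configure/multiplex step."""
    frame = bytearray()
    for name in sorted(services):
        chunk = services[name][:slot_size]
        frame += chunk.ljust(slot_size, b"\x00")  # pad short slots with zeros
    return bytes(frame)

# Hypothetical service streams feeding the trans-Atlantic trunk
trunk_frame = multiplex_frame({
    "voice": b"\x01\x02\x03",
    "video": b"\x10" * 8,
    "data":  b"\x7f" * 8,
})
print(len(trunk_frame))  # 3 services x 8-byte slots = 24 bytes
```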

In addition to the space-to-ground audio, video and data services, the IGS will provide resources and services for operational ground-to-ground communications, including voice, video and data.

In Europe, the IGS will be based on a core network of leased lines where continuous, stable bandwidth is required, complemented by on-demand resources and services. The latter will be realised by employing such techniques as multiplexing a number of 64 kbps Integrated Services Digital Network (ISDN) channels and/or B-ISDN (Broadband ISDN) services where available. In addition, high-bandwidth multipoint video conferencing and broadcasting will be supported by Switched Very Small Aperture Terminals (XVSAT) offering dynamic bandwidth allocation. The choice of technology for each individual link and service is driven primarily by the need to minimise carrier costs during the operational phases.
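
The bundling of 64 kbps ISDN channels mentioned above amounts to simple ceiling arithmetic. The short Python sketch below is an illustration only; the 384 kbps example rate is the compressed-video figure quoted later in this article, the other rates are arbitrary.

```python
import math

ISDN_B_CHANNEL_KBPS = 64  # one narrow-band ISDN B-channel, as cited in the article

def channels_needed(service_rate_kbps: float) -> int:
    """Number of 64 kbps B-channels that must be bundled to carry a service."""
    return math.ceil(service_rate_kbps / ISDN_B_CHANNEL_KBPS)

# Example: the compressed video service described later in this article runs at 384 kbps
for rate in (64, 128, 384):
    print(f"{rate:4d} kbps service -> {channels_needed(rate)} bundled B-channel(s)")
```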

One of the achievements of the NASA/NASDA/ESA Space Network Interoperability Panel has been to ensure compatibility of the DRSS and TDRSS S-band inter-orbit links, providing the ability for cross-support should either satellite system's service fail. This agreement presently covers only the inter-orbit S-band links; additional agreements are needed to complete the on-ground cross-support scenario, whose needs may differ from project to project.

In the event of a TDRSS S-band failure, this cross-support capability could become highly important to the operations scenario. On the basis of the compatibility agreements already achieved, the inter-orbit S-band link can then be relayed via DRS to the CET. In this case, the IGS will provide a transparent service between the CET and the ESA Relay at MSFC, which constitutes NASA's interface to the forward and return links to the Station.

In the event of a TDRSS Ku-band failure, the CET ground service processor will have the ability to extract an ISSA Virtual Channel (VC) from the COF return-link data and forward it to the co-located IGS node for transmission to the USA. In the case of a DRS failure, all laboratory data can be multiplexed onboard the Station and transmitted in Ku-band via TDRSS to the ground. The ESA Relay will then receive the COF composite Virtual Channel via the same interfaces as in the nominal communications scenario.
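
The VC-extraction function of the CET ground service processor can be pictured as a simple filter over a stream of transfer frames. The Python sketch below assumes a CCSDS AOS-style header in which the second byte carries a 6-bit virtual-channel identifier; the article does not specify the actual frame layout, so the field positions here are assumptions for illustration.

```python
from typing import Iterable, Iterator

def vc_id(frame: bytes) -> int:
    """Extract the virtual-channel ID, assuming a CCSDS AOS-style header in
    which the first two bytes hold a 2-bit version, an 8-bit spacecraft ID
    and a 6-bit virtual-channel ID (an assumption, not taken from the article)."""
    return frame[1] & 0x3F

def extract_virtual_channel(frames: Iterable[bytes], wanted_vc: int) -> Iterator[bytes]:
    """Pass through only the frames of one virtual channel, mirroring the
    CET processor's extraction of an ISSA VC from the COF return link."""
    for frame in frames:
        if vc_id(frame) == wanted_vc:
            yield frame

# Example over a synthetic frame stream: keep only VC 5
frames = [bytes([0x40, vc]) + b"\x00" * 6 for vc in (1, 5, 5, 3)]
print(sum(1 for _ in extract_virtual_channel(frames, wanted_vc=5)))  # -> 2
```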

Figure 1. Nominal end-to-end communications scenario for the Columbus Orbital Facility (COF)

ATV communications for operations

The ATV Control Centre (ATV-CC) will be responsible for the vehicle's navigation, orbital manoeuvres and attitude control. During the vehicle's free-flying phases, the connectivity between the ATV and its Control Centre will be provided primarily by ESA's Artemis/DRS spacecraft, with a backup possibility via space-to-ground links to suitably located S-band ground stations and onward links via ESA's operations support network OPSNET. Inside the control zone of the Space Station, and also in the ATV attached mode, the space-to-ground link will nominally be routed to the Space Station Control Center (SSCC) and from there via the IGS to the ATV Control Centre.

The OPSNET communications infrastructure for the backup S-band stations already exists and is also being used by other ESA projects.

The ground communications connectivity for the ATV docking/attached phase is comparable to the COF scenario. In this case, the IGS Relay will provide the interface to link the data between the Space Station Control Center and the ATV Control Centre. Since the associated data rates are rather low, this service can be accommodated within the resources and services provided for COF operations.

Common resources, services and network management

Figure 2 provides a summary of all of the communications elements required to support operations. It shows the existing ESA resources proposed for reuse (S-band stations for backup, OPSNET, communications control, and the spacecraft control facilities of ESA/ESOC) and the resources that need to be added: augmentations of existing IGS implementations, new IGS elements, and the Artemis/DRS ground terminals.

The OPSNET nodes within the ESA ground stations and the IGS share the same state-of-the-art technology, which both prior studies and Spacelab mission-support experience (Atlas-2, IML-2 and Atlas-3) have shown to be the most appropriate choice, one that also provides adequate migration paths to future communications technologies.

Like OPSNET, the core network of the IGS will be based on leased lines, providing permanent connectivity at fixed data rates. The major data volumes, however, will be handled by the IGS using on-demand (dial-up) resources such as narrow-band or broadband ISDN and switched satellite links.
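
The choice between a permanently leased line and on-demand resources is essentially a break-even calculation on usage hours. The Python sketch below illustrates the selection criterion only; the tariff figures are invented placeholders, not actual carrier charges.

```python
def cheaper_option(hours_used_per_month: float,
                   leased_monthly_cost: float,
                   dialup_cost_per_hour: float) -> str:
    """Compare a permanently leased line against on-demand (dial-up) ISDN
    for a given monthly usage profile. All tariffs are placeholders."""
    dialup_total = hours_used_per_month * dialup_cost_per_hour
    return "leased line" if leased_monthly_cost <= dialup_total else "on-demand ISDN"

# Hypothetical figures: a link used only during simulations and mission phases
print(cheaper_option(hours_used_per_month=40,
                     leased_monthly_cost=5000.0,
                     dialup_cost_per_hour=30.0))   # -> "on-demand ISDN"
print(cheaper_option(hours_used_per_month=720,     # continuous use
                     leased_monthly_cost=5000.0,
                     dialup_cost_per_hour=30.0))   # -> "leased line"
```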

OPSNET's communications control functions include management of the communications network to all ESA stations. The IGS communications control functions include management of the complete network as well as higher-level services for element control facilities and remote user sites.

One justification for the proposed communications architecture is the commonality of the network-management implementations for the IGS and OPSNET. Although the IGS management has greater functional complexity (with respect to dynamic changes in resources and services and the support of new services), key elements of the network-management systems appear identical to the communications operator. A common communications management system could therefore be established once the IGS network's build-up is complete.

Figure 2. The integrated communications ground segment

Recent Spacelab mission experience

During the Atlas-2, IML-2 and Atlas-3 missions, the IGS provided data, voice conferencing, video distribution/conferencing and high-rate data services for remote user centres in Europe. During the IML-2 mission, for example, five such centres were supported simultaneously. The combination of services provided allowed the experimenters to operate and interact with their experiments from their home institutes in very much the same way as they had from the Payload Operations Control Center (POCC) at Marshall Space Flight Center (MSFC) during earlier Spacelab missions.

In addition to achieving enhanced returns from their scientific experiments, experimenters were now also able to make use of reference facilities, computing resources and resident expertise at their home laboratories, which typically are not available at the POCC.

Particular features of the IML-2 communications implementation were its adaptation to the different user needs, based on the modular service capabilities of the IGS, and the minimisation of connectivity costs. This was achieved by using a combination of classical leased lines, satellite links and bundled multiples of narrow-band ISDN channels, according to the simulation and mission-schedule requirements for each remote site. The IGS's central management system allowed both the staffing and the involvement of communications personnel at the remote sites to be kept to a minimum.

The success of this communications approach for the very demanding IML-2 scenario proved the validity of the concept for COF communications.

As Figure 2 shows, the emerging IGS co-exists with OPSNET, the network dedicated to spacecraft operations support, and ESANET, a large multipurpose ESA network handling all other communications tasks (e.g. administrative support, access to public and research networks, LAN interconnections, mission planning and non-real-time payload data transport). ESANET includes two high-speed trans-Atlantic links to Goddard Space Flight Center (GSFC), where it has a gateway into NASA's multipurpose Program Support Communications Network (PSCN). The ESANET Control Center is located at ESOC, and the PSCN Management Center at MSFC. Since the trunk capacity required for the IGS was less than the bandwidth of the existing high-speed links, it was an obvious choice to use PSCN/ESANET resources as the trunk between MSFC and ESOC. Availability and mean-time-to-repair levels were made commensurate with the mission requirements by adopting 24-hour operator staffing during simulations and the missions themselves.
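
The trunk-sizing argument can be made concrete with a simple capacity check, sketched below in Python. Only the 384 kbps compressed-video rate is taken from this article; the remaining service rates and the T1-class trunk capacity are illustrative assumptions.

```python
# Service rates in kbps: only the 384 kbps video figure comes from the article,
# the rest are illustrative placeholders.
services_kbps = {
    "compressed video": 384,
    "voice loops": 128,
    "LAN interconnect": 256,
    "high-rate data": 512,
}
TRUNK_CAPACITY_KBPS = 1544  # e.g. a T1-class trans-Atlantic trunk (assumption)

total = sum(services_kbps.values())
print(f"aggregate demand: {total} kbps of {TRUNK_CAPACITY_KBPS} kbps trunk")
assert total <= TRUNK_CAPACITY_KBPS, "trunk would be oversubscribed"
```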

In April 1993, the Atlas-2 Spacelab mission was launched, for which the IGS and ESANET provided the communications support needed to operate the two European payloads remotely from the Principal Investigator (PI) site in Brussels. These services included data exchange, voice and video conferencing. This was the first time an experimenter conveniently sited at home base had exercised full control over an experiment aboard Spacelab.

Following this successful demonstration, in July 1994 five European remote user centres participated in a remote experiment operations scenario on the IML-2 mission with Spacelab. They were able to monitor and adjust their experiments by sending commands directly from their home institutes, again relying on the IGS and ESANET. The five remote sites were: CADMOS in Toulouse (F), DUC in Amsterdam (NL), MARS in Naples (I), MUSC in Cologne (D), and SROC in Brussels (B) (see Fig. 3, where the acronyms are also explained).

Communication from Spacelab to MSFC was via NASA's TDRSS and NASCOM systems. ESA established the IGS Relay, the communications and mission operations link to Europe, at the Huntsville Operations Support Center (HOSC), one of the MSFC facilities.

In Europe, the complementary IGS central node terminated the PSCN/ESANET trunk at ESOC and provided the connectivity for voice, video and data services to the remote sites in Europe. IGS nodes were installed at all remote sites, providing the end-to-end network management capabilities needed for the reliable operation of networks of this complexity.

As a low-cost approach was a prerequisite for the IML-2 remote-operations support, no backup systems or redundant communications links were foreseen, except for the Microgravity User Support Centre (MUSC) in Germany, where an ISDN backup capability was implemented.

Figure 3. The IGS communications scenario for the IML-2 Spacelab mission

Phased IML-2 network implementation

During the test phases that preceded the IML-2 mission, the communications network's capabilities (bandwidth, services, etc.) were built up in stages, culminating in the final tests at which the configuration to be used for the mission was frozen.

Three different technologies were chosen for the individual sites, based on each site's particular cost and performance requirements:

- classical leased lines
- switched satellite links
- bundled multiples of narrow-band ISDN channels

Figure 4 provides an overview of the communications links that were established for IML-2.

Figure 4. The communications links for the IML-2 mission

IGS communications services supporting Spacelab missions

The communications services provided by the IGS were derived from the remote-operations requirements at each site, which included:

- voice conferencing
- video distribution and video conferencing
- data exchange via LAN interconnection
- high-rate data services

The voice conferencing system was based on an extension of NASA's Huntsville Voice Distribution System (HVoDS), with its proprietary formats and signalling. The remote sites could access up to 32 voice loops simultaneously.

For the video services, the analogue video signals were digitised and compressed to 384 kbps. Besides simultaneous distribution of onboard video to multiple sites, a digital video multipoint control unit (MCU) at ESOC allowed any video conferencing configuration between the remote sites, ESOC and NASA to be supported.
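
Conceptually, an MCU conference configuration reduces to a mapping from a selected video source to the set of receiving sites. The following minimal Python sketch (hypothetical function and site names) illustrates that idea only; it is not the MCU's actual control interface.

```python
from typing import Dict, List

def mcu_routes(conference_sites: List[str], source: str) -> Dict[str, str]:
    """Sketch of multipoint video distribution: the MCU sends the selected
    source (e.g. onboard video) to every participant except the source itself."""
    return {site: source for site in conference_sites if site != source}

# Participants drawn from the article's scenario; the configuration is invented
print(mcu_routes(["ESOC", "MUSC", "MARS", "NASA"], source="NASA"))
```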

A data-exchange capability was provided by a LAN interconnect service linking the LANs at the different sites with the HOSC LAN, from which databases and planning data could be accessed. As the HOSC LAN was in turn connected, via several other networks including TDRSS, to the LAN onboard Spacelab, the European remote user centres were able to communicate directly with their experiments in space, sending commands and receiving their scientific data.

At the network-management level, all of the communications services and the systems that provide them are managed by the same centralised management platforms: the Network Management System (NMS) and the IGS Integrated Management Facility (IMF). The NMS is the proprietary management system of the core switching nodes on which the network is based. The IGS IMF is based on an expert system, which was customised for IML-2 to integrate the management of the additional subsystems (codecs, routers, etc.) under a single umbrella. This centralisation of network operations, for both routine operations and troubleshooting, is a key aspect of the management architecture.
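
The integration of codec, router and switching-node alarms under a single management view can be sketched as a small rule base. The Python fragment below is a highly simplified stand-in for the IMF's expert system; the severity levels, actions and names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    subsystem: str   # e.g. "codec", "router", "switching-node"
    site: str        # e.g. "MUSC", "MARS"
    severity: str    # "minor" | "major" | "critical" (assumed scale)
    message: str

def route_alarm(alarm: Alarm) -> str:
    """Tiny stand-in for the IMF's rule base: decide centrally what a single
    operator console should do with a subsystem alarm."""
    if alarm.severity == "critical":
        return f"PAGE IGS Control: {alarm.site}/{alarm.subsystem}: {alarm.message}"
    if alarm.severity == "major":
        return f"Open trouble ticket for {alarm.site}/{alarm.subsystem}"
    return "Log only"

print(route_alarm(Alarm("codec", "MUSC", "critical", "loss of video sync")))
```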

Troubleshooting and out-of-service network-management operations can also be conducted centrally from ESOC; support at the remote sites is in principle required only for hardware replacement.

Operational aspects

The overall timeline defining the activation and duration of the experiments aboard Spacelab was established well in advance of the IML-2 mission. The remote-operations timeline derived from it provided the schedule of communications-service requirements for IGS operation.
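
Deriving the communications-service schedule from the experiment timeline can be pictured as a simple aggregation, as in the Python sketch below. The time windows and service sets are invented for illustration; only the site names come from Figure 3.

```python
from typing import List, Set, Tuple

# Each entry: (experiment window, responsible remote site, services needed).
# The windows and service sets here are illustrative, not the real timeline.
experiment_timeline: List[Tuple[str, str, Set[str]]] = [
    ("MET 02:00-06:00", "MUSC",   {"voice", "video", "data"}),
    ("MET 04:00-05:00", "CADMOS", {"voice", "data"}),
    ("MET 08:00-12:00", "MARS",   {"voice", "video"}),
]

def services_for_site(site: str) -> Set[str]:
    """Collect every communications service a site needs over the mission,
    i.e. the IGS service schedule derived from the experiment timeline."""
    needed: Set[str] = set()
    for _window, owner, services in experiment_timeline:
        if owner == site:
            needed |= services
    return needed

print(sorted(services_for_site("MUSC")))  # -> ['data', 'video', 'voice']
```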

The planned operational activities were:

- configuration of the network services in accordance with the remote-operations timeline
- execution of service changes requested by the Remote Operations Coordinator
- continuous monitoring of all IGS resources and services, with status reporting

Figure 5 shows the nominal communications operation scenario for IML-2. The IGS operations team (IGS Control) monitored and configured the network by means of the integrated Network Management System (NMS). IGS Control was in permanent contact with the Remote Operations Coordinator, who was based in the science operations area at MSFC throughout the mission.

The Remote Operations Coordinator issued requests to IGS Control to perform service changes and received reports on the service status. Some service changes required the support of HOSC Communications Control, e.g. provision of the Spacelab high-rate data flow to the IGS Relay at MSFC; for this and similar reasons, IGS Control also remained in permanent contact with its MSFC counterpart. IGS Control continuously monitored the performance of all IGS resources and services and informed the Remote Operations Coordinator of any identified or potential problems.
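
The request/report exchange between the Remote Operations Coordinator and IGS Control (Fig. 5) can be summarised as a short procedure, sketched below in Python with hypothetical names; the HOSC coordination step mirrors the high-rate data example above.

```python
from typing import List

def handle_service_request(request: str, hosc_needed: bool = False) -> List[str]:
    """Sketch of IGS Control's handling of a Remote Operations Coordinator
    request: reconfigure the network, coordinate with HOSC Communications
    Control where needed, then report the new service status back."""
    actions = [f"reconfigure network: {request}"]
    if hosc_needed:
        actions.append("coordinate with HOSC Communications Control")
    actions.append("report service status to Remote Operations Coordinator")
    return actions

for step in handle_service_request("enable high-rate data to MUSC", hosc_needed=True):
    print(step)
```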

Figure 5. Nominal communications operation

Conclusions and lessons learned

The IGS's successful operation for IML-2 has demonstrated that decentralised remote telescience can be performed both reliably and cost-effectively. The modular service capability of the network allows easy adaptation to different user needs. The connectivity charges, which are the major cost driver for remote telescience, were minimised by the phased implementation of only the most suitable connectivity techniques, including the exploitation of existing multipurpose communication networks. The centralised network-management approach proved to be a major advantage, allowing the staffing and communications expertise required at the remote sites to be kept to a minimum. It was also shown that multipurpose network resources can indeed be used successfully for operational support, yet another example of the growing trend to merge traditionally distinct network domains.

The Interconnection Ground Subnetwork (IGS) is based on today's state-of-the-art technologies, and the strategy applied in its development will allow it to migrate successfully to future connectivity techniques such as B-ISDN once these new services can be shown to be even more cost-effective than the current arrangement.

The scientific return of the subsequent Atlas-3 mission, in November 1994, was also considerably enhanced by reusing the proven capabilities outlined above.

Acknowledgments

The authors would like to thank the ground-segment manager C. Reinhold; the IGS development team, K.-J. Schulz and M. Incollingo; the IGS Control team, G. Buscemi, A. Boccanera, F. Sintoni and J. Lazaro, together with H. Wüsten from DLR Oberpfaffenhofen; and ESOC's ESANET team for their great dedication, as well as their NASA colleagues from the HOSC and from PSCN Engineering and Operations for their highly motivated participation.

The work reported relied heavily on the support of the Columbus Programme, which also funded some of the IGS testbed activities. The contribution of ESTEC, and J. Degavre in particular, in helping the user centres to set up their infrastructures to handle the remote science operations is also gratefully acknowledged.


ESA Bulletin Nr. 81, published February 1995.