The chances are whatever hardware you are reading this on would not cope well with space. Assuming its mechanical structure survived the launch acceleration and vibration, it would then face sustained hard vacuum and temperature extremes. And within a matter of months or even weeks its central microprocessor would doubtless be fried by radiation exposure.
Space is awash with charged particles of various energy levels, either emitted directly from the Sun or the wider Cosmos beyond the Solar System or else confined within Earth’s magnetic field to help form the radiation belts.
When a high-energy particle strikes a computer chip, the consequences range from the random ‘flipping’ of microprocessor memory cells – known as a Single Event Upset – through transistor gate ruptures to a complete burn-out, called a ‘latch-up’.
Sustained radiation exposure can also weaken the underlying quality and electrical conductivity of the chip’s semiconductor material, potentially leading to degraded performance or excessive power consumption.
“As microprocessor gates become smaller and the absolute levels of power go down, our circuits are becoming more vulnerable to Single Event Upsets,” said Roland Weigand of ESA’s microelectronics section.
“Even terrestrial chip manufacturers are growing more concerned about hardening against radiation – especially for products like network routers or medical applications where reliability needs are absolute.
“For the radiation-heavy space environment the problem is, of course, many orders of magnitude worse.”
Robustness through redundancy
So dedicated microprocessors like ESA’s LEON family are essential for space missions, and radiation-hardening is one of the main factors driving their design.
Physical shielding has a role to play, but can only extend so far. Heavy ions can still pass through an aluminium box, or else interact with it to produce a shower of secondary particles that could be almost as harmful.
“The key to designing for rad-hardening is really redundancy,” Roland added. “You might duplicate your bits at different sites around the microprocessor or use ‘parity coding’ to add on extra bits that help with detecting errors.
“Or you can triplicate your bits and then use a voting system to detect and correct errors: the result that comes up the most is likely to be right.
“Alternatively you can perform the same calculation multiple times – temporal instead of spatial redundancy.
“Whatever mode of fault tolerance is used, there is a price to pay for that redundancy. Your chip will be larger, run slower and consume more power – in return for its increased reliability.
“Limiting these penalties requires careful optimisation of the design, balancing fault tolerance against the processor’s expected timing performance.
“So before introducing radiation tolerance features, the chip designers should ideally have in-depth knowledge of how the processor works. This is a real problem with commercial processors, which are based on proprietary information – it is difficult to add such features after the cores have already been designed.
“Instead for the LEON we decided to start from scratch, adding redundancy from the beginning.”
(Continued in LEON: a new recipe for chips).