Not sure that I get it.
Error correction and cryptography are both a thing right? What's special about what these guys are doing?
Maybe they should talk to Sky TV? They were encrypting satellite TV signals back in the 80s...
The European Space Agency (ESA) unveiled an experiment it hopes will overcome the problems that prevent encrypted communications between the Earth and orbiting spacecraft. The Cryptographic ICE Cube, launched into orbit in April as part of the NG-11 mission, has been installed on the ISS' Columbus laboratory and is currently …
I get so sick of comments like this.
Before you comment next time please note
1. The ESA does not employ stupid people
2. Unless you have some sort of skills you are carefully keeping hidden (a PhD in cybersecurity/space science maybe), it is highly unlikely that you have thought of anything they have not considered
3. Reading an El Reg article does not make you an expert. Same applies to Breitbart, Daily Express, Daily Mail, only more so.
4. See 3
5. It's OK to admit your ignorance by asking questions, but not to prove your ignorance by making ill-informed statements
6. See 3
"Reading an El Reg article does not make you an expert."
But I DID stay at a Holiday Inn Express last night.
Joke aside, honest question. The second solution says two cores. How does it know which one is right if one flips, and what happens when both flip differently at the same time?
But it still doesn't say 2 - it says "a series". The example afterwards describes one core being reset while another carries on the comms, but the series implies that there would be more cores available to maintain a quorum.
Wouldn't cost much to put a dozen simple cores onto a decent FPGA - not compared to the budgets involved in building, launching and operating a satellite.
I did wonder what they'd be wanting encryption for, because (like OP) my first thought was that the comms is generally a ground->ground relay so the decryption need only happen at the ends. ISS can have the big expensive box because it's a big expensive thing. But then you start considering things like manoeuvring commands, and encryption becomes a real nice-to-have. Besides that there's always a bunch of spooks want their illicit photos from orbit to remain secret.
I did wonder what they'd be wanting encryption for, because (like OP) my first thought was that the comms is generally a ground->ground relay so the decryption need only happen at the ends.
Since this is just a test, they're probably thinking about banking and military data transfers via satellite. Just a guess as to the "why".
This is probably true... but if you only have 2, which one is the "correct" one and which has had bits flipped?
The one that decodes correctly (verification against the embedded checksum) did not have its bits flipped; the one that doesn't decode correctly has, and can automatically reset because the verification failed.
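A minimal sketch of that checksum-and-reset idea, in illustrative Python (my own assumption of a plain SHA-256 digest stored alongside each key copy; the article doesn't say what check CryptIC actually uses):

```python
# Hypothetical sketch: each redundant core holds (key, digest); a copy
# whose digest still verifies is taken as uncorrupted, and any copy that
# fails verification is flagged for reset/reload.
import hashlib

def store(key: bytes) -> tuple[bytes, bytes]:
    return key, hashlib.sha256(key).digest()

def pick_valid(copies):
    """Return the first copy whose checksum still verifies, plus the
    indices of corrupted copies that should be reset."""
    good, bad = None, []
    for i, (key, digest) in enumerate(copies):
        if hashlib.sha256(key).digest() == digest:
            if good is None:
                good = key
        else:
            bad.append(i)
    return good, bad

key = b"\x01" * 32
copies = [store(key), store(key)]
# simulate a radiation-induced bit flip in copy 0
k0, d0 = copies[0]
copies[0] = (bytes([k0[0] ^ 0x04]) + k0[1:], d0)
good, bad = pick_valid(copies)
assert good == key and bad == [0]
```

Note that with only two copies this tells you *which* copy is bad (because the checksum arbitrates), which answers the "how does it know which one is right" question; simultaneous corruption of both copies is the case where you need more copies or a reload from the ground.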
The press release is vague as to the problem that's being addressed and raises more questions than it answers. It may be true that "they're bright so they must know what they're doing", but it doesn't actually answer the reasonable questions that arise.
If anyone knows some specifics, it would be interesting to hear more about:
1/ How the reliability of communication is currently protected, given the radiation hazards
2/ Given that multiple sacrificial FPGAs are being used to soak up damage, what's protecting the Pi Zero?
3/ What's wrong with the periodic re-keying that you would use in something like TLS to recover after key corruption?
4/ Whether MRAM was considered for key storage and, if so, why it was rejected.
I think the answers to those are in the article?
1. Using hardened electronics with a radiation casing
2. They aren't being sacrificed, the radiation isn't "killing" the hardware, it is messing with RAM, causing bits to flip.
3. When a key is corrupted you can no longer decrypt anything, including the new key. You don't send a new key in plain text, as then anyone who is listening has access to the new key.
4. That detail wasn't in the article :-)
Hope this helps!
I think the answers to those are in the article?
They're neither in the article nor in the linked press release, or I wouldn't have asked.
1/ The article specifically says that "bulky and expensive radiation-hardened equipment is not practical for use with most satellites" so if that has to be present for error correction, it's not in principle a big step to use it for encryption too. So that clearly can't be the issue.
2/ Whereas bit flips are most likely, there must be some likelihood that damage is permanent, so I'd be surprised if cores cannot be permanently disabled/ignored on some basis.
3/ You can indeed exchange new keys over a clear channel - using Diffie-Hellman, for example, so presumably there's a requirement above simple privacy.
4/ If the principal problem is the long-term stable storage of private keys, then there's presumably a good reason why using a form of storage that isn't susceptible to radiation - or simply using a high level of redundancy - won't cut it.
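On point 3/, the clear-channel key exchange really is that simple in principle. A toy Diffie-Hellman sketch, with a deliberately tiny 64-bit prime purely for illustration (real deployments use much larger groups or elliptic curves, plus authentication, since unauthenticated DH is open to man-in-the-middle):

```python
# Toy Diffie-Hellman: both sides derive the same shared secret even
# though only the public values A and B cross the (clear) channel.
import secrets

p = 2**64 - 59  # largest 64-bit prime; far too small for real use
g = 2

a = secrets.randbelow(p - 2) + 1   # ground station's private exponent
b = secrets.randbelow(p - 2) + 1   # satellite's private exponent

A = pow(g, a, p)  # sent up in the clear
B = pow(g, b, p)  # sent down in the clear

assert pow(B, a, p) == pow(A, b, p)  # identical shared secret
```

Which suggests the hard part in orbit isn't re-keying per se, but authenticating the exchange and keeping the long-term identity keys intact under radiation - hence the interest in robust key storage.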
The great thing about The Register is that there's usually someone out there who is actually associated with the story and can fill in the bits the article left out. Can't always be lucky, though...
I do remember JPL had a mission to Jupiter. The radiation field was much stronger than they anticipated so the computer had bits being flipped in the active registers. Their comment? The computer is running slower because of the radiation. Wow!
Careful design can mitigate many problems.
"5. It's OK to admit your ignorance by asking questions, but not to prove your ignorance by making ill-informed statements"
Good point, but to be fair, consider the website we're on - that's the general vibe.
The question crossed my mind too, but phrasing it your way, I'd be asking what benefit multiple cores have over error detection/correction within a single core. Storing a 1024-bit key in roughly 1036 bits (a SECDED Hamming code, for example) allows you to detect 2-bit errors and correct 1-bit errors.
Essentially this is combating the data inconsistency by introducing redundancy. Where the redundancy should be built is the question.
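For illustration, the crudest possible form of that redundancy is a repetition code: keep three copies of the key and majority-vote per bit. A toy Python sketch (real systems would use a proper Hamming/BCH code, which gets the same protection with far fewer extra bits):

```python
# 3x repetition code: any single flip per bit position is corrected
# by taking the per-bit majority across the three stored copies.
def majority_decode(c0, c1, c2):
    return [1 if (a + b + c) >= 2 else 0
            for a, b, c in zip(c0, c1, c2)]

key = [1, 0, 1, 1, 0, 0, 1, 0]
c0, c1, c2 = key[:], key[:], key[:]
c1[3] ^= 1  # flip one bit in one copy
assert majority_decode(c0, c1, c2) == key
```

The multi-core approach protects the *processing* as well as the storage, which per-word ECC alone does not - which is perhaps the answer to "where should the redundancy be built".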
Yes, I do have such a PhD, and have worked on an onboard satellite crypto.
There is nothing new in the general approach of using voted redundancy on FPGA for crypto keys, with reload on error.
This is just “new for ESA”, not for anybody else.
Obviously nobody can say how previous crypto projects were implemented, since both design details and security approach are classified on national security projects. That's true whether you are a citizen of the UK, France, Germany or Italy.
I disagree with the use of that term in relation to scrambling. If I scramble a message to send it to someone then the constituent parts of the message are still in the message. For example
the cat sat on the mat
mat on the the sat cat
ota mhe nta ttc hsa te
You can still reconstruct the message from the parts using a descrambler.
However if I encrypt the message then the original message is indecipherable even if I have all the parts of the message.
Now you can't reconstitute the message by just reassembling the bits of the message. You need to be able to decrypt the message.
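To put that distinction in concrete terms, a toy Python sketch (my own illustration, nothing from the article): the scramble is a pure permutation, so every original character survives and a descrambler just inverts it; the XOR "encryption" replaces the symbols entirely, and only the key recovers them.

```python
# Scrambling vs encryption, illustrated.
import random

msg = "the cat sat on the mat"

# Scramble: permute character positions with a known permutation.
perm = list(range(len(msg)))
random.Random(42).shuffle(perm)
scrambled = "".join(msg[i] for i in perm)
assert sorted(scrambled) == sorted(msg)  # same parts, just rearranged

# Descramble: invert the permutation.
restored = [""] * len(msg)
for j, i in enumerate(perm):
    restored[i] = scrambled[j]
assert "".join(restored) == msg

# Encrypt: XOR with a keystream - the original characters are gone
# from the ciphertext; only the key brings them back.
key = bytes(random.Random(7).randrange(256) for _ in msg)
cipher = bytes(c ^ k for c, k in zip(msg.encode(), key))
assert bytes(c ^ k for c, k in zip(cipher, key)) == msg.encode()
```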
Now that would be a handy thing to have around if you are in that field. Can’t imagine how much paperwork they probably had to go through to get one up there.
But what would they call a Rad Hardened PI ?
A hard crust Pi
An over cooked Pi
A burned Pi
I bet the SD card is more of a problem - because they barely work properly down here in the first place.
Reading the press release fully, it's not in the crew area; that beige box in the article's photo is it.
"CryptIC measures just 10x10x10 cm."
“A major part of the experiment relies on a standard Raspberry Pi Zero computer,” adds Emmanuel. “This cheap hardware is more or less flying exactly as we bought it; the only difference is it has had to be covered with a plastic ‘conformal’ coating, to fulfil standard ISS safety requirements.”
On a normal trans-Atlantic flight bits get flipped in memories. It's a known thing and has been dealt with for years. There are even countermeasures for logic gates that get broken open or closed. And on a larger scale there are the 'multiple input/multiple processing redundancy' schemes (like Boeing forgot to do!)
Commercial space companies use COTS with mitigation.
The news here is why ESA is spending money on a problem that is already solved!
I suggest you search for ECC or 'single event upset mitigation'. Cosmic rays reach ground level also; it's just that you encounter many more at higher flight levels and yet more in LEO. There is much debate as to whether RAD hardened or mitigation is the better answer in LEO. For deep space both are required.
Point is, it can still LAND, period. Meaning once you isolate the upset, you can still get the plane back on the ground at some point, remove the faulty hardware, and replace it. Airliners can be tended to during their working life.
Satellites are one-offs. Once they go up, they tend to only come down at end-of-life. Meaning if a satellite suffers the equivalent, an eight-to-nine-figure piece of electronics gets bricked. That's make-or-break levels of concern.
So, you have a challenge: make a satellite reliably rad-safe through its service life WITHOUT making it too heavy to launch such as by using traditional rad-hardening.
Nitpick: I am fairly confident that if you put a Pi 4 in space it would likely die quite quickly due to OVERheating. Keeping things cool in space is a MAJOR pain in the butt, vacuum is in fact a really, really good heat insulator (vacuum flasks, anyone?) and there is only so much heat an object can radiate out.
Good joke, though :-)
Because there are things you want to do that involve more than a ground-to-ground relay. If you want to manoeuvre your satellite then you need to send it an instruction. If that instruction is not encrypted then somebody else can replicate it.
Yes, there are (presumably) other safeguards in there, but go and look up DVD Jon and we'll come back to why having one example of structured data unencrypted is a big security hole. If somebody can deorbit your new bird "for the lulz" then you're going to have an awful lot of explaining to do to your financiers.
And that's besides military / covert usage.
Early PROMs were very radiation resistant - the structures were far bigger than current devices and used blown-fuse technology - the SN54S473 had a grand total of 4k bits (512 bytes!!) in a 20-pin DIL package. If they can still be obtained then they could be used to store multiple copies of the keys, each with a checksum.
There are many ways to make data at rest single- or multiple-bit-flip resistant, but the usual assumption in processors is that data travelling along buses and placed in registers is correct. This assumption breaks down in aerospace applications. Assuring that data that should remain unchanged while being processed actually does remain unchanged, and that any changes are the ones actually wanted, is a bit more difficult.

In aviation, using an odd number of processors to execute the same calculations and taking the majority decision as correct is a common approach, but if you are in an environment where 3, or 5, or 7 different processors can give multiple different results such that there is no reliable majority decision, then different approaches are necessary. The probability of unresolvable conflicts increases as the duration of processing increases. Reaching agreement when messages or participants can be arbitrarily corrupted is the Byzantine Generals Problem, probably best known from Bitcoin - but the linked paper is from 1982.
To do things properly, all data buses within the processor, and those communicating with devices external to the processor, need sufficient ECC to assure data integrity to the desired level (which can be arbitrarily high). Data being processed needs to be represented in forms that are robust to disruption, e.g. instead of using single bits to represent binary states, use an odd number of bits and define the state as 1 if a majority of the bits are 1, and 0 if the majority are 0. Other, better encoding schemes are available.

Such approaches have the disadvantage of increasing the amount of die space needed to store and process information - imagine using three bits per binary digit: this requires registers that are three times as wide as 'normal'. Of course, you can spread the information in time instead of space, so instead of widening a register, you use it three times and emulate physical separation by temporal separation; this means your calculations get slower. Repeating calculations in time has a problem in that bits can get latched, either temporarily or permanently, so getting the same result three times in a row doesn't mean it is correct if a bit in the output register has been latched into an incorrect state.
So, spread your calculations across many physical instances of processors - sufficient to solve the Byzantine Generals Problem given a target corrupted-message rate between processors to overcome. Use ECC everywhere. Use repetition of calculations judiciously, bearing in mind that cosmic ray events, while of short duration in themselves, can and do have long-term consequences. Now do this on commodity hardware that hasn't been designed with the above in mind. Remember, what you write to a register does not necessarily remain unchanged until you read it - so a jump instruction can go to the wrong destination, a cached processor opcode can be changed to an entirely different instruction, a memory location sent to the MMU can be changed, the contents of a memory location can vary from one read to the next, any bit can get latched at any time for a variable duration; and you might need to provide results in real time...
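The majority-decision step above is at least easy to state in code. An illustrative Python voter (not flight software, obviously): run the same calculation on N (odd) cores, accept a result only if a strict majority agree, and treat anything else as an unresolvable conflict to recompute or fail safe on.

```python
# N-modular-redundancy voter: strict majority wins, no majority is an error.
from collections import Counter

def vote(results):
    value, count = Counter(results).most_common(1)[0]
    if count * 2 > len(results):
        return value
    raise RuntimeError("no majority - recompute or fail safe")

assert vote([42, 42, 41]) == 42       # one upset core is outvoted
# vote([1, 2, 3]) would raise: three cores, three different answers
```

The hard engineering, as the rest of the comment says, is everything this sketch takes for granted: that the voter itself is reliable, that results reach it uncorrupted, and that a latched fault doesn't make the same core wrong the same way every time.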
I take my hat off to those who do this stuff for a living. Mother Nature patiently waits for you to make a false assumption and...
(In telecomms, it is possible to test network hardware and protocols with neat equipment where you can dial up a particular error rate on a circuit. I don't know if an equivalent is possible for processors - sticking them near to a potent alpha-, gamma, and/or neutron source might be an approach; or maybe you have to emulate the silicon and run it (slowly) in software to allow random faults to be fired into the system. Building fault tolerant processors can't be easy, or cheap)
The problems with all the radiation spectra in space have been studied: shielding, radiation-hardened manufacturing processes, error correction design, redundancy - even dual mirrored-channel architectures with fault detection between the two have been used in military hardware since the beginning.
We also have facilities that do this here on Earth, such as https://www.bnl.gov/nsrl/ . Someone convinced the grants team that they need real-time data.
I think the biggest problem is radiation-emitting contamination getting into our electronics manufacturing (as well as all man-made products, including shielding), which we have not been able to completely get rid of since the beginning of the above-ground nuclear tests and more recent nuclear disasters (Chernobyl and Fukushima).
1987 - https://books.google.com/books?id=a-pQAAAAMAAJ&dq=redundancy+computer+space&focus=searchwithinvolume&q=redundancy+
1971 - https://books.google.com/books?id=cTuT_TnzCjUC&dq=space+radiation+resistant+computer+design&focus=searchwithinvolume&q=radiation
Recent - http://www-physics.lbl.gov/~spieler/radiation_effects/rad_tutor.pdf