Compsci boffin publishes proof-of-concept code for 54-year-old zero-day in Universal Turing Machine

A computer science professor from Sweden has discovered an arbitrary code execution vuln in the Universal Turing Machine, one of the earliest computer designs in history – though he admits it has "no real-world implications". In a paper published on academic repository ArXiv, Pontus Johnson, a professor at the KTH Royal …

  1. Disgusted Of Tunbridge Wells Silver badge

    > But in this case, all the mitigations of this that I could think of, they need to be add-ons, you can't build it into this machine.

    If the user input was fixed width (either by spec, or determined by the first byte of the user input), this would prevent this attack. It wouldn't be an add-on.

    1. Julz

      I don't think that is correct. You could just put the 'user input ends here' at the beginning of the user input and happily jump to your code. Perhaps if only the user input portion of the 'tape' was mutable, but no, that wouldn't work either as however the immutability was implemented could always be corrupted by your code. Buggered really.

      1. Disgusted Of Tunbridge Wells Silver badge

        If the reader says the first byte determines the length of the user input, and everything after that is the program, then it would look like this:

        021message="hello world"println(message)

        Would be read as:

        vars: message="hello world"

        program: println(message)

        I suppose the question is whether you can prevent the user writing to the first byte of the tape, as we're assuming they can't write to the end of the tape. If the user can write to the end of the tape then it's not really a vulnerability, because they can already execute whatever they like anyway.

        If the user has full control over the entire tape then this discussion is a waste of time.
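        The length-prefix scheme sketched above can be written out as a few lines of Python. This is a hypothetical reading of it (a three-digit prefix is assumed, since the example above uses "021"; the parser itself is invented for illustration):

```python
def parse_tape(tape: str) -> tuple[str, str]:
    """Split a tape into (data, program) using a length prefix."""
    length = int(tape[:3])           # fixed-width, three-digit length field
    data = tape[3:3 + length]        # user input: never interpreted as code
    program = tape[3 + length:]      # everything after the data is program
    return data, program

# The example above: '021' says the next 21 characters are data,
# so nothing the user writes can masquerade as program.
data, program = parse_tape('021message="hello world"println(message)')
print(data)      # message="hello world"
print(program)   # println(message)
```

        Because the boundary is set by the prefix rather than by a marker inside the data, a smuggled "user input ends here" delimiter is just more data.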

      2. chuBb.

        Could just read it forwards then backwards (similar idea to a magstripe): you should only encounter two 'user input ends' markers, but you would have three from a quick scan of the exploit details. I don't know the mechanism well enough to know if the tape reader has a reverse gear, though.

        Want to see this exploit demonstrated on one of the many Lego UTMs on YouTube...

        1. katrinab Silver badge
          Unhappy

          The tape is of unlimited length, so no. A second tape would work though.

          The tape does have a reverse gear by the way.

          1. EnviableOne

            My thoughts exactly

            an input tape and a program tape

      3. b0llchit Silver badge

        The UTM is vulnerable to both virtual and physical compromise. Even if you have an immutable program, then you still may be compromised by a "screwdriver".

        As an example; with all the modern electric grids being online and so, you may time a glitch perfectly to make the immutable mutable. Any and all defense will eventually be broken in a new offense strategy. Otherwise we'd still be fighting wars with sticks and stones.

        1. EnviableOne
          WTF?

          Ah, the "we're going to be compromised anyway, so why do security?" defence? It holds no weight with me: just because perfect security is improbable doesn't mean you shouldn't strive for it.

          Security isn't all about the Red team. The Blue team fights valiantly against the onrushing tide, knowing that their efforts may not deter or repel all attackers, but the attackers are going to have a fight on their hands...

    2. katrinab Silver badge

      That wouldn't work for the use-case, as it needs to be able to handle arbitrary amounts of data.

      I think you would need to have separate program and data tapes.

  2. Pascal Monett Silver badge
    Holmes

    "where in the design process should we start trying to implement security features?"

    AT THE BEGINNING.

    1. Julz
      Pint

      Re: "where in the design process should we start trying to implement security features?"

      Hum, would that be before or after performance engineering, which also needs to be designed in from the start? Oh, and extensibility and a whole host of other properties. Not to forget the documentation; leaving that to the end is a disaster. I guess we just have to do everything at the start of the design process except the implementation. As for testing, if we design it all OK, who needs that :)

      Down to the pub (where we will soon be able to not freeze our collective butts off) as all good design starts there.

      1. Anonymous Coward
        Boffin

        Re: "where in the design process should we start trying to implement security features?"

        No. A thousand times no.

        For mission critical programs, from a programming point of view, it is far better to have a poorly performing secure program than a well performing insecure program. Management may disagree but the former can be better optimized post hoc, the latter cannot.

        The key problem, as the article points out, is that data must be distinguished from code. Prior to Turing et al., this was done on unit record equipment by having the data on punch cards and the code on a control panel wired with patch cables.

        Later computers solved this by loading the program first and then running the program to access the data. This, of course, would not work for a UTM and the only solution I can see would be to fix the length of the input data.

        1. Claptrap314 Silver badge
          Mushroom

          Re: "where in the design process should we start trying to implement security features?"

          Acksuawlly....

          Maintainability comes even before security. Because you don't know what "secure" is going to mean tomorrow...

          What happens when a newly discovered vulnerability comes out in your "secure" code if you cannot maintain it -->

    2. Anonymous Coward
      Anonymous Coward

      Re: "where in the design process should we start trying to implement security features?"

      > AT THE BEGINNING.

      But... but... but... that's <whispers> waterfall

    3. Doctor Syntax Silver badge

      Re: "where in the design process should we start trying to implement security features?"

      "AT THE BEGINNING."

      And do so by designing a clear physical separation between what's instructions and what's data. Simply making a logical distinction based on what's no more than housekeeping information allows that housekeeping information to be modified, and then the machine has nothing to tell it that it's reading data as code.

      1. Nick Ryan Silver badge

        Re: "where in the design process should we start trying to implement security features?"

        This, just this. As soon as data and code mix there are opportunities for bad stuff to happen.

        Where does the vulnerability in SQL injection exploits come from? Developers utterly forgetting this simple separation between data and code. It's particularly stupid that this still happens all the damn time in 2021, despite the solution having been provided in SQL for a couple of decades through the use of parameters.

        Of course it's possible for the SQL interpreter to have a vulnerability, but that's a different problem altogether.
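        The parameter fix described above can be shown in a few lines with Python's built-in sqlite3 driver (the table and values here are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT)")
con.execute("INSERT INTO users VALUES ('alice')")

# Vulnerable: user input is spliced straight into the code.
evil = "x' OR '1'='1"
unsafe = f"SELECT name FROM users WHERE name = '{evil}'"
print(len(con.execute(unsafe).fetchall()))           # 1 -- the injection matched every row

# Safe: the ? placeholder keeps the parameter on the "data tape".
safe = "SELECT name FROM users WHERE name = ?"
print(len(con.execute(safe, (evil,)).fetchall()))    # 0 -- treated as a literal string
```

        The difference is exactly the code/data separation discussed above: the placeholder guarantees the input is only ever compared as a value, never parsed as SQL.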

        1. Mike 16

          Re: "where in the design process should we start trying to implement security features?"

          IIRC, the original paper (1936) described a machine that did physically separate the "program" (defined by a state machine) and the "data" stored (and alterable) on the tape(s). Looks like some time between 1936 and 1967, somebody decided that was uninteresting and defined a machine with a fixed (but "magic") state machine that would interpret a "program" preloaded on the tape. I'd be looking at the hack that made ENIAC sort of "stored program" by wiring an interpreter to read (PROM-ish) "data tables" and operate on (RAM-ish) data. Of course, that program could probably be another interpreter...

          I recall a Datamation article on emulation that referenced a university running "precious" software written for a Bendix G-15 on an emulation of the G-15 running on an IBM 1620, running on an emulation of the 1620 running on an IBM System/360. Of course, nowadays the 360 would also be emulated, on something like a Xeon, whose ISA would itself be emulated internally, as the various CPUs made this millennium typically do.

          1. Claptrap314 Silver badge

            Re: "where in the design process should we start trying to implement security features?"

            No, the idea of a UTM is tightly associated with Gödel's work. Remember that in the early days, CS was nothing other than a branch of mathematics. I expect that the paper that proposed a Turing machine also included a UTM.

    4. jake Silver badge

      Re: "where in the design process should we start trying to implement security features?"

      Actually, BEFORE THE BEGINNING.

      First, the programmer needs to fundamentally understand security within the context of the situation. Then the design can begin.

  3. b0llchit Silver badge
    Alert

    The illusion of absolute security

    There is no computer that can be completely secure. Perfect security is an illusion.

    Software can be written defensively, but that will not secure you against all possibilities. You are also using hardware, which can make mistakes. That cosmic ray just hit the wrong gate at the wrong time, throwing the software into a different state, etc...

    As shown, you cannot build security into the system from the start and expect it to be perfect. And when you add it in the process of creating other facets, like software, you still will be lacking or leaking somewhere at some stage. There will always be a compromise possible, maybe unlikely, but the chance is there.

    Now then, we people tend to be very bad at evaluating risk and are subject to influences from all sides. The question whether a risk is acceptable is a question already answered. We all want perfection but none can perform perfection. So it seems, the machines are just as bad as we are. So, when are the machines replacing us all?

    1. chuBb.

      Re: The illusion of absolute security

      The one powered off and buried in 10 feet of concrete is pretty secure, just somewhat lacking in utility.

      1. Peter Gathercole Silver badge

        Re: The illusion of absolute security

        Back in the early days of one of the major UNIX variants, one of my colleagues actually wrote in the "Remediation" section of a serious security problem report sent to the developers something along the lines of "Turn the system off, unplug it, put it in a secure cupboard and throw away the key".

        He felt that this was the only way to prevent this particular security exposure from being exploited.

        1. jake Silver badge

          Re: The illusion of absolute security

          That was TCP/IP connectivity, right?

        2. Onen hag Oll

          Re: The illusion of absolute security

          Even the remediation had a critical flaw. He omitted 'Lock the cupboard' and probably should have put 'Destroy the key'.

      2. Cuddles

        Re: The illusion of absolute security

        Exactly. It's pretty much always a trade-off between security and convenience. You can make a computer arbitrarily secure by making it arbitrarily difficult to access. There really is no other way: no matter how much you try to build security into the system itself, it's always going to be vulnerable to things like malicious insiders, supply chain attacks, or just good old rubber-hose cryptography. You really can't protect against an authorised user with a gun to their head, so the only way to be completely secure is to eliminate the user entirely. As a wise computer once said, the only way to win is not to play.

        1. Nick Ryan Silver badge

          Re: The illusion of absolute security

          This is the same with data. The only secure data is data that you do not have. As soon as you have data then it will be insecure in some way.

          Which does lead to one of the data-handling lessons that many people forget: only collect the data that you need and nothing else.

      3. gnasher729 Silver badge

        Re: The illusion of absolute security

        I remember, 25 years ago, the military liked a really primitive web server that could only be controlled from the keyboard connected to the computer it was running on. Security? The two guys with machine guns outside the server room.

    2. Anonymous Coward
      Anonymous Coward

      Re: The illusion of absolute security

      "We all want perfection but none can perform perfection. So it seems, the machines are just as bad as we are. So, when are the machines replacing us all?"

      According to my QC Monte Carlo simulation it will be on or about 14 July 2057.

    3. amanfromMars 1 Silver badge

      Re: The illusion of absolute security

      So, when are the machines replacing us all? ..... b0llchit

      Virtual machines are already replacing former leading SCADA commanders and controllers/elite executive office systems administrations, b0llchit.

      And if you want to know more about the style of machine, and obviously just a handful of their inspiring aspirational and extremely rational goals, for a sentence or two or three is not going to reveal too much of anything groundbreaking revolutionary and Earth shattering to worry and terrify the natives inordinately, whenever all that is shared is designedly benign and easily thought too fantastical to ever be likely, ..... although in truth a current rapidly expanding dilemma and Sublimely ACTive Stealthy Action against which there are no known effective defences or attack vectors ..... Another Approach

      That would make IT a Superb Almighty Weapon well deserving of the Great and the Good having.

      I wonder if that is similar to anything Dominic Raab, the most recent media face for the UKGNI Foreign Office, is pimping GB can supply to African nations, with government trying to hold on to doing their usual thing of assuming the position of a vital indispensible middleman/snake oiler between canny supplier, that and/or those with the much sought after, sensitive and secure proprietary intellectual property portfolios, and exceptionally excited and enthusiastic customer client partner. ......... https://www.theregister.com/2021/05/12/cyberuk_dominic_raab_22m_indopacific/

      Oh,... and why did nobody around the Cabinet table tell Mr Raab that £22 million is peanuts nowadays and practically always buys one next to nothing worth having? Such suggests to me that he is not up to the task of securing the brief ...... with his close colleagues attending meetings at No 10 also leaving far too much to be desired and for too much left undone to be thought the best available to make a success of the job required.

      J'accuse.

    4. jake Silver badge

      Re: The illusion of absolute security

      "So, when are the machines replacing us all?"

      Never. Or, rather, not until the machines don't need humans in the loop .... which is as close to never as makes any difference to everyone reading this.

  4. Howard Sway Silver badge

    Of course a Universal Machine is vulnerable to exploits

    In theory it could run Windows.

  5. Binraider Silver badge

    A logical memory model clearly identifying program RAM and its length, and data RAM and its length, would go a long way to addressing this type of exploit, rather than just a contiguous address space. Not new ideas, but do they make it to hardware or software implementations without bugs of their own? That'll be a nope.

    1. Claptrap314 Silver badge

      The UTM by definition has an infinite tape. You don't get to change that.

  6. Bruce Ordway

    nothing is totally secure

    I sometimes think insecurity is just part of our nature, on an emotional level.

    I trust myself so... why shouldn't I be granted full access?

    "Other" people though, yeah probably need some rules for "them".

    1. Anonymous Coward
      Anonymous Coward

      Re: nothing is totally secure

      I don't trust anybody, myself included.

      1. stungebag

        Re: nothing is totally secure

        And THAT'S why you'll never be in a Who, Me? column.

  7. Mike 137 Silver badge

    Turing -> von Neumann -> Intel et al

    This is a fundamental property of any process which relies on sequential dependencies. The (now essentially standard) von Neumann architecture is a classic example: a word is interpreted as instruction or data depending on what the previous word was interpreted as. So get out of step just once and you're pretty much lost for good. Of course Turing wasn't considering security or even robustness - just functionality. But in fact the von Neumann architecture is basically an implementation of a Turing machine. The Harvard architecture, where data and instructions are stored and accessed separately, was invented for robust systems to avoid this problem.
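    The failure mode described above can be shown with a toy von Neumann machine: one flat memory holds both code and data, so a store aimed at the wrong cell silently rewrites the program. The instruction set here is entirely made up for illustration:

```python
# Toy von Neumann machine: a single flat memory holds instructions
# and data alike, so a STORE can rewrite the program itself.

def run(mem):
    pc, out = 0, []
    while mem[pc] != "HALT":
        op = mem[pc]
        if op == "PRINT":                    # print the word at address mem[pc+1]
            out.append(mem[mem[pc + 1]])
            pc += 2
        elif op == "STORE":                  # mem[addr] = value, both inline
            addr, value = mem[pc + 1], mem[pc + 2]
            mem[addr] = value
            pc += 3
    return out

# Intended use: STORE updates the data cell (6), PRINT reads it back.
print(run(["STORE", 6, "patched", "PRINT", 6, "HALT", "original"]))
# → ['patched']

# Hostile use: the same STORE aimed at cell 4 rewrites the PRINT
# instruction's operand, and the machine happily leaks cell 7.
print(run(["STORE", 4, 7, "PRINT", 6, "HALT", "greeting", "secret"]))
# → ['secret']
```

    A Harvard machine would reject the second program outright, because cell 4 lives in instruction memory and is simply not a legal store target.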

    1. Charles 9

      Re: Turing -> von Neumann -> Intel et al

      Even so, this suggests that any computational architecture has its limits against a truly determined adversary. Even a true Harvard architecture would still be potentially vulnerable to something like Return-Oriented Programming which can work entirely on already-existing code (which can even be signed and/or read-only).

      What this article says to me is that computational security is essentially a siege problem: intractable long-term against a sufficiently resourced attacker, for the simple reason that the target MUST be fixed (locked in) at some point.

    2. Mike 16

      Re: Turing -> von Neumann -> Intel et al

      In the "First Draft" report on which the term "von Neumann machine" is based, the machine uses what we would today call a "tagged" memory architecture. If a word is intended to be an instruction, that tag will be set (magically, of course, by the not-really-specified process of loading the program) and cannot be changed thereafter. The interesting thing is what happens on "crossed use", i.e. fetching "data" as an instruction or reading/writing "instructions" as data.

      Tagged as instruction, used as an instruction: "normal"

      Tagged as data, used as data: "normal"

      Tagged as data, used as instruction: effectively a "load immediate"

      Tagged as instruction, used as data: read normally, but masked on store (only the address bits of the instruction change)

      Note that like the ENIAC example, one could always write an emulator for a machine that could alter its (the emulated machine's) instructions on the fly, but the only way to change the actual machine's program was to load a new one, in some unspecified way.

      Another interesting wrinkle was that there were no conditional branches, only conditional expressions like the C "?:" operator (or ARM predicated instructions). The resulting value could then be used to alter the target address of a jump instruction, which could then be executed.
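      The masked-store rule described above can be sketched in a few lines. This is one reading of the comment, not the First Draft's actual encoding, and the 12-bit address field is a made-up width:

```python
# Tagged-word store rule: data words are overwritten in full,
# instruction words only have their address bits replaced.

ADDR_MASK = 0xFFF        # hypothetical 12-bit address field

class Word:
    def __init__(self, value, tag):
        self.value = value
        self.tag = tag            # "instr" or "data"

def store(word, new_value):
    if word.tag == "data":
        word.value = new_value    # normal store
    else:
        # Instruction word: opcode bits are immutable;
        # only the address bits of the instruction change.
        word.value = (word.value & ~ADDR_MASK) | (new_value & ADDR_MASK)

d = Word(0x123456, "data")
store(d, 0xABCDEF)
print(hex(d.value))               # 0xabcdef -- fully overwritten

i = Word(0x123456, "instr")
store(i, 0xABCDEF)
print(hex(i.value))               # 0x123def -- opcode bits preserved
```

      The upshot is that a program can still compute new jump targets (address bits are writable), but it can never turn a data write into a new opcode.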

      1. stungebag

        Re: Turing -> von Neumann -> Intel et al

        This is exactly what Burroughs/Unisys Large Systems and their successors had, and still have. Everything in memory is tagged, and if it's data you can't try to execute it - the hardware won't have it. Except these days it's firmware rather than hardware.

    3. Anonymous Coward
      Anonymous Coward

      Re: Turing -> von Neumann -> Intel et al

      @Mike_137

      1. Harvard Architecture. If instructions and data are kept separate, then a compiler would be in the instruction part, and both the source code and the compiled object would be in the data part. Libraries would have to be in both the instruction part (so that dynamic linking could work)......and in the data part (so that static linking could work).

      2. Prolog. In this language there is no distinction between the "code" and the "data".

      3. Self-modifying programs. Is the "instruction code" completely debarred from writing into its own space?

      Can someone help me out?.....I really don't understand!!!!

      1. Robert Carnegie Silver badge

        Re: Turing -> von Neumann -> Intel et al

        The theoretical Turing machine has only one form of storage, an infinitely long tape.

        I may get a lot of this wrong, but I think it's been proved that any computing task on any computing hardware is logically equivalent to one Turing machine with enough tape.

        Now... hypothetically, a more elaborate Turing machine, with a tape for programs and another tape for user data, would not have this security problem.

        But it also would be logically equivalent to a basic Turing machine with one tape. So that actually can be secure... theoretically. I leave details to be worked out by the student. :-)

      2. MarkSitkowski

        Re: Turing -> von Neumann -> Intel et al

        We use self-modifying code to create 'virtual' encryption keys. These are scattered throughout the executable, and are inaccessible to anyone except the executable itself, once they're set.

        1. Robert Carnegie Silver badge

          Re: Turing -> von Neumann -> Intel et al

          Make sure that the compiler doesn't optimise your key value to 000000000000 ;-)

  8. amanfromMars 1 Silver badge

    Special AIR Service with/for Advanced IntelAIgent Resources on Sensitive Operational Missions.

    A computer science professor from Sweden has discovered an arbitrary code execution vuln in the Universal Turing Machine, one of the earliest computer designs in history – though he admits it has "no real-world implications".

    Though he admits, as far as he knows, it has "no real-world implications" is a singularity view with every possibility and therefore inevitable probability that it definitely does have real-world implication as observed and experienced by others enabled to be able to share the results of the consequences, as they know how they can be ..... and would Present the Information and Intel on Advanced IntelAIgents to Current Extant Mass Multi Media Mogul Operations to BroadBandCast and Deliver to Audio/Visual Output Outlets what is Successfully Already Well Done and Providing Sustaining Driver Instruction to Virtual Machinery/Universal Turing Machines via Stellar Per Ardua ad Astra Internet Service Providers ...... is news which would fundamentally contradict Pontus Johnson, a professor at the KTH Royal Institute of Technology in Stockholm, Sweden

    UKGBNI MoDified for NATO Protected Project Programming. That's where we're all at today. What be you at, and where? What have you come from and where are you going if you think it is worth following. Anywhere special and heavenly or do you fear you veer towards the terrible and hellish? There's pills and potions for those deadly destructive blues ....... as there are also for those gifted with sight in the more enjoyable, creative hues that obliterate such debilitating darkness.

    1. Francis Boyle Silver badge

      By jingo , I think you've got it

      If you can't understand it you can't exploit it!

  9. amacater

    So - when should you halt it to patch it?

    And, if it's unpatchable/obsolete - when should you stop the production line ...

    1. Anonymous Coward
      Anonymous Coward

      Re: So - when should you halt it to patch it?

      Aha ... the halting problem! (Perhaps you could check all programs with another Turing Machine first to determine if they were self-modifying.)

  10. MrMerrymaker

    "where in the design process should we start trying to implement security features?"

    The concept stage. The concept is not complete without preventative security considerations.

  11. Anonymous Coward
    Anonymous Coward

    the real question

    I'm still looking for the real question: will the vulnerability be patched right away, or will we have to wait until Patch Decade for an in-band release?

    Oh, and shame on the researchers for releasing a 0-day. Come on, notify the vendor. Sure, Minsky being dead is inconvenient, but security ain't an easy field. Call a seance or something!

  12. Claptrap314 Silver badge

    Publish or perish?

    What I assumed about a UTM when I first read about it was that the state machine would be on the "even" bits, and that the data would be on the "odd" bits. Alternatively, one could put the state machine on the left bits, put the right bits of the tape on "even" right, and left bits of the tape on the "odd" right.

    Problem solved. Full stop.

    The lack of imagination by the computer scientist is really disturbing to me.

    --

    It never occurred to me to use some sort of stop sigil to separate the machine state description, as this would require skipping over the state machine description every time the head needed to switch from one side of the description to the other.

    But one of the points of a TM is that it can never be realized in physical hardware. We don't have infinite tapes. So we can never have a physical UTM. The entire point of the UTM is to translate Gödel's theorem into the realm of CS. If someone implemented a quasi-UTM, the first thing you have to understand is that it is not a UTM.

    The finiteness of any quasi-UTM that can be physically realized is an obvious point to check for problematic behavior. That the famous implementation did not worry about attacker behavior isn't particularly interesting in any context I can imagine.

    1. Robert Carnegie Silver badge

      Re: Publish or perish?

      An answer to the lack of infinite tape is to have the UTM order more tape from Amazon as necessary. ;-)

  13. jake Silver badge

    Calling it a "vulnerability" is a bit of a stretch.

    Did the author of the Machine ever make the claim that it was not vulnerable? Has anybody else? How did this researcher come to the conclusion that it should be invulnerable? Just because it is a simplistic example "invented" by Minsky? That's pretty fuzzy thinking, if you ask me.

    Honestly, the only reason nobody "wrote a paper" on this before is because it's completely irrelevant to what the UTM represents, and what it is for.

    1. Anonymous Coward
      Anonymous Coward

      Re: Calling it a "vulnerability" is a bit of a stretch.

      "the only reason nobody "wrote a paper" on this before is because it's completely irrelevant to what the UTM represents, and what it is for."

      The field of papers in "security theater" is now so crowded with irrelevancies (things like data leaks at 1 bit per month via error-corrected modulation of fan speed, and other such delights) that there's very little left for the noobs to aim for. Where haven't "security theater researchers" already been? The Turing machine, apparently.

      Who wants to guess what comes next?

      Meanwhile an oil pipeline in North America is looking truly vulnerable. Has anyone looked into how that kind of thing might be prevented? Are there any column inches (sorry, page views) to be had with something even slightly realistic and relevant?

      1. jake Silver badge

        Re: Calling it a "vulnerability" is a bit of a stretch.

        "Meanwhile an oil pipeline in North America is looking truly vulnerable. Has anyone looked into how that kind of thing might be prevented?"

        Many decades ago, actually. Try connecting to the gear that monitors The Beam at SLAC, for example. Or the controls for the Stanford Dish. Or San Francisco's Hetch Hetchy water supply. Or rather, don't bother. You can't. Grad students wanted to hook 'em up to the 'net back in the late '70s or early '80s; the sane among us put the kibosh on their plans.

        Commercial interests of today, however, are truly insane. We tugged on their capes, and were shrugged off. We tapped 'em on the shoulder & were elbowed away. We tugged on their shirts, and were thrust aside. Some even kissed their boots, and were trodden upon. Our message was always the same: "Please, PLEASE, **PLEASE!!** don't connect SCADA to publicly available networking systems!"

        But did they listen? No. They did not. The idiots.

        On the bright side, those of us with a clue are making a pretty penny in our retirement, cleaning up the resulting mess :-)

        Yes, I know, I've posted this or similar before. It's still accurate.

    2. bombastic bob Silver badge
      Devil

      Re: Calling it a "vulnerability" is a bit of a stretch.

      it lacks proper input sanitization, and is therefore vulnerable to code injection.

      How about that - the world's oldest 0-day exploit is a code injection vulnerability!

      [I was actually expecting 'buffer overrun' when I started reading the article]

      so yeah - in MY book of definitions, that'd be "a vulnerability".

  14. Steve B

    Shows we have lost the plot!

    Nearly 50 years ago, our computers would only run precompiled code that had been loaded from a code library.

    On top of that, code could only be loaded into memory designated as code memory and data could only go into data memory. Attempting to write data into a code block would cause an exception and halt the program. Attempting to execute data would also create an exception.

    All very simple and going well until IBM and Microsoft came along with their high falutin marketing and destroyed IT for decades.

    1. bombastic bob Silver badge
      Devil

      Re: Shows we have lost the plot!

      eh, that's not _ENTIRELY_ true...

      You're describing "Harvard Architecture" where code/data spaces are separate things. Your typical minicomputer never did this. In fact, PDP-11 code could even be categorized as "self modifying" when you put variable parameters after the function call, directly in INSTRUCTION SPACE, by using the previous program counter as a base register, and then cleaning the stack up with the 'RTS' instruction. Soft interrupts are similar, parameters are expected after the EMT instruction and the stack gets cleaned up when you return from interrupt. And to pass those parameters, you literally poke the values into the code space before making the call.

      So it's worth pointing out that many non-IBM computer systems have had code/data in the same address space, particularly microprocessors and minicomputers. The big iron machines may have had separate code/data, but not necessarily all of them.

      Anyway, some computer history from 50 years ago from someone who was there...

      [worth pointing out - AVR microcontrollers use 'Harvard Architecture' so that you can run the program directly from NVRAM]

  15. amantrappedincebu
    Coffee/keyboard

    Infinity

    Of course the fault with the UTM is in the spec. There's no such thing as a tape, or anything else physical, of infinite length. That's the same fault as Gödel's infinite string problem. Yes, pi is of infinite length, at least in decimal, but that number base can't even express a third, which is simple in ternary: 0.1. So I suggest there are three tapes, each of finite length: the first being the size of the base being used, the second being the code and the third the data. Now the base to be used could be a large number (say 1 less than 2 to the power 282,589,933) and would have to be specified in a pre-defined number base, otherwise you'd need a fourth tape to specify that . . . I begin to see a problem
