Machine-learning model creates creepiest Doctor Who images yet – by scanning the brain of a super fan

AI researchers have attempted to reconstruct scenes from Doctor Who by using machine-learning algorithms to convert brain scans into images. The wacky experiment is described in a paper released via bioRxiv. A bloke lay inside a functional magnetic resonance imaging (fMRI) machine, with his head clamped in place, and was …

  1. Anonymous Coward
    Anonymous Coward

    As usual a Nobel goal that will undoubtedly be subverted and used for repression

    1. KittenHuffer Silver badge

      Other research into replacing the sound sources in church steeples is hoped to lead to a noble no bell nobel!

      1. Anonymous Coward
        Anonymous Coward

        Bloody autocorrect

      2. David 132 Silver badge

        On the other hand, research into artificial insemination in cattle is a no-bull endeavour...

        1. jake Silver badge

          No bull? Were you planning on volunteering to donate? Because I don't think that'll work very well ...

    2. Mike 137 Silver badge

      Very unlikely

      This is very unlikely to be usable for repression and it's a pretty impressive piece of work.

      "There is significant overlap between the training and testing data. Something like Brain2pix, for example, cannot recreate an image from a brain scan it hasn’t seen before. That means if a participant was asked to watch a brand new episode of Doctor Who and the fMRI images were given to the model, it would not be able to accurately recreate what the participant had seen. The machine-generated images are also heavily dependent on an individual’s brain scans, too: the software's training involved learning a specific person's activity in response to watching the TV show."

      So it can only reproduce (person-specifically) what is currently being presented to the visual field. Consequently it can lead to understanding of the way visual images are processed, but it's not a tool (and never could be a tool) for investigating what's stored by the brain already. Nor is there any suggestion (or likelihood) of it being usable to modify either the interpretation of what's being seen or what's already stored (it only works one way).
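The point above, that the model effectively memorises stimulus/response pairs rather than genuinely decoding arbitrary images, can be sketched as a toy lookup. All of the data, voxel values and frame names below are hypothetical, purely for illustration:

```python
# Toy illustration: a decoder that memorises scan -> frame pairs can
# only ever return frames it has already seen during training.
def nearest_seen_frame(scan, training_pairs):
    """Return the frame whose paired training scan is closest to `scan`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_pairs, key=lambda pair: dist(pair[0], scan))[1]

# Hypothetical (scan, frame) pairs from the training episodes:
training_pairs = [
    ((0.9, 0.1, 0.2), "frame_amy_close_up"),
    ((0.1, 0.8, 0.7), "frame_tardis_interior"),
]

# A scan close to a trained one retrieves the memorised frame...
print(nearest_seen_frame((0.85, 0.15, 0.25), training_pairs))
# ...but a scan from an unseen episode can still only map onto one of
# the memorised frames -- the lookup cannot synthesise a new image.
print(nearest_seen_frame((0.5, 0.5, 0.5), training_pairs))
```

The real system uses a convolutional network rather than nearest-neighbour lookup, but the failure mode on genuinely unseen stimuli is the same in kind.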

      1. RobLang

        Re: Very unlikely

        The researcher is bound by process to list the limits of the research as presented, not the limits of the technology as a whole.

        As the variety, veracity, value and volume of training data increases, there is no reason why this technique could not be used for broader tasks across subjects.

        1. teknopaul Silver badge

          Re: Very unlikely

          isn't this just pattern matching? wouldn't you get similar results with input data that was the person's face while watching the telly?

        2. teknopaul Silver badge

          Re: Very unlikely


          looks like real mad AI science and the results are much prettier.

      2. Anonymous Coward
        Anonymous Coward

        re. This is very unlikely to be usable for repression

        ANYTHING is usable for repression! Remember that great (noble) idea of the Wholly Inquisition?! It was designed to inquire, in whole, the unbelievers, round-earthers, and all such infideli(s), in order to make them confess, repent (etc., etc.) and give them entry through the heaven's gates and such, while in reality, all it came down to was one pathetic "Comfy Chair"! I mean, WTF?!

        1. David 132 Silver badge

          Re: re. This is very unlikely to be usable for repression

          Well, I didn’t expect that!

      3. Zimmer

        Re: Very unlikely

        "So it can only reproduce (person-specifically) what is currently being presented to the visual field."

        Interesting that the image produced had good construction of the face but almost ignored the hair. Further studies may point to how the brain prioritises elements of what we "actually" see (or are concentrated on) and how much is "inferred" by our peripheral vision...

        1. ThatOne Silver badge

          Re: Very unlikely

          > good construction of the face but almost ignored the hair

          One would have to check if this originated in the original human brain, or if it is due to something gone askew in the image creation AI. Maybe it has "learned" that hair isn't important for the picture to be considered valid, so it doesn't bother?

          On the other hand, I could imagine that some people might notice hair more than others; A hairdresser for instance might have a way more conscious look at her hair than somebody who is used to seeing her and doesn't notice details anymore (common problem in couples...).

          On the other hand there could be some sort of pattern recognition in the brain: While a human head is clearly recognized, parsed and remembered (hair or not), the squidhead alien for instance is a mess: Clearly the brain doesn't know how to parse the visual information. I guess it only results in an abstract verbal memory ("head like some sort of octopus"), which could explain why you can recognize two different humans, but would probably not make the difference between squidhead and his kin, all fitting the same abstract and too vague description.

        2. Graham Cobb Silver badge

          Re: Very unlikely

          If the example in the article is correct it seems to suggest that the "image" in the brain is being collected after a lot of processing. In particular, the "image" appears to show an ear, which is not actually visible in the screenshot. Of course, the subject knows there is likely to be an ear hidden behind that hair but interesting that the "image" includes it.

        3. Cynic_999 Silver badge

          Re: Very unlikely

          The image that we "see" is entirely an imaginary construct by the brain, which is merely *guided* by the information coming in from our optic nerves. Just for starters, the image that is formed on the retina takes a whopping 200 ms to travel along the optic nerve. To compensate, the brain continuously *predicts* what will happen in 200 ms' time, and that's what we think we are seeing in real time. It enables us to be able to catch a ball, or dodge a sabre-tooth tiger. But only if everything moves predictably.

          The brain also fills in the huge gaps caused by each eye's blind spot, and erases the shadows of blood vessels and nerves that criss-cross in front of our retina, whilst "correcting" things such as discontinuities in straight lines and regular shapes. Your brain is continuously operating as a real-time graphics processor so well that you don't even notice until the shortcomings are exposed in "optical illusions".

          If the incoming image is significantly different to the image our brain fed us 200 ms ago, the brain will simply alter the short-term memory of what we thought we saw - so we will never know! Which is why you so often hear of accidents where the other object "came out of nowhere".
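The "predict 200 ms ahead" idea described above amounts to simple extrapolation from recent observations. A minimal sketch, assuming constant velocity and one spatial dimension (the numbers are illustrative, not physiological data):

```python
# Sketch of predictive compensation for a 200 ms sensory delay:
# extrapolate from the last two observed positions, assuming the
# object keeps moving at the same velocity.
LATENCY = 0.2  # seconds of transmission delay, per the comment above

def predicted_position(p_prev, p_now, dt):
    """Where the object will be LATENCY seconds after the latest sample."""
    velocity = (p_now - p_prev) / dt
    return p_now + velocity * LATENCY

# Ball moving steadily at 10 m/s, sampled every 0.1 s:
print(predicted_position(1.0, 2.0, 0.1))  # predicts 4.0 m ahead of the stale data
# If the ball suddenly stops at 2.0 m, the 4.0 m prediction is wrong --
# the "came out of nowhere" failure mode when motion is unpredictable.
```

The prediction is only as good as the constant-velocity assumption, which is exactly the commenter's caveat: "But only if everything moves predictably."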

        4. jake Silver badge

          Re: Very unlikely

          "Interesting that the image produced had good construction of the face but almost ignored the hair."

          Not all that surprising. The victim subject of the research was a guy. Ask any gal, us men never notice their hair. This is proof that it's not spite or rudeness, we really don't see it. It's just not important to us, for whatever Darwinian reason.

          So, ladies, if you're getting your hair done just to make yourself attractive to men, you may as well save yourself a few quid. It's now been scientifically shown that not only does it not work, it apparently can't be made to work ... we're just not wired that way.

          1. Evil Auditor

            Re: Very unlikely

            Fully and totally agree with you.

            However, I used to know a girl whom I identified by her hair only. Mind you, she had very long hair, way beyond her bum. And once cut I did not recognise her at all - didn't even look familiar.

          2. This post has been deleted by a moderator

      4. Cynic_999 Silver badge

        Re: Very unlikely

        The danger is that any major advance in the field of brain activity interpretation is likely to be adaptable to lie-detection or emotion-detection. And if these become reliable enough to be deemed "proof beyond reasonable doubt", it allows for the prosecution of thought-crime. Which you may think nobody would accept - but it certainly would be acceptable to many people wrt certain types of crime, e.g. having sympathy for terrorists or their methods, or having sexual thoughts about children.

    3. HildyJ Silver badge

      Non noble

      This will be subverted but not for repression, for profit.

      I smell Elon's Neuralink.

    4. Tim99 Silver badge

      Blowing things up and then giving a prize?

  2. Wellyboot Silver badge

    Not a Dalek?

    Does anyone else think that using Karen Gillan as the stimulus might invoke mental activity slightly different from the image recognition they're after?

    1. Fruit and Nutcase Silver badge
      Paris Hilton

      Re: Not a Dalek?

      30 hours of watching --->>

      should prove the case

    2. Rich 11 Silver badge

      Re: Not a Dalek?

      Potentially even doing away with the need for an fMRI machine.

  3. Anonymous Coward
    Anonymous Coward

    Someone with access to an MRI machine has misunderstood machine learning again...

    A very similar trial was commented on by Rob Newman in his Radio 4 series "Neuropolis" (For those of you going "Who?" Rob Newman is one of the guys from the Mary Whitehouse Experience... Admittedly, that won't help the younger readers here though.) I *think* it's still available to listen to, but I'm afraid I can't remember the episode number, so you'll have to listen to all four...

    The images aren't being reconstructed from brain imagery, it's just a machine learning platform using pattern matching. It isn't going to be able to predict what is being watched without having been trained on the exact combination of Doctor Who episode and a particular fan. This means that the AI can't, for example, reconstruct a frame showing Bonnie Langford and Colin Baker *unless* that video has been included in its dataset and the AI trained to recognise the brain activity of the Doctor Who fan when (s)he sees it...

    1. Tom 7 Silver badge

      Re: Someone with access to an MRI machine has misunderstood machine learning again...


      1. Rich 11 Silver badge

        Re: Someone with access to an MRI machine has misunderstood machine learning again...


        Did you just thcream and thcream and thcream until you were thick?

    2. DrBobK

      Re: Someone with access to an MRI machine has misunderstood machine learning again...

      I heard the Rob Newman radio show in question. He is a very funny comedian. He is not a neuroscientist. He appeared to seriously misunderstand what is going on in analyses of fMRI signals which aim at stimulus decoding. There are some freely available papers by Niko Kriegeskorte that explain it. Here is a good one (I think this one is now free) [ BTW I am a neuroscientist. I was working on information processing in neural networks, artificial and real, in the late 1980s, although I do more work with neuropsychological patients these days. I don't know as much about this stuff now as people like Kriegeskorte, who I have heard talk and who I am very impressed by. ]

    3. Martin an gof Silver badge

      Re: Someone with access to an MRI machine has misunderstood machine learning again...

      Rob Newman is one of the guys from the Mary Whitehouse Experience

      You might have been better pointing out that the Mary Whitehouse Experience was Rob Newman, David Baddiel, Hugh Dennis and Steve Punt. The other three have all had higher-profile subsequent careers, while Rob Newman seemed to drop off the scene for a while until a recent series on Radio 4, which I found rather good. All four are alumni of Cambridge colleges.

      Newman and Baddiel are notable as an early "rockstar" comedy duo whose gigs famously filled Wembley stadium.


      1. David 132 Silver badge

        Re: Someone with access to an MRI machine has misunderstood machine learning again...

        I seem to recall that Newman and Baddiel presented a thoughtful and intellectual history discussion series at one point too.

        1. I am the liquor

          Re: Someone with access to an MRI machine has misunderstood machine learning again...

          You see that Eddie The Eagle Edwards? That's you that is.

    4. RLWatkins

      Re: Someone with access to an MRI machine has misunderstood machine learning again...

      It won't be able to *predict* anything, and it can't read minds. The brain's V1 area is a distorted but otherwise pretty much 1:1 map of the person's visual field. Nothing odd about being able to extract images from it, and people are attempting just that. This is an outgrowth of that research.

  4. Kane Silver badge

    Doctor Who’s fictional sidekick

    As opposed to, say, their non-fictional sidekick?

    1. Wellyboot Silver badge

      Re: Doctor Who’s fictional sidekick

      Multiple characters across dozens of episodes interacted with Amy so we can rule out her being a fictional sidekick within the confines of the fictional construct that is Doctor Who.

      Of course if the 'Doctor' was (is?) a real person and the TV series are dramatizations of real events with characters added for effect then she could indeed be a fictional sidekick.

      Are there any other ways we can twist this grammatical logic :)

      1. TRT Silver badge

        Re: Doctor Who’s fictional sidekick

        Except... wasn't she a ganger? A ginger ganger at that?

    2. Fruit and Nutcase Silver badge

      Re: Doctor Who’s fictional sidekick

      K-9 was the best sidekick of the real Doctor, Tom Baker

      1. davyclam
        Thumb Up

        Re: Doctor Who’s fictional sidekick

        Leela !

        Every time.

        1. Rich 11 Silver badge

          Re: Doctor Who’s fictional sidekick

          Never trust anyone whose eyes change colour for silly reasons.

        2. TRT Silver badge

          Re: Doctor Who’s fictional sidekick

          Sarah Jane...

  5. Anonymous Coward
    Anonymous Coward


    At least brain2pix only slightly distorts the images, rather than the complete wokification of the series under the 13th Doctor.

    1. Anonymous Coward
      Anonymous Coward

      Re: shame

      a complete wookiefication might be quite interesting :-) [1]


      [1] for mysterious values of interesting.

    2. Rich 11 Silver badge

      Re: Distorted

      Oh dear. Someone is struggling to cope with the basic concept of science fiction.

      1. jake Silver badge

        Re: Distorted

        Maybe not the concept of SciFi itself, per se, but rather a meta concept.

        I think it was Ted Sturgeon who first voiced the revelation ...

      2. JDX Gold badge

        Re: Distorted

        The basic concept that it should be a vehicle for whatever agenda you want to push?

  6. TheProf Silver badge

    Speculative Fiction

    Sounds like

    Keller Machine

    You have been warned!

    1. Jellied Eel Silver badge

      Re: Speculative Fiction

      This is nothing new. I first saw this in work from Carpenter, Piper et al back in 1988 where the technology had been successfully incorporated into a pair of glasses.

      (Damnit, I'm out of gum again)

  7. TRT Silver badge

    I've never quite understood this fixation on getting AI to learn what's going on in the brain as an aid to restoring function... SURELY the actual human brain is a far better "learning" machine than a box of binary bits. After all, it comes into the world with just a rough somatotopic set of connections which it then spends the next three or four years pruning and training and associating... fine-tuning the connections so that it can run, see, hear, jump, carry, pick, catch, predict.

    It strikes me that provided one stimulates roughly the right bit of cortex, and is consistent in that stimulation, then the brain itself will do the tune-up in order to make use of that stimulus.

  8. davyclam
    Big Brother

    30 Episodes?

    Made to watch 30 episodes of Dr Who with their head clamped in a vice?

    This sounds more like The Ipcress File or 1984; they probably came out completely bonkers.

    1. TRT Silver badge

      Re: 30 Episodes?

      A Clockwork Orange?

      1. David 132 Silver badge

        Re: 30 Episodes?

        Thanks, but the gears always stick in my teeth. I’d rather have a chocolate one.

  9. Winkypop Silver badge

    Dang it

    I thought this article was about scanning someone’s memory to recreate the remaining “lost” episodes....

    1. David 132 Silver badge

      Re: Dang it

      John Nathan-Turner’s brain in a jar, hooked up to a Van de Graaff generator, several Erlenmeyer flasks of dry ice, a Jacob’s ladder or two, and a giant we-belong-dead blade switch, all operated by a postgrad student with a hunchback and a lisp?

      Now that’s the kind of cutting-edge real science I can get behind. Fund it immediately!

  10. RLWatkins


    The brain's V1 area is pretty much a map of the person's visual field. There is some distortion, which assists the brain in compensating for rotation of the field, but otherwise there's a part of the brain which reproduces a picture of what the eyes see.

    Being able to read images from V1 is ongoing research, and this is an outgrowth of that.

    If you can put a MEG helmet on a subject, you can also point a camera at what they're looking at, so it's not like it's extracting secrets from the human brain. It can't read memories, it can't read visualisations, it can't read non-visual thoughts.

    Not much scary about it. Pretty nifty, actually. May wind up helping some blind people.

    1. David 132 Silver badge
      Thumb Up

      Re: V1

      Thank you. That was a wonderfully clear explanation of something I didn’t know.
