Tesla's Dojo supercomputer is a billion-dollar bet to make AI better at driving than humans

Tesla says it is spending upwards of $1 billion on its Dojo supercomputer between now and the end of 2024 to help develop autonomous vehicle software. Dojo was first mentioned by CEO Elon Musk during a Tesla investor day in 2019. It was built specifically for training machine learning models needed for video processing and …

  1. Anonymous Coward

    GPU demand.

    First it was crypto mining, now it's AI. Maybe one day gamers will be able to get some GPUs.

    Apparently I can't choose a sarcastic Icon without telling the world who I am.

    1. Anonymous Coward

      Re: GPU demand.

      At least the used cards from the miners had video outputs.

      When the neural net market crashes and these big systems are dismantled by the receiver, you'll be able to buy their cards dirt cheap - and do nothing useful with them :-(

  2. Pascal Monett Silver badge
    FAIL

    "But then, you get to, like, 10 million training examples, it becomes incredible"

    Great news.

    Call me when it becomes reliable.

    Oh, and useful.

    1. GruntyMcPugh

      Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

      ...and can fit in a car. So they have a supercomputer that is an 'AI' and can drive. That's nice. How do they engineer that into something that is viable in a vehicle?

      1. Anonymous Coward

        Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

        Training and running are different.

    2. tfewster
      Devil

      Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

      And on an on-board computer.

      Edit: Already said by GruntyMcPugh

    3. Someone Else Silver badge
      Facepalm

      Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

      ... and stops driving itself into the back of stopped fire engines and such at speed.

      1. Merrill

        Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

        >>and stops driving itself into the back of stopped fire engines and such at speed.

        These would seem to be the very negative examples needed to train the AI what not to do...

        10 million crash records are what is likely needed in order to avoid crashes, not 10 million examples of uneventful trips.

    4. DJO Silver badge

      Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

      Musk as ever seems to be as dumb as a brick.

      10 million training examples or 100 million training examples or a zillion billion quintillion training examples are completely useless unless they are all graded. Or to put it another way: how good a driver will the AI be after being trained by watching how average drivers perform?

      To train to be a good driver it needs to learn from examples of good driving which you are not going to get from the majority of Tesla drivers (or BMW drivers or Audi drivers or...).

      1. bazza Silver badge

        Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

        Maybe he's got an AI doing that grading?!

      2. DS999 Silver badge

        Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

        How does training on a single action taken by a good driver really differ from a single action taken by a bad driver? OK, obviously there's a difference between performing a lane change by checking, signaling, then re-checking and making the lane change, versus checking and starting the lane change simultaneously, then swerving back if that check indicated there was someone next to you. But that's pretty obvious stuff: you don't need AI to tell a computer driving a car how to do a lane change based on comparisons between good drivers and bad. You just program it to do it in a certain way and that's that.

        Most of what separates good drivers from bad is awareness. Keeping track of where other cars are and how fast they are moving; keeping your eyes ahead so you notice things like brake lights coming on half a mile down the road; looking along the sides in the city for children or animals or soccer balls going into the street. Knowing certain intersections are problematic because vegetation or parked cars block the view of the cross streets, so even when you come to a stop you have to keep a lookout as you move through: the other traffic doesn't stop, and you might not see it until you are halfway into the intersection.

        Unless the AI is recording not only exactly where the good driver's eyes are looking, but also WHY he is looking there - and has a camera on his foot to notice how what he saw made him take his foot off the gas and hover it over the brake, ready to stop - it can't capture any of that. The AI would need to read his mind to know why he treats that problematic intersection differently, or have been tracking him long enough to have seen the times when he didn't treat it differently and had to slam on his brakes because a kid riding a bike he couldn't see behind the hedges reached the intersection just as he put his foot on the gas to move through it.

        Training on "good drivers" is a fool's errand. They almost never get into trouble, so the AI isn't going to learn anything from them.

        1. werdsmith Silver badge

          Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

          Good driving is maximising the margins between yourself and danger. Identifying and minimising risks, constantly. It’s very tiring.

          I drove a transit van 200 miles yesterday and tried to maintain the careful driving and had a few lapses. At the end I was knackered.

          I am a bad driver trying to improve.

          The aim of getting autonomous vehicles to outperform humans is a low bar.

          1. Michael Wojcik Silver badge

            Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

            The aim of getting autonomous vehicles to outperform humans is a low bar.

            Yes, but it's still not an easy problem, because of the really, really, really long tail – the vast array of improbable cases.

            How many cases of "driving at highway speed and a significant part falls off the vehicle in front" are in Tesla's corpus? That's happened to me a couple of times. How many of "a significant part just fell off this vehicle"? I towed a Jeep once that had a wheel come off after the axle sheared, due to a manufacturing defect. How many of "driving down the highway and there's a vehicle on its side in the passing lane, facing back down the highway"? That's happened to me three times, once at night in a heavy snowstorm. How many of "oil slick on a hill on a curving country road" – I had that once. How many of "some random dude trying to direct traffic around a truck that's double-parked on a city street"?

            Have Teslas FSD'd over Hardknott Pass, or other roads with sufficient grade that you can't see the road surface in front of you? I've done a few of those. (Actually I don't know where Tesla camera mounts are; maybe this isn't an issue.)

            Humans are rubbish at maintaining vigilance, but surprisingly adaptable to novel situations.

            1. werdsmith Silver badge

              Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

              But there will be examples of all your scenarios of humans making wrong decisions and screwing up. People screw up Hardknott every day in summer. I’ve seen people meet novel situations and panic.

        2. DJO Silver badge

          Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

          How does training on a single action taken by a good driver really differ from a single action taken by a bad driver?

          It doesn't, as long as the system knows which is which - hence my statement that they need to be graded. Training needs to be complete with "do this" and "don't do this" examples, and gradations between the two positions, so you get "do this if X happens" and "don't do this if X happens" and then "unless Y happens as well". This is the problem with driving: there are multiple stimuli to consider, and they all need to be considered in context. Ultimately you end up with variations of the bloody trolley problem.

          The actual driving - keeping in lane, and to a speed which maintains safe separation between vehicles - accounts for well over 99% of driving and is relatively simple to automate; it's the remaining fraction which is tricky.
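
          To be concrete about "graded", here's a toy Python sketch - all names invented, and no claim this resembles Tesla's actual pipeline:

            # Hypothetical: each example pairs a situation and an action with a
            # grade, so training can weight "do this if X" against "don't do
            # this if X, unless Y" instead of imitating every trip equally.
            from dataclasses import dataclass

            @dataclass
            class GradedExample:
                situation: dict   # context: gap to the car ahead, weather, ...
                action: str       # what the driver did
                grade: float      # -1.0 (never do this) .. +1.0 (textbook)

            examples = [
                GradedExample({"gap_m": 40, "wet": False}, "change_lane", +0.9),
                GradedExample({"gap_m": 5, "wet": True}, "change_lane", -1.0),
            ]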

          1. Anonymous Coward

            Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

            " multiple stimuli"

            But Tesla does not have "multiple stimuli"; Elon musktwat removed sensors to have only cameras, like a fuckwit.

            1. DJO Silver badge

              Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

              True, but a camera can see more than one thing at a time: a couple of kids running on the pavement, any parked cars, a stopped ice-cream van, an elephant in a side road and so on. It needs to isolate each item, work out if it's relevant and act accordingly, all in real time - that's what I mean by "multiple stimuli", not inputs from a variety of sensors.

              But on the paucity of sensors in Teslas, damn right, doing it all by vision alone is astonishingly stupid.

        3. that one in the corner Silver badge

          Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

          > You just program it to do it in a certain way and that's that.

          Which is a great idea and I really hope that there are manufacturers actually doing that.[1]

          But from everything reported about Tesla's approach, there is never anything about actually programming in *any* set of actions; it is all about just training the neural net, which will then do everything of its own accord.

          Please, please, provide citations to demonstrate I'm wrong about Tesla (and please, please, share any references showing that the other manufacturers are taking the sensible route!)

          [1] Although hopefully they aren't literally coding in that sequence of actions, but are instead setting into place a (sub)plan of actions and then attempting to execute that plan with constant monitoring of the requirements and conditions at each stage - more of an old-style AI approach that doesn't just rely on checking the surrounds at fixed points in the sequence. That also provides the model with a way to update the plan - for example, with whatever is the currently available route (if any) for getting the hell out of the way when that maniac in the next lane suddenly moves in on you midway through the manoeuvre.
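
          In sketch form, that plan-with-monitoring approach is only a few lines of Python (my own invented names, not anyone's published architecture):

            # Execute a plan step by step, re-checking the conditions before
            # *every* step; if a precondition fails mid-manoeuvre, build a new
            # plan (e.g. the get-out-of-the-way route) and carry on with that.
            def execute_plan(plan, conditions_hold, replan, perform):
                for step in plan:
                    if not conditions_hold(step):
                        return execute_plan(replan(), conditions_hold, replan, perform)
                    perform(step)

            # Toy usage - stubs so the sketch runs; a real system would also
            # bound how often it is allowed to replan:
            execute_plan(
                plan=["check_mirrors", "signal", "recheck_gap", "steer_across"],
                conditions_hold=lambda step: True,  # stand-in for real checks
                replan=lambda: ["abort", "centre_in_lane"],
                perform=print,
            )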

          1. DS999 Silver badge

            Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

            "Driving" in a general sense probably cannot be done without a neural net, but that doesn't mean certain actions can't be hardcoded like how to perform a lane change. You don't want the neural net to think it has found a "better" way of doing that when no improvement is possible.

            Given reports of Teslas creeping through intersections instead of coming to a complete stop, and stuff like that, I have to assume nothing is hardcoded - because coming to a complete stop at an intersection is going to be the first thing you hardcode if you have the capability. It is also probably the first thing a neural net wants to quit doing, since it would save time and battery doing California stops; and if it was learning from what its driver did when he was in control, it would "think" that's the right way to handle an intersection.

            Other things you might hardcode would be how fast you travel in relation to the speed limit. A neural net that bases its speed on surrounding traffic might make sense in a general way, until you happen to be on a mostly empty road where a group of street racers catch you from behind and your car suddenly goes double the speed limit to keep up lol
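
            The rule layer itself is easy to sketch (made-up field names, obviously; no claim this is how any real stack works) - hard rules veto or clamp whatever the net proposes:

              # Hypothetical safety wrapper around a neural net's proposal.
              def choose_action(nn_proposal, state, speed_limit):
                  if state["approaching_stop"] and not state["fully_stopped"]:
                      return {"action": "brake_to_stop"}  # no California stops
                  capped = min(nn_proposal["target_speed_mph"], speed_limit)
                  return {**nn_proposal, "target_speed_mph": capped}  # ignore the street racers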

        4. Michael Wojcik Silver badge

          Re: "But then, you get to, like, 10 million training examples, it becomes incredible"

          Training on "good drivers" is a fool's errand. They almost never get into trouble, so the AI isn't going to learn anything from them.

          I am not a fan of autonomous vehicles, and particularly not of Tesla and Musk; but I don't think this argument is valid. The system isn't, for the most part, learning from drivers. It's learning about driving situations and about probable effects from causes – or more precisely, what the distribution of an event is given the previous window of N events. (Note "event" is defined broadly here, as "a snapshot of input from the sensors". So it will include situational information such as weather.)

          When you've trained a system with a large corpus of such events, then you can impose a set of rules on top, telling it how to weigh outcomes.

          What the drivers do matters only insofar as their control inputs also constitute events.

          In other words, in this sort of training exercise, the system isn't learning how to drive; it's learning what driving is, as a large collection of probability distributions with a time series of inputs as priors.
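
          A toy, count-based illustration of that claim (invented event labels; a real system learns these distributions implicitly in a network, over continuous sensor inputs):

            from collections import Counter, defaultdict, deque

            def learn_transitions(events, n=3):
                """Estimate P(next event | previous n events) by counting."""
                counts = defaultdict(Counter)
                window = deque(maxlen=n)
                for event in events:  # each event = one sensor snapshot
                    counts[tuple(window)][event] += 1
                    window.append(event)
                return counts

            dists = learn_transitions(["clear", "brake_lights", "slowing", "stopped"])

          The driver's control inputs are simply more events in that stream - which is the sense in which what the drivers do only matters insofar as it constitutes events.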

  3. Lee D Silver badge

    Throwing a billion monkeys at a billion typewriters does not make an intelligent end-product.

    And in this case, the monkeys aren't even sentient themselves, they're just mechanical automaton monkeys.

    This is the same problem that we've had since the 60's. Neural networks, AI, etc. etc. etc. - and the answer is always "if only we had more computers, more computer time, and just left it running for longer processing more input, I'm sure that somehow it will magically become intelligent".

    No. It won't. If it did, Google would have had the best AI in the world about 10-15 years ago. Or even Amazon.

    Brute-force and ignorance is not the seed of intelligence.

    1. that one in the corner Silver badge

      > This is the same problem that we've had since the 60's. Neural networks, AI, etc. etc. etc. - and the answer is always "if only we had more computers, more computer time, and just left it running for longer processing more input, I'm sure that somehow it will magically become intelligent".

      I'll agree that has been the attitude of those who want to USE IT NOW! You've had long enough NOW WE WANT TO MAKE MONEY (or at least use it for advertising). And, hey, look, it works: we "solved" chess by brute force[1].

      Oh, and Hollywood. Hollywood and bad SF books ("Valentina: Soul in Sapphire"[2]) just *love* that trope.

      Those actually doing the AI - not so much. The good ones want to see it done better, not just brute force. Although you can see the appeal for just going with the flow: bite your tongue and wait for the stock options to vest, just like every other buzzword peddler.

      [1] Okay, brute force applied to the best of the extant search strategies - a literal "blindly try every option" would still be running, and for a *long* time to come - but it still doesn't seem like that is how humans do it.

      [2] Even in 1984 it was so wrong to read that - but maybe it'd redeem itself in the last chapter? Nope.

    2. Lurko

      I thought the intended outcome was not to create intelligence, merely to create a saleable product to make the world's richest man even richer? And perhaps provide a platform for his ego and attention seeking behaviour.

  4. Andy 73 Silver badge

    Did his Muskiness..

    ...really just claim that if they throw enough data at a neural net, it will magically generalise????

    Really???

    1. Michael Wojcik Silver badge

      Re: Did his Muskiness..

      Unless you're a dualist (and if you are, well, so sorry to hear that), that idea is probably true at sufficient scale. A big enough net, enough data, and enough training time, and you'll get a Boltzmann brain (or more likely a whole bunch of them).

      But it would be wildly astonishing if the Dojo system were large enough to exhibit any sort of truly surprising[1] emergent behavior.

      [1] As opposed to the "gosh wow" we're incessantly hearing about LLMs and other large transformer models, which IMO have yet to be surprising. I am frankly baffled by the enthusiasm some very intelligent, well-informed people have for these systems.

  5. Anonymous Coward

    But will it be clever enough

    to allow Tesla to figure out how to make Right Hand Drive cars in the future?

    https://www.fleetnews.co.uk/news/manufacturer-news/2023/05/12/tesla-cancels-orders-for-right-hand-drive-model-s-and-x-cars

    1. Lee D Silver badge

      Re: But will it be clever enough

      Name a RHD country that would allow this product on the road. Pedestrian safety laws rule out the UK, Australia, NZ, etc.

      1. bazza Silver badge

        Re: But will it be clever enough

        It does seem particularly pointless trying to come up with a general "world driver" AI. Road user behaviour is so varied, all over the world. Surely, as someone who spent a lot of time in other countries, he'd realise that...

        Also, he's gone on about data a lot. I'd have thought that what they'd need is a lot of data about crashes (i.e. the "don't do this" data). But they're not going to get much of that.

        1. Michael

          Re: But will it be clever enough

          Oh, I don't know about that. Teslas are involved in more accidents than any other self-driving car. They're getting plenty of data on what causes crashes.

  6. bazza Silver badge
    Facepalm

    Good Grief...

    From the article:

    "In order to copy us, you would also need to spend billions of dollars on training compute," Musk claimed, saying that developing a reliable autonomous driving system is "one of the hottest problems ever."

    Or, one could choose to not spend billions of dollars on an unachievable venture, and keep that money in the bank earning 5% (or whatever).

    1. Anonymous Coward

      Re: Good Grief...

      Why, Mr Musk, I thought it was a solved problem?

  7. Emir Al Weeq

    Training your AVs at the fun-fair.

    I saw the word "Dojo" and initially pronounced it "dodge-o". From then on, I couldn't get the image of dodgems out of my head. Probably not what their marketing department are aiming for!

    1. First Light

      Re: Training your AVs at the fun-fair.

      More like "dodgy".

  8. Howard Sway Silver badge

    Tesla's Dojo supercomputer is a billion-dollar bet to make AI better

    Can we simply believe this? I'm imagining that it might be very efficient at mining Dojocoins - which sounds suspiciously similar in name to a crypto that Musk has already hyped. And of course, he who completely controls the mining completely controls the coin...

  9. LateAgain

    Billion Dollar Brain ?

    Didn't it turn out to be a big con in the film?

    1. that one in the corner Silver badge

      Re: Billion Dollar Brain ?

      The Billion Dollar Brain was going well; it was let down by the Human Factor (aka making money on the side).

      Which made the film's message a bit confusing: the insane Right-Wing Capitalist had his plans undermined by - good old Capitalism?

  10. Anonymous Coward

    The change in approach is necessary but not necessarily sufficient

    see title.

  11. Anonymous Coward

    Sell more cars

    We need more data points!

    Tesla Drivers: Elon loves you so much he’s allowing you to be part of a great experiment.

    1. First Light

      Re: Sell more cars

      Tesla should by rights be paying people to take their cars and contribute to the data.

      Instead people will pay money and risk their safety to train Tesla's AI?

      Crazy.

  12. nautica Silver badge
    Happy

    Opening sentence, eh? Makes the decision not to read any further very easy. Thanks.

    "Tesla says..."

    1. Anonymous Coward

      Re: Opening sentence, eh? Makes the decision not to read any further very easy. Thanks.

      Even easier: "Musk says ..." - I always know the next thing is going to be utter bollocks, and most likely a fucking lie.

  13. steelpillow Silver badge
    Boffin

    This is getting interesting

    Each generation of "self-drive" AI generates its own dataset.

    So train the next generation on that dataset, and all it needs to do is find and fix the problems.

    Iterate for a few years.
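
    In sketch form - with stub callables standing in for all the hard parts, names invented:

      # One generation: drive, harvest the dataset the fleet produces,
      # find the failures, retrain on them. Then repeat.
      def next_generation(model, drive_and_record, find_failures, retrain):
          dataset = drive_and_record(model)
          return retrain(model, dataset, find_failures(dataset))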

    But then, if you truly seek "a generalized solution for autonomy", can such a process actually get there? Will some step to a more general kind of intelligence be necessary but still missing? Answers on a postcard, please.
