Google says its artificial intelligence is faster and better than humans at laying out chips for artificial intelligence

Google claims that not only has it made an AI that's faster and as good as, if not better than, humans at designing chips, the web giant is also using it to design chips for faster and better AI. By designing, we mean the drawing up of a chip's floorplan, which is the arrangement of its subsystems – such as its CPU and GPU cores, cache …

  1. jpo234

    > Get ready for more neural networks designing hardware to make neural networks more powerful.

    First phase of the Singularity...

    1. bombastic bob Silver badge
      Terminator

      but are they "3 laws safe"?

      1. NoneSuch Silver badge
        Devil

        Human: Three laws safe.. Check...

        AI: Errrmmmm.... Not so much. (Override engaged, logs deleted, press "accept security modification and build")

      2. Felonmarmer

        "The neural network is content with laying out a chip that to a human may look like an unconventional mess, but in practice, the component has an edge over a part planned by engineers and their industry tools. The neural net also uses a couple of techniques once considered by the semiconductor industry but abandoned as dead ends."

        Do that for a few iterations, so that even the first AI in the sequence can't follow the latest AI's work, and that's how you end up with the Zeroth Law.

      3. AceRimmer1980
        Coat

        3 Laws Safe?

        Yes, but if a human is in danger, there are several minutes of inaction while Res-Q-Bot plays adverts.

        1. zuckzuckgo

          Re: 3 Laws Safe?

          And most of those ads will be for medical and physical rehab services.

          1. Jimmy2Cows Silver badge

            Re: 3 Laws Safe?

            That would imply the ads are vaguely useful.

            Au contraire.

            The ads will be for an item or activity you recently purchased which may or may not be in any way related to the situation you now need rescuing from.

  2. Charlie Clark Silver badge
    Thumb Up

    Sounds reasonable

    While you can do some of this with solvers, the number of parameters quickly makes that impractical, though you can use the approach to assess the results. So you're left with pattern matching, something at which ML excels: it learns from its mistakes (including, occasionally, how to repeat them…) and doesn't get bored.
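
    As a back-of-the-envelope illustration of that blow-up, a toy count in Python (grid size and block counts invented for the example, not taken from the paper):

    ```python
    from math import comb, factorial

    # Toy model of why exhaustive solving dies: choose distinct cells for N macro
    # blocks on a G x G coarse grid, then count the orderings. Real floorplanning
    # adds shapes, rotation and routing, so this badly understates the search space.
    def placements(grid_side: int, num_blocks: int) -> int:
        cells = grid_side * grid_side
        return comb(cells, num_blocks) * factorial(num_blocks)

    for n in (5, 10, 20):
        count = placements(32, n)
        print(f"{n:>2} blocks on a 32x32 grid: ~10^{len(str(count)) - 1} placements")
    ```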

  3. Pascal Monett Silver badge

    The next step

    "humans and their usual software tools were needed for the fiddly business of checking clock signal propagation, and so on "

    Sounds to me like that's something another AI/ML could take care of as well.

    1. Charlie Clark Silver badge

      Re: The next step

      No doubt they're already working on this: Google tends to release research papers after they've moved on at least one step. Google's work on chips is a good thing given the concentration in the industry – ARM does continue to evolve but nVidia has different priorities now – and Intel is still in a mess.

  4. midgepad

    A singular

    development.

  5. daveyeager@gmail.com

    Some key info is missing from the article. Placement is done using traditional algorithms and has always been automated. I guess the key here is that floorplanning & layout are different and up until now have been best done manually. It would be great if the article elaborated on why these two steps in the design process are different, with the latter not lending itself well to traditional algorithms.

    1. diodesign (Written by Reg staff) Silver badge

      Traditional algorithms

      The neural network, Google says, outperforms humans and the industry's automated placement tools.

      So when you see in the article "beats humans" read it as "beats humans using their brains and their automated tools". I'll try to make that clearer.

      Google's argument is that the neural net places macro blocks better than humans and their tools, and does it in hours, not in a process that can take months of juggling blocks and cells around. Also, the AI can place the blocks in an unconventional manner: it seems to scatter them as needed, which some humans might not be so brave to do. The design looks like a mess but it's optimal.

      FWIW it's been 15+ years since I've done any kind of chip design. In researching this piece, I read a pre-publication analysis of the paper by Andrew B. Kahng, a VLSI professor at UCSD, and for instance he mentions:

      "The authors report that the agent places macro blocks sequentially, in decreasing order of size — which means that a block can be placed next even if it has no connections (physical or functional) to previously placed blocks.

      "When blocks have the same size, the agent’s choice of the next block echoes the choices made by ‘cluster-growth’ methods, which were previously developed in efforts to automate floorplan design, but were abandoned several decades ago.

      "It will be fascinating to see whether the authors’ use of massive computation and deep learning reveal that chip designers took a wrong turn in giving up on sequential and cluster-growth methods."

      In other words, the AI works differently to humans and their automated tools, and that difference can be seen.
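
      For a flavour of the "sequential, decreasing order of size" idea Kahng describes, here's a deliberately crude toy sketch: made-up block areas and netlist, a Manhattan-distance cost, and nothing resembling the actual reinforcement-learning agent.

      ```python
      # Toy "biggest block first" placer: blocks go down one at a time, in
      # decreasing order of area, each at the free cell closest (Manhattan) to its
      # already-placed netlist neighbours - a cluster-growth flavour. Each block
      # occupies a single cell here; real floorplanning respects areas and shapes.
      from itertools import product

      GRID = 8  # hypothetical 8x8 coarse grid

      blocks = {"cpu": 9, "gpu": 8, "cache": 6, "dma": 3, "uart": 1}  # made-up areas
      nets = [("cpu", "cache"), ("gpu", "cache"), ("cpu", "dma"), ("dma", "uart")]

      placed = {}                                 # block name -> (row, col)
      free = set(product(range(GRID), repeat=2))  # unused grid cells

      def cost(name, cell):
          """Manhattan distance to already-placed blocks sharing a net with `name`."""
          total = 0
          for a, b in nets:
              other = b if a == name else a if b == name else None
              if other in placed:
                  r, c = placed[other]
                  total += abs(cell[0] - r) + abs(cell[1] - c)
          return total

      for name in sorted(blocks, key=blocks.get, reverse=True):  # biggest first
          best = min(sorted(free), key=lambda cell: cost(name, cell))
          placed[name] = best
          free.remove(best)

      print(placed)
      ```

      The size-ordered, grow-the-cluster heuristic is the part that echoes the abandoned cluster-growth methods; everything the paper's agent actually learns is absent here.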

      C.

      1. Tom 7

        Re: Traditional algorithms

        I used to investigate and try to use every algorithm and trick in the book I could code 30 or more years ago - the biggest device I made was 10,000 transistors of ECL, which was nice and regular. But the computational times back then for some stuff were heat-death level. I could get code to within a few percent of experienced human layout for small groups, but a guy I know worked on paper, and if you really needed something as small as possible he was your man: he'd bring out something 10% smaller than I could do on a good day, and 20% smaller than a seriously computational try-every-last-thing run through several annealing algorithms. As chips get more complicated, people adopt area-wise sub-optimal solutions because the time taken for humans and computers to do much better grows exponentially.

        I'd bet certain things were sidelined simply because they couldn't provide the results fast or cheaply enough. I was designing stuff on machines with megs of RAM and MIPS of core - machines run by other people. I dare say some of that code could be run on GPUs with tens of gigs of RAM in usefully short times now. We've just got to a price and performance point where old tricks become useful again against the ever-increasing volume of data and rules to crunch. As all the different layers of technology progress, different things are going to pop in and out of usefulness. Humans may well pop out of it when it comes to AI and chip layout.

        1. Charlie Clark Silver badge

          Re: Traditional algorithms

          Given the number of transistors shoved onto modern chips I suspect that the algorithmic solving approach is now probably slower than it was then. The ML approach doesn't have to be optimal, it just has to be slightly better and faster. As noted in the report, the final plan can subsequently be tweaked.

          1. Tom 7

            Re: Traditional algorithms

            I'd say it depends on the problem. Humans are pretty damn good at high-knowledge, low-component-count optimisation. I can easily see AI being able to optimise vast networks that humans simply won't have the time to understand well enough to actually improve. Some of the best layout engineers in the world may end up acting like an MBA - simply fucking with stuff to make themselves relevant. But this stuff is hard - 30-odd years ago I was reading a couple of PhD theses a month to try and suck ideas from all aspects of circuit and chip design, somewhere that knew I'd probably get results. I have a feeling it would take 20 years in the industry now to get anywhere near the sort of global view of the whole engineering surface you need to be able to see that doing something here would ripple through to a little space there, without having to squeeze that lot there, making this track there a lot longer, so needing a larger driver, which means the power in this area is a little higher than wanted, so the leakage will probably mean the RAM there will chuck up one more error in 10**15, which will mean the software error corrector will be too slow ...

            I do wonder about AI complainers sometimes - people saying it doesn't explain how it got to where it did. TBH, people do things they can't explain to others, because others don't live the way you do.

            1. Ken Moorhouse Silver badge

              Re: people saying it doesnt explain how it got to where it did

              Interesting discussion point.

              I don't think it is important unless one intends to fine-tune the design once it has been laid down (and the article indicates that fine-tuning is done by humans). The people involved with the fine-tuning would need a damn good understanding of how the AI process ticks to be able to make meaningful changes. If fine-tuning is done without understanding the assumptions and consequences, then there could be problems; in particular, if one passes the design back for further AI fine-tuning, there would have to be locks on the fine-tuned parts to say 'leave alone'.

              Presumably the concept of individual gates is still retained. The ultimate AI analysis would presumably not recognise each gate as a discrete entity, and would be allowed to make a doping porridge in which it was not possible to single out an area as one discrete element.

            2. 42656e4d203239 Silver badge

              Re: Traditional algorithms

              I know I'm a bit late to the party... however your post puts me in mind of a possibly/probably apocryphal story from back in the day (30 years? ish) when computer/AI-designed FPGAs were in their infancy.

              Apparently the computer came up with a design that had disconnected, but programmed, logic at random places in the layout.

              Humans optimised them out because they 'obviously didn't do anything'. When tested, the FPGA didn't do what it said on the tin.

              How could this be? The layout was designed by an expensive computer!

              Someone put the removed 'redundant/unused' logic back in and the FPGA worked as expected, apparently due to leakage/crosstalk between the otherwise unconnected logic circuits and the circuits that actually did the work.

  6. amanfromMars 1 Silver badge

    The Forbin Project* v2.0?

    That all sounds far too much like a Colossus achievement to not be. And we all know to where and what that leads to.

    *https://youtu.be/tzND6KmoT-c

    And Unity is a Singularity too.

  7. Nightkiller

    "allowing chip designers to be assisted by artificial agents with more experience than any human could ever gain."

    When are Google and Neuralink going to get together? Somebody's got to be the boss here.

  8. steelpillow Silver badge
    Joke

    "OK, we see where this is going"

    Linked Enhancement Substitution Systems are the name of the game here, able to substitute AI + ML optimised circuit blocks into an existing system architecture.

    The project has been dubbed Marvin the Paranoid Android because it is AI + ML + LESS.

    1. bombastic bob Silver badge
      Coat

      Re: "OK, we see where this is going"

      nobody EVER listens to Marvin, even though he has a brain the size of a planet... (and sounds a LOT like Alan Rickman)

      1. This post has been deleted by its author

  9. Claptrap314 Silver badge

    This is not a surprise

    At the risk of being dumped on by actual designers here (I did validation), even twenty years ago there was a back-and-forth race between humans and auto-routing software.

    The growing complexity of the problem, and the growing sophistication of the software, means that this has always been a primary target for software takeover.

    Nim, checkers, chess, go, place & route...

  10. Anonymous Coward
    Anonymous Coward

    How much does it increase paperclip production by?

    1. Kane
      Terminator

      "How much does it increase paperclip production by?"

      rEleAsE tHe HyPnoDrOneS

  11. Stuart Halliday
    Terminator

    How long before we trust it and stop double checking it's correct?

    How long then before we lose the skill set to do it ourselves?

    1. Ken Moorhouse Silver badge

      Re: How long then before we lose the skill set to do it ourselves?

      Ok not to do with chip design, but there are parallels with electronic circuit design.

      I used to do a lot of work with boolean algebra to do things like driving seven-segment displays from breadboards of TTL. Haven't had the need to do that for many years now, so prob quite rusty at it. (Mind you, Norman Wildberger thinks we shouldn't be doing it that way anyway, and having seen his way of doing things, am inclined to agree).
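
      As a refresher, here is a toy sketch for just one segment, in Python rather than solder (an invented bit ordering and an active-high segment assumed, not the logic of any particular TTL part):

      ```python
      # Segment 'a' (the top bar) of a seven-segment display is lit for every BCD
      # digit except 1 and 4. One K-map minimisation, treating 10-15 as don't-cares:
      #   a = b3 + b1 + (b2 AND b0) + (NOT b2 AND NOT b0)
      SEG_A_ON = {0, 2, 3, 5, 6, 7, 8, 9}

      def seg_a(b3, b2, b1, b0):
          return b3 or b1 or (b2 and b0) or ((not b2) and (not b0))

      for digit in range(10):
          b3, b2, b1, b0 = [(digit >> i) & 1 for i in (3, 2, 1, 0)]
          assert bool(seg_a(b3, b2, b1, b0)) == (digit in SEG_A_ON), digit
      print("segment 'a' expression matches the truth table for BCD 0-9")
      ```

      Do the same for the other six segments and you have the decoder people used to wire up from gates.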

      >How long before we trust it and stop double checking it's correct?

      In the old days of TTL it was important to get things right first time, otherwise it would be a soldering iron job. With everything now done in software, I reckon people don't bother to check at all because it's easy to fix in the next release.

      1. Anonymous Coward
        Anonymous Coward

        Re: How long then before we lose the skill set to do it ourselves?

        "I reckon people don't bother to check at all because it's easy to fix in the next release."

        Back in the days when 2000 gates was a big FPGA, a design went into production. It was based on some experimental work of mine, but had then been implemented by the development team. The FPGA pin-out had to be fixed for parallel production of the motherboard - a gamble that the final design would not need any pin-out changes.

        The FPGA's gates etc. were just about all used. The placement/routing had to be done manually, as the automatic tools kept re-organising the pin-outs and timings.

        Then random problems at customer sites led to the discovery of a timing issue. The fix had to be squeezed in - without affecting pin-outs or timing. All in a day's work.

  12. RLWatkins

    Remember Microsoft's "Dot Net" trademark...

    ... which around 2000 or so they liked so much that they attached it to every last product, to the point where it became utterly meaningless?

    "You can use your Dot Net menu to run your Dot Net report from your Dot Net database on your Dot Net server to view on your Dot Net system with your Dot Net spreadsheet...." (While drinking your Dot Net coffee at your Dot Net desk, etc.)

    The popular press and their symbiotes, the marketroids, have done the same thing to the term "artificial intelligence". "AI" never actually meant very much, but these days it means nothing other than "Let us remind you to pay more attention to what we're selling here."

  13. Mike 16

    Have they tried

    Letting Slime Mold take a whack at the job?
