Google's claims of super-human AI chip layout back under the microscope

A Google-led research paper published in Nature, claiming machine-learning software can design better chips faster than humans, has been called into question after a new study disputed its results. In June 2021, Google made headlines for developing a reinforcement-learning-based system capable of automatically generating …

  1. lglethal Silver badge
    Facepalm

    Google doing something unethical? Who would a thunk it?

    I do think that Nature and similar publications should not publish papers making claims that can't be tested by other researchers, due to parts of the claim being hidden by commercial confidentiality or similar. If others can't test your claim in full, then it's not suitable for a scientific paper, is it? Then again, maybe I'm just being naive...

    1. b0llchit Silver badge
      Unhappy

      If others can't test your claim in full, then it's not suitable for a scientific paper, is it?

      That is how science should work. But when (a lot of) money is on the line... shortcuts will be taken and processes abused to one's advantage.

      Peer review should filter this out, but it is underpaid work and does not give you points to promote your career. Therefore, like any other work, integrity suffers when money is on the line.

      1. nematoad Silver badge
        Unhappy

        "Peer review should filter out this, but it is underpaid work..."

        From what I have read, peer review as practised by these scientific journals is unpaid, as are the contributors of the articles.

        1. lnLog

          Not just the reviewers; the majority of editors are also unpaid.

    2. An_Old_Dog Silver badge

      Secret Sauce

      If relevant details are being withheld, then it is using some "secret sauce", cannot be independently reproduced, and is thus not good science.

      1. CatWithChainsaw

        Re: Secret Sauce

        If there is a secret sauce, Google naturally doesn't want to yield any more ground to Microsoft or other AI suites. Gotta hook people onto their specific bot in this brave new siloed world.

        Still, sad day if money can buy "trust me bro" in a publication like Nature.

  2. nautica Silver badge
    Happy

    Perhaps scientific journals need to be more selective regarding from whom they accept "scientific" papers.

    ------------------------

    "WILKES [1959], “THE EDSAC” Be careful how you fix what you don’t understand.”― Frederick P. Brooks Jr., The Design of Design

    "Those in possession of absolute power can not only prophesy and make their prophecies come true, but they can also lie and make their lies come true."--Eric Hoffer

  3. Version 1.0 Silver badge
    Joke

    Super AI evilution?

    If "machine-learning software can design better chips faster than humans" then how will this work for us over the next 100 thousand years?

    Originally we were monkeys learning to climb trees by developing longer fingers to grasp the branches ... a very helpful evolution because now we can walk around everywhere holding a cell phone in our hand and posting on social media. Will we start to see Google telling us that our brains are no longer supported and that we need to upgrade to the new super-human AI chip brain?

    1. very angry man

      Re: Super AI evilution?

      So many brains are not fit for purpose these days that most should be downgraded and no longer supported.

      I for one welcome our AI masters and bow before the AI greatness

  4. AceRimmer1980
    Terminator

    What a dull name.

    "I speak of none other than the computer that is to come after me," intoned Deep Thought, his voice regaining its accustomed declamatory tones. "A computer whose merest operational parameters I am not worthy to calculate - and yet I will design it for you. A computer which can calculate the Question to the Ultimate Answer, a computer of such infinite and subtle complexity that organic life itself shall form part of its operational matrix. And you yourselves shall take on new forms and go down into the computer to navigate its ten-million-year program! Yes! I shall design this computer for you. And I shall name it also unto you. And it shall be called ... The Earth."

  5. fg_swe Silver badge

    Male Cow Ex.

    I call Bullshit on this claim. The "AI" of Google is on the order of complexity of a worm: thousands of neurons.

    Mankind overcame the tiger, with its 5 billion neurons, much better muscles, and powerful teeth.

    The human brain has 100 billion neurons, each connecting to 10,000 others.

    Our brain learned to make very pointy spears and to lob them at the tiger before it could jump on us, while we ganged up on it using complex language.

    https://dinoanimals.com/animals/number-of-neurons-in-the-brain-of-animals/#:~:text=The%20number%20of%20neurons%20in%20the%20brain%20of,the%20human%20brain%20has%20an%20estimated%2086%2C000%2C000%2C000%20neurons.

  6. Anonymous Coward
    Alien

    Crop Circles

    "...Mirhoseini and Goldie left Google in mid-2022 after Chatterjee was axed."

    Everyone related to the original paper is disappearing from Google.

    1. Anonymous Coward
      Anonymous Coward

      Re: Crop Circles

      Yeah, and when they apply for the next job, everyone will know they left Google after ethical behaviour was uncovered ...

      1. plutoniumprophet

        Re: Crop Circles

        Sorry, what? This is ground-breaking AI work, and it's used in production at Google.

  7. This post has been deleted by its author

  8. IGnatius T Foobar !

    There is no such thing as "artificial intelligence"

    There is no such thing as "artificial intelligence" and there never will be.

    There, I said it.

    The GPT programs that the less intelligent portion of the tech world is clamoring breathlessly about are impressive, but call them what they are: a natural-language interface to whatever body of data they have been given. They are not "thinking".

  9. plutoniumprophet

    Sorry Kahng, Goldie & Mirhoseini's AI work is legit

    So, I did the legwork here and read Goldie & Mirhoseini's statement (on annagoldie dot com), and Kahng's paper, and his supporting documentation, and I looked at G&M's open source repository on github (I'm a machine learning engineer who should probably get back to work).

    In bold at the top of G&M's statement: "Our RL method has been used in production for multiple generations of Google’s flagship AI accelerator (TPU), including the latest, and chips with layouts generated by our method have been manufactured and are currently running in Google datacenters."

    Does anyone seriously believe that Google would use anything other than the best available method for TPU? This is their main AI chip! (Kahng himself even confirmed, in one of his supporting docs, that Google uses this method for TPU.)

    It is *bonkers* that Kahng doesn't do any pre-training. This is a huge problem! Imagine ChatGPT but it has never seen any words beyond the conversation you are having with it right now. Imagine AlphaGo but it can only learn from the one game it is playing with you right now. It's impressive the method performs well at all!

    Kahng whines that he can't pre-train because he doesn't have access to Google data. I just... maybe pretrain on other chips? There are other chips in the world? He found chips to run his study on, but couldn't find any for pre-training? I don't buy it.
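
    For anyone who hasn't trained RL models, here's the difference as a toy sketch. This is obviously not Google's (or Kahng's) actual code -- CircuitEnv, PlacementPolicy and every number in it are made up purely to illustrate pre-training versus cold-start:

    # Toy sketch only: CircuitEnv, PlacementPolicy and the step counts are invented
    # stand-ins, not the real environment, policy network or training budgets.

    class CircuitEnv:
        """Pretend RL environment where one episode = placing one block's macros."""
        def __init__(self, block_name):
            self.block_name = block_name

    class PlacementPolicy:
        """Pretend policy network; 'experience' crudely stands in for learned weights."""
        def __init__(self):
            self.experience = 0  # total training steps seen, across all blocks

        def train(self, env, steps):
            # Pretend training loop: just accumulate experience from this block.
            self.experience += steps

    def pretrain(policy, blocks, steps_per_block=100_000):
        """What the Nature flow does first: learn placement priors from *other* blocks."""
        for block in blocks:
            policy.train(CircuitEnv(block), steps_per_block)
        return policy

    # Cold-start (roughly Kahng et al.'s setup): the policy only ever sees the block under test.
    cold = PlacementPolicy()
    cold.train(CircuitEnv("block_under_test"), steps=150_000)

    # Pre-trained (the Nature setup): fine-tune a policy that has already placed ~20 other blocks.
    warm = pretrain(PlacementPolicy(), [f"training_block_{i}" for i in range(20)])
    warm.train(CircuitEnv("block_under_test"), steps=150_000)

    print(cold.experience, warm.experience)  # 150000 vs 2150000 accumulated steps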

    There are other big problems -- it looks like Kahng confirmed his implementation with Google engineers for one chip, and then trained it far less on the five other chips (1 million steps for that one, but only 150k-350k steps for the others -- you can see this in his supporting docs). It is borderline dishonest for him to present those results. I guarantee no Google engineer saw those plots and thought this was reasonable.

    1. mattaw2001

      Re: Sorry Kahng, Goldie & Mirhoseini's AI work is legit

      I wonder if we are looking at a domain problem, where folks not skilled in ML are trying to apply ML techniques?

      Two things I would like to highlight in your comment, though. Firstly, the Nature article apparently does not describe the actual flow Google used, namely that the ML flow optimized an initial layout produced by Synopsys' EDA tools; this seems a serious omission that requires Google to respond and/or amend their paper in Nature. Secondly, there are no modern, large, open-source IC designs, especially ones resembling a commercial flow on a modern fab, for Kahng et al. to test on.

      Overall, while I appreciated your comment and how it highlights shortcomings in Kahng's approach, I feel this is proper science and publication should go ahead. If Google publishes a paper claiming something, they have to defend it against criticism. Kahng tries to reproduce it, documents his approach in a reproducible way, and cannot achieve the same results. You read both, spot flaws in Kahng et al.'s approach, and highlight them.

      TL;DR: I do not agree with your take that it should not be published (if nothing else, because of the flow omission), but the shortcomings of Kahng et al. will come out, and the onus is on Google to put up or shut up. Published papers should not be waved through on a "trust me bro" basis.

      1. plutoniumprophet

        Re: Sorry Kahng, Goldie & Mirhoseini's AI work is legit

        Thanks for your reply.

        The Nature authors only pre-train on 20 blocks -- a large number for the chip design community, I know, but still. We're not talking hundreds here.

        Kahng et al. present their work as though they have a refutation of the Nature paper. That they were unwilling or unable to muster a pre-training data set does not make their refutation more valid.

        I agree that the fact that the field doesn't have large open-source IC designs is a huge problem. Perhaps senior members of the field like Kahng should prioritize helping to rectify that situation!

        "the ML flow optimized an initial layout - this seems a serious omission" - I can see why you'd get this takeaway from the article and Kahng's paper, but it's not accurate.

        The ML model's job is to place macros (after that, a standard-cell placer will place the standard cells). The system in the Nature paper uses the initial placement only to cluster the standard cells, so that the ML model can more rapidly estimate what the standard-cell placer will do. This initial placement doesn't even have to be good, but it does have to be better than "all the cells in one place", which is what Kahng et al. tried. They tried putting them all in one corner, then all in the center, then all in another corner. This does not seem like a serious exploration of the effect of initial placement on standard-cell clustering. (In 5.2.2, Kahng et al. even admit that they tried two actual initial placement methods, and both gave about the same results.)

        In any case, this is not the same thing as using the initial layout as a starting point for optimization.
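
        To make the division of labour concrete, here is roughly how I read the flow. A sketch only -- the function names and data structures are mine, not the paper's or the repo's:

        # Rough sketch of my reading of the flow; names and data structures are invented.

        def cluster_standard_cells(std_cells, initial_placement, grid=16):
            # The initial placement is used ONLY as a spatial hint for grouping standard
            # cells into clusters, so the RL agent can cheaply approximate what the
            # downstream cell placer will do. It is never refined or optimized further.
            clusters = {}
            for cell in std_cells:
                x, y = initial_placement[cell]
                clusters.setdefault((int(x) // grid, int(y) // grid), []).append(cell)
            return clusters

        def rl_place_macros(macros, clusters):
            # The part the ML model actually does: choose a location for each macro.
            # (Dummy diagonal placement here, standing in for the RL agent.)
            return {m: (i * 10.0, i * 10.0) for i, m in enumerate(macros)}

        def place_standard_cells(std_cells, macro_locations):
            # A conventional standard-cell placer fills in around the now-fixed macros.
            # (Dummy row placement here.)
            return {c: (float(i % 100), float(i // 100)) for i, c in enumerate(std_cells)}

        def full_flow(macros, std_cells, initial_placement):
            clusters = cluster_standard_cells(std_cells, initial_placement)
            macro_locations = rl_place_macros(macros, clusters)
            cell_locations = place_standard_cells(std_cells, macro_locations)
            return macro_locations, cell_locations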

        At a high level, I think you're right that the issue is that folks who aren't yet skilled at ML have more to learn than they realize.

    2. atropine blackout

      Re: Sorry Kahng, Goldie & Mirhoseini's AI work is legit BUT

      The original paper may well be legit, but surely 'repeatability by a third party' is the generally accepted method of validating new work described in a paper?

      In that context, I'd suggest that the authors' response, which seems to be along the lines of 'your mileage may vary', doesn't really cut it here.

      1. plutoniumprophet

        Re: Sorry Kahng, Goldie & Mirhoseini's AI work is legit BUT

        Just to be clear, open-sourcing your code exceeds the current academic standard, which is normally just to describe the method. Goldie & Mirhoseini went above and beyond by open-sourcing theirs.

        (Incidentally, Goldie's most-cited work is a paper exploring seq2seq models, and she open-sourced that as well: https://github.com/google/seq2seq )

        Other people didn't have trouble building on and comparing against the work:

        https://arxiv.org/abs/2211.13382

        https://arxiv.org/abs/2111.00234

        Doing ML work is a skill, and Kahng et al.'s struggles here don't seem to be shared by other researchers.

        1. Erik Beall

          Re: Sorry Kahng, Goldie & Mirhoseini's AI work is legit BUT

          Good points, and you've convinced me. Proper replication can be hard work, and the limitations of Kahng's study are big enough to explain his results. You pointed out that some of the limitations weren't insurmountable; it sounds like Kahng is a researcher who isn't careful enough, or doesn't care about results as much as press, both of which describe more than half of the researchers I've worked with.

  10. Anonymous Coward
    Anonymous Coward

    Not exactly "natural", is it?

    Is it just me, or does it seem odd that using ML to design computer chips would be covered in a magazine called "Nature"?

    1. Anonymous Coward
      Headmaster

      Re: Not exactly "natural", is it?

      Science, innit.

      Are you suggesting a science magazine with a history of well over 100 years of publishing science stories should stop?

      1. Anonymous Coward
        Anonymous Coward

        Re: Not exactly "natural", is it?

        No, I just would expect a magazine titled "Nature" to contain articles about things in, well, nature. (If they want to run other stories, that's their business.)

    2. James Anderson Silver badge

      Re: Not exactly "natural", is it?

      The magazine was up and running long before the meaning of “Nature” became a small subset of, er, Nature, restricted to cute videos of meerkats and penguin chicks.

      1. MJB7

        Re: Not exactly "natural", is it?

        The magazine was up and running long before the meaning of “Nature” ...

        Exactly. I studied "Natural Sciences" at University, which in my case meant Physics, Chemistry, and Metallurgy.

        1. Anonymous Coward
          Black Helicopters

          Re: Not exactly "natural", is it?

          Maybe Other AC thinks that there are unnatural sciences? Rigorously explored by articles published in 'The Unexplained' magazine?

  11. Alan Hope

    "Nature" simply must prominently label papers like this one from Google as "advertorials" where independent researchers can't duplicate the methods. This is not science, and Nature goes right down in my estimation.

  12. TheWeetabix Bronze badge

    “In all key metrics”

    You mean “in all the ways that make us money.”

    FTFY
