Re: Sorry Kahng, Goldie & Mirhoseini's AI work is legit
Thanks for your reply.
The Nature authors only pre-train on 20 blocks -- a large number for the chip design community, I know, but still. We're not talking hundreds here.
Kahng et al. present their work as though they have refuted the Nature paper. That they were unwilling or unable to muster a pre-training data set does not make their refutation any more valid.
I agree that the field's lack of large open-source IC designs is a huge problem. Perhaps senior members of the field like Kahng should prioritize helping rectify that situation!
"the ML flow optimized an initial layout - this seems a serious omission" - I can see why you'd get this takeaway from the article and Kahng's paper, but it's not accurate.
The ML model's job is to place macros (after that, a standard cell placer places the standard cells). The system in the Nature paper uses the initial placement only to cluster the standard cells, so that the ML model can more rapidly estimate what the standard cell placer will do. This initial placement doesn't even have to be good, but it does have to be better than "all the cells in one place", which is what Kahng et al. tried: all the cells in one corner, then all in the center, then all in another corner. That is not a serious exploration of the effect of initial placement on standard cell clustering. (In section 5.2.2, Kahng et al. even admit that they tried two actual initial placement methods, and both gave about the same results.)
In any case, this is not the same thing as using the initial layout as a starting point for optimization.
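To make the distinction concrete, here's a toy sketch of the flow as I understand it. Every name here is illustrative, not from the actual Nature codebase; the clustering is a deliberately crude bucket-by-coordinate stand-in for real partitioning:

```python
# Hypothetical sketch of the flow described above: the initial placement
# is consumed ONLY by the clustering step, never as an optimization seed.
# All function and variable names are made up for illustration.

def cluster_standard_cells(initial_placement, n_clusters):
    """Group standard cells by proximity in a rough initial placement.

    The placement only needs enough spatial spread to tell clusters
    apart. If every cell sits at the same coordinate (Kahng et al.'s
    "all in one place" setups), proximity is meaningless and the
    clusters degenerate.
    """
    # Toy clustering: sort cells by x-coordinate and cut into buckets.
    cells = sorted(initial_placement.items(), key=lambda kv: kv[1])
    size = max(1, len(cells) // n_clusters)
    return [dict(cells[i:i + size]) for i in range(0, len(cells), size)]

def rl_place_macros(macros, clusters):
    """Stand-in for the RL agent. It places macros; the clusters exist
    only so it can cheaply approximate what the downstream standard
    cell placer will do. It never refines the initial placement."""
    return {m: (i, i) for i, m in enumerate(macros)}

# A rough initial placement: values are x-coordinates of standard cells.
initial = {"c0": 0.1, "c1": 0.2, "c2": 5.0, "c3": 5.1}
clusters = cluster_standard_cells(initial, n_clusters=2)
macro_locs = rl_place_macros(["m0", "m1"], clusters)
```

The point of the sketch: swap `initial` for a slightly different rough placement and the clusters barely change, which is consistent with the two real initial placement methods giving about the same results in section 5.2.2.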
At a high level, I think you're right that the issue is that folks who aren't yet skilled at ML have more to learn than they realize.