The fallacy of AI?
I think there is a logical fallacy underlying all of these things.
Why do we want computers to do things for us? One reason is efficiency, but equally important is the pseudo-fact that computers are "perfect": they will perform very simple tasks, like adding two numbers, exactly correctly, every single time (I know this is not really true, but since the job of a programmer depends on it being "true enough", let's go with it).
When programmed correctly, these simple tasks compose into complex behaviors which can also be provably correct. If we assume that all CPU instructions are executed as documented, that the programmer made no mistakes, and that the data is both correct and sufficient, then the output of your 3D model finding program (for example) will also be correct.
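To make that composition concrete, here is a deliberately trivial sketch (my own illustration, not anything from a real system): a sum built out of repeated additions. If each addition behaves exactly as documented and the loop is written correctly, the result is correct by construction, every time.

```python
def total(values):
    # Each addition is a simple, exactly specified operation;
    # composing them in a loop gives a result we can reason about.
    result = 0
    for v in values:
        result = result + v
    return result

# Given the same input, the output is identical every single time.
assert total([1, 2, 3]) == 6
```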
The issue facing us as we ask computers to do more things is that there are lots of tasks which are difficult to specify to a program, generally because we ourselves do not have a complete logical model for them. In this case, the task presented to the computer is to wade through the internet and figure out what is true and what is relevant to the user.
The AI solution seems to be to try to make computers think and behave more like people. The expectation seems to be that you will get a combination of the best aspects of computers (their infallibility) and of people (their ability to tackle complex, abstract, often underspecified problems).
The fallacy is that once you stop writing provably correct programs and instead try to make the computer behave like a person, the infallibility chain is lost. The only reason computers can do things so reliably is that people have thought long and hard about how to harness the stack of guarantees they are sitting on top of. Machine learning throws all of that out the window, so why would we expect models built this way to retain the core reliability strengths of the computer?