That's why so many people lie about what AI does. If LLMs only do what they actually do, then it becomes clear that they will never make anywhere near the money they promise. But if LLMs could replace lots of people's jobs, then you could capture a little less than what all those companies were paying those workers, and that could easily exceed the investment being made this year. LLMs can't replace that many jobs, but if investors think they can, they'll keep paying until the end. Combine that with a lot of people who aren't investors but believe AI can replace jobs for some other reason,* and you've got plenty of noise reinforcing investors' confidence that this is the perfect investment for the long-term investor: they think massive profits are guaranteed, and all they have to do is wait a few years. That's also the perfect investment for the fraudster, because now they have a few years to take as much as they can before it collapses.
* There seem to be several reasons people assume AI can do everything, even if they aren't personally investors in it. Some of them:
1. They want AI to do their work for them, so AI must be capable of doing it; otherwise they're being unprofessional.
2. They don't want to work at all; if AI takes all the jobs, something else will have to be set up to support people, so AI must be capable of taking them.
3. They want someone to hate, and if AI took all the jobs, the people making money off it would be easy to despise. (You could hate those same people for better reasons, but still.)
4. They want something to exist that doesn't, and they don't think it will happen without AI; so if AI can make it, they get what they want. This is the vibe-coders' position, in my experience: they want a piece of software, nobody's writing it for them, so they're sure the next run of AI code generation will compile into the thing they imagined.