What you get out is less than what you put in
Autocomplete on steroids is a pretty accurate summary.
The parent commentard's observations are also mine: LLMs confidently produce absolute shite when asked about something specific, and when challenged will sometimes do a reverse ferret or, as Lee says, double down and produce a pyramid of piffle.
When asked to produce a piece of code, what comes back is typically broken: the main benefit over an incompetent human developer is that it screws up faster and more cheaply, and doesn't argue about what you asked for in the first place. Try asking ChatGPT for something in OpenSCAD: it won't say the task is beyond its capabilities, it will just produce poorly parameterised code that (at best) renders as a broken object. As described, it doesn't care about solving the problem, only about producing a plausible-looking reply.
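To be concrete about "poorly parameterised": what the model tends to emit is a pile of magic numbers, so the moment you change one dimension the faces no longer meet. What you actually want from it is something like the little sketch below; the open_box module, its name and the dimensions are my own invention for illustration, not anything ChatGPT produced.

    wall = 2;             // wall thickness
    box  = [40, 30, 20];  // outer dimensions of the box

    // An open-topped box: subtract an inner cube inset by the wall
    // thickness on the sides and bottom, running out through the top.
    module open_box(size, t) {
        difference() {
            cube(size);
            translate([t, t, t])
                cube([size[0] - 2*t, size[1] - 2*t, size[2]]);
        }
    }

    open_box(box, wall);

Change wall or box and everything still lines up, because one set of named values drives the whole thing. The hard-coded variety falls apart as soon as you touch it.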
And this isn't a harmless bit of nonsense. It is:
a) draining investment out of productive economic activity
b) crowding substantive issues out of political and management discourse, and, worst of all,
c) an energy-guzzling catastrophe, reversing years of savings from energy-efficient computing