I think that LLMs are a tool. Specifically, they're a new kind of tool, not much like any tool we've had before. As with all such tools, we're going to need a while to figure out where they're useful, where they're useless, and where they're harmful.
Asking an LLM for moral advice sounds like asking for trouble; ditto for financial or medical advice. But the same goes for Google, really.
However, here's an anecdote. I'm developing a WPF application. XAML bindings are a complicated thing, and like all complicated things they sometimes exhibit unintuitive behavior in edge cases. Yesterday, I had a ComboBox inside a DataGrid cell, and the box was stubbornly refusing to update its VM when interacted with. After swearing at it for half an hour or so, I started to turn to Google - but, on a whim, I called up ChatGPT instead.
The bot started by telling me all the obvious problems that can cause a XAML binding to fail. Being an experienced developer, I knew my problem was non-obvious, but I went along with it, calmly answering with all the things I had tried that had failed to work. It kept telling me to check my syntax, but it also proposed new solutions. Eventually, it asked me to show it my code - both code-behind and XAML - and it started making non-obvious suggestions; for example, disabling virtualization on the DataGrid (not the actual solution, but a pretty good shot at one). After twenty minutes or so of this, it told me to change the UpdateSourceTrigger - and bingo, it worked! Turns out that being inside a DataGrid causes the box to retain focus slightly differently.
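For the curious, here's a minimal sketch of the kind of binding that fixed it - the column and property names (Rows, Statuses, Status) are invented for illustration, not my actual code:

    <DataGrid ItemsSource="{Binding Rows}" AutoGenerateColumns="False">
        <DataGrid.Columns>
            <DataGridTemplateColumn Header="Status">
                <DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <!-- Inside a DataGrid cell, the default trigger waits for
                             focus to leave the cell before committing the edit;
                             PropertyChanged pushes the selection to the VM at once.
                             (Rows/Statuses/Status are hypothetical VM properties.) -->
                        <ComboBox ItemsSource="{Binding DataContext.Statuses,
                                                RelativeSource={RelativeSource AncestorType=DataGrid}}"
                                  SelectedItem="{Binding Status,
                                                 UpdateSourceTrigger=PropertyChanged}"/>
                    </DataTemplate>
                </DataGridTemplateColumn.CellTemplate>
            </DataGridTemplateColumn>
        </DataGrid.Columns>
    </DataGrid>

At least in my case, the grid was holding on to the edit until focus left the cell, which is why the default trigger never fired while I was still in the dropdown.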
I could probably have reached the solution by myself or via Google, but I suspect it would have been a lot more frustrating. It's difficult, if not impossible, to tell Google to exclude the tens of thousands of posts that boil down to trivial errors, newbie mistakes, and the like.
I think that ChatGPT is poorly suited to problems where you need to trust the answer. For problems like mine, though, where verification is easy, it can be pretty useful. It's basically Google, except that you can tell it to refine a query in natural language. I suspect that its main problem is going to be that it can't learn easily and that its training data stops in 2021. If some researcher ever comes up with a way to make LLM learning cheap, that is the day Google as we know it adapts or dies.