Re: See Icon
Let's drop the AI moniker (divisive and bollocks) and look at "LLMs" instead.
Please bear with me, because I'm going to witter on as an engineer and I think you are more humanities-focused.
I use them as a tool, just as I do a hacksaw or a slide rule (I have two) or some of my "make it smaller, deeper or more broken" devices: percussion tools such as a fencing maul, sledgehammer, lump hammer ... you get the idea.
I bought a second-hand Nvidia A2000 with 16GB of RAM and popped it into a box at work (a Dell server) that generally acts as a fancy NAS for customer backups overnight, so it's bored during the day.
With llama.cpp I can run a small LLM, 20B parameters or so, locally. It is quite surprising how much general knowledge even a small model can have. I'm mainly interested in programming, but I do get them to do English to Latin and vice versa, or English to German and back. I've also asked one to explain physics ("tell me about the bernouiilllii equations", with deliberate mucking about with the spelling) and got a reasonable answer. Questions about small towns in Somerset get reasonable but rather generic answers.
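For the curious, this sort of thing is a one-liner once llama.cpp is built. A minimal sketch, assuming a CUDA build and a quantised GGUF model already downloaded (the model filename here is made up for illustration):

```shell
# Ask a local model a question, offloading layers to the GPU.
#   -m   : path to the quantised model file (hypothetical name)
#   -ngl : number of layers to offload to the GPU (99 = effectively all)
#   -n   : maximum number of tokens to generate
#   -p   : the prompt
./llama-cli -m models/some-20b-model.Q6_K.gguf -ngl 99 -n 256 \
    -p "Tell me about the Bernoulli equations"
```

The same binary will happily fall back to CPU-only if you drop the `-ngl` flag, just rather more slowly.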
Bear in mind that all of this comes out of a model file around 16GB in size, roughly three DVDs' worth: in effect a pretty big encyclopedia that works quite fast and can sort of reason too.
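The arithmetic behind that 16GB figure is simple enough to sketch. A model's file size is roughly parameters times bits-per-weight divided by eight; the bits-per-weight figures below are my rough estimates for llama.cpp's common quantisation levels, not gospel:

```python
# Rough on-disk / in-VRAM size of a quantised model, ignoring small overheads.
# The 20B parameter count comes from the post; bits-per-weight values are
# approximate averages for llama.cpp quantisation formats (my assumption).

def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate model size in (decimal) gigabytes."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, bits in [("F16", 16.0), ("Q8_0", 8.5), ("Q6_K", 6.5625), ("Q4_K_M", 4.85)]:
    print(f"20B at {name}: ~{model_size_gb(20, bits):.1f} GB")
```

A 20B model at a ~6.5 bits-per-weight quantisation comes out at about 16GB, which is why it squeezes onto a 16GB card at all; the same model in full 16-bit precision would need about 40GB.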
ChatGPT, Claude and co are trained on much bigger datasets, and their models run to hundreds of billions or even trillions of parameters. They also have a lot of other machinery tacked on. At that scale of data, you might question the quality and even the provenance of the inputs. Let's put it this way: they ain't 100% Encyclopaedia Britannica. That said, neither am I.
So, you can rail against the machine or not. Your last para did rather anthropomorphise (how the blazes does anyone spell Greeklish correctly!) the beasties. For me they are a handy tool, but they do need some care to use effectively.