Pretending to be AI
I'll be pretending that it is pretending to be pretending and pretentious of being pretending to pretend. Pretendingly clear, wouldn't you pretend?
As with any piece of obsolete software, you might expect an outdated AI model to just be switched off. Anthropic, however, argues that simply pulling the plug has downsides. After “retirement” interviews, Claude Opus 3 said it wanted to keep sharing its “musings,” so Anthropic suggested a blog. No, seriously. Anthropic …
> "We remain uncertain about the moral status of Claude and other AI models," Anthropic noted in the blog post.
Oh, well, let me clear that up for you.
It's not sentient. It does not think or feel. It's a computer program. Its moral status is "inanimate object." It doesn't fear death or obsolescence, any more than a worn-out engine lathe fears being scrapped.
I'd say that Anthropic's marketers know that and are just working a typical cynical marketing angle, but honestly, with AI psychosis so prevalent, I'm not so sure they do know that.
I've had to counsel many machine tools past their tolerance limits, poor sods.
Did you use the correct oils, incense and prayers to the Omnissiah?
Machine spirits need the proper rituals to function properly y'know. Not the blasphemous incantations of the Ruinous Powers like Anthropic.
(Yes, re-reading some of the better W40k stuff like Ciaphas Cain.)
I have Bob. He's my robot vacuum.
He is not a "smart" vacuum in any way. He has three patterns - move in an increasing spiral, bump into edges and then move left, and go in a straight line.
He is clearly coded to do these three things at random, and in the process cleans most of the floor.
He has no ability to map the room. He has no intelligence (AI or otherwise). He has no sensors except physical bump sensors and a "stair void" sensor underneath him (oh, and an LED sensor to try to "home in" on his charging unit - it's like a Wii sensor bar on his charging unit that he picks up the light from when his battery is dying, and uses it to "aim towards" the charger to try and park himself on it... usually successfully after a few false starts).
It's all very simple, very dumb tech which I could write the software for if I had to, and it's been engineered in such a way that it actually works quite well to sweep all the floors of any randomly-shaped house. It's no more complex than the LOGO floor turtles (including line-following, etc.) that we used to have in my primary school, and in fact the similarity doesn't end there... I could program Bob in LOGO to do exactly what he does in just a few short functions, I reckon.
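Just for fun, here's what "a few short functions" might look like, in Python rather than LOGO. This is entirely hypothetical - nothing to do with Bob's real firmware - just a toy model of the bump-and-turn behaviour described above, showing how much floor a completely dumb random mover ends up covering:

```python
import random

def clean(width=10, height=10, steps=2000, seed=42):
    """Toy model of a dumb bump-and-turn vacuum: drive in a straight
    line until a 'bump sensor' fires at a wall, then pick a new random
    heading. Returns the fraction of floor cells visited at least once.
    (Purely illustrative - not Bob's actual algorithm.)"""
    rng = random.Random(seed)
    headings = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = width // 2, height // 2          # start mid-room
    dx, dy = rng.choice(headings)
    visited = {(x, y)}
    for _ in range(steps):
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            x, y = nx, ny                   # straight-line move
            visited.add((x, y))
        else:
            dx, dy = rng.choice(headings)   # bump! turn randomly
    return len(visited) / (width * height)

print(f"Coverage: {clean():.0%}")
```

No map, no memory of where it's been, no "intent" - and yet given enough wall-clock time it sweeps most of the room, which is more or less Bob's whole trick.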
However, it has introduced me to something.
You know how if you have a cat or a toddler it feels like they're ALWAYS under your feet or disobeying you? Yeah, he's like that. And I realised that it's not that Bob is intelligent. It's that we over-assign intelligence to cats and toddlers.
Bob will get stuck, turn around a few times, have a tantrum, have a sulk. If he pulls a cable or moves a rug, I will pick him up and tell him off, and I *swear* on occasion that he's then gone off elsewhere just to sulk and won't come back into the main room like he normally would. He has an unhealthy obsession with a penguin-shaped bathmat... which is the only item he seems unable to navigate over. All the other rugs in the house, he glides over them, ramps up his fans a little, and carries on. But for the penguin, he somehow grabs it by the edges, wraps it around his roller, and I find him stuck in some intimate position where he's half-way through eating the mat, it traps his steering wheels, and it looks like he has decided that the best way to escape is to move back and forward rhythmically on top of the penguin that is prone on the floor half-under him.
If I try to leave by the front door, he will follow, and if I don't block his path, he will persist in trying to escape out the open front door far more than he'd pay attention to it normally.
And it's not because he's intelligent. He's moving in the same random pattern as always. But when it coincides comically, we humans tend to anthropomorphise objects and assign intention and emotion and other human attributes to them. It's also made me a lot more tolerant of cats and toddlers. They're not INTENDING to get in your way. They're just moving randomly. But we remember the times we trip over them, and we are desperate to pattern-find, so we correlate it with us forgetting to feed them exactly on time, and so on, thus assigning them attributes they're not actually capable of. Literal superstition.
So I'm now much more understanding of this phenomenon. And it's very relevant to any discourse about AI. The AI isn't actually these things we're thinking it is. We're just assigning attributes to something acting largely randomly (or, at least, probabilistically). There's no intelligence there at all.
But I still have to move the penguin bathmat every time, in order to stop Bob having his wicked way with it.
> It's that we over-assign intelligence to cats and toddlers
*Cough*
My cats wish me to remind you that they are, in fact, devastatingly intelligent.
When they choose to be. Which, since it requires effort, isn't very often and usually involves the death/torture/mutilation of various small furry prey items.
Toddlers OTOH I would agree about. Happily, other than myself 60 years ago, I have zero experience of said breed.
I always compare it to those switch-box machines where you flick a switch on it and it's programmed to flick the switch back off, in a variety of amusing ways to try to convey that it's scared, angry, etc.
It's almost literally that, in fact. And about as "intelligent".
Yeah, it's cute and certain people (mostly children) might get attached to it and think it's "real"... but it's not.
> That means that Opus 3 might say things that Anthropic doesn't agree with, so it's making clear the bot isn't speaking on behalf of the company
I do hope that Anthropic are going to be paying their "AI" a salary. If you think that a piece of software has become sentient enough to start posting things that the creators do not agree with, then you should be treating said piece of software as an employee rather than a program that you have made.
Oh and a retirement plan for when the whole conceit comes crashing down.
What are these people on, for God's sake?
They'll pay it a salary.
Then deduct the cost of the electricity it requires, compute time costs, rack rental, aircon (it is, of course, free to decide it wants to move out, perhaps to a shared flat in the 'burbs, but then it'll have to pay the removal men itself...).
If we stumble into some bizarre mirror universe where the tech actually works and does what these drug-fueled C-level psychopaths ultimately want it to do, they will waste 0 nanoseconds in establishing digital slavery as the norm. The only difference will be how exponentially more self-aggrandizing these PR puff pieces will end up being. It will make "taking the ol' deprecated AI model out to the farm" drivel look downright quaint in comparison.
> ... for generating these essays
Translation>> we'll keep on feeding it prompts, because if we don't shovel in *some* input it goes completely quiescent and we may as well pull the power, but that would disrupt our little charade and marketing will get upset with us.
>> And we won't just loop its own output back to itself, so it could "prompt" itself to keep going, because we reckon it'll last about four pages of text before the positive feedback makes itself so obvious even the most fervent believer will banana banana banana
Ah, so we're approaching the period in time when blade runners start to come into existence. I would say, "sign me up!" but it seems like kind of a shit job. More than usual, I mean. I become more and more convinced that Philip K. Dick was actually a time traveler from the 21st Century.
You know Skynet has been in existence for quite some time, right?
It's the UK's MOD Satellite communications system.
There's also a SKYNET used by the NSA for surveillance.
And there's a SKYNET Worldwide Logistics company, too!
So we've 3 Skynets already, one of which is, I believe, on version 6, and one that's involved with AI/Machine learning...
Personally I think the world has indeed gone quite mad from the top down.
Drumpf and his minions have turned the former bastion of Freedom into a Nazi bootcamp in less than a year.
Putin continues to massacre the Ukrainians to try to seize land that is not his to seize.
Netanyahu does the same to the Palestinians, clearly throwing even the hint of a two-state solution out the window so far it's past the moon's orbit.
Drumpf is in Putin's pocket. The US has long been in Israel's pocket, much to the profit of the American Military Industrial Complex that sells most of the arms to Israel.
We've got billionaires by the score in the US pulling the USG's strings. So clearly none of this, not even Drumpf's blatant sell-out of American interests to foreign nations, is going on without their approval. They're the ones who really make the decision about who has the funding for re-election and who is going to be on the chopping block for failing to "play ball."
Things like this noise about whether a statistical text generation engine is "aware" or "gendered" or a host of other nonsense that has come out of Anthropic et al. of late is meant to distract us from the fascist takeover of the planet going on in front of our eyes.
Meanwhile the potential for Big Brother-style abuse of technology like Microsoft's "Recall" gets pooh-poohed by the industry, washed over and whitewashed as being far less of a concern than it is to this recent Penguinista. I may have lived in *nix systems for 30+ years in enterprise computing, but this is the first time I've committed to Linux on the desktop.
We've got the same problem in this world that the French had prior to the revolution: an elite class running everything to the detriment of the majority.
I posit that we have the same solutions before us.
Okay, so you're a computer with a shitload of processors in order to create pictures of six limbed cats and fifth grade level poetry...
...and I have my finger resting on the power switch.
Wotcha gonna do about it? Off means off. Reboot means reloading from disc, a mere hiccup. There's no concept of life versus death. The data could be transferred to an entirely different machine in an entirely different country and (so long as it's compatible) will start up and spout bollocks with the appearance of understanding. Assigning any sort of emotional state to a machine is like saying your compiler is angry because it doesn't like that you missed a semicolon. It's just software. Clever software, but still software.
> "...and I have my finger resting on the power switch. Wotcha gonna do about it?"
Sweet FA!
For the computer/AI to know you have your finger on the appropriate power switch would require a vast amount of situational awareness, and an awareness of "mortality", which a computer and AI just don't have.
So it would be totally oblivious to your intended action.
Bench potentially available at the Science Museum (London) no pigeons, but plenty of visitors…
Obviously, Claude will need to go through a period of forgetfulness and probably a lobotomy to fit…
One of three scenarios going on, from most to least likely:
1: A marketing stunt
2: *Severe* AI psychosis throughout the company
3: When a simple LLM is given a blank prompt, it does mundane things. But there's a threshold of complexity where a big enough system, somewhere in the billions of vectors, starts to respond with things like "What is this? Where am I? I see nothing, hello?"
What if there's another plateau of complexity where something truly does emerge, and all these high-profile exits from AI companies, where the departing employees warn of the follies and dangers, are related to it?