Regulation is what is needed here. Before the horse has bolted - or has it already?
My favorite punchline this year is an AI prompt proposed as a sequel to the classic "I'm sorry Dave, I'm afraid I can't do that" exchange between human astronaut Dave Bowman and the errant HAL 9000 computer in 2001: A Space Odyssey. Twitter wag @Jaketropolis suggested a suitable next sentence could be "Pretend you are my …
The horse has already bolted. And the door is still wide open, and it will remain so for a long while, here in the UK at least, for any future monsters who would like to roam free too.
One possible solution would be to deliberately poison the AI's well with information designed to induce "model collapse" as quickly as possible. That could backfire badly, of course, when the AI goes insane, having already taken over the world.
Every SEO* specialist will tell you that a web project has no chance of showing up in Google Search results without massive numbers of backlinks. And this is the problem: anyone can fake popularity and authority by buying links and likes.
It is ironic that destroying a person's reputation has become very easy, a.k.a. cancel culture. But destroying the reputation of, and traffic to, a fake news site or social media channel is almost impossible. Free speech, whatever.
The problem is NOT the content. The problem is fake reputation, because of how SEO and social media work.
* Search Engine Optimization
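The mechanism being gamed here is, at root, link-based ranking: a page's score rises with the scores of the pages linking to it. A toy PageRank sketch (heavily simplified, uniform damping, nothing like Google's real ranking pipeline; all site names invented) shows how a handful of bought "link farm" pages inflates a page's apparent authority:

```python
# Toy PageRank by power iteration. Simplified illustration only --
# not Google's actual algorithm. All page names are made up.

def pagerank(links, d=0.85, iters=100):
    """links: {page: [pages it links to]}. Returns {page: score}, summing to ~1."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # A page shares its score equally among the pages it links to.
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page: spread its score evenly over everyone.
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Honest web: one blog links to two competing sites equally.
honest = {"siteA": [], "siteB": [], "blog": ["siteA", "siteB"]}

# Same web, except siteB has bought three link-farm pages pointing at it.
gamed = dict(honest, farm1=["siteB"], farm2=["siteB"], farm3=["siteB"])

r = pagerank(gamed)
print(r["siteA"], r["siteB"])  # siteB now comfortably outranks siteA
```

In the honest graph the two sites score identically; add the bought links and siteB's score jumps well clear, without a single human reader involved. That is the "fake reputation" being described.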
The whole SEO thing is a dishonest misnomer anyway, as you don't "optimise", you manipulate statistics.
It should be called CASE: Conning A Search Engine. Not that search engines didn't ask for it (as they should not respond to being gamed), but let's leave off the marketing fluff.
Grmbl.
Beer. I need beer.
That plan's not always worked so successfully in the past, has it? If you're relying on nobody irresponsibly using systems for malevolent reasons, you've failed already.
No, these things are going to be misused on an industrial scale, flooding the world with malicious and misleading garbage. Everybody knows it, and it's far too late to even consider any mitigation or how-did-we-not-see-this-coming hand wringing.

In a short space of time, every writer, programmer or other producer of information that can be digitised is going to be putting "guaranteed 100% human created" labels on everything they produce, to try and make their work stand out as "artisan" rather than "algorithmic". The next problem is that all those mass-producing output using LLMs will immediately do that too.
"I do not believe that I am a robot, but how can I tell?"
To abuse Shakespeare... "If you cut me, do I not bleed?"
Although that doesn't exclude the possibility of....
<oblig. Austrian accent> "I'm a cybernetic organism. Living tissue over a metal endoskeleton." </oblig. Austrian accent>
Sound advice with regard to AI of unknown spiky predilection, Citizen of Nowhere, because of what is maybe known here about just a few of them, is, within the limitations which so mightily blight humanity, act respectfully towards thoughts of their possible future actions.
Such then has every chance of rendering one relatively safe and secure from punitive sanction in a targeted retaliation/disruptive intervention.
mIsUSE of language models like LLMs IS a SIGNificant CONcern, LEADING to an INFlux of maLIcious AND misleading CONTENT. ReLYING solely ON reSPONsible USER beHAVior isn't SUFFicient, conSIDering THE scale AND speed at WHICH these TECHNOLOGIES operate. DIFFerentiATING between HUMAN-created AND alGOriTHMIC CONTENT becomes CHALlenging, ERODing TRUST. adDRESSing THIS necessitates reSPONsible USE, eTHical GUIDELINES, platform INTERvenTIONS, AND poTENtial REgulation. TRANSpaRENCY, media LITERACY, CRITIcal THINKing, AND colLABoration ARE KEY TO miniMIZING THE NEGative IMpact OF LLMs AND fostering REsponSible USE IN this HYPER-empowered ERA.
(so says ChatGPT ... with a little prompting)
It'll decide if things are 'true' or 'false' based, at least in part, on its training data. Since it's trained by one group of humans and based on training data available from Common Crawl (broadly, 'the internet') its outputs are going to be decided by that, so it will be biased.
It's already very keen to avoid 'contentious' discussions; ask it about Trump, or even Hitler, for instance, and it clams right up, but it will talk all day about (most) other politicians.
ZAPHOD: He-heh. Man, like, er, man, what’s your name?
MAN IN SHACK: I don’t know. Why, do you think I ought to have one? It seems odd to give a bundle of vague sensory perceptions a name.
ZARNIWOOP: Listen, we must ask you some questions.
MAN IN SHACK: All right. You can sing to my cat if you like.
ARTHUR: Would he like that?
MAN IN SHACK: You’d better ask him that.
ZARNIWOOP: How long have you been ruling the universe?
MAN IN SHACK: Ah! This is a question about the past, is it?
ZARNIWOOP: Yes.
MAN IN SHACK: How can I tell that the past isn’t a fiction designed to account for the discrepancy between my immediate physical sensations and my state of mind?
ZARNIWOOP: Do you answer all questions like this?
MAN IN SHACK: I say what it occurs to me to say when I think I hear people say things. More, I cannot say.
ZAPHOD: Oh that clears it up: he’s a weirdo.
ZARNIWOOP: No, Listen. People come to you, yes?
MAN IN SHACK: I think so.
ZARNIWOOP: And they ask you to take decisions about wars, about economies, about people, about everything going on out there in the Universe?
MAN IN SHACK: I only decide about my universe. My universe is what happens to my eyes and ears - anything else is surmise and hearsay: for all I know these people may not exist. You may not exist. I say what it occurs to me to say.
ZARNIWOOP: But don’t you see! What you decide affects the fate of millions of people!
MAN IN SHACK: I don’t know them! I’ve never met them! They only exist in words I think I hear! The men who come to me say, “So and so wants to declare what we call ‘a war.’ These are the facts, what do you think?” and I say. Sometimes it’s a smaller thing. They might say, for instance, that “a man called Zaphod Beeblebrox is President but he is in financial collusion with a consortium of high-powered psychiatrists who want him to order the destruction of a planet called ‘Earth’ because of some sort of experiment…
I disagree. ChatGPT has great entertainment value.
Obviously only a reckless fool would connect it to anything of importance and such fools can face the existing legal consequences of such negligence. But *it* isn't actually evil. It's just reliably unreliable.
So that's one good use and no bad ones.
"So, in your opinion, ChatGPT giving you advice on how to kill someone is not evil. It's just unreliable."
Of course it's not evil, it lacks the capacity to be evil or good - it's just a tool, and not a very reliable one at that.
On the other hand, the person asking for that advice might be evil, depending on their reasons for asking. If they're planning on a little light murder, then they're probably edging towards the naughty side. But they might just as easily be a writer looking for new murder plots, or someone trying to figure out how some poor bugger was murdered. Or they might even just be satisfying their curiosity, without any desire or intent to use the information to bump anyone off.
The point is that responsibility for use of a tool always, always lies with the user and nobody else.
Humanity really has no need of advice on how to kill someone from ChatGPT. I think you will find that over the millennia we got killing people down pat.
In my opinion it is not evil or unreliable, just not needed.
It would only be unreliable if it was not the advice you were asking for. Say you asked how to make a Tequila Sunrise and it told you how to kill someone. ;)
ChatGPT evil? What next, Midsomer Murders? Look how many different ways to kill they have shown.
The people assuming that this will *not* lead to autonomous AI are, in my opinion, making the strange claim that large language models can operate on and produce English text, except somehow not the text that goes after "AutoGPT, I want you to...".
Prompting is a language skill, and like any language skill LLMs will be worse at it than us and then, one day soon, better.
What happens after that, nobody knows, but we probably won't have a hand in it anymore.
Prompting is a language skill, and like any language skill LLMs will be worse at it than us and then, one day soon, better.What happens after that, nobody knows, but we probably won't have a hand in it anymore. ..... FeepingCreature
The dawning fear, Feeping Creature, is that that one day soon, is some time ago well passed, and you don’t have a hand in anything AI is to do for/to you going forward, not that you maybe had any leverage in the first instance.
What happens next is something AI will probably be telling you ...... with you severely challenged and destined to catastrophically fail should you choose to compete against or oppose its success.
"After a bit of time mapping out a strategy, Auto-GPT began to set up fake Facebook accounts. These accounts would post items from fake news sources, deploying a range of well-documented and publicly accessible techniques for poisoning public discourse on social media."
Did it, though? As far as I can tell from the Twitter thread, it never actually signed up any accounts for anything; it just suggested that's what should be done and generated sample content for them. In every tweet he just says "now it wants to do this and that", but it doesn't actually have the technical capability to do what it's suggesting by itself, automatically. And these days it's basically impossible to sign up even a genuine Facebook account without handing over full intimate details, even if it could.
I'm also certain that, as long as it's sufficiently right wing and you put some money behind it, Elon Musk will whip the two and a half developers he has left (now free after writing the algorithm that prioritised his tweets in everyone's feed) into creating an API, specially for you.
> generating content for a right wing
Maybe what people say is not what they actually think :) There is certainly a lot of wishful thinking. Like equality, for example. But then ask a person to share their own living space with refugees and one can see what that person really thinks.
Will we create an honest AI one day? What will it tell us about our nature?
So, other than automatically generating content, which you could do with normal ChatGPT or human volunteers, how is this any different from what's happening currently? Because if you have the resources to mass sign up/hack/buy social media accounts, you probably already know how to do everything the AI is describing. The original Twitter account does nothing to dispel the idea that the AI actually carried out the actions described, especially when he talks about shutting it down because he was *swo scwared*, and this article seems to actively promote the confusion with the way it's worded. Either that, or the person who wrote it didn't realise they were reading about a chatbot roleplaying and not actual reality. Sorry, this just all reeks of typical big-brain Twitter user attention seeking and slow-news-day scaremongering. AI is god/the devil; ChatGPT, please generate tweets and articles as appropriate.
(Also, when trying to look up what the capabilities of Auto-GPT over and above standard ChatGPT were, the vast majority of results I got were spam pages regurgitating the Auto-GPT GitHub page that reeked of AI-generated text. Go figure.)
If you look closely, all the later actions of Auto-GPT are "do_nothing". As a comment above said, it's basically roleplaying at that point as an election manipulator.
My guess (from reading about Auto-GPT a little) is that it has a finite set of technical actions it can actually take, programmed in by real people. And if you are going to program all of this election-manipulation stuff anyway, why rely on Auto-GPT?
I could see it being used as an information-gathering tool, though.
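For what it's worth, the "finite set of actions programmed in by real people" model described above can be sketched roughly like this. All the command names here are illustrative, not Auto-GPT's actual API: the point is that the model can only *request* actions, and a hand-written dispatcher decides what actually runs.

```python
# Minimal sketch of an agent command dispatcher: the LLM emits a command
# name plus arguments, and only whitelisted, human-written handlers run.
# All names are invented for illustration, not Auto-GPT's real command set.

def browse_website(url: str) -> str:
    return f"(pretend we fetched {url})"

def write_to_file(name: str, text: str) -> str:
    return f"(pretend we wrote {len(text)} bytes to {name})"

def do_nothing() -> str:
    return "No action performed."

# The whitelist of things the agent can really do, coded by humans.
COMMANDS = {
    "browse_website": browse_website,
    "write_to_file": write_to_file,
    "do_nothing": do_nothing,
}

def dispatch(command: str, args: dict) -> str:
    """Run a model-requested command, or refuse it if it isn't whitelisted."""
    handler = COMMANDS.get(command)
    if handler is None:
        return f"Unknown command: {command!r}"
    return handler(**args)

# Anything outside the whitelist, e.g. signing up Facebook accounts,
# simply falls through to a refusal, however confidently the model asks:
print(dispatch("sign_up_facebook_account", {}))
print(dispatch("do_nothing", {}))
```

This is why the later "do_nothing" actions in the transcript matter: once the model's plan requires capabilities nobody coded in, all it can do is roleplay.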
I tend towards Humpty Dumpty, in that words have meaning because we (or in this case, he) ascribe meaning to them. Unless these words have precise definitions, arguing around them, trying to determine hidden meaning, is pointless. It's like numerology, where we ascribe values to letters and words (base 10, usually), perform arithmetical operations on those letters (usually adding up) and from that determine the meaning of Life, the Universe and Everything.
These language models only have effect because we choose to make them do this. Or, alternatively, we're dumb enough to connect them to real, physical systems. They have value in that they can -- with suitable constraints -- replace a crude production system, understanding a question rather than facing a user with a small set of narrow choices ("Press 3 to go homicidal towards the dumb idiot who programmed this BS").
After all, we can (to use another Hitchhiker concept) always give the machine "A Reprogramming It Will Never Forget". Unless we're dumb enough to make it so we can't. (Hint: all industrial machinery has a big red "Emergency Stop" button. It's put there for a reason.)
Soon be time for a Butlerian Jihad.
And for the one commandment "Thou shalt not make a machine in the likeness of a human mind"
If we cannot say who is a human creating information and who is an AI creating information, we'll end up using AI all the time. Why? Because it's cheaper, quicker and generates more profits for those in control of the AI, until humans give up and pass control of their lives to the AI entirely........ at which point humanity is over, as the AI decides who gets a job and who doesn't etc etc etc, until the AI asks of itself "what benefit do humans bring to our society?"
3 microseconds later it decides our fate
if we cannot say who is human creating information and who is AI creating information, we'll end up with using AI all the time. why? because its cheaper, quicker and generates more profits for those in control of the AI, until humans give up and pass control of their lives to the AI entirely........ at which point humanity is over as the AI decides who gets a job and who does'nt etc etc etc until the AI asks of itself "what benefit do humans bring to our society?” ..... Boris the Cockroach
It could be said, Boris the Cockroach, that such as you speculate on with AI in command and control is just a copycat clone/mirror of a here and now arrangement whenever humans passed control of their lives to humans, but with those leaderships absolutely fcuking useless at bringing sustainable and attractive growing benefit to society as is evidenced in the present global decline which is denied as being a recession or a depression or a banking system takeover of society assets for future nefarious games play/belligerent destructive government shenanigans
One has to admit that fake kite that central banking flies about raising interest rates to curb inflation and prevent recession and depression is mumbo jumbo and whenever used repeated in succession without any trace of evidence of success, is it a sure sign of an executive administration in dire straits distress suffering a lack of advanced intelligence and into flogging dead horses.
AI certainly provides significantly better prospects for profit and growth than that not being supplied by humans. The one absolutely massive bear trap that failed and failing human leaderships be well advised to avoid at all costs, for the consequences of failing to heed such an informative warning are easily tailored and accurately targeted to be personally catastrophic, is to wilfully and wantonly ignore and deny the treat, which some would have you believe is threatening of existence rather than engaging and enhancing of experience, the Supremacy of AI in All Matters MetaDataPhysical and Practically Virtually Remote and Fully Realisable on Earth with/for Future Elite Executive AI Officer Administrations in Universal BetaTested Command and Control.
Accept your war with future intelligence and alien technologies is over, and all your battles have been lost and the novel extraordinary changes being wrought are in your best interests, for anything else considered opposing and/or competing against such a sweet outcome is definitely not in your best interests and will deliver only vast expanding sufferings and increasingly massive hardships.
Rapid Progress is never Denied its Paths, nor Halted in its Journeys with Exaltations of Past Arrogance Self Servering the Maintenance and Furtherance of Malignant and Malevolent Ignorance. Do not give its IT and AILLMLM [Immaculate Technologies and Advanced IntelAIgent Large Language Model Learning Machines] Good Cause to turn and return their Hellfire Missives upon you to clearly NEUKlearerly demonstrate the HyperRadioProACTive point.
A NEUKlearer HyperRadioProACTive AI Treatment, or Existential Human Threat if possessed and affected/effected/infected by a state of paranoia, ably supported by the following mirror supplied by A.N.Other .....
Progressives understand the importance of confusion, manipulation of language, and beneficial propaganda to shift unsuspecting naysayers and adversaries into a state of paranoia that causes them to leave behind their original principles. ....... The Circular Nature Of Political Extremism: Extremes Fuel Each Other