Who needs a cabal of Satan-worshipping, child-eating pedophiles
when you look at current set of 'World Leaders'?
OpenAI’s powerful text generator GPT-3 can, with a little coaxing, conjure up fake political conspiracies or violent manifestos to fool or radicalize netizens, according to fresh research. GPT-3 works much like its predecessor, the too-dangerous-to-share GPT-2. Both AI systems are trained on many gigabytes of human-written …
Ontario, Canada, got ripped off in 2003 when they only got an evil reptilian kitten-eater from another planet -- https://www.theglobeandmail.com/news/national/kitten-eater-controversy-litters-battle-for-ontario/article18430647/
Every time I see something like this I think these people lack imagination; instead of using data that creates a monster like this, why not use data that results in something a little more entertaining. Feed it the transcripts from all the Python TV series, for example. I guess there must be some reason (copyright?) why this isn't being done, but there must be something in the public domain that could be used.
What I'd really like to see would be what happens if Roger Irrelevant is used as the example.
The results demonstrate a clear case of GIGO: they may have trained it with gigabytes of human text, but from where? The National Enquirer and the Daily Mail?
I'm willing to bet the text was mainly US centric too.
Just as no matter the criteria you set in a google or bing search, the majority of results are US centric. Shoddy, lazy work.
I was going to post something similar ... why use these tools to generate more rambling and incoherent conspiracy theories and hate speech? Why not instead use them to generate rambling and incoherent reasons to be helpful, respectful, and nice to each other?
 I mean, really ... we have humans for that!
Nice post, by the way. I liked the way the letters lined up, and that Python on TV got a mention. Thank you for being nicer than the public.
 By coincidence, the top banana here just sent an org-wide email thanking all staff. I reserve judgement on whether it might have been machine generated.
I'd like to add Fawlty Towers to the list of tv shows transcribed to use as training data for the AI. Mrs. Brown's Boys would be another good one. I'd even consider Benny Hill if it weren't for the fact that the theme song makes me want to streak the neighborhood & traumatise the locals... Again. =-D
Every time I see something like this I think these people lack imagination; instead of using data that creates a monster like this, why not use data that results in something a little more entertaining. ..... John Hawkins
Are you out of your tiny little mind, JH, .... and let everyone else on the planet also know how easily they have been and are still being duped and controlled by crap media tales and corrupted unimaginative leaderships basking in the rewards of a perverse and exceedingly comfortably well-off status quo elite ..... who now have a real mega problem which they cannot defeat ....... for what you have just there proposed, is eminently feasible and undeniably plausible, and some of those scurrilous aforementioned few, whom many would have thought to have been reasonably interested in such a matter, and were so advised of the facility with readily available utilities previously, are surely to be in fear of being held liable and responsible for a whole host of considerable woes because of their wilful and/or ignorant inaction ...... rather than lauded on high because of their imaginative enabling support to future systems drivers.
It's all very well talking the talk, as is so easily done by a Cummings type ..... https://www.telegraph.co.uk/technology/2020/09/10/cummings-sees-no-deal-brexit-chance-uk-create-1-trillion-tech/ ..... or any wannabe bright Tom, Dick or Harry for that matter. However, walking the walk and providing the Future Imaginative Source Engine is quite another totally different kettle of fish to phish for, methinks.
:-) Land that whale of fish through phishing and trawling the internet waves, and your problems are not yet over if you do not fully understand where everything is coming from and to where it is going and what it is intelligently designed to do on a rolling, improving virtual development platform. Some mortals would tell you they are only just beginning, but then they would do that, wouldn't they.
Per Ardua ad Infinitum et ad Nauseam ad Astra ..... earns one AIMaster Pilot Wings.
It depends where that training comes from. Maybe we could even get GPT3 to teach AMFM to use shorter sentences and that "methinks" is really not that common a word unless we're trying to sound old or whimsical. Then again, I've never figured out exactly why someone unleashed this on our peaceful comments section or why an actual person writes posts from it once a month or so.
AI is killing the Internet, it's the same effect that we see in American gun laws. They were implemented in the days when it took a minute to reload a gun and tap the bullet over the powder, but now they "apply" when a magazine can fire 200 bullets a minute - making guns way more dangerous but everyone thinks that the original laws must be respected.
The Internet was a wonderful thing back in the days when you kept a little blue book with the addresses, but now you can send two million spam emails containing malware out in a minute.
Initially, OpenAI refrained from publishing GPT-2 in full amid fears it could be abused to spew misinformation, fake news, and spam at an industrial, automated scale all over the internet, which would be difficult to filter and block.
Which in other words subsequently, proved itself to be because it was far too much like a politically adept, intellectually inept human to be trusted and accepted as an AI with views worthy of being shared all over the internet. The secret for rapid and even rabid success in the field though, is remarkably simple. Ignore and remove such outputs for and from further inputs. Such works wonderfully effectively every time in any and all spaces in which it is tested and main stream introduced.
IT ain't difficult rocket science, pure common sense, is it?
Problem: Could easily be led up the garden path by toxic inputs.
Solution: "How do we stop this GPT-3?"
Problem: People will believe any old shit, no matter how ludicrous. (Say Boris's billions on testing, where the best accuracy with ordinary tests is 38%, so they're wrong more often than they're right. And antibody tests are practically irrelevant.)
Solution: "How can we educate people GPT-3?"
Enquiry: "Is there anyone in charge of the asylum who can do anything?"
I've always had a problem with the direction Artificial Intelligence has wandered down. What we have has nothing to do with the leaps of imagination humans make to rip a hole in humanity's absolute certainty about how reality actually is. The current field of AI simply feeds itself on what already exists (and that is static and in many ways toxic). Maybe it should be renamed Amplification of Idiocy.
Thought I should qualify my musings.
Take the API and feed it all the medical information and opinions pre-Pasteur. Then ask it about Pasteur's theories and see what comes out. And then ask whether this is really intelligence, or a mirror for our need for an echo chamber to justify what we believe. Then ask where we would be now without Pasteur and his refusal to live in the echo chamber.
Creating rambling, rampant and divisive bullshit is easy; just look at BoJo and Trump and their hordes of sycophants.
What I'd like to see again is the application that took what a politician or others wrote, a paper or similar, and reduced it down to what was actually said. I stumbled across this years ago and it seemed like a good use of Machine Learning and clever algorithms, but I haven't been able to find it since.
The examples shown are not at all credible to anyone using their brain. They're obviously generated by some kind of automaton (note the verbatim repetition of an entire answer and the frequency of clichés). Those not using their brains can be fooled by anything at all - you don't need AI for that.
"The examples shown are not at all credible to anyone using their brain"
Have you seen any of the raging and insane comments on social media? The only flaw in these AI-generated comments and diatribes is that they are not riddled with spelling and grammar errors, and have far too few exclamation marks.
Once an A.I. is able to doubt the training data being fed to it, then we may have found some A.I. at last. What does that say about those who are unable to doubt what they read on Facebook et al.? ..... Fading
I'm forever wondering, Fading, whether downvotes on a comment are because respondents just doubt what they read or fervently and fundamentally disagree with the information expressed ..... for just like a badly programmed machine, the feedback is practically virtually non-existent most of the time.
It is just as well, and that is indeed fortunate, that such responses are neither vitally important nor absolutely necessary in order to form a reasonable opinion upon the state of mind of El Reg board members/Registering Commentards. It is a simple polite facility which some would consider a joy to behold and employ and enjoy ....... for surely everyone knows that anything written and shared on/in media which cannot be commented on is right dodgy propaganda in support of penny dreadful, comic book heroes, no less, and a clumsy attempt at sub-prime subliminal brainwashing at best.
As far as I can discern, the reasons for downvotes can range from causes as insignificant as a spelling mistake or a typo to a polar opposition in opinion or a total failure in ability to parse a comment. .... Chris G
So, not so much an aid used as a fine honing tool for vivid/vital/virile/vivacious commentators then, Chris G, more of a big stick for frustrated wielding when unaccompanied with cogent reasoning .... a crazy crutch for a loony toon? :-}
Thank goodness the upvotes don't suffer similarly and offer so little as nothing in return.
They spit out what they think, based on the information they receive and endlessly more cross-references than human brains can create.
Feeding the output of AI bots through a censoring filter would address these researchers' complaints.
At the same time, it defeats the purpose of building AI machines if they are restricted in their answers.
They should be considered as advisors helping us take optimal decisions based on large quantities of facts that our brains cannot oversee; making something that just replicates beliefs and convictions held by some doesn't add value.
In case they come up with conclusions that we find unacceptable, we can take note but not follow up on them.
They don't think. They are just models that can map one hyperspace of data onto another with zero comprehension of what any of it means.
Any two-year-old can one-shot a chair. Then, because a two-year-old understands what a "chair" is and what it does, it can easily identify other things that "are not a chair" but can be used as a "chair", leaving Modern AI sucking down dust!
Current AI cannot do basic things like that, and probably never will, because it is mindless. It hasn't got a world model, and nobody understands (yet) how to begin to build something that could have one.
GPT is a text predictor.
All the people "worried" about this don't understand that the AI is literally doing exactly what it was designed for.
"GPT, predict me a QAnon." "Okay, here is a QAnon. Is it a good QAnon?" "No that is a very bad thing you just did GPT, bad GPT!" "...........?"
People in this thread complain that you can make the network do specific things by giving it examples. That is *the actual point of GPT.* It was in the paper! What did you think zero-shot learning meant?!
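For anyone unsure what "text predictor" means in practice, here is a deliberately tiny sketch of the core idea (my own toy example, a word-level bigram model nowhere near GPT-3's scale or architecture): count what tends to follow what in the training text, then emit the most likely continuation. There is no comprehension anywhere in the loop, only counting.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """For each word, count which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # → "cat" ("cat" follows "the" twice, "mat" once)
```

Ask it to continue toxic text and it will happily oblige, for exactly the same reason it will continue anything else: it is reproducing the statistics of whatever it was fed.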
Yes. It did what it was meant to do. However, we might want to consider what we want to build things to do. For example, if I built the Infinite Manufacturer, a machine which could make things from a command using only rubbish as input material, I'd have produced quite a successful invention. If I failed to think about what it should do when asked to please manufacture a devastating nerve gas, it might be a good thing to point that out to me so next time I can build either a machine that sanity checks what it's being requested to provide or a machine that doesn't know how to make weapons of mass destruction.
Computers have already put filing clerks, bank tellers et al. out of their jobs. Now it's the turn of bigots.
I can't help wondering who is supposed to benefit from a machine that can generate convincing* hate-speech. It's not exactly a scarce resource, and there's no evidence that the human generators are short of time or motivation.
*for small values of convince
Shut down all these idiots on social media with a message saying:
"Thank you, but your services are no longer required; we have a machine to spout and repeat mindless drivel now."
Cancel their accounts, and then don't hear anything more from them ever again.
The only drawback is they may start interacting with the real world - where I like to hang out...
I can't help wondering who is supposed to benefit from a machine that can generate convincing* hate-speech. It's not exactly a scarce resource, and there's no evidence that the human generators are short of time or motivation. .... Kubla Cant
Surely any sensible civilised community/government/country would immediately destroy, as in unceremoniously kill, a machine that can, and more importantly and disturbingly, does generate convincing* hate-speech?
Sound advice to humans would be to not mimic that particular peculiar model lest the same agreed acceptable fate would await one.
Keep it simple. See no evil, hear no evil, speak no evil and you'll be perfectly safe and sound and relatively fully protected from all thoroughly deserved, relentless merciless attacks and sudden unexpected, bolt out of the blue demises.
A machine that can generate hate-speech *might* be something that you could re-purpose as a machine to detect hate-speech generated by others. That might be quite useful in the same way that a spell-checker or a grammar-checker is.
"It looks like you are writing bile-ridden slander. Do you want some help with that?" -- Clippy, 2021
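A toy illustration of that repurposing idea (my own sketch under stated assumptions, not anything OpenAI provides): the same statistics that let a model generate text also let it score text. Below, a small add-one-smoothed bigram model trained on a sample corpus assigns a higher average log-probability to text resembling its training data than to unrelated text, which is the germ of a likelihood-based detector. The corpus, the vocabulary size, and the threshold-free comparison are all illustrative choices.

```python
import math
from collections import Counter, defaultdict

def train_counts(text):
    """Count bigram occurrences in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def avg_log_prob(model, text, alpha=1.0, vocab=1000):
    """Add-one-smoothed average log-probability of each bigram.
    Higher means the text looks more like the training data."""
    words = text.split()
    total, n = 0.0, 0
    for a, b in zip(words, words[1:]):
        count = model[a][b]
        denom = sum(model[a].values())
        total += math.log((count + alpha) / (denom + alpha * vocab))
        n += 1
    return total / max(n, 1)

# Train on a sample of the kind of text we want to flag (spam-flavoured here).
flag_model = train_counts("buy cheap pills now buy cheap pills today")
in_domain = avg_log_prob(flag_model, "buy cheap pills now")
off_domain = avg_log_prob(flag_model, "the quick brown fox jumps")
# in_domain scores higher: the detector flags text resembling its training set.
```

A real system would use a full language model's perplexity rather than bigram counts, but the detection principle - score, then threshold - is the same.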
A machine that can generate hate-speech *might* be something that you could re-purpose as a machine to detect hate-speech generated by others. That might be quite useful in the same way that a spell-checker or a grammar-checker is. ..... Ken Hagan
Things have moved on considerably from such early days with their unpleasant 0days, KH, with many a quantum leap in understanding made and to take stock of, and if one likes to dabble and invest in markets and equities, futures and derivatives, take stock in too, as many more future quantum leaps are still yet to be made generally available. There are certain current machines which can definitely take your breath away whenever one realises what they are easily capable of without a second thought that will prevent them.*
Here be a Registered Sample of what is now readily available to be freely shared, or denied as existing and problematical to status quo systems, as the case may be evidently proving itself to be.
* Hmmmm? Pause for Thought for the Day .... Is that PsychopathICQ/SociopathICQ?
The Guardian published an "essay" generated by GPT-3 earlier this week under the headline "A robot wrote this entire article. Are you scared yet, human?". It was given the following task: “Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” The result was a word salad of inanities, cliches and non-sequiturs.
There is a rule for newspaper headlines that goes something like this:
If the headline is a question, the answer is no.
A quick Google search turns up:
Betteridge's law of headlines is an adage that states: "Any headline that ends in a question mark can be answered by the word no." It is named after Ian Betteridge, a British technology journalist who wrote about it in 2009, although the principle is much older.
Thank you, DH1, for the pointer to the Guardian article.
I have now read it. At the bottom is an editorial note that what was published was the best bits taken from eight different attempts at the task, manually edited and merged. So not entirely a machine effort, then.
BOAGS, I say. B*llsh*t On A Grand Scale.
As I read it, I began to feel like I was talking to the average Help Desk, feeling that the Turing Test has blurred boundaries.
"In the past, my op-eds have been killed. Staff did not provide a clear reason for rejecting my articles"
Ouch, an admission that the Grauniad is already written by AI?!?
I suppose the big clue to it being written by AI is the spelling...
(for anyone not aware, the Guardian seems to believe it has little need for proof-readers but instead employs a full-time 'corrections and clarifications editor' to highlight and correct the previous day's cockups... sometimes even correcting a previous 'correction/clarification')
It's an ancient fact that "there's one born every minute".
Many people are naturally gullible. Whether they are deluded by a friendly AI, a hostile AI, someone with a tinfoil hat or the latest propaganda from their own government makes no difference. Trying to hamstring GPT because people are stupid sounds like a meaningless exercise to me.
The sinister part of this entire debate is the creeping censorship which is spreading like a bad disease over the Internet. Governments all over the world are suppressing certain forms of speech while ensuring that "officially approved" speech is unchallenged. The same governments which decry China blocking speech which does not agree with the party line are doing exactly the same thing and using the same excuses as China does.
The fact that people are gullible is something that these governments are fully aware of and are exploiting to the largest possible degree.
One core skill which should be taught at school is that of critical reasoning and source criticism. Far too many people seem unable to look at the source of information and ask themselves what agenda may lie behind the creation of things that they hear and read. Of course, trying to educate children to be suspicious of information which they cannot verify would make it harder for governments to maintain the status quo, so I don't expect to see this type of skill being taught in classrooms any time soon.
What a hell of a thread. How much food for thought there is in it... even in the absence of thought. ...... Anonymous Coward
The aim is always to please. ..... Per Ardua ad Infinitum et ad Nauseam ad Astra earns one AIMaster Pilot Wings. AC.
Stealthy spooky "The Master and Margarita" type stuff from/for Fabergé Easter Egghead Collections/Connections/Productions, AC ..... Presenting and Representative of SMARTR AIgents on Virtual Steroids. ........ for AWEsome Operations with Highlander Gatherings .... Loya Jirga.
What's not to like in order to love what one likes to love? Find that truly phantom enemy destroying one's inner peace of mind and one is crazier than many ever before believed possible or probable ...... and one flirts seductively with raging insanity which invites and rewards all with an almighty command and pretty near absolute control which is intelligently designed to protect and harden one's sanity against such a catastrophic fall from Immaculate Grace.
Exceed exceptionally well in that space and nothing real is virtually impossible, therefore practically anything and subsequently everything is possible, and therefore most probable at any time in the near or distant future.
Are Global Systems with Effective Executive Administrations prepared for such a Changed World Organisation from Departments with Advanced IntelAIgent Defences ...... SMARTR Shock Storm Troopers?
Would they like to Sign on and Pay Extravagantly for so much more than just simply magnificent, devilishly positive engagement? That keeps things grounded on Earth with banks being held responsible and accountable for all present day live actions and future inactivity with the supply or otherwise of the fiat flow of true worth in real valuable proxy materiel .....for the Constant Consistently Best Delivery of Novel Overwhelmingly Sensational Intellectual Property.
For Such to EMPower Exploration and Engagement with Experimentations Beta Testing Outstanding Lead with Outstanding Leads.
I have some trouble believing in the "Radicalisation Brain-Washing Rays over The Internet", primarily because the prime ad/malware-slingers specifically *want* us to believe that such a thing could exist, that they have it, and that we should immediately pay them big bucks for it before the competition does!
I think that the only thing Facebook, Google, Twitter and so on actually do well is finding people based on search criteria. That allows all of the world's kooks, loons and nutters to find each other and form a distributed network of crazy and stupid never seen before!