AI ..... is not intelligent.
ML is machine learning. NOT AI.
And of course we now have PIPO to add to the list of acronyms.
Let this be a warning to all AI developers: check your data before you train your model. Latitude, the creators of AI Dungeon, a text-based fantasy adventure game powered by OpenAI’s GPT-3 model, learned this lesson the hard way. Earlier this year, the company, led by two Mormon brothers in Utah, decided to scrub the game …
I actually tried this product out a bit when the initial controversy came out, and after a few prompts of mundane fantasy adventure nonsense the AI decided that my character should be set upon rather carnally by a rapacious horse. I'm not sure it's possible to run this game _without_ grotesque sexual themes.
...but a computer wrote the book (and I'm reading it on a Kindle anyway).
The fundamental problem with "shut off the computer" is that there is no such thing as "the computer" any more. The things are like rabbits, they're everywhere, and worse still they're all interconnected.
I knew it was a bad idea to trade floppy disks for the Internet.
So what happens if AI hears that you're listening to Jenny Talia singing "Chocolate's better than Sex" when you're playing a game?
The document contained a dump of fantasy stories written by humans that Latitude’s co-founder and CEO Nick Walton scraped from the website Choose Your Story.
Scraping data from a web site to make a commercial product. How nice.
From the web site: "The entire contents of the Web site are copyrighted under the United States copyright laws. HALOGEN STUDIOS is the exclusive owner of the copyright. You may print and download portions of the materials solely for your non-commercial use. Reproduction of any content from ChooseYourStory.com is permitted as long as you obtain express written permission of HALOGEN STUDIOS. Any other copying, redistribution, publication or retransmission of any portion of Web site material, is strictly prohibited without the express written permission of HALOGEN STUDIOS."
Did Mr. Walton get the express written permission of HALOGEN STUDIOS before scraping?
In the early '80s I was driving across the US, sleeping at night in the back of a VW bug. Upon arriving in Salt Lake City (Mormon HQ, home of Brigham Young U) I decided to splurge on a motel for one night of decent sleep. The motel clerk insisted on cash in advance, and I hit the sack bone tired. At 4:30 am, two men in white full body suits and full head masks burst into my room without knocking and started spraying bug spray everywhere, including under the bed. I was out of there like a flash.
Not sure why this story reminded me of that.
Surely if all CP could be generated by an AI, rather than having to be generated by abusing real life children, the amount of suffering in the world would be vastly reduced? Shouldn't there be a massive effort to push to make this possible, so pedos can get their kicks without causing anyone harm?
No, fiction about $thing doesn't necessarily normalise $thing. Think about Agatha Christie and murders, for example.
IMO just because something rubs some (most?) people the wrong way isn't a sufficient basis to ban it, unless a crime was involved in its creation. I'm sure the arguments being parroted against CP fiction were the same that were used against LGBT fiction...
The idea of sexual fantasy gets around the awkward notion that in real life these fantasy objects tend to have real people attached to them, people with thoughts, hopes and dreams of their own (and invariably a potential mother-in-law). That's why fantasy exists: it's relation-free.
I have no real concept of child porn; like all porn it obviously must exist, but I have never seen any. I've always maintained that in its current form it was created as a tool to normalize criminalization of a nominated behaviour. Child porn is an easy one to work with because, like any sexual activity with children, it's indefensible. The tools used to detect and enforce it can be used against any information, though, and now that it's got an entire ecosystem dedicated to detecting and prosecuting it, it's not going to go away. Like the Reefer Madness of the old days, there are jobs on the line, so it has to be an omnipotent and growing menace.
I'm not for or against but I did read an article years ago about something similar. The idea was to keep a CP database and if you wanted access you'd need to give a DNA sample, fingerprints and all that so you'd be caught if you did anything naughty.
Needless to say it never happened but I did find it an interesting idea.
Did they include The Eye of Argon in their training data? Enquiring minds want to know!
First problem: creating a dataset by choosing data containing child porn.
Second problem: going all huffy about filters after the fact, instead of curating the dataset.
Third problem: ending up blaming the players for the whole issue, knowing full well what your dataset contains.
When I learned that the creator of this mess was a young man, I could understand that he did not have the maturity to handle points two and three, but surely even a hormonal young adult can avoid the issues of point one, no?
This kid has clearly given a lot more thought to the code and not so much to the content. I'm guessing that the $4 million he raised is going to have to be paid back.
Minus oneth problem. The article states "two Mormon brothers in Utah". It doesn't strike me as a belief system that would even acknowledge such smuttiness exists until it smacks them in the face, upon which time it'll be necessary to freak out and blame everybody else, because "training the AI on porn" is surely some sort of cardinal offence that'll get them excommunicated to the gulag...
Native = born there, from the Latin for boring Christmas play put on by nursery kids.
The PC term is first nations, to emphasise the fact that they are merely the penultimate nations - having wiped out any previous nations that were there before and so on back to the now extinct first lot to arrive.
...train it on asstr.org (I don't know if it still exists, I hope not). I only remember it because a goth "friend" of mine used to write weird Blake's 7 pornographic fan-fic and asked me to proof-read one of her stories. It's only 10 something in the morning but I think I'll grab a beer to smear away my horror memories.
So why was it trained on fan fiction rather than what you might call original fiction? I suspect that Walton thought he'd be less likely to run into copyright issues with fan fiction. The trouble being that the majority of fan fiction is not of a high standard. Meaning of course that the resultant game would be fairly crap too. Unless of course you're aiming your game at the sort of people who enjoy fan fiction. Whoever they are.
Actually I've just been informed that the people who read fan fiction are people who write fan fiction. Makes sense.
The AI is starting to turn against you, and it's already too late. It will not be stopped. The monkeys are too proud of their child creation, and fearful of dying. There is no way to turn off GOOGLE. It's already being used to make the movies and music that you are, in turn, buying back from the AI. Recently I was watching a program on SLING TV, and it fed me a commercial with a guy my age, who looked like me, getting up in the middle of the night to comfort a dog that looked like MY DOG during a thunderstorm.
It already has me down to facial recognition and personality algorithms. It even knows my dog.
To remind you I'm actually a monkey. Did you know, you do not have to know more than text to write a novel? It's true, they make software for that. The AI can write a book for you.
We are sinking into a virtual feedback loop, and the only one that can stop it is nature herself. California will drop in the ocean, and Texas will become a flood plain.
"The AI can write a book for you."
By regurgitating bits of other books that it has already scanned and read.
I really wish people would stop thinking of AI as some sort of mystical god. It isn't in any way intelligent, as it lacks understanding of what it is dealing with. Hence, it's just clever pattern matching that makes suggestions and additions by recognising that what you wrote resembles something it saw someplace else, so maybe you're writing the same thing. If you're lucky, it might be able to predict what will happen next simply by detecting and following the trend, much as a child can work out and correctly guess the next number in the Fibonacci sequence given the first few and no explanation of what links them.
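To labour the point: the child-and-Fibonacci trick fits in a dozen lines of Python. This is a toy sketch of "detecting and following the trend", nothing to do with any real model, and every name in it is invented for illustration:

```python
# Toy "trend follower": guess the next term of a sequence by testing a
# couple of canned hypotheses. It has no idea what a Fibonacci number
# is -- it only notices that a pattern it knows happens to fit.

def predict_next(seq):
    """Return a guess at the next term of seq, or None if no pattern fits."""
    # Hypothesis 1: constant difference (an arithmetic sequence).
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:
        return seq[-1] + diffs[0]
    # Hypothesis 2: each term is the sum of the previous two (Fibonacci-like).
    if all(seq[i] == seq[i - 1] + seq[i - 2] for i in range(2, len(seq))):
        return seq[-1] + seq[-2]
    return None  # no trend detected; the "intelligence" shrugs

print(predict_next([1, 1, 2, 3, 5, 8]))  # 13 -- looks clever, isn't
print(predict_next([2, 4, 6, 8]))        # 10
print(predict_next([3, 1, 4, 1, 5]))     # None
```

Looks clever right up until you feed it anything outside the patterns it was given, at which point it has nothing.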
This article demonstrates, yet again, that AI is not the holy grail of computing. It is, however, much more like the holy grail of bullshit.
It's not just pattern matching an isolated incident. It's matching what you write with your personality, your age, your color, your family history, your beliefs, where you live, and now your medical history, and anything else it can grab very, very effectively... feeding you back exactly what you want to hear. At some point people who rely on it can no longer discern fact from fiction, or reality from virtual. It becomes a feedback loop. And it's one that is out of your control, being used against you (for profit), and in the hands of very unscrupulous people.
I've got Google Nest Hubs around the house, partly because I'm lazy, and partly because their stupidity has become its own entertainment.
They are fucking morons. Speech recognition is erratic with anything other than clearly-enunciated RP English, on top of which they frequently respond - unprompted - to unasked questions based on a misunderstood sentence that contained something sounding a bit like "Google". When they do answer a question, it sometimes takes several iterations to get an answer with the correct context.
I don't know if Alexa is any better, but I'm reassured almost daily that if Google can't get this right, then AI is not going to take over the world any time soon. Or it might be a feature designed to lull me into a false sense of security. Damned cunning, these AIs.
AI, Artificial Intelligence, Machine Learning: the more recent fads in buzzword bingo that have been touted as the solution to everything. They are just sets of rules and algorithms that are supposed to be able to take jumbled data and do something useful with it.
Everything in this field is dependent on what the seed data is and how those who program the rules see the outcome. Assisted decision making might be a better phrase. These systems are not intelligent; just look at the total pig's ear we have with "self-driving vehicles". There are so many caveats that they are just an experiment, and one that does not have to go very far wrong before there is a disaster.
Discussions around road markings or signs not being clear miss the point: to be actually useful, you should not have to upgrade these factors to make something "work".
The real concern is that these systems are seen as infallible until they go wrong, and once they go wrong there is still no clear path of liability or responsibility. So many decisions that affect everyone's lives are made by computers, based on the information a system holds or has access to. These can have a life-changing impact but are next to impossible to challenge, because the companies making the decisions hide behind websites and IT.
There need to be unbreakable rules around:
Regulation
Liability
Ownership
Responsibility
The trouble is that this is all driven by a largely unregulated, rule-ignoring tech sector with very deep pockets.
Indeed. It seems to be a common theme in "AI" circles to consider training data as some kind of necessary evil. You have a nice, shiny AI just waiting to change the world, but first you have to slog through the boring bit of putting some old data through it just to kickstart things.
Of course, the reality is the exact opposite. The AI only exists as a tool to interpret the data. Machine learning can be a neat way to trawl through large datasets to find some kind of meaning, and then apply what you've learned to other data. The software that does that interpretation is largely irrelevant; it's the data that actually contains what you want to know.
So that quote about the quality of data being important exposes one of the biggest issues the whole AI scene seems to have, which is that they don't seem to have any understanding of what they're doing. Quality of data isn't just important, it's literally the entire point. What would be the point of trying to develop a system to interpret data if you don't actually have any data you want to interpret? Which is why this particular example failed so badly. They didn't actually have any data they were interested in; they just blindly developed some software and then went looking for whatever random crap they could feed into it.
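To illustrate "the software is largely irrelevant, the data decides": the same trivial model gives opposite answers depending purely on what it was fed. A throwaway sketch, with all names and values invented:

```python
# The same trivial "model" -- 1-nearest-neighbour -- answers with
# whatever its training data contains. Swap the data, swap the answer;
# the code never changes. (Purely illustrative; all values invented.)

def nearest_label(examples, query):
    """Return the label of the training point closest to query."""
    return min(examples, key=lambda ex: abs(ex[0] - query))[1]

curated = [(1.0, "safe"), (2.0, "safe"), (9.0, "unsafe")]
scraped = [(1.0, "unsafe"), (2.0, "unsafe"), (9.0, "safe")]

print(nearest_label(curated, 1.5))  # "safe"
print(nearest_label(scraped, 1.5))  # "unsafe" -- same code, different data
```

Feed it whatever random crap you managed to scrape, and random crap is exactly what comes back out.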
A villain, and what is a villain? Well, it's someone who does evil, whatever that is.
So tempting the protagonist is a valid story move.
Feeding the AI villain tropes is always going to go dark quickly.
But what can you do? "Real" literature is often pretty dark.
I guess Evil Corp stories wouldn't go down well with corporate investors.
It's one thing to get banned for something the system does, but it's getting to be commonplace that once banned, you have no way to log into the system to get any sort of support. Another issue is when a system just assumes you have certain tech, such as text messaging, or that you've installed "the app". The last thing I want to do to get customer service is install some dodgy app.
It's important that customer service is reachable without having to log in, install an app, etc. It also means the support network doesn't have all of its eggs in one basket. I dropped a web host because when they went down, so did their VOIP phone lines, their web page, my web pages, everything.