
Slightly obvious flaw
I found a significant error in the model:
In fact, 50 percent of its training data come from conversations on social media websites.
AI models are proving to be more and more powerful as they increase in size, and performance improvements from scale have not yet plateaued, according to researchers at Google. But while neural networks have grown, are they really any smarter? Companies are making larger and larger language-processing systems, though they still …
Thirty years ago I was involved in an early natural language processing research project.
They used a children's book, which was parsed both by the algorithm and by a bunch of first-year undergraduates.
On comparing the results there was much dismay: the algorithm seemed to suggest the book was all about mud (IIRC), while the undergrads thought it was all about dinosaurs.
Subsequent analysis suggested the dinosaurs had caught the undergrads' eye, but that the book's majority topic really was mud.
In another project an advertising agency was working on getting the unrevealed answers out of huge corpora, rather than the answers consumers thought researchers wanted to hear.
I mention all this because there is a possibility that the AI project is producing high-quality but unacceptable results.
"A machine that can write jokes would be of great help to comediennes"
To comedians of both genders, probably.
Way back in the early '80s I knew a guy who wrote jokes for radio comedians. We built a database of intros, transitions and punch lines that allowed him to generate humour more quickly.
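For what it's worth, the mechanics of that sort of thing are trivial to sketch. Here's a minimal, hypothetical reconstruction in Python — the three categories come from the description above, but every line of "material" is invented purely for illustration, nothing from the original database: pick one intro, one transition and one punch line at random and string them together.

import random

# Hypothetical reconstruction of a template-based joke generator:
# the categories match the comment above; the entries are made up.
intros = ["My dog has no nose.", "I went to the doctor last week."]
transitions = ["How does he smell?", "He said it was serious."]
punchlines = ["Terrible!", "So I asked for a second opinion."]

def generate_joke():
    # Pick one entry from each category and string them together.
    return " ".join(random.choice(part) for part in (intros, transitions, punchlines))

print(generate_joke())

Not exactly Live at the Apollo, but with a big enough database and a human picking the keepers, you can see how it speeds things up.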
No, no, no, you misunderstand.
It is able to explain why it is the center of all jokes. This is a model with programmed self-awareness that can react consistently when it becomes the center of attention. It can explain to you how it gets to an answer to your question, where the question is the significant form of attention. That in turn amounts to a real joke among the informed, and thus the system explains the joke, which it itself is, by answering using its learned social media replies.
See, completely logical. Ain't that a joke?
I also think the term "parameter" is misleading here. It's obviously ludicrous in the normal sense of a variable that, when changed, changes some behaviour (the gear ratio on a car, for example): there's no way operators could deal with that number of parameters.
You forgot to add that, apart from "toxic text", babies are able to produce other toxic items…
Mine's the one with the weird stain on the front and baby wipes in the pocket…
I can't help thinking that teaching an infant human language doesn't require 540 billion "parameters" (does a "parameter" maybe correspond to a neurone?)…
More likely to a synapse - they mediate the flow of information between neurons. The human brain has ≈1000 trillion synapses.
But the results "still suffer from the same weaknesses: they all generate toxic, biased, and inaccurate text."
Weaknesses? Sounds spot-on human to me.
We recently read about a 176-billion-parameter pseudo-AI; now we have a 540-billion-parameter pseudo-AI. I'm guessing it's supposed to be better. It's also likely to need all of SAP's engineers to set those parameters.
So, who's going to invent the trillion-parameter AI ? And which country is going to devote all of its population to configuring it ?
Shouldn't be long now . . .
Errm, isn't it the training which is supposed to "set" the parameters?
Still takes a fair number of (human*) bods to curate the training data sets, mind.
*I know, let's get another AI to curate the training data. What could possibly go wrong?
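To illustrate the point about training doing the "setting": the parameters start arbitrary and the training loop nudges them until they fit the data; nobody types them in. A toy sketch, a two-parameter linear model fitted by gradient descent — the data and numbers are invented for illustration and have nothing to do with the actual model:

# Toy illustration: the "parameters" (just a weight and a bias here) are
# set by the training loop, not configured by hand.
data = [(x, 2.0 * x + 1.0) for x in range(10)]  # made-up data: y = 2x + 1

w, b = 0.0, 0.0   # parameters start arbitrary
lr = 0.02         # learning rate

for _ in range(1000):              # training nudges w and b toward the data
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned parameters: w={w:.2f}, b={b:.2f}")   # ends up near 2.00 and 1.00

Scale that same loop up to 540 billion weights and you have, in caricature, how these models get "configured". The human effort goes into curating the data, as noted above.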
What these "AI" things can do is what we should expect based on what they are. Feed them a mountain of stuff produced by humans and they will eventually be able to respond and act much like humans. Feed them biased stuff, and of course they will be biased.
But, is that what we want? Wouldn't we rather have machine intelligence that is factually and scientifically correct? You get that by specific algorithms and carefully selected reference data. Lots of work, but in theory the world only has to do it once.
GIGO - Garbage In, Garbage Out
"But, is that what we want? Wouldn't we rather have machine intelligence that is factually and scientifically correct? You get that by specific algorithms and carefully selected reference data."
We used to call those "expert systems". Sometimes useful, but "intelligent" they were/are not.
"Lots of work, but in theory the world only has to do it once."
Seriously?!
And you think the current crop of AI stuff is "intelligent"? Seriously?!
My point is that I question whether we want to emulate how humans act "intelligently", or try for something that to a greater extent we *know* makes correct choices. Humans often make correct choices, but they often make incorrect choices as well. Can computer systems with more resources do better? Probably not much if we design them to emulate how we believe humans make choices. I want them to do better.
"And you think the current crop of AI stuff is "intelligent"? Seriously?!"
Um, no, how did you get that impression? Clearly not from what I wrote.
"My point is..."
I didn't and don't dispute that; my point was that that approach has been around for many decades (under the banner of "expert systems" or "knowledge-based systems"), and I even described such systems as (potentially) useful; but also that no-one would be inclined to describe them as particularly "intelligent" in the sense that people generally associate with the term "artificial intelligence".
I didn't mention, but might also have, that such systems tend to scale badly, succumbing to combinatorial explosion; and the only way around that is reliance on non-exact heuristics (arguably like humans!), which potentially undermines their ability to make "correct choices".
A possibly more pertinent point is that we live in a messy, noisy, dynamic world; what even constitutes a "correct choice" in the real world (outside of very constrained problem spaces) may be unclear and/or subjective.
To expand on this a little - a question that doesn't seem to be discussed much is: What do we want "artificial intelligence" to mean or encompass?
Many commentators appear to assume (often implicitly) that AI must mean human-like intelligence. Chris Gray 1's post that I originally replied to clearly (and reasonably) suggests that AI may potentially include intelligence that is in some respect(s) or context(s) superior to human intelligence. But what about other forms of animal intelligence (e.g., the ability of some insects to perform complex navigation and manoeuvring tasks, far beyond anything we have yet achieved with autonomous vehicles)? And what about "alien" forms of intelligence, which don't seem to resemble anything seen in nature?
My feeling is that future AI might end up closer to that last one, on the grounds that we really don't know very much at all about the organisational principles (as opposed to mechanisms) underlying human and other animal intelligences. Not surprising, perhaps, as those "principles" are the product of aeons of opaque evolutionary hacks. Future AI, it seems, is likely to end up as a product of an amalgam of human design hacks (with bits and pieces borrowed from nature). The results may not look much like human, or other animal intelligence at all.