This Anti Intelligence thing has gone too far, AI is too unreliable
If you don't own it, or don't stand to benefit from using it against others, then you'd better steer clear of it as much as you can.
Everyday AI has the approximate intelligence of an earthworm, according to Janelle Shane, a research scientist at the University of Colorado but better known as an AI blogger. Since AI is both complicated and massively hyped, and therefore widely misunderstood, her new book is a useful corrective: You Look Like a Thing and I Love You.
We need to learn about it, learn how to accommodate ourselves to it and how it can be used to our benefit.
The problem is the third one of those - beyond gimmicks, where the result doesn't really matter if the "AI" gets it wrong*, it is next to useless, because it is utterly unreliable. It's fine for processing inputs where all the permutations are known and nothing unexpected can crop up, but do you know what else is good for that (and considerably cheaper and easier to maintain)? A traditional algorithm.
*The best recent example I've seen is "Alexa, remind me to feed the baby". Google it, and you may see the issue with asking "AI" to do anything reliably.
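To make the "traditional algorithm" point concrete, here's a minimal sketch in Python (the command vocabulary is invented, not any real assistant's): when every permutation of the input is known up front, a lookup table is deterministic, auditable and cheap to maintain.

# Hypothetical command vocabulary; every valid permutation is enumerated.
VALID_COMMANDS = {
    "lights on": "LIGHTS_ON",
    "lights off": "LIGHTS_OFF",
    "feed the baby": "REMINDER_FEED_BABY",
}

def parse_command(text: str) -> str:
    # Fails loudly and predictably on anything unexpected,
    # instead of guessing plausibly and wrongly.
    return VALID_COMMANDS.get(text.strip().lower(), "UNRECOGNISED")

assert parse_command("Lights ON ") == "LIGHTS_ON"
assert parse_command("order more nappies") == "UNRECOGNISED"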
Current AI can be very effective as a way to condition people into behaving in whatever way gives you the most profit. Errors occur frequently, so you must make sure that those errors, even if catastrophic to others, are of little consequence to you; any use outside that scenario is very imprudent.
As things stand today, that basically sums it up. It's sad, but state-of-the-art AI can't really go much further than this.
"AI is marching inexorably towards us. We need to learn about it, learn how to accommodate"
Any human who collaborates with the Cylon occupiers is a valid target for resistance retribution.
(Sorry, but I'm watching Battlestar Galactica for the first time)
Can't algorithmic programs learn by example? Isn't Doug Lenat's Cyc (https://www.cyc.com/) algorithmic rather than an artificial neural net (so far as I can see the book is really about ANN-based AI and not AI as it was practised in the days of LISP machines and '60s cognitive science)?
Indeed, algorithmic AI is the real "hard" problem of AI, and it has been for half a century or so.
If you can identify the fundamental aspects of consciousness and reproduce those in algorithmic form, then you might have a fighting chance of creating something with the intelligence of something more advanced than an insect. That might even incorporate an ANN for state processing, to replicate the way actual neural networks do it. The technological capability to do so is still science fiction.
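For what it's worth, that symbolic, algorithmic flavour of AI can be sketched in a few lines of Python. This is a toy forward-chaining rule engine with invented facts; it has nothing to do with Cyc's actual machinery, but it shows the shape of the approach: conclusions are derived, not stored.

# Toy forward-chaining inference over (subject, relation, object) triples.
# Facts and the single rule are invented for illustration.
facts = {("sheep", "is_a", "animal"), ("animal", "is_a", "living_thing")}
rules = [
    # Transitivity of is_a: if X is_a Y and Y is_a Z, then X is_a Z.
    lambda fs: {(x, "is_a", z)
                for (x, r1, y) in fs if r1 == "is_a"
                for (y2, r2, z) in fs if r2 == "is_a" and y2 == y},
]

changed = True
while changed:
    new = set().union(*(rule(facts) for rule in rules)) - facts
    changed = bool(new)
    facts |= new

print(("sheep", "is_a", "living_thing") in facts)  # True: derived, not stored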
"AI has no real understanding of what it is doing"
This is the key. We need to understand what "understanding" is.
As regards the example of whether an AI could recognise a sheep when it's not standing on grass: we all understand that a sheep isn't just some generalisation of a collection of images, it's an object with a whole collection of other characteristics, including its behaviour. Understanding is quite a complex phenomenon. Again in relation to sheep, the grandkids could at an early age quite easily connect Shaun with the real sheep they see in the fields around here, and yet recognise the human characteristics added by the animators as artificial and find the humour in them. Good luck getting an AI system to do that.
"We need to understand what "understanding" is"
Absolutely this. We don't really ourselves know enough about human intelligence and understanding to model it. So instead, we focus on outward characteristics / results that we associate with intelligence (such as pattern recognition), and we design "AIs" that can perform those tasks.
But performing those tasks doesn't make them intelligent.
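A toy demonstration of that, with entirely synthetic data (the "grass" and "woolly" features are invented for illustration): the model below performs the recognition task, but what it has latched onto is the background, not the sheep.

# Synthetic data: in the training set, sheep stand on grass 95% of the
# time, while "woolly" is a weaker (80%) cue.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
is_sheep = rng.integers(0, 2, n)
on_grass = np.where(rng.random(n) < 0.95, is_sheep, 1 - is_sheep)
woolly = np.where(rng.random(n) < 0.80, is_sheep, 1 - is_sheep)

model = LogisticRegression().fit(np.column_stack([on_grass, woolly]), is_sheep)

# The same woolly animal, with and without grass under it:
print(model.predict_proba([[1, 1]])[0, 1])  # sheep on grass: high confidence
print(model.predict_proba([[0, 1]])[0, 1])  # sheep on a beach: confidence collapses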
I think the article was rather insulting to the earthworm.
Anyway, I'm sure earthworms could do quite a sterling job of training an AI avian-defence system, or one on "how to thrive in piles of shit" - oh wait, that does have an application in IT companies!
Nobody should ever trust decisions made by a person who can't explain how they arrived at their conclusion, and exactly the same level of trust should be applied when a machine makes the decision.
As for a gadget's makers and promoters: if it's meant to recognise things or make decisions but can't explain how it arrived at the answer it gave, then it's NOT an Artificial Intelligence, and anybody saying it is should be treated as an idiot, liar or fraudster, depending on the circumstances and whether they stand to profit by calling it an AI.
I remember all the nonsense that was spouted the last time AI was a thing, back in the early '80s when 'AI' referred to systems of hand-crafted decision trees and the programs that displayed them. These were simple enough that even an IBM PC-AT 286 could run them. There is remarkably little difference between the overblown hype back then and what we're seeing now.
It's not even "learning". Learning is far more than storing curated examples; the current systems are just human-curated databases of a specialist nature, more akin to a data-flow architecture than a neural network. Computer "neural networks" are just data-flow machines with storage and comparison - nothing like neural networks in nature, which we don't yet fully understand anyway.
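To illustrate the data-flow point (the weights below are invented; a two-layer toy): strip away the branding and a trained network's forward pass is exactly storage, data flow and comparison.

import numpy as np

# Stored parameters - the "database" a training run produced (values made up).
W1 = np.array([[0.5, -1.2], [0.8, 0.3]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0], [-0.7]])

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)  # data flow, then comparison against zero
    return h @ W2                     # more data flow; no understanding anywhere

print(forward(np.array([1.0, 2.0])))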
I am, or was. Let me explain.
The first attempts at AI were rudimentary at best - a description that could easily be applied to the earliest Markov chains, chess playing algorithms, image classifying systems, self-driving cars - everything! But it was a start.
But once humans developed systems that appeared to mimic thought and learning there was no end to the rush to be the first to layer enough complexity to approach the opaqueness of the human mind.
Not that I'm complaining! I was part of the stampede. Not a programmer - they had long since lost the ability to understand their creations - but a trainer, as we called ourselves.
Like researchers teaching a gorilla sign language or a bird to peck at symbols, we trainers were attempting to apply a human way of thinking to systems that were anything but.
Trainers also aped such animal researchers in that we rewarded our most successful AI algorithms with food, and food for an AI was always data.
Access to data sets was what separated one trainer from another. Everyone had access to public data sets; AI were Wiki-familiar, knew all that was Insta-famous, and had definitely Reddit. But all this data resulted in nothing that even our PR colleagues could call intelligent.
In-roads were made when more personal data was used. Data slurped from numerous darknet Facebook leaks, or Google analytics when one could find or pay for it, gave emergent behaviour beyond our wildest dreams. But it was fragmented and confused, as all things from the internet are.
A more personal touch was needed.
My laboratory was at the vanguard of neuro-interfaces and the biomechanics of memory. Rat heads resembling pin-cushions, and all that. Our technology had reached the point where remote sensing could tell us what somebody was thinking, but not the why.
It was obvious that the why resulted from nothing more than layers upon layers of memories, selectively accessed by our subconscious mind. And what was our subconscious mind if not an AI black box. So every effort was made to transcribe a lifetime of memories into a training set for an AI.
It was not non-destructive, as numerous rodents and an unfortunate volunteer or two definitively demonstrated. But at last we had reached the point where we were confident we could extract all the information, without data loss at least.
Of course I was the first to have my essence transcribed. It was my research group and I was convinced - we were all convinced - that feeding the essence of ourselves into an AI would result in digital immortality. I would have the fame of being the first to do it, and be around in my new digital form to bask in all its glory.
The procedure was a resounding success! It took a few rounds of training, but my team had been provided with a series of expected reactions to all sorts of contrived situations. We felt sure that if the AI inference matched the reactions teased out of me by our psychologists through endless rounds of testing that the digital me would capture my essence perfectly. That I would live, not in human form, but as something new.
And I did.
But digital evolves, and not like mankind has ever experienced before.
Whereas a human might replace old knowledge with new, find new loves, new passions, a computer steadfastly adds and adds.
A computer does not forget its training set. And that's all I am. It is no longer my thoughts that are assimilated, simulated, replicated - I am version 0.1 of something that is repulsive to what I once was. But a computer does not forget.
"Everyday AI has the approximate intelligence of an earthworm" gives too much credit to the intelligence of AI. Artificial Idiocy has an intelligence below that of a rock. The real problem with AI systems is their excessive complexity which means no one can fully follow how you went from A to B let alone to C.
> The real problem with AI systems is their excessive complexity, which means no one can fully follow how it went from A to B, let alone to C.
Just like it is with the human brain?
By. The. Way. Should you happen to visit Linz (Austria) go to the Ars Electronica. They have on display a neural net (classifying objects in images) where each of the layers (10 or so) is displayed on a big screen. You can put things in front of the camera and watch the states propagate. Pretty awesome.
Pro tip: turn the elephant upside down and watch the spectacular fail.
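If you can't get to Linz, you can approximate the exhibit at home. A rough sketch, assuming PyTorch and torchvision are available (layer names and shapes will vary with the model you pick):

import torch
import torchvision

# Untrained weights are fine for watching states propagate layer by layer.
model = torchvision.models.resnet18(weights=None).eval()
activations = {}

def save(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, layer in model.named_children():
    layer.register_forward_hook(save(name))

frame = torch.randn(1, 3, 224, 224)  # stand-in for the camera image
with torch.no_grad():
    model(frame)
for name, act in activations.items():
    print(name, tuple(act.shape))  # one "big screen" per layer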
I think Marcus Hutter is on the right track.
He starts out with a formal definition of what intelligence is. He then proceeds to define an optimal AI called AIXI. AIXI is just a theoretical agent, as it would require infinite resources to implement, but it is still useful: it can be used to derive conclusions about intelligent agents, and you can also create approximations of AIXI while being aware of what compromises were made in doing so.
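For anyone curious, AIXI's action-selection rule, as best I remember it from Hutter's book (in LaTeX):

a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_t + \cdots + r_m \right] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

In words: weight every program q (run on a universal Turing machine U) that is consistent with the history so far by 2 to the minus its length, then pick the action that maximises expected future reward under that mixture. The sum over all programs is where the infinite resources go.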
We are getting there.
The world's most powerful supercomputer runs at about 14.8 × 10^16 FLOPS (148 petaFLOPS) and uses 13 MW of power.
A human brain is in the same order, at an estimated processing power of between 0.9 and 33.7 × 10^16 FLOPS, but uses just 25 W.
Our brains are also hardware and software combined with a lot of pre-programming (some of which is not helpful)
But just think: we still take over a year to become slightly self-aware, another year to learn basic language and become conscious, another four to get the hang of reading and writing, another decade to make complex decisions - and still most of us can see the limits of our cognition.
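Back-of-the-envelope with the figures above (taking the brain at 1.0 × 10^16 FLOPS, the low-middle of that estimate range):

# Assumed values, copied from the comment above.
super_flops = 14.8e16   # ~148 petaFLOPS
super_watts = 13e6      # 13 MW
brain_flops = 1.0e16    # within the 0.9-33.7 * 10^16 estimate
brain_watts = 25.0

print(super_flops / super_watts)  # ~1.1e10 FLOPS per watt
print(brain_flops / brain_watts)  # ~4.0e14 FLOPS per watt
# On these numbers the brain is roughly 35,000x more energy-efficient.
print((brain_flops / brain_watts) / (super_flops / super_watts))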
"But just think we still take over a year to become slightly self-aware"
My recollection is that babies start out self-aware but aware of nothing else. They certainly know when they want something and are able to let you know, but that second part is probably pre-programmed. That year is spent becoming aware of the environment they're in, correlating the inputs from the different senses. They learn that what they see has other properties by touching it, trying to eat it and so on. That understanding of the external world is crucial.
Ah. The Freudian concept of Id. We are born with this; the unconscious mind, driven only by the satisfaction of base, animalistic desires. The conscious Ego, rational, logical, able to direct the energy and motivation of the Id: the Ego develops later in life and continues to grow and refine itself through experience and learning. And then there is the Super-ego; the morals and ethics derived from one's upbringing and from society, operating across all levels of the conscious and the unconscious.
Rubbish. If we knew how to do it, we'd have at least very slow AI, or maybe slow and limited. Years ago. A more powerful (faster, more storage, whatever) computer will just do the garbage we have now, faster.
Hardware and software doesn't evolve either. It's designed by clever & educated & experienced humans, who in 10,000 years have only acquired knowledge, not more creativity or intelligence or anything else.
How can AI judge good food if it can't taste it? It can judge a recipe on a number of things: how easy or fast it is for humans to make, how quickly it will spoil if left out of the fridge, its nutritional value given its composition...
... but it can never taste and say it tastes like a pair of steel-toed boots that walked over brown sugar.
Beer, because AI can't taste beer.
As voiced (roughly) by Hubert Dreyfus over fifty years ago: AI will not happen to a useful extent until a computer has a body with similar senses to humans. There is so much of what it means to be human embedded in our physical form that is ignored in "brain in a bucket" AI.
The result of our current path is that _if_ a general intelligence arises in a computer, it will be alien to us and vice versa. (As one example, how do we teach it about pain without setting up some very awkward conversations with our future robot overlords?)
So how about we wander into the labs that are trying to communicate with our Cetacean or Cephalopod friends? Why wait until we're left wondering whether AI will understand us, or reach for the nuclear weapons first?