Book burning Nazis
What Mark Zuckerberg and the other evil masters of social networks are doing is no different from what the Nazis did. It's just book burning by another name.
46 posts • joined 9 Oct 2017
Unlike the brain, a deep neural net cannot see an object it has never seen before. There will be no fully autonomous cars until we solve the AGI problem. We need breakthroughs in instant object detection, a common-sense or causal understanding of the world, and prediction and planning. Once you have that, you won't even need fancy sensors such as lidar, radar and infrared cameras. A simple movable binocular camera will do. Above all, you won't need to test the system for millions of miles. The machine will learn to drive just like humans do.
Don't ride in a self-driving car that is powered by deep learning. A deep neural network is essentially a rule-based expert system, and all such systems share a fundamental flaw: they fail catastrophically if they encounter a situation they have never seen before. For example, the car's network may not recognize a pedestrian in a Chewbacca suit because it has never been trained on one. The human brain does not have to be trained for every possible situation, which would be an impossible task: we have common sense. This is why we are still orders of magnitude safer drivers than autonomous cars. The California DMV's disengagement data on self-driving cars proves it.
Transportation agencies everywhere should ban all self-driving cars from public streets until they can prove they have common sense.
In my opinion, the @USDOT should immediately impose a moratorium on all autonomous vehicles on public roads. Deep Learning is not suitable for uncontrolled or open environments where humans can be harmed. A deep neural net is like a rule-based expert system: it will fail catastrophically if it encounters a situation it has not been trained on. The blame for the next fatality will rest on the shoulders of @SecElaineChao.
The UK and other European nations should do likewise because more fatal accidents are coming. Guaranteed.
Fully autonomous vehicles are way beyond what current AI technologies can handle. A major breakthrough in AGI must happen before we realize this dream. One thing is certain: it will not happen with Deep Learning. A deep neural net is really an expert system and, as such, it suffers from the same fatal flaw: it fails catastrophically every time it encounters a situation for which it has not been trained. That makes it unsuitable for real-world applications where safety is a must.
To all big time investors: Do not waste money on any project that uses deep learning to achieve full driving autonomy. It's a waste of time and money. Invest in AGI research instead.
The nasty truth is that deep learning is prone to catastrophic failures. It's the same flaw that pretty much doomed expert systems. In fact, despite the denials, DNNs ARE expert systems. Unlike human drivers, a deep learning system of the kind used in self-driving cars cannot see something it has not been pre-trained to recognize. It's a monumental flaw that makes catastrophic failures unavoidable.
But the deep learning community will continue to hype this technology to death. The AI winter cometh.
Deep learning systems have no idea what they are seeing; we must supply the labels, and even then they have no clue. An adult human brain can instantly see and interact with a completely new object or pattern it has never seen before, and it can do so from different perspectives. A DNN, by contrast, must be shown hundreds if not thousands of samples of an object before it can detect it in an invariant manner, and you still have to give it a label.
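The forced-label problem is easy to make concrete with a toy stand-in for a trained classifier (a nearest-centroid lookup, nothing like a real DNN; the class names and feature vectors here are made up for illustration): whatever it is shown, it can only ever answer with one of the labels it was trained on.

```python
import math

# Hypothetical trained classes: label -> feature centroid distilled
# from many labeled samples. A toy stand-in for a trained net.
TRAINED_CLASSES = {"cat": (1.0, 0.0), "dog": (0.0, 1.0)}

def classify(features):
    # Forced choice among known labels; there is no "never seen this" answer.
    return min(TRAINED_CLASSES,
               key=lambda label: math.dist(features, TRAINED_CLASSES[label]))

# A completely novel object still gets shoehorned into a trained label.
print(classify((5.0, 4.0)))  # "cat", even though this is neither class
```

The point of the sketch: the output vocabulary is fixed at training time, so novelty can only ever be misfiled, never flagged.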
A number of us have been saying this for many years. But the AI community, like all scientific fields, is extremely political. Only the famous leaders have influence, even if they are clueless.
DeepMind has never made a breakthrough in AI and never will. They essentially apply well-known techniques invented by others (Monte Carlo search, deep learning and reinforcement learning) to games chosen for their limited number of behavioral options. I would be infinitely more impressed if they made a robot that could walk into any generic kitchen and fix a meal of scrambled eggs with bacon, toast and coffee.
As an aside, Demis Hassabis and his team at DeepMind are on record as suggesting that the human brain uses backpropagation for learning. They published a peer-reviewed paper on it. I cringe when I think about it.
So OpenAI wants to develop AGI but they are still spending talent and busloads of cash on DNNs, connection weights and backpropagation? It's a sure bet AGI will not come from that bunch. Deep learning experts are the least qualified people to work on AGI. Almost everything they know and think is important is wrong. Just saying.
US markets were never free, and they were never based on inheritance. Only an inheritance-based, free-market system, whereby the land and its wealth belong to all, is viable. In such a system, all corporations try to make a profit for the people who invest their share of the inheritance (the wealth of the land) into the system. Everybody would receive profits from the corporations in addition to wages if they work. Socialism and communism (government-controlled programs) will never work because there is no incentive for profit and hard work. They also destroy the free market, the only way that goods and services can be properly valued.
In an inheritance-based system, artificial intelligence can simply be used by corporations to make more profits for the people. There would be no disruption because nobody would be depending solely on their work for a living.
We'd better wake up, people, before AI eliminates all jobs and the plutocracy turns the world into a welfare society forced to survive on handouts from a thieving minority.
What's that one called, where everyone gets a basic wage for free?
It's called Universal Basic Income or UBI. It's another plutocrat/socialist ploy to give handouts to the masses while they're living in decadent luxury. I and many others will rebel against it. I mean, why should the unemployed masses receive a subsistence handout from the plutocrats while the equally unemployed Mark Zuckerbergs and Bill Gates of the world are eating sushi and drinking champagne? What makes them so special?
We should all have an inheritance in the land. We need an inheritance-based economic system. The wealth of the earth is the earth. It belongs to all.
You're a century or two too late, sunshine. Marx put this idea forward.
Man, give me a break. Marx was a friggin' moron, a mediocre mind. I don't believe in socialism. Socialism/communism is about government programs: free healthcare, free housing, free education, etc. I don't believe in any of that crap. I believe in a purely free market system. Just give us what is ours by right.
The Centre for Policy Studies is, of course, a Big Brother, plutocrat-funded organization whose job is to BS the masses. In any just society, the people would be delighted to have robots do all their work for them. The fact that we are afraid that automation will take our jobs, leaving us without a way to make a living, should be a wake-up call to the fact that we are slaves in a slave system.
True capitalism is where the people own the corporations, because the corporations are funded with capital, which represents the wealth of the earth. The wealth of the earth belongs to all. We are being ripped off. Give us what belongs to us.
Autonomous cars are based on deep learning, an old, annoyingly inadequate, baby-boomer technology from the last century. Just one little unsupervised-learning breakthrough could render deep learning obsolete overnight. Suddenly, a bunch of highly paid AI experts are about as valuable as horse-buggy mechanics. Not funny, I know. But life can come at you faster than you think.
LOL. Quantum computers are right up there with the phlogiston and dark matter.
1. Opposite states are superposed but only if you're not looking.
2. We can't see them but we know they are there.
3. Trust us. We know what we're talking about.
Right, sure. The crackpottery in Big Science is friggin' hilarious.
"Depending on its role in the brain, that timing may or may not be significant. It's clear you can't completely ignore it."
Are you kidding? You guys need to completely forget about spiking rate. Timing is everything. EVERYTHING. Spiking rate is a red herring, a complete waste of time (no pun intended). It is true that the retina uses rank order encoding to compress visual information (~200:1), but the cortex is entirely driven by the precise timing of the spikes.
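For what it's worth, the rank order idea is simple enough to sketch in a few lines: each input channel fires a single spike, stronger inputs fire earlier, and only the order of the spikes is kept, not any rate (a minimal sketch in the spirit of rank order coding; the channel intensities are invented for the example).

```python
def rank_order_code(intensities):
    # One spike per channel; firing time is assumed to be inversely
    # related to input intensity, so only the firing ORDER survives.
    return sorted(range(len(intensities)), key=lambda i: -intensities[i])

# Three input channels: the strongest (index 1) spikes first.
print(rank_order_code([0.2, 0.9, 0.5]))  # [1, 2, 0]
```

Note how the exact intensity values are thrown away: [0.2, 0.9, 0.5] and [0.1, 0.99, 0.3] produce the same code, which is where the compression comes from.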
"And so we built a hardware-software system that has good support for sparse connectivity. We're very focused on spiking networks whereas machine learning almost completely ignores spikes."
Wonderful. Now that deep learning guru Geoffrey Hinton has finally acknowledged that we must abandon backpropagation and start over, it is time to promote the correct paradigm that will replace backpropagation. Deep neural nets will soon become obsolete. The future of machine learning will be based on the precise timing of discrete sensory signals, aka spikes. Welcome to the new age of unsupervised spiking neural networks.
Software unreliability is proportional to complexity and is a direct result of our current computing paradigm which is based on the algorithm. The solution is to stop using the algorithm as the basis of programming and adopt a signal-based, reactive programming model. Essentially, software should work more like electronic circuits.
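What a signal-based, reactive model looks like can be sketched in a few lines: values propagate through wired-up cells the way voltages propagate through a circuit, with no top-level algorithm driving the updates (a minimal sketch; the `Signal` class and `connect` helper are hypothetical, not an existing library).

```python
class Signal:
    """Minimal reactive cell: downstream cells recompute when it changes."""
    def __init__(self, value=None):
        self.value = value
        self.listeners = []

    def set(self, value):
        self.value = value
        for fn in self.listeners:
            fn()

def connect(inputs, compute):
    """Wire a derived Signal to its inputs, circuit-style."""
    out = Signal()
    def update():
        out.value = compute(*(s.value for s in inputs))
    for s in inputs:
        s.listeners.append(update)
    update()  # settle the initial value
    return out

a, b = Signal(2), Signal(3)
total = connect([a, b], lambda x, y: x + y)
a.set(10)           # the change propagates like a voltage change
print(total.value)  # 13
```

There is no main loop here: setting an input is what triggers the computation, which is the sense in which the software behaves like an electronic circuit rather than a sequential algorithm.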
The second problem is that, in spite of the loud denials from the AI community, their biggest success, deep learning, is just GOFAI redux. A deep neural network is actually a rule-based expert system. AI programmers just found a way (gradient descent, fast computers and lots of labeled or pre-categorized data) to create the rules automatically. The rules take the form "if A then B", where A is a pattern and B is a label or symbol representing a category.
The problem with expert systems is that they are brittle. Presented with a situation for which there is no rule, they fail catastrophically. Adversarial patterns prove this in neural nets and Tesla Motors found out about it the hard way. The car's neural network failed to recognize a situation and caused a fatal accident. This is not to say that deep neural nets are bad per se. They are excellent in controlled environments, such as the factory floor, where all possible conditions are known in advance and humans are kept at a safe distance. But letting them loose in the real world is asking for trouble. Obviously, we will need a better solution.
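The if-A-then-B view and its failure mode fit in a few lines (the patterns and actions below are hypothetical placeholders, not real perception output): as long as a matching rule exists the system behaves, and the moment one doesn't, it fails outright instead of degrading gracefully.

```python
# Toy expert system in the "if A then B" form described above:
# A is an observed pattern, B the resulting action.
RULES = {"clear_road": "drive", "stop_sign": "brake", "pedestrian": "brake"}

def act(pattern):
    return RULES[pattern]  # no rule for an unseen pattern -> hard failure

print(act("stop_sign"))      # "brake"
try:
    act("overturned_truck")  # a situation no rule ever covered
except KeyError:
    print("catastrophic failure: no rule matched")
```

In a controlled environment where every pattern in `RULES` is guaranteed to cover every input, this table is perfectly reliable; in an open environment the `KeyError` branch is only a matter of time.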
Another brain-dead superstitious materialist heard from. Here's what your little materialist cult believes in: the universe created itself by some unknown magic; machines are or will be conscious by some unexplainable magic called emergence; lifeforms created themselves from dirt. I could go on but then I would barf out my lunch.
I'm glad to see the deep learning hype is finally subsiding. I and many others have been saying this for years. The success of deep learning has been a disaster to AGI research. Geoffrey Hinton, one of its leading pioneers, has finally admitted that they need to scrap backpropagation and start over.
Biting the hand that feeds IT © 1998–2020