How much data do they want?
Oh, you want my health record, Google? And you're paying me how much, Google? Nothing?
OK, fine. Bye.
Google-stablemate DeepMind thinks it is one step closer to cracking artificial general intelligence with an algorithm that helps machines overcome memory loss. AI is the hottest trend in technology right now – it’s on its way to reaching the peak of the hype cycle. There's a lot of ballyhoo behind all those headlines you've …
Thinking about AI.... When/if we get general AI, do we need to elevate the machine to "sentient" and give it rights and so on? Opens a whole can of worms, doesn't it? Wouldn't it be easier to reclassify humans as "fleshy neural networks" or something? Then we would be allowed to "kill" the AI, because it is just one type of program or algorithm terminating another. I'm probably being a bit clumsy with my terms here, but do you see what I'm getting at? There is an ethical conundrum coming up, and I think we need to get around it somehow. Probably a good idea to have a kind of hierarchy where fleshy outranks silicon. Otherwise, you know, Armageddon and that...
I haven't read the paper, so I'm not sure why the authors decided to investigate this, or even how it's implemented. While I was reading the article, though, I was thinking about a couple of things. The first is how they reckon that sleep is necessary for most (if not all) things with a brain: something to do with assimilating memories and inputs, most likely, and shifting experiences around between different layers of memory. The other thing I was thinking about is research on combining neural nets with expert systems of some kind, particularly of the fuzzy-logic variety. Oh, and also some of Douglas Hofstadter's research on "creative analogies" and kinds of symbolic intelligence.
Like I said, I have no idea how these guys are implementing their nets, but it seems to me that something that mimics the way the human brain dreams, complete with multiple levels of memory (with associated reinforcement and deliberate forgetting) and some sort of symbolic reinterpretation of neural network states (equivalent to codifying an expert system), would give you a system capable of the same kind of trick outlined in the article: namely, integrating new "experiences" and "skills" without nuking what's there already.
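I haven't seen how DeepMind actually does it, but the "keep old skills while learning new ones" part could be sketched with a plain rehearsal (replay) buffer: interleave stored past examples with the new ones whenever you train, so old knowledge keeps getting reinforced. Everything below (the `ReplayBuffer` name, the mixing fraction) is my own toy illustration, not anything from the paper:

```python
import random

class ReplayBuffer:
    """Fixed-size store of past (input, label) pairs, rehearsed during new training."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def add(self, example):
        self.items.append(example)
        if len(self.items) > self.capacity:
            self.items.pop(0)  # drop the oldest memory ("deliberate forgetting")

    def mixed_batch(self, new_examples, replay_fraction=0.5):
        """Interleave new data with replayed memories so old skills get rehearsed."""
        k = int(len(new_examples) * replay_fraction)
        replayed = random.sample(self.items, min(k, len(self.items)))
        return new_examples + replayed
```

The `replay_fraction` knob is the "how much do we dream about the past" dial, and dropping the oldest item on overflow is the deliberate-forgetting part.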
The biggest problem with neural nets is that they are opaque. You can observe a net's "thinking" only by reference to its outputs, but explaining the reasons (and hence deriving a usable expert system that isn't just a non-symbolic rehash of the neural weights) isn't easy. Still, if you could combine a kind of symbolic (associative) memory with something designed to play around with stored memories (i.e., dream), for example by building trial fuzzy cognitive maps, you could perhaps compress the large neural network state matrices into more manageable expert-system-like rules.
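As for what "compressing network state into rules" might even mean, here's a deliberately crude sketch of my own (all names hypothetical): treat the trained net as a black box, probe it on some inputs, and find the single threshold rule that best reproduces its answers:

```python
def extract_threshold_rule(predict, xs):
    """Probe a black-box model on 1-D inputs and return the (threshold, agreement)
    pair whose rule "x >= threshold" best reproduces the model's outputs.
    A stand-in for distilling opaque weights into one readable rule."""
    labels = [predict(x) for x in xs]
    best = (None, -1)
    for t in xs:
        agree = sum((x >= t) == y for x, y in zip(xs, labels))
        agree = max(agree, len(xs) - agree)  # allow the inverted rule too
        if agree > best[1]:
            best = (t, agree)
    return best
```

Real rule extraction would need many features and compound conditions, but the principle is the same: the symbolic layer only has to approximate the net's behaviour, not its weights.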
I'm sure the learning algorithms would have to be adapted for this to work. You can't just compress a neural network state into a fixed expert system without loss. So as stuff is shifted around between different types of memory, the system would have to self-check to make sure the new model still works with the training set. Probably this would involve replaying and reformulating the steps the net took as it learned (or "experienced") as a result of being corrected (with back-propagation or whatever). I imagine a kind of blockchain structure could work very well, albeit one that provides a very subjective and revisionist version of events, since it would need to be rewritten as the underlying representation of stored knowledge shifts around across the different memories and procedural parts.
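That self-check step (accept the compressed representation only if replaying the training set shows it still behaves like the original) could look roughly like this; again, all the names and the tolerance value are my own invention:

```python
def consolidate(model, compress, training_set, tolerance=0.95):
    """Compress the model, then replay the training inputs; keep the compressed
    version only if it still agrees with the original often enough."""
    candidate = compress(model)
    agree = sum(candidate(x) == model(x) for x, _ in training_set)
    if agree / len(training_set) >= tolerance:
        return candidate
    return model  # compression was too lossy; keep the full representation
```

The `tolerance` parameter is where the "lossage" trade-off lives: set it to 1.0 and almost nothing gets consolidated; lower it and the system forgets more aggressively.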
I think it's very interesting. The only way to make machines learn like we do is to have their memories work like ours: repeat over and over, writing over some past knowledge in the process while still keeping some of the old. The problem then is that the machine will also inherit our flaws: imperfect memory, recall latency, forgetfulness, mistakes, and the rest of the human brain's defects. At that point, what's the advantage? We will still need perfect dumb calculators.