
So, instead of applying a bit of Group Theory ...
... they throw an infinite number of AI monkeys at it?
A new neural network can solve a Rubik's cube twice as fast as the fastest human – though roughly three times slower than the fastest dumb algorithm – according to research published in Nature Machine Intelligence on Monday. Though the AI approach is not as nippy as the fastest traditional computational method, specifically the …
So, they've trained a machine learning system, using a lot of different possibilities and probably a ton of training time, and gotten a system that doesn't exceed actual human programming. In addition, the human programming is not a black box, so can be patched if it malfunctions, but this model would have to be retrained. There are many cases where machine learning is useful or even necessary for a good result, but this never struck me as one of them, and the results don't impress me much.
Ok, later in the article it talks about the number of steps to solve, and that's fair enough – the computer does it better. But talking about time to complete is irrelevant unless the computer version takes the same amount of time for each turn of the cube as a human (i.e. based on manual dexterity), and the same visual recognition time (i.e. the time it takes a human to recognise the pattern on the cube). Otherwise, you're comparing apples with oranges.
The researchers trained DeepCubeA over two days with 10 billion different Rubik’s Cube combinations and tasked it to decode all of them within 30 moves. It was tested on 1,000 puzzles...
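For what it's worth, the article's description suggests the training states come from scrambling a solved cube, so the scramble depth acts as a rough cost-to-go label for the network. Here's a toy sketch of that data-generation step – all names and the move notation are illustrative, not DeepCubeA's actual code:

```python
import random

# Standard face-turn notation: U, U', U2, D, D', D2, etc.
MOVES = [face + turn for face in "UDLRFB" for turn in ("", "'", "2")]

def random_scramble(depth):
    """Return a scramble of `depth` face turns, never turning the
    same face twice in a row (such turns would merge into one)."""
    seq = []
    while len(seq) < depth:
        move = random.choice(MOVES)
        if seq and move[0] == seq[-1][0]:
            continue  # skip consecutive same-face turns
        seq.append(move)
    return seq

# Each training example pairs a scrambled state with its scramble
# depth, which upper-bounds the number of moves needed to solve it.
example = random_scramble(random.randint(1, 30))
```

The point of the sketch is only that "10 billion combinations" is cheap to generate this way; the expensive part is the two days of network training.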
I would also suggest that if you left a human to work on a single Rubik's Cube combination for long enough (if I'm reading the above correctly, they basically solved each puzzle combination 10 million times) they would also get down to a similar number of moves as the AI. Although forcing someone to perform 10 million iterations of the same puzzle would probably breach the Geneva Convention...
Yup. Exactly what I was going to write. It's utterly pointless to compare an algorithm's efficiency to a human's unless the test is done under exactly the same physical conditions and constraints that humans must work under through no fault of their own. A simulated comparison just isn't good enough.
In terms of anything machine-oriented, algorithmic and/or computational – other than tasks like reciting your times tables or other simple sums – I'd be very surprised if a human was faster than a machine at anything, so again, the comparison seems pointless to me.
You're right: strictly speaking, AI can't solve a Rubik's Cube; it needs manipulators for that, i.e. a robot. The AI then steers the robot, but saying a robot can solve the Rubik's Cube faster than a human is less spectacular, because it's like saying that a car runs faster than a human: big deal.
The same can be said about playing chess: actual chess computers can't play regular chess on regular chessboards. They don't have manipulators; they don't even have visual recognition of the chessboard. Human assistants do the moving, and the recognition is entirely absent. Deep Blue wouldn't have beaten Kasparov if there hadn't been a human around to help it, so strictly speaking computers still haven't beaten humans at chess.
That computers beat humans on the computer's playground is logical, but let them try to beat humans on the humans' playground. Next step: beat humans without an external power supply, running only on batteries, doing the visual recognition and the chessboard manipulation themselves. When "computers" can do that, then I'll accept that computers can beat humans at chess.
It's not at all like saying a car can run faster than a human – the point is being missed entirely. The difficulty in solving a Rubik's cube or playing chess isn't the physical effort required to rotate the blocks or move the pieces. The difficulty is in thinking (without using a dumb algorithm) about what to do next.
Would you say that playing a chess game on a computer screen where there are no physical pieces is not playing chess?
Actually, if you watch the top speedcubers, they are close to the limit of physical ability to manipulate the cube.
The single solve isn't as highly valued a prize as the average solve, though (the average of the middle three of five solves in a competition) – it's much less dependent on getting an 'easy' scramble.
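For anyone unfamiliar with the format, the competition "average of 5" mentioned above drops the best and worst of five solves and averages the middle three. A minimal sketch (function name is mine, not an official one):

```python
def average_of_five(times):
    """Competition-style 'average of 5': discard the best and the
    worst solve, then take the mean of the remaining three."""
    assert len(times) == 5
    middle_three = sorted(times)[1:4]
    return sum(middle_three) / 3
```

Which is exactly why one lucky 3.47s single doesn't move the average much.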
Be interesting to know which model of speed cube they used, and how they tuned it...
>It's not at all like saying a car can run faster than a human - the point is being missed entirely.
No you are missing the point.
A human has solved the Rubik's cube in 3.47 seconds, i.e. manipulated it back to the solved state. We are currently unable to time how long it takes a human to perform the same function as the AI, namely to work out the sequence of moves necessary without actually doing the manipulations. So to say the AI is faster than the human is comparing apples to oranges.
Fair point, reasonably explained.
The problem is that you started talking about cars vs running, and the physical actions of playing chess, which does not illustrate your point at all well. Not accepting that a computer can beat a human at chess because nobody has yet made it look like a humanoid and physically move pieces without being tethered to a power source has no bearing on the result.
It should have to imagine the concept of something to manipulate the physical cube with first and then design a suitable robotic version and build it with a 3D printer and finally physically manipulate the cube. Hell, it should also have to imagine and design the concept of a 3D printer and build that 3D printer with the first 3D printer.
This is EXACTLY the sort of problem computers are good for.
No AI can do most of the ordinary things a three year old can.
It was realised in the 1960s that there might be a problem with the very idea of AI and also traditional concepts of what are "hard" or "easy" problems.
Current AI is nothing of the sort. It's mostly various strategies of pattern MATCHING (not recognition) and human curated databases, or other "training" with human defined goals.
It's thus also no surprise that the current AI approach is worse at this WELL DEFINED problem than a dedicated algorithm, but is faster than humans. The speed is irrelevant though.
In the 1980s I argued that we didn't need faster more powerful computers to do AI. I argued that if we knew how to write such a program and the computer wasn't "powerful" enough it would just be very slow AI. In practice for most problems that don't involve real time (hitting a cricket ball or driving a car), the speed is irrelevant to being able to do it.
I wonder what the AI does if you hand it an impossible cube (e.g., two colours swapped). Does it continue indefinitely, or does it start by checking things such as parity to conclude it is impossible?
The latter would be more intelligent from an AI perspective, but that initial check would slow it down.
I would imagine an AI engine might not easily detect two colours being swapped without applying such a sanity test at the outset, whereas a human would be the reverse, i.e. get all the faces correct but for the swapped ones, using the Mk I eyeball to spot the inconsistency at the final twists of the cube.
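As a concrete sketch of the kind of parity check meant above: if you represent the cube's pieces as two permutations (corners and edges), one reachability condition is that the two permutation parities must agree, since every quarter turn is a 4-cycle on both. Swapping just two edge pieces breaks this, so the check catches it up front. (This is only one of the solvability conditions – a full test would also check corner twist and edge flip – and the representation here is my assumption, not anything from the article.)

```python
def parity(perm):
    """Parity of a permutation given as a list of target indices:
    0 for even, 1 for odd, via cycle decomposition."""
    seen = [False] * len(perm)
    transpositions = 0
    for start in range(len(perm)):
        i = start
        length = 0
        while not seen[i]:
            seen[i] = True
            i = perm[i]
            length += 1
        if length:
            transpositions += length - 1
    return transpositions % 2

def passes_permutation_check(corner_perm, edge_perm):
    """A cube state is reachable only if corner and edge permutation
    parities agree (each face turn flips both at once)."""
    return parity(corner_perm) == parity(edge_perm)
```

A cube with two edges swapped fails this in constant-ish time, so the "sanity test at the outset" would barely slow a solver down.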
All this misses the point about the physical world.
Show me a robot/AI system that can catch a ball whilst running. No such machine exists.
Human dexterity is woefully underrated by proponents of artificial intelligence. Many 'low skilled' manual jobs simply cannot be done by robots. Yet these are often the worst-paid jobs.
Silicon Valley nerds might want to try their hands at some of the menial tasks that factory workers perform with amazing speed, then try to make a machine to match them. I'm not holding my breath.
An intelligent human can be "used" for thousands upon thousands of "tricks". The current state of "AI" (in the digital sense of the term) is fast at solving extensive logical sequential problems (chess, Go, Rubik's cube) but seems much less effective when faced with ambiguity. Ambiguity (which is an ineradicable feature of life) is what clever humans are best at dealing with.
However not all humans are well equipped to handle it, and it seems that the reference model against which "AI" is being assessed is representative of the less well equipped humans. It's worth considering that handling ambiguity improves with practice, but as we increasingly hand it over to machines we progressively eliminate opportunities for that practice. Consequently we become less well equipped to do it ourselves. As succeeding generations of "AI" practitioners are drawn from the common population, it is entirely possible that, as the quality of the reference model declines, a limit will be reached in the performance of the "AI".
People laughed when the first clattery, wobbly motor vehicle made its entrance into the world; of course, it would never replace the horse.
People ridiculed the use of "portable" telephones when the first cumbersome and impractical bricks arrived.
People laughed at the idea that there would be a computer in every home, because who on earth would have a use for such a contraption?
So let's all ridicule AI – it will make for great reading in the future ;-)