# Wanna get started with practical AI? Check out this chap's Rubik's Cube solving neural-net code

The Rubik’s Cube is one of those toys that just won't go away. Solving it is either something you can do in minutes to impress, or something you find so hard that you end up using it as a paperweight. There are several algorithms for solving the classic cube, which has a whopping 43,252,003,274,489,856,000 – about 43 quintillion – possible …

1. #### Plateau

So if, after training on 10 moves, it plateaued in its ability to solve at around 6 to 7 moves, does it logically follow that training to a higher number of moves will improve its ability to solve from a higher number?

I'm not sure it does. If it can't solve reliably beyond 6 or 7, then what difference would training for 12 or 13 moves make over 10?

Also I don't know much about neural networks, is there a way to "deepen" the network, to allow more analysis or computation time, to improve reliability with further training?

1. #### Re: Plateau

The network is a simple one and its performance must hit a plateau somewhere. It was trained until it hit that plateau. Better performance would need a bigger, more complex net. The interesting part is whether that would benefit from deeper training, or whether it would pull more information from each training run and plateau just as quickly.
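For a feel of what "bigger, more complex" costs, here is a minimal sketch comparing parameter counts of fully-connected nets. The layer sizes are made up for illustration (a hypothetical 144-value cube encoding in, 18 possible face turns out); they are not from the article's network.

```python
def param_count(layer_sizes):
    """Weights plus biases for a fully-connected net with these layer sizes."""
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

baseline = param_count([144, 64, 18])      # one hidden layer
deeper   = param_count([144, 64, 64, 18])  # an extra hidden layer
wider    = param_count([144, 256, 18])     # a fatter hidden layer

print(baseline, deeper, wider)  # 10450 14610 41746
```

Whether the deeper or wider variant would actually push the plateau further out, rather than just plateau later for the same score, is exactly the empirical question raised above.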

1. #### Re: Plateau

It MAY not take a bigger, more complex net. One of the entertaining things about neural nets is that, once they get a bit complex, what they are 'up to' is hard to guess. We humans are lucky - we get a lot of our neural nets pre-sized and pre-weighted, and it only takes us forever to do things. Having one node too many, or too few, on an internal layer can make things nearly work really well after a couple of million training epochs, yet never hit the sweet spot required to go the full 10 moves - which could require billions of training epochs to get right.

Still waiting for it to work out how to peel the labels off.

1. #### Re: Still waiting for it to work out how to peel the labels off.

That's a very inelegant solution.

It's far easier to disassemble it and then re-assemble it.

It's more fun to disassemble it and put it back together with one square wrong, then shuffle it and hand it to somebody who claims to be good at solving them.

2. #### Re: Plateau

"We humans are lucky - we get a lot of our neural nets pre-sized and pre-weighted and it only takes us forever to do things"

That's not how ours work. ANNs of various stripes are an attempt to use a neuron-based model as a computer for complex tasks. However far the explanations for the various mechanics of our own neural networks expand, the end point is always a case of "we don't know" - PhDs and dragons be here. When I say "we don't understand", there generally is a much more detailed answer that I can't give :)

We are born with a brain consisting of assorted regions, divided up by the distribution of the various specialised cells. The neurons are the obvious candidates for much of the processing, and are conveniently huge in various species (such as squid), so they have also been the focus of much of the research. All your sensors, all your motor controls, active and passive, are all handled by the same type of cell, "wired up" differently for each purpose.

Most of the calories we consume are spent turning chemical energy into an electrical standing charge for our organic computer. We share many of the same brain regions with other species, but the size and density of the various cells vary. We certainly do not have the biggest or best brains by any measure. Size is not always better.

We grow our own brain based on the genetic patterns we have. Those neural connections continually strengthen or wither depending on how we use them; we start with many nodes but no edges. Fire a connection less often, and the neurons disconnect. Fire it more, and the connection strengthens, making it easier to fire. You lose roughly half your neurons between birth and the age of two as you figure out how to make the brain and body mesh and control various things. Your neurons will even migrate to other regions of the brain*, based on some mechanism we don't understand. You lose more as you grow older, but you'll notice that old people are still quite clever, despite not having as many neurons as the whippersnappers. It's also why you both learn and forget a lot during your childhood, and why small children effortlessly learn new languages.

The neurons connect, interact, grow and retract in a myriad of ways that are still not understood. The rest of the cells that maintain and support the neurons also play a part in communicating between them, again in ways that can be observed and modelled, but not fully understood. Even just how the eye works is a testament to evolution as an engineer, since it's a bit of your brain exposed to the world behind a few bits of skin, and some cosmetic bone with a hole for the data cable ;)

Neuroscience is fascinating. Meat computers are darn fine things; I'm sure there will always be a place for them, even when the robots rise :)

* bloody immigrants from the dorsal region, they should go back where they came from, taking our jobs etc etc

2. Coincidentally (I assume coincidentally; it may be that there is some great big Rubik's convergence happening that I am unaware of), my nephew has just done, or is doing, a project at university to solve a Rubik's cube with imaging, motors, a Pi, and Python.

The videos are very impressive...

1. The most amusing one I have seen is built from Lego.

3. #### Yeah right

Solving it is either something you can do in minutes to impress

Please. The world record is 4.69 seconds. The guy sitting next to me can solve it in 30 seconds. Using only his left hand. And he's not left-handed.

1. #### Re: Yeah right

But was that world record made from a blind start? That's the REAL test of a Rubik's Cube Solver. In fact, what about efficiency tests as well, to see it solved in the fewest possible moves?

2. #### Re: Yeah right

Most right-handed cubers solve the cube one-handed with their left hand. That's because the fastest two-handed algorithms for a righty use a lot of R turns. Those turns are hard to make with your right hand alone. So you either have to learn a whole lot of new algorithms, or use your left hand.

4. #### Exactly 54 moves

It always takes me exactly 54 moves. This is the number of coloured faces I have to peel off and re-stick.

1. #### Re: Exactly 54 moves

Surely at the very least you'd leave one starting square on?

1. #### Re: Exactly 54 moves

You can leave the six at the center of faces, they never move!

1. #### Re: Exactly 54 moves

"You can leave the six at the center of faces, they never move!"

Ah, that's why I like to swap them first. They don't move, but they clearly start on the wrong face :)

5. Everything wrong with AI / machine learning in a nutshell.

A simple task, one that a child can do, constrained to the bare minimum of logical processes necessary (i.e. no actual movement required, just literally "rotate row A then column B"), and after immense training it plateaus to the point of uselessness before you're even six moves away.

Tell me... what does a Rubik's cube six moves from completion look like? I guarantee you that it looks "almost done".

And then it's not reliable (only 75% solution rate) and doesn't scale (or they'd run it for longer to improve that reliability / number of moves).

Pretty much this is where AI is. Let's throw data at something acting randomly, wait until we've culled anything not resulting in success, then claim it's "intelligent" even when it can't then do six moves to complete a cube.

1. 6 moves is about 34 million possible configurations. A lot of humans would intuit that it was nearly done, but most would fail to restore it in 6 moves. I doubt such a simple neural net would notice many (or any) of the symmetries that simplify the problem space either - things human solvers learn quickly.

As a way to achieve perfect move counts it's a dead end without some help with those symmetries.
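The 34 million figure above is simple arithmetic: 6 faces, each turnable 3 ways (clockwise, anticlockwise, half turn), gives 18 possible moves per step. A quick check in Python:

```python
MOVES_PER_STEP = 6 * 3          # 6 faces x 3 turn types = 18 face turns

naive = MOVES_PER_STEP ** 6     # all 6-move sequences
print(naive)                    # 34012224, about 34 million

# A cheap pruning - never turn the same face twice in a row - already
# shrinks that: 18 choices for the first move, then 15 for each after.
pruned = 18 * 15 ** 5
print(pruned)                   # 13668750
```

Note this counts move *sequences*, not distinct cube states; many sequences reach the same position, which is one of the symmetries mentioned above.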

1. This is the real issue. Show me a neural net that derives a general rule for swapping two edge pairs, and from that how to twist two corners and I will be impressed. (This is what I did, by the way, as directed by the SciAm article.)

Until then, it looks to me like a fancied-up compression algorithm.

2. Chess has 9+ million positions after only three moves each (and the initial starting position is pretty restricted in movement, so mid-game it quickly exceeds 34 million for the same number of moves).

It's still not very difficult to think three moves ahead, however. And it's much quicker for a COMPUTER to literally iterate over 34 million moves than it is to "guess" at 75% accuracy.
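Literally iterating those sequences is easy to write down. A sketch of the enumeration only, in standard face-turn notation, pruning consecutive turns of the same face (actually testing each sequence would need a cube model, omitted here):

```python
FACES = "UDLRFB"
TURNS = ["", "'", "2"]
MOVES = [f + t for f in FACES for t in TURNS]  # the 18 face turns

def sequences(depth, prefix=()):
    """Yield all move sequences of the given depth, skipping consecutive
    turns of the same face (R followed by R' or R2 is redundant)."""
    if depth == 0:
        yield prefix
        return
    for m in MOVES:
        if prefix and m[0] == prefix[-1][0]:
            continue  # same face twice in a row
        yield from sequences(depth - 1, prefix + (m,))

print(sum(1 for _ in sequences(3)))  # 4050 = 18 * 15 * 15
```

At depth 6 this generator walks about 13.7 million sequences, comfortably within brute-force range for a computer.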

6. I totally agree! A wasted exercise.

7. #### Amazing

Seeing as the system has to be told the moves that were made to scramble it, I too would be able to write a program that used this information to "solve" it - by reversing those moves. Does this now qualify me as an expert in "Artificial Intelligence", or is this just another example of the worthless hype trying to equate algorithms with sentience?
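For what it's worth, that reverse-the-scramble program really is a few lines. A sketch in standard cube notation (R = clockwise quarter turn of the right face, R' = anticlockwise, R2 = half turn):

```python
def invert_move(move):
    """R -> R', R' -> R, R2 -> R2 (a half turn is its own inverse)."""
    if move.endswith("'"):
        return move[:-1]
    if move.endswith("2"):
        return move
    return move + "'"

def solution_from_scramble(scramble):
    """Undo a known scramble: invert each move, applied in reverse order."""
    return [invert_move(m) for m in reversed(scramble)]

print(solution_from_scramble(["R", "U'", "F2"]))  # ['F2', 'U', "R'"]
```

Which is rather the commenter's point: if the scramble is known, no intelligence, artificial or otherwise, is required.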

1. #### Re: Amazing

Seeing as the system has to be told the moves that were made to scramble it

I initially started reading it like that, but I believe that's only during training, which seems fair enough.

My skill level is at solving a one-turn manipulation.

2. #### Re: Amazing

"Seeing as the system has to be told the moves that were made to scramble it, I too would be able to write a program that used this information to "solve" it - by reversing those moves. Does this now qualify me as an expert in "Artificial Intelligence", or is this just another example of the worthless hype trying to equate algorithms with sentience?"

You're saying this as if a human who's never seen the Rubik's Cube before can come across a scrambled cube and, completely unprompted, can figure out the purpose AND solve it. As most things go, even humans need directions.

1. #### Re: Amazing

"You're saying this as if a human who's never seen the Rubik's Cube before can come across a scrambled cube and, completely unprompted, can figure out the purpose AND solve it. As most things go, even humans need directions."

My memory of the original cube craze is rather dim, but I'm pretty sure that 99% of the population *did* immediately figure out the purpose. Obviously only a far smaller number actually solved it, but *some* did and I see no reason to let the machines have a lower bar.

1. #### Re: Amazing

They aren't. Most people figured out the Cube by watching other people. And those who didn't usually started with a solved cube (last I checked, Cubes are sold in a solved state) and just played around with it. Like what you see here.

1. #### Re: Amazing

Exactly.

I solved the cube myself, back in the eighties, with a pretty damn inefficient set of algorithms (which I still remember, btw.) But I did a lot of the work by taking the cube to pieces, putting it back solved, and working from that.

Don't think that's all that different from what this AI is doing.

8. #### Starting Out

Many of these posts could be slightly modified and used in explaining why

print("hello world")

is a useless exercise.

Don't we have to start somewhere?

1. #### Re: Starting Out

You don't generally make press releases out of "hello world". More than that, the issue is that the approach is in the wrong direction. It's like someone learned the secret of turning it off and back on again and announced to the company that they were now a computer expert. Sure, they can solve a bunch of trivial problems this way, but the approach is so limited that it should not count for anything.

1. #### Re: Starting Out

Huh. Checking back to see if I had missed any responses, and I really like this one. So I went for the upvote.
