Reply to post: Real AI

Can you get from 'dog' to 'car' with one pixel? Japanese AI boffins can


Ohhhh, is that a yellow Labrador up there? Idiots. I know what the issue is: free-floating datasets. I don't know if they have figured out how neural nets work yet (hint: that's a good reason to avoid using them until you know what's happening. Anybody seen this in science fiction movies?).

Now, in normal intelligence, data is anchored and cross-linked between anchored points, represented by structures in the brain. In a naive neural mesh there are none of these structures. So, it should be possible to change one value and have that change produce a wider change, as has happened here. The AI naively understands the data as a blob, and the blob can change and free-form move around under influence. Hence, the data is not really a horse or a car as we understand it. It may start as image 1 and image 2. Over time, with training, it may have parts that form an identifiable subset which it can identify with, in broad terms. The sets are not set in stone, like visual and category systems are, and are therefore malleable. The rules themselves are malleable, as they have no bounds. You are dealing with something more open than a 1.5 year old; even if you can train it up to a six year old's level, some simple coaxing produces undesirable outcomes.
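A toy sketch of the point above (my own illustration, not the researchers' actual differential-evolution attack): when the "blob" of weights is free-floating with no bounds, nothing stops a single input value from dominating the decision, so one pixel can swing the label.

```python
import numpy as np

# Naive linear classifier over a flattened 4x4 "image".
# Class 0 = "dog", class 1 = "car". The weights are unconstrained,
# so one pixel (flat index 5) carries overwhelming weight for "car".
W = np.zeros((2, 16))
W[0, 0] = 1.0    # "dog" responds mildly to pixel 0
W[1, 5] = 10.0   # "car" responds massively to pixel 5 -- no anchoring

def classify(img):
    return int(np.argmax(W @ img.ravel()))

img = np.zeros((4, 4))
img[0, 0] = 1.0
before = classify(img)   # 0 -> "dog"

img[1, 1] = 1.0          # change ONE pixel (flat index 1*4+1 = 5)
after = classify(img)    # 1 -> "car"
print(before, after)     # prints: 0 1
```

A real convolutional net is far more complex, but the failure mode is the same shape: no separate system cross-checks the verdict, so a single value can tip the whole blob.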

In the human brain, you have various systems to be anchored to. The visual system, at a low level (I have experienced this), understands images as various geometric shapes, builds up on top of that, and fills in with shade, texture and detail. Each of these things might have its own unique physical anchor point to form a category that can be cross-linked with others, and also hierarchically linked with others. The strength of the bonds forms shapes, structures, and structure links/pathways into different systems in the brain. A single-point change to an image does not usually result in the object changing category, only changes in detail in related categories. The mind has proof from multiple anchored categories that it is a car, and it remains a car. You would have to retrain (brainwash, basically) to convince somebody it isn't a car. The brain sees the shape sub-characteristics and functional mechanics, which prove it is a car or a dog. A single pixel may change, but the mind sees a bonnet, windows, doors, a roof and wheels to turn; or eyes, a snout like a dog's, a head like a dog's, legs to move and a tail to wag, like a dog; so it's probably a dog that has a funny-looking pixel (in this case, a car with a funny-looking pixel). This is because the category sets are bounded, anchored and cross-linked ("reminds me of..."). To get this happening in computer terms, there need to be logical, reality-based separate systems to anchor to, sets and hierarchical data subsets, and discrete separate spaces/anchor sets. This can be done in software.
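How that could look in software, in the crudest possible terms (this is my assumption of the wiring, not any real AI framework): several independent, anchored subsystems each vote on the category, so a perturbation that corrupts one cue gets outvoted by the rest.

```python
from collections import Counter

# Three separate "anchored" subsystems, each judging a different cue.
# A feature dict stands in for what real detectors would extract.
def outline_vote(f):
    return "car" if f["outline"] == "boxy" else "dog"

def parts_vote(f):
    return "car" if "wheels" in f["parts"] else "dog"

def texture_vote(f):
    return "car" if f["texture"] == "metal" else "dog"

def classify(features):
    # Cross-linked verdict: majority vote across independent systems.
    votes = [outline_vote(features), parts_vote(features),
             texture_vote(features)]
    return Counter(votes).most_common(1)[0][0]

car = {"outline": "boxy", "parts": {"wheels", "doors"},
       "texture": "metal"}
print(classify(car))        # prints: car

# Corrupt a single cue -- the equivalent of one funny-looking pixel.
car["texture"] = "fur"
print(classify(car))        # prints: car  (the other anchors hold)
```

The corrupted cue changes one vote, but the mind, like this toy, still sees the bonnet, doors and wheels, and the category holds.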

This comes from some stuff I've come up with unrelated to AI, and some AI stuff I came up with in primary school to emulate the human mind. My proposed model of human thought also matches subsequent research.

The above will probably help. As I said on an article about Google claiming their search AI is up to the level of a six year old: in recent years my search results look like they were given by a six year old. So the industry needs the help, I fear.
