The only way to win...
is not to play. $3m is quite a loss.
DeepMind’s AlphaStar AI bot has reached Grandmaster level at StarCraft II, a popular battle strategy computer game, after ranking within the top 0.15 per cent of players in an online league. StarCraft II is a complex game and has a massive following with its own annual professional tournament - StarCraft II World Championship …
"$3m is quite a loss."
Indeed. One of the oft-overlooked facets of how absolutely awesome the human brain is, is how little power it consumes for the result. Average adult power consumption is about 100W for the whole body, of which 20% (20W) goes to the brain.
That's comparable to a not-so-fast laptop processor. If you had to run your game-playing AI on a laptop, a 10-year-old would wipe the floor with it every time.
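The arithmetic behind that comparison is simple enough to sketch. The 100W and 20% figures come from the comment above; the laptop TDP is an assumed typical value, not a measurement:

```python
# Back-of-the-envelope power comparison (all figures approximate).
# Body power and brain fraction are from the comment above; the laptop
# CPU TDP is an assumed typical value for a mid-range processor.
BODY_POWER_W = 100        # average adult whole-body power consumption
BRAIN_FRACTION = 0.20     # share of that budget used by the brain

brain_power_w = BODY_POWER_W * BRAIN_FRACTION

LAPTOP_CPU_TDP_W = 25     # assumed TDP of a not-so-fast laptop processor

print(f"Human brain: ~{brain_power_w:.0f} W")   # ~20 W
print(f"Laptop CPU:  ~{LAPTOP_CPU_TDP_W} W")
```

The point being that the brain's entire power budget sits in the same ballpark as a single laptop chip, never mind a training cluster.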
"We've yet to see any research or evidence that the strategies learned from a domain like StarCraft can be applied in the real world, though."
That prompts a question: after an AI learns how to play really really well, can it teach how to play?
I know that some AIs have been able to develop mathematical proofs, sometimes even novel ones, that can then be expressed succinctly in understandable form. But how/when will we have AIs that, having demonstrated proficiency in less rigidly specified domains, then will be able to impart useful guidance to the rest of us sentients?
Or are we going to end up with a bunch of passive-aggressive AIs saying "If you have to ask, I'm not going to tell you!"
That prompts a question: after an AI learns how to play really really well, can it teach how to play? Or are we going to end up with a bunch of passive-aggressive AIs saying "If you have to ask, I'm not going to tell you!"? ..... Notas Badoff
The answers to those two questions are a certain yes and a definitive no, Notas Badoff.
1. It all depends on the bias on the basis of which a particular AI is created. Since an AI is built from texts, which always carry some bias, the AI may well answer "If you have to ask, I'm not going to tell you!"
For example, I tried to ask an AI, created on the basis of Dostoevsky's books, about Fyodor's participation in a revolutionary organization. Dostoevsky, very emotionally, refused to speak on the subject. So you may suffer the same fate speaking with an AI.
2. An AI answers questions; that is what it has done since it originated as a result of my involvement in the NIST TREC QA track. Your desire to learn how to play is a series of answers to your questions. So the answer is "Yes!", an AI can teach you to play.
There's more detail in this post from the creators, but to summarise: this new version was playing using just a screen input (i.e. the same as a human player, with no special access). It could play as any of the three races, against any other, and was playing on the same maps as human players in standard ranked matches.
Basically it was as much a straight contest between human and AI players as you could hope for.
...it was as much a straight contest between human and AI players as you could hope for...
If the texts are structured into many synonymous clusters, they become a replacement for human-written programs. And then AI can really become a serious adversary, having them at hand. Especially if there is a set of texts, articles, comments, posts, etc., which contain additional information and can be used for further machine learning.
DeepMind and Google use the same technology for their driverless Waymo cars.
Sure, if I was paid to make a computer play StarCraft II, I would also think it's worth it.
So, what domain do we have in real life that could possibly benefit from this experience? What domain has constantly changing variables that require the intuition of experience in order to not get blinded by the sheer amount of data, and to cut to the right solution in as short a time as possible?
Maybe Wall Street trading, or eventually weather forecasting, but we already have massive computers that handle that (albeit not always very well). Anything else that we humans can do happens at human speed and we're better equipped to handle it than a 3K core cloud computer.
The most likely real world use for this will always be the military first and foremost, they have the money (and if they haven't they seem to print it) and are always looking for an edge.
The fun will begin when they are playing a 'War Games' scenario and the black box decides the end of the game is a strategy and nukes the operators.
Searching for information: that's where there are "constantly changing variables that require the intuition of experience in order to not get blinded by the sheer amount of data and cut to the right solution in as short a time as possible."
I have a very good reason to suspect that DeepMind educates its computer using textual annotations; that is, using labels DeepMind both marks and comments on successful and not-so-strong moves, finds what is required and helps it to win. So far I've not once met any detailed description of DeepMind's technology, only the most general words and meaningful winks. For example, I feel that Google uses DeepMind as a cover, masking its developments in the field of AI, particularly the field of text structuring and AI-indexing.
For example, I feel that Google uses DeepMind as a cover, masking its developments in the field of AI, particularly the field of text structuring and AI-indexing. .....Il'Geller
What you have to be prepared for whenever deep into AI Research and Dark Web Development are Frankenstein monsters and Mr Hyde iterations subverting and perverting future eventual operations/media-hosted virtualised reality programs, and the following tale with many trails is a real doozy ..... Why are Google meddling in journalism?
However .... Google’s ability to literally alter reality with the manipulation of its algorithms represents one of the deepest threats to democracy today, and must be challenged. ..... is as a gnat on the hide of the bull elephant in a china shop that is ... our ability to literally alter reality with the manipulation of algorithms and main streaming media presentation of fundamentally different engaging novel outcomes representing great treats for humanity today, .... and certainly should be encouraged and supported.
The proofs which will determine which paths have been chosen are always self evident in the future content to be presented and virtually realised?
I believe there is a previous tool available for retraining people who can't seem to resist doing something they deem "tedious". The clue hammer.
A gentle tap with the clue hammer installs a clue, enlightening the recipient and allowing them to see the value of their own choices. For more persistent issues, repeatedly applying the clue hammer and then using its helper tool, "the shovel", ensures that the solution is applicable to all cases.
No AI needed.
Although I have to disagree with you there, I readily admit that the early stages of any StarCraft game (just like the early stages of any Dune, Age of Empires, Warcraft, you-name-it game) can be exceptionally tedious if you decide to play it safe and build up your troops and resources before you try anything, or suicidally exciting if you decide you're bored and go off for a bit of a skirmish before you're ready.
"We've yet to see any research or evidence that the strategies learned from a domain like StarCraft can be applied in the real world, though."
- DeepMind machine learning technology uses a database, where DeepMind stores its strategies.
- Indeed, DeepMind strategies must be saved somewhere, mustn't they?
- But any database is a collection of data organized especially for rapid search and retrieval.
- Thus DeepMind strategies must inevitably be somehow indexed (in order for them to be found).
- So, how does DeepMind index its database?
- Google, the owner of DeepMind, indexes by textual patterns (for Waymo): "Those images with vehicles, pedestrians, cyclists, and signage have been carefully labeled, presenting a total of 12 million 3D labels and 1.2 million 2D labels"; where these "labels" are texts.
- Google also said: "Google Introduces Huge Universal Language Translation Model: 103 Languages Trained on Over 25 Billion Examples" - Google trains its models using texts.
- Therefore I can assume that DeepMind uses text-tagged strategies when playing its games, and saves these strategies using some textual labels.
- Then "the strategies learned from a domain like StarCraft can be applied in the real world" if they boil down to textual retrieval.
- Indeed, DeepMind is trying its hand at medical (textual) search.
Using text labels (to index DeepMind's strategies in games), that is, marking and commenting on successful and not-so-strong moves, DeepMind must inevitably index not only the whole text patterns, but also the words that make them up.
Indeed, time and accuracy are absolutely decisive in any game (not to mention a driverless car), and therefore the problem of the uniqueness of patterns and how well they convey meanings becomes an absolute imperative (I'd say, a must). Summarising, I assume that DeepMind indexes according to the unique dictionary definitions of the patterns' words, not the patterns as a whole.
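None of this is confirmed anywhere, but the scheme being speculated about here, strategies stored in a database and retrieved via the words of their text labels, would in its simplest form be an ordinary inverted index. A toy sketch with entirely hypothetical labels, nothing to do with DeepMind's actual code:

```python
# Toy sketch of the speculated scheme: strategies stored in a database,
# indexed by the individual words of their text labels (an inverted index).
# Purely illustrative -- the labels and IDs are made up.
from collections import defaultdict

strategies = {
    "s1": "early zergling rush on two bases",
    "s2": "defensive turtle into late-game carriers",
    "s3": "two-base timing attack with stalkers",
}

index = defaultdict(set)
for strategy_id, label in strategies.items():
    for word in label.split():
        index[word].add(strategy_id)

def lookup(query):
    """Return IDs of strategies whose labels share a word with the query."""
    hits = set()
    for word in query.split():
        hits |= index.get(word, set())
    return sorted(hits)

print(lookup("stalkers rush"))  # -> ['s1', 's3']
```

Whether anything like this sits under AlphaStar is pure speculation, but it shows why text labels make stored strategies cheap to find: lookup is per-word, not per-pattern.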
However, I found not a word on the actual technology DeepMind has... Thus I can only speculate.
However, I found not a word on the actual technology DeepMind has... Thus I can only speculate. ...... Il'Geller
I think we can safely assume that DeepMind research and commissions are top secret super sensitive, given what is being discovered/uncovered/recovered. And it must be so difficult to not abuse it to make killings on markets and profit from fortunes.
And is DeepMind actually a technology, or much more a methodology employing engaging algorithms?
I think everything is much simpler: the technology (used by DeepMind) does not belong to either Google or DeepMind. And secondly, this technology destroys the main Google business, because unlike what Google does now it brings extraordinary accuracy in finding information, while Google now doesn't really search and relies on espionage and theft. That is, Google is afraid of losing its own business by switching to a new technology, at a time when the old one still brings it billions.
Do you see how the new works? Re-read the article. And Google continues to spy...
DeepMind reckons the whole effort is worth it, however, as teaching machine-learning models to master a difficult game like StarCraft could help computers in real-world scenarios, where they have to make use of “limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales.”
That is all very well, but there always remains the probability, and therefore the distinct possibility and every likelihood, that unless autonomous, such dynamic and difficult decisions will be ignored and overridden by a less than stellarly intelligent human command and control chain.
It is the gift that keeps on giving Madness and Mayhem, Conflict and CHAOS to do Battle and Lose Against.
Does DeepMind also have the thermonuclear war simulator game at its disposal?
The bigger question is: Does DeepMind have the response "Strange Game. The only way to win is not to play" built in, at priority A One One!11!, with flashing lights and a positive feedback loop to give it extra chunky electrons as a reward whenever it chooses that response?
asking for a friend
The article forgot to mention that this version was SEVERELY handicapped in its I/O so it would feel more like a "regular" pro player. Anyone who saw the first version will remember the insane stalker micro that made mincemeat out of everything in seemingly "no win" scenarios.

Case in point: where AI will win hands down (StarCraft or anything else) is at the point where it needs to juggle hundreds of I/Os simultaneously. A fair example would be military hierarchies of unit control. You control top-down at a macro level, and as you go down, it's more and more "everyone takes care of himself". An AI WILL control every unit at the unit level AND coordinate it with every other unit's effort.

From the StarCraft perspective, AlphaStar is just the best micro grandmasters rolled into one, but at a level no human can achieve. From a broader perspective, it's basically a fully System 2 "brain" (read "Thinking, Fast and Slow" by Daniel Kahneman) without "intuition", much less bias and much less error-prone (after it's trained, that is...).
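That macro-versus-micro contrast can be sketched in a few lines: a human issues one order for a whole control group, while a machine can compute a distinct order per unit on every game tick. A toy illustration with made-up units, not AlphaStar's real action interface:

```python
# Toy contrast between group-level (human-style) and per-unit (machine-style)
# unit control. All units, coordinates, and orders are made up.
units = [{"id": i, "x": float(i), "y": 0.0} for i in range(100)]
target = (50.0, 50.0)

def macro_order(units, target):
    """Human-style: one order, every unit gets the same destination."""
    return [(u["id"], target) for u in units]

def micro_orders(units, target):
    """Machine-style: a distinct destination per unit (here, a 10x10
    spread formation around the target), recomputed each tick."""
    tx, ty = target
    return [(u["id"], (tx + (u["id"] % 10) - 5, ty + u["id"] // 10 - 5))
            for u in units]

# A human manages a handful of such orders per second; the machine
# version issues one per unit per tick with no extra effort.
print(len(micro_orders(units, target)))  # 100 distinct orders
```

The interesting part isn't the code, it's the scaling: per-unit control costs a human attention that grows with army size, while for the machine it's just a bigger loop.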
Funny enough, when self-driving cars reach their "ready to market" point, they will "end" human drivers, because at that point they will be more efficient (wear and tear, fuel consumption, etc) and human drivers will be seen as a liability, as we cause more accidents per minute than AIs will cause per decade...
Anyone that saw the first version will remember the insane stalker micro that made mincemeat out of everything in seemingly "no win" scenarios. .... Nuno trancoso
Where stalks the insane micro now with Overall System Controls at Hand to Command and Create with CHAOS ...... Clouds Hosting Advanced Operating Systems?
What do you want? Heavens or Hell, if they be the only two choices offered at this Staging Post and AI Launch Pad‽
The problematic opportunity which always presents itself to Google-type Search Engine Operations is the relentless morph and rapid drift towards Particular Proprietary Intellectual Property Product Placement Presentations rather than just as a Remote Internet Networking Supply Centre providing access to locations with provisions for trading/exchanging/seeding and feeding .......... in Order to Surprisingly Swiftly Realise the Greater Desired Views for Future Populations/Advanced TerraPhormations.
Such would then have that sort of Google-type Search Engine Operation fully liable for everything that then follows and ensues. Fully accountable and fully responsible for all that is good and bad.
Hmmm?. That is an Almighty Duty to Perform with the Immaculate Help of Angels Invested.
Bravo, Google/DeepMind. Nice one. Have a well deserved beer.