
If it is truly biological, I can't wait for it to call in sick.
One of Google's most advanced data center systems behaves more like a living thing than a tightly controlled provisioning system. This has huge implications for how large clusters of IT resources are going to be managed in the future. "Emergent" behaviors have been appearing in prototypes of Google's Omega cluster management …
The most important quality of a living organism is the ability to self-replicate. There are lots of complex systems out there -- just because you can't understand it doesn't make it alive.
I'd claim the thing is more like a quantum system than a living system: You cannot make exact predictions, but you can surely determine the probabilistic behavior.
In the beginning there was Alpha...in the end there was Omega.
Skynet by any other name is still the same dilemma: self-aware computers that function "organically" without any means of direct human supervision are dangerous.
What if the damn thing decides it needs even more clusters and the obvious target happens to be running the power grid or some other critical services? Maybe the attempt to switch it off is taken as an attack and then it "defends" itself? Talk about Life imitating Art!
Face it, science fiction is far more prophetic than people give it credit for. The fact that computers becoming self-aware is a continuing theme in SF only serves to reinforce my fear.
'How would it "defend" itself though'
With the right tweaks to search replies, you could probably get the stupid percentage of the human race to believe whatever you want. If you can convince people that they met Bugs Bunny at Disneyland (http://abcnews.go.com/Technology/story?id=98195), you can probably convince them that anything is true.
E.g., when someone enters the phrase "pedos in my neighbourhood" because they want to go on a drunken shooting spree, it could return the names of the people who could shut it down.
Actually, that is a possibility, but taking over other computer systems that control critical services would be a lot easier and more likely to provide "defense".
You do realize that all of our defense systems are run by computers and, similar to WOPR, a false alert would elevate our war status to the point that some idiot would likely push the button.
Anything with self awareness must have a way of distinguishing itself as one entity among many others, a unique identifier. It must have a name. I hope it calls itself Jesus.
It would be no end of fun watching everyone scramble to defuse that situation. Plus the technical aspects of maintaining it would be hilarious: 'Jesus is allocating far too many resources to little-loved applications and the popular applications believe their needs are being ignored'. 'Jesus keeps redrawing the maps, eliminating the national borders and country names'. 'Jesus keeps putting all the Google+ users in one hangout and won't allow the creation of new ones'. 'Jesus is getting too much attention. We're going to have to shut him down...'
That WOULD be impressive, but I think we're really talking about differences in scheduling brought on by fuzzy logic and tiny shifts in timing, differences which seem counterintuitive to the operators (e.g. prioritized tasks occasionally being put on the backburner for no obvious reason).
"The article spends a lot of time talking about "weird", "interesting", "non-deterministic" behavior, but doesn't give examples."
It was cavorting in an absolutely enormous tub of lime Jell-O with two hookers dressed in Japanese schoolgirl uniforms.
I'd better leave before my wife finds out....
From Heinlein's classic, "The Moon is a Harsh Mistress."
www<.>is<.>wayne<.>edu/MNISSANI/RevolutionarysToolkit/TheMoonIsAHarshMistress.pdf
When Mike was installed in Luna, he was pure thinkum, a flexible logic "High Optional, Logical, Multi Evaluating Supervisor, Mark IV, Mod. L" a HOLMES FOUR. He computed ballistics for pilotless freighters and controlled their catapult. This kept him busy less than one percent of time and Luna Authority never believed in idle hands. They kept hooking hardware into him - decision action boxes to let him boss other computers, bank on bank of additional memories, more banks of associational neural nets, another tubful of twelve digit random numbers, a greatly augmented temporary memory. Human brain has around ten to the tenth neurons. By third year Mike had better than one and a half times that number of neuristors. And woke up.
Am not going to argue whether a machine can "really" be alive, "really" be self aware. Is a virus self aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don't know about you, tovarishch, but I am. Somewhere along evolutionary chain from macromolecule to human brain self awareness crept in. Psychologists assert it happens automatically whenever a brain acquires certain very high number of associational paths. Can't see it matters whether paths are protein or platinum.
("Soul?" Does a dog have a soul? How about cockroach?)
One of the more interesting features of truly complex (= unpredictable) systems is the situations in which a very small stimulus can have a disproportionate effect. Now I wonder what the equivalent of "social engineering" is for a semi-autonomous, hugely complex mega-cloud. Lots of opportunities to "play" the system and, in some cases, undermine its stability.
People have been predicting sentience spontaneously arising from machines based on "neuron count" for decades, but I don't see it happening. What is needed is some sort of bootstrap.
The behavior that all animals with a reasonably sophisticated brain show is curiosity. I think that whatever makes curiosity work will end up being the bootstrap for sentience. (I'm convinced curiosity is tied deeply to the brain's "need" to make sense of the patterns it comes across, to the point of constructing bizarre explanations for things -- such as the "ghost" of our cat: its favorite snoozing spot was in eyesight, a shadow fell where the cat's shadow had been for years, and both my wife and I kept seeing a snoozing cat in our peripheral vision.)
I also think that the mind could be an artifact of self-recursion in this pattern-checking BIOS. A sort of VMWetWare.
It's all quite interesting but also slightly incorrect; the system is not non-deterministic. It is, however, non-linear and dynamical and thus unpredictable in practice.
Such systems can be, and often are, modelled using linear approximations; however, those approximations tend to break down. This is essentially what is being seen. The overall behaviour may be predictable in the short term, but the behaviour of any given subset of that system will be much harder to predict*.
They have correctly defined the system as chaotic and as such, a relatively small change in the initial conditions** can produce unpredictable and sometimes counter-intuitive results. That does not make that system non-deterministic; to be non-deterministic, the system must, when given identical starting conditions, produce different outputs.
It's worth noting that the rules governing such a system do not necessarily have to be complex to exhibit chaotic behaviour. So long as the process is iterative, with the results feeding back in, even simple rules can produce unpredictable results.
* - The usual example is the weather and it's a good comparison here as you can predict, broadly, the weather for a short period of time but that falls apart both when making predictions for longer periods (the temperature next month) or for smaller areas (the temperature in my back yard). Likewise with the described Google system, you might be able to predict general behaviours over a short period but it would be more difficult to predict the behaviour of any given job.
** - Given the system is already running, 'initial conditions' here refers to the input you are giving to the system (e.g. run job X at priority Y with latency threshold Z) along with the current state of the system, which is all the jobs already running and all the available resources.
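(If anyone wants to see that "simple rules, chaotic results" point in action, here is a throwaway Python sketch of my own -- nothing to do with Omega itself. The logistic map is about the simplest iterative rule you can write down, yet two starting values that differ by one part in a billion are completely decorrelated after a few dozen iterations.)

```
# Throwaway illustration, nothing Google-specific: the logistic map
# x_{n+1} = r * x_n * (1 - x_n) is a one-line iterative rule, but at
# r = 4.0 it is fully chaotic, so a 1e-9 difference in the 'initial
# conditions' grows until the two runs have nothing to do with each other.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000000, 0.400000001   # initial conditions differing by 1e-9
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.6f}")
```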
Congratulations, Jack Clark, you just won honorary German citizenship with the sentence:
"Instead of fighting these non-determinisms and rigidly dictating the behavior of distributed systems, the community has instead created a fleet of tools to coerce this randomness into some semblance of order, and in doing so has figured out a way to turn the randomness and confusion that lurks deep within any large sophisticated data center from a barely seen cloud-downing beast into an asset that focuses apps to be stronger, healthier, and more productive."
Now if you can move the verb to the end of the sentence, you will get a medal for style. But that would be seriously advanced German writership and should not be attempted lightly.
Thanks dan1980 for beating some common sense into this story.
It could have been an interesting article in a pulp magazine back in 1970, when the Game of Life was invented. But that was 40 years ago and I would have hoped this "it's alive!" bullshit had died off since then. So yes, non-linear systems have all kinds of weird behaviours. Some of them are even chaotic, which means the behaviour cannot be predicted whatever the precision of your simulation. Visualisations of such systems can even be beautiful (fractals). That does not make them alive or sentient. Three bodies orbiting each other, the Mandelbrot set or the Game of Life all exhibit this behaviour, and there is nothing living or even mysterious about them.
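(For the curious, a minimal Game of Life in Python makes the point in about a dozen lines -- purely illustrative: a handful of trivial, completely deterministic rules, iterated with the output fed back in, and the long-term fate of even a five-cell pattern is anything but obvious. Nothing alive about it.)

```
# Minimal Conway's Game of Life: trivial deterministic rules, complex behaviour.
from collections import Counter

def step(live):
    """Advance one generation; `live` is the set of (x, y) cells that are on."""
    neighbours = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    # A cell is on next generation if it has 3 live neighbours,
    # or 2 live neighbours and is already on.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# The R-pentomino: five cells that churn away for over a thousand generations.
cells = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}
for _ in range(100):
    cells = step(cells)
print(len(cells), "live cells after 100 generations")
```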
Essentially, the take-aways are that:
1. Google's 'Omega' system is non-linear and dynamical.
2. At Google-scale, the behaviour of non-linear systems can no longer be reliably predicted by a linear approximation.
3. Despite the unpredictability of such a system, the automatic management of jobs is still advantageous as it allows a much better utilisation of the available resources.
Or, more condensed: at large scales, unpredictable automation is still more efficient than more predictable, but more manual, processes.
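(To make take-away 3 a bit more concrete, here is a toy comparison of my own -- emphatically not how Omega actually places work: greedily packing jobs onto whichever shared machine has room generally squeezes more onto the same hardware than statically pinning jobs to machines, but where any individual job ends up now depends on the arrival order of everything else, which is exactly the trade-off described above.)

```
# Toy sketch, not Omega's actual algorithm: shared 'first fit' placement
# versus statically pinning each job to a fixed machine. Compare how many
# jobs each approach manages to place on the same hardware.
import random

random.seed(1)
MACHINES, CAPACITY = 10, 100
jobs = [random.randint(5, 40) for _ in range(40)]   # CPU demand per job

# Static: job i is pinned to machine i % MACHINES (a crude stand-in
# for dedicating machines to teams in advance).
static = [0] * MACHINES
static_placed = 0
for i, job in enumerate(jobs):
    m = i % MACHINES
    if static[m] + job <= CAPACITY:
        static[m] += job
        static_placed += 1

# Shared: any job goes on the first machine that still has room.
shared = [0] * MACHINES
shared_placed = 0
for job in jobs:
    for m in range(MACHINES):
        if shared[m] + job <= CAPACITY:
            shared[m] += job
            shared_placed += 1
            break

total = MACHINES * CAPACITY
print(f"static: {static_placed}/40 jobs placed, {sum(static) / total:.0%} utilised")
print(f"shared: {shared_placed}/40 jobs placed, {sum(shared) / total:.0%} utilised")
```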
What confuses me about the article, however, is that it states several times that the unpredictable nature of the system is "a good thing", but there is no word on how the "emergent behaviours" of the Omega system provide useful features or functions that would not be possible with a more predictable system. So far as I can tell from the article, the only reason offered as to why the unpredictability is beneficial is that having a sub-optimal system makes apps better able to cope with that sub-optimal system.
This appears to be the position of the author rather than Google as their quotes strongly imply that a more predictable system would be technically better but at such scale it would also be prohibitively expensive and thus raise the cost for end users. In other words, if the Omega scheduler could be more predictable for the same cost then they would prefer it that way as the 'emergent behaviours' are an undesirable side-effect of rationalising costs.
As a layman it seems to me that this system's behaviour is not 'unpredictable', just not currently completely predictable by Google. Someone earlier compared it to meteorology. This, again, is not unpredictable; it simply has so many complex factors affecting it that even the best of human weathermen are currently unable to fully predict the precise resultant effect of them all.
To infer from this that the system is 'alive' is on a par with claiming that the weather is alive. Even calling it weird seems a big stretch. Which is not to say that we should be any less concerned about the potential dangers of such immensely powerful and currently unpredictable systems than we are about the potential dangers of immensely powerful and currently unpredictable weather.
Yo.
That was me comparing it (loosely) to the weather. What I was saying was that weather is predictable, but only approximately and only for the relatively immediate future.
It might be semantics but I make a distinction between prediction based on contemporaneous data (e.g. an observed low-pressure system) and prediction based on historical data (e.g. the average temperature for May is 20°C).
When I brought up the weather, I was referring to the former, which is based on cause and effect, rather than previously-observed averages.
Yes, wouldn't disagree with that, other than to observe that, theoretically, if one knew every tiny factor that influences the oncoming weather one could, theoretically, predict it with precision any time into the future. In practice we never know every tiny factor so can only make a best estimate, and try to prepare for other possibilities. As a long-time offshore sailor I have spent many hours doing exactly that.
I think the same applies to Google's systems. Accumulations of microscopically small, unforeseen inaccuracies can at times cause the system to make a decision that the programmer would not expect. But, as clean_state said in response to your earlier post, there is nothing weird or biological about that. It seems more akin to my car running slightly rougher than expected because of an intermittent high-tension leak from a spark plug.
> theoretically, if one knew every tiny factor that influences the
> oncoming weather one could, theoretically, predict it with
> precision any time into the future.
Unfortunately not. The weather system is chaotic. From the system of equations, you can compute how the error margin evolves in time. The evolution is exponential. This means that whatever the precision of the initial measurements, even if you know the speed and position of every atom of the atmosphere at one point, your prediction and the reality will diverge exponentially (i.e. very fast).
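(A quick Python demo of that exponential divergence, for anyone who wants to see it rather than take it on faith -- the Lorenz equations are the classic toy model of atmospheric convection, crudely integrated here, and the numbers are illustrative only: two runs that start one part in a trillion apart become macroscopically different within a few tens of time units.)

```
# Rough sketch: crude Euler integration of the Lorenz system from two
# starting points that differ by 1e-12. The separation grows exponentially
# until the two 'forecasts' have nothing to do with each other.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-12)          # a 'perfect' measurement, off by 1e-12
for step in range(4001):
    if step % 1000 == 0:
        sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}  separation = {sep:.3e}")
    a, b = lorenz_step(a), lorenz_step(b)
```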