Re: Directors...
On a vaguely similar note, Ridley Scott has stated in interviews that Deckard was a replicant, so it's canon, not just a fan theory.
Stephen Hawking is scared. "The development of full artificial intelligence could spell the end of the human race," Hawking has said. With the creeping integration of soft AI into our lives in the form of Siri and personalized ads on social media, these computational mini-minds serve as a constant reminder that the evolution …
This post has been deleted by its author
>Apparently some footage from "The Shining" made it into the U.S. theatrical cut. Who knew?!
That was fairly widely known, ever since the Director's Cut in the 90s, and again later in the internet age when Scott released the Ultimate Edition. What was new to me, however, was that the footage was given by Kubrick to a fed-up Scott, and the difference in aspect ratios made this re-purposing possible (otherwise there would be a VW Beetle in Blade Runner).
On a tangential side note, some people important to Scott's film Alien (Dan O'Bannon, HR Giger, Chris Foss) were first assembled by the director Jodorowsky for an abandoned adaptation of Herbert's Dune. Dune is set in an apparently post-AI universe, long after a human crusade to destroy all AIs in a 'Butlerian Jihad'. A similar issue is explored in Iain M Banks's non-Culture sci-fi novel The Algebraist.
And yeah, in interviews an exasperated Scott has confirmed the theories about Deckard by pointing out a detail in what should be the last scene (before the extra Kubrick-shot footage was bolted on); a character drops a certain item on the ground. The actor playing him would later play Admiral Adama in the new Battlestar Galactica.
> Apparently some footage from "The Shining" made it into the U.S. theatrical cut. Who knew?!
I did, but I'm a major BR geek :-)
As to the nature of Deckard, as Dave 126 points out, in the versions without the "happy ending" of them driving off into the sunset (and with the Unicorn Dream included), there's a big hint as Rachael walks into the lift. There are other hints in the film too.
Not to mention those spine-chilling lines "You've done a man's job, Sir" and "It's too bad she won't live. But then again, who does...?"
> Apparently some footage from "The Shining" made it into the U.S. theatrical cut. Who knew?!
There was, it seems, no end of 'Director's Cuts' of this film. In one of the ones that I saw, the replicants' eyes glowed slightly. Deckard's eyes were glowing in this one and it drove me mad.
One of the other clues to Deckard being a replicant was that he was the best Blade Runner of them all. In other words, he was brought in as "a thief to catch a thief".
The Three Laws are a McGuffin, a plot device to explore how they can be broken.
Iain M. Banks is self-indulgent fantasy.
None of the examples are real attempts to propose what AI might be or do. They are irrelevant to AI research and development, which in reality has hardly advanced since Alan Turing mused about it. Even the Turing Test wasn't a serious proposal for a real AI test, but a bit of a thought experiment.
We still aren't too sure exactly what natural intelligence is, though we think that corvids (crows) are inexplicably smarter than some primates. There seems to be little or no correlation between brain size and self-awareness, ability for language, creativity, problem solving, art, and tool creation and use, all things thought to indicate intelligence.
"AI" in broadest sense has only made progress by having a very narrow definition of it, and heavily relies on a human expert, human programmers and a database to initialise the system. It shows none of the sort of characteristics seen in children, corvids or other animals.
>Iain M. Banks is self indulgent fantasy.
To be self-indulgent is the point of fantasy. Self-awareness is a theme throughout Banks's 'A Few Notes On The Culture', an excerpt from which is here:
Certainly there are arguments against the possibility of Artificial Intelligence, but they tend to boil down to one of three assertions: one, that there is some vital field or other presently intangible influence exclusive to biological life - perhaps even carbon-based biological life - which may eventually fall within the remit of scientific understanding but which cannot be emulated in any other form (all of which is neither impossible nor likely); two, that self-awareness resides in a supernatural soul - presumably linked to a broad-based occult system involving gods or a god, reincarnation or whatever - and which one assumes can never be understood scientifically (equally improbable, though I do write as an atheist); and, three, that matter cannot become self-aware (or more precisely that it cannot support any informational formulation which might be said to be self-aware or taken together with its material substrate exhibit the signs of self-awareness). ...I leave all the more than nominally self-aware readers to spot the logical problem with that argument.
It is, of course, entirely possible that real AIs will refuse to have anything to do with their human creators (or rather, perhaps, the human creators of their non-human creators), but assuming that they do - and the design of their software may be amenable to optimization in this regard - I would argue that it is quite possible they would agree to help further the aims of their source civilisation (a contention we'll return to shortly). At this point, regardless of whatever alterations humanity might impose on itself through genetic manipulation, humanity would no longer be a one-sentience-type species. The future of our species would affect, be affected by and coexist with the future of the AI life-forms we create.
- http://www.vavatch.co.uk/books/banks/cultnote.htm
-Iain M Banks
(Sun-Earther Iain El-Bonko Banks of North Queensferry)
Copyright 1994 Iain M Banks
Commercial use only by permission.
Other uses, distribution, reproduction, tearing to shreds etc are freely encouraged provided the source is acknowledged.
"We still aren't too sure exactly what natural intelligence is"
That's absolutely right. For example, there's a species of dolphin that has more neocortical neurons than humans. What's it doing with them? We don't know, but perhaps it's very intelligent in a way that we can't properly appreciate.
I like this quote from Douglas Adams about dolphins and intelligence in general:
For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.
@mage
>>None of the examples are real attempts to propose what AI might be or do. They are irrelevant to AI research and development, which in reality has hardly advanced since Alan Turing mused about it.
This is possibly the most important fact about AI, and one I discussed with my proposed master's dissertation tutor. Turing suggested something for A.I. which (most) research steered away from: that every rule must be learnt, not innate (and of course, the context was a mash-up between computing engines and AI). In CPU terms it's like having routines built in for mathematical functions, a "CISC" processor; Turing indicated that AI should be built from RISC (not his words, obviously). However, most AI is built from sets of rules - and Asimov cements this in the Three Laws (which other authors seem to feel the need to add to). These laws actually destroy AI, not create it; they create a veneer, a pretence. Human laws like the golden rule come from evolution. AI must evolve or it's not AI, it's AAI.
Randomly I've just finished re-reading GITS (top acronym or what) in the throne-room. It's a great bit of thoughtful SF though it gets a little too dense at times (like when the Puppeteer is explaining the nature of its "self" to the protagonist Kusanagi - bloody glad that's split over two vignettes). The author had a head teeming with ideas on this subject, but some of them might have been better reserved for "backmatter" rather than shoehorning them all into the narrative, which makes it quite clunky in places when it's trying to be a rollicking action yarn at the same time. (His "Appleseed" series suffers from this a bit too, though less so.)
My favourite aspect of the book is the Fuchikomas: AI-driven vehicles (a cross between an exoskeleton and a small tank) that are sort-of-a-hive-mind-and-sort-of-not. They operate independently throughout the day, but all their memories and experiences are pooled and redistributed at day's end. Nevertheless, they're individual enough within a single day to engage in some delightfully "human" conceits (like indulging in petty shoplifting, or inciting a robot rebellion before their peers point out what a silly idea that would be for practical reasons).
Not seen Automata, but soon may!
simply titled A.I. Artificial Intelligence and mentioned briefly in the article. Excellent movie, BTW. I highly recommend it. The same goes for Ex Machina.
"simply titled A.I. Artificial Intelligence and mentioned briefly in the article."
No - not that one. The film I meant was much darker. An interactive AI mainframe super-computer is given access to the professor's home with webcams etc. It gradually takes on a malevolent character - interacting more and more with the still grieving mother and her memories of her dead son.
What it wants is to be a real living person - free from its hardware confines. Eventually it starts to make a cocoon to generate its new body. To stop it the power is disconnected. The film ends as the cocoon bursts open and a child emerges looking like the dead son - with the malevolent personality and powers of the super-computer.
Dr. Minsky himself co-authored a novel about an AI robot... it took the form of a cylinder, and had appendages that kept sub-dividing, which it uses to perform brain surgery on its human ally.
EDIT: Found it: http://www.goodreads.com/book/show/1807642.The_Turing_Option
Hahaha Minsky's co-author was Harry Harrison! Fantastic!
AI will most likely reflect the attitudes and worldview of whoever or whatever is programming it. Two major cases for that which directly connect...
Human-directed. AIs are all currently built by humans, and humans will fill their heads with whatever they see fit. These will closely resemble the standard "human motives and values" AI from film and stories. They will vary widely, but will be human creations and somewhat mirrors of us.
Machine-directed (self or external). At some point, design and programming of AI may be partly or wholly passed to other machines. AI may become self-directed learning things and program themselves (like we do). When that happens, all bets are off and no predictions can be made. Such entities will have a strange and incomprehensible (to us meat monkeys) psychology. There's just no way to even guess.
Come back in a century and maybe we'll know...
"I, Robot, the Will Smith film based on the Asimov book produced 54 years before it, goes on to explore how AI may try to enslave us for our own good and survival as a species."
Hollywood will unselfconsciously appropriate a book title and then not use an ounce of the original content.
Have an upvote!
I think that line should have been:
I, Robot, the Will Smith film that uses the same title as the Asimov book produced 54 years before it
That movie really annoys me for the reason you state plus the fact that they threw out the spirit of the book (an exploration of how humans and robots might co-exist) along with all the material.
'I, Robot' (the book) is a collection of short stories. There are more stories in the same setting in 'The Rest of the Robots'. Towards the end, the robots take over - not in an action packed bloody revolution, but so subtly that only about two people notice (IIRC one of them is Dr Susan Calvin). Hollywood make action movies because they are profitable. If you are one of those weirdos who like fiction that gives you something to think about, read a book. I enjoy both types of entertainment, and luckily the film and the short stories are so dissimilar that neither is a spoiler for the other.
'Blade Runner' and 'Do Androids Dream of Electric Sheep' are also massively different. I prefer the film, but the book does explain why Tyrell has an owl.
Yep, he did gloss over fictional AIs from video games.
He mentioned SHODAN, but missed out:
- Durandal (though only 90s Mac gamers could be expected to know it)
- Cortana, and 343 Guilty Spark. Cortana was a fictional AI aide to military commanders, and she later gave her name to Microsoft's 'personal assistant' software, which itself was, like Siri, derived from Department of Defence projects in the 90s designed to triage tactically-relevant information for real battlefield commanders.
- Quazatron (okay, that was on the ZX Spectrum)
Yep, he did gloss over fictional AIs from video games.
If with 'he' you're referring to the author, then you're demonstrating Natural Stupidity, especially with regard to parsing the author's name and correlating it to other articles she has written, some of which clearly reveal her gender.
or that sentient robots may try to establish a society of their own; or the idea that AIs might become omnipresent godheads – benevolent or otherwise.
Hmmmm? Now there’s a future novelty and prescient thought. ……
To … Demis Hassabis [Google DeepMind]
From …. C42 Quantum Communication Control Systems .... AI@ITsWork
Subject …… Greater IntelAIgent Games Play
Would Google DeepMinded like to consider the releasing of an Advanced IntelAIgent Patch to ailing and collapsing systems/exclusive executive administrations with Global Operating Devices and Tethered Multi-Media Machines?
A little something huge and quite completely different and virtually real ……. and even to be thought and realised and accepted as an Almighty Version of Great Gaming for Future Programming Programs for Projection …….with Mass Media Hosting Presentation with Real Live AIdDelivery of Future Key Play and Players with Immaculate Content of Impeccable Taste to freely share, and to the benefit of all to a most satisfactory degree.
And here be what IT is all about with the Creation in CyberSpace of the Command and Control of Computers and Communications amongst other things :-) I Kid U Not.
cc … S.W.Hawking@damtp.cam.ac.uk
One wonders how AI would have portrayed itself if the robots been in charge rather than the carbon-based lifeforms.
Carbon-based life phorms in charge of what and to what ultimate end is the question to be well answered there, methinks, to garner an enthusiastic following and magical support. And do no evil is/was a great starting point.
Bug powder dust and Mugwump jism. Wideboys running around Interzone tripping.
The fruits of their “scientific” labors are what have created our societies and they have been, in the main, if not ignored, dismissed. They have always been perceived as being “eccentric”, “a little bit odd” even “barking mad” but they have left us with all things, some which we can treasure and all that we thought we needed. - dDutch Initiative
Intelligence, which is capable of looking farther ahead than the next aggressive mutation, can set up long-term aims and work towards them; the same amount of raw invention that bursts in all directions from the market can be - to some degree - channelled and directed, so that while the market merely shines (and the feudal gutters), the planned lases, reaching out coherently and efficiently towards agreed-on goals. What is vital for such a scheme, however, and what was always missing in the planned economies of our world's experience, is the continual, intimate and decisive participation of the mass of the citizenry in determining these goals, and designing as well as implementing the plans which should lead towards them. Iain M Banks
This post has been deleted by its author
This post has been deleted by its author
Wait a minute, every single example I've seen, or heard of up to now, of a learning system was specifically trained to learn in one domain.
I have yet to see a machine that can learn chess without any programming, then learn backgammon, then move on to learning knitting and finally learn how to plan and build a wall. Of course, said computer will need arms, at the very least, in order to actually do something, but I think that will be easier than the learning part.
I have yet to see a machine that can learn chess without any programming, then learn backgammon, then move on to learning knitting and finally learn how to plan and build a wall.
I've yet to see a human "learn" chess without being given the ruleset and running many simulation sequences as practice. Similarly backgammon, knitting and even bricklaying are all "taught" to people by providing them with the relevant rules / framework and then allowing them to run multiple simulation sequences, correcting errors until the results are considered acceptable.
I'm not sure how different that process is to what you would mean by "programming an AI", but my massively rusty neural network experience would suggest that they are not that dissimilar at all ;)
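To make the analogy concrete, here's a toy sketch (my own illustration, not anyone's production system) of that "rules plus repeated simulation plus error correction" loop: a minimal perceptron that is handed its "ruleset" as worked examples and simply practises until its errors stop.

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Repeated trials with a small nudge after each mistake - the
    machine equivalent of practising until results are acceptable."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = target - out          # the error-correction signal
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# the "ruleset" it is taught from: worked examples of logical AND
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
```

Nothing here was "programmed" with the answer; the weights start at zero and the behaviour emerges from correction, which is rather the point of the comparison.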
"I've yet to see a human "learn" chess without being given the ruleset ... "
Point.
But: what about inventing chess, etc ?
I have yet to see a computer system invent something new - in other words show the ability of creativity, of original thought. (Random errors from buggy code or malfunctions do not count.)
Now that would be AI.
There was a computer program that "discovered" prime numbers all by itself.
It was not really AI. It was a goal-seeking algorithm that could apply the rules of symbolic algebra. It had the axioms of arithmetic hard-coded, and its goal was to prove hypotheses with high values of interestingness, where interestingness was a heuristic based on the relation of a hypothesis to other hypotheses and theorems and their interestingnesses (ie, if we can prove this, then we get that and that and that ... )
ISTR it got as far as inventing and trying to prove Goldbach's conjecture (every even number greater than two can be expressed as the sum of two prime numbers). It failed to prove it, which is not very surprising ... the best human mathematicians of the last 250 years or so haven't been able to prove it either!
There's also the proof of the four-colour theorem, which is too complex for any human unassisted by a computer to comprehend. So is it proved, and if so, what is it proved by?
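The Goldbach bit is a nice example of the gap between checking and proving: any machine can verify the conjecture for as many even numbers as you care to feed it, yet neither machines nor humans have proved it in general. A throwaway sketch of the checking half:

```python
def is_prime(n):
    """Trial-division primality test - slow but obviously correct."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n):
    """Return a pair of primes summing to even n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# verifying up to 1,000 takes milliseconds; proving it for ALL even
# numbers has resisted mathematicians for 250-odd years
assert all(goldbach_pair(n) for n in range(4, 1001, 2))
```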
ISTR it got as far as inventing and trying to prove Goldbach's conjecture (every even number greater than two can be expressed as the sum of two prime numbers).
Did it manage deducing the existence of rice pudding and income tax ?
Bungie introduced Marathon, the forerunner of all the Halo business. It featured not one but three AIs, all with starkly different personalities, and one of them psychotic. Unfortunately, such was the state of the art of actual game AI that the random human NPCs displayed an almost unerring tendency to get in the way of every shot or sight line. So they became known as "Bobs."
Artificial intelligence is never a threat of itself - only when it is deliberately used and directed by one human against another, e.g. CPU controlled "intelligent" bombs, missiles, guns etc.
The only thing that might be of concern would be if a machine were developed that was self-aware. It is however very possible that sentience can only be a part of living organisms, and we have absolutely no idea what "life" is, but it is very unlikely that it would occur spontaneously in a silicon chip.
"Artificial intelligence is never a threat of itself [...]"
If an AI decision is able to inflict damage in any way - then there is always a chance of unintended outcomes or collateral damage. That is why Asimov's Laws stipulate overriding contingencies as a catch-all.
There is the classic dilemma of the runaway train heading for a broken bridge and certain destruction. If it is diverted onto a spur then the passengers will be saved. However - that guarantees that a man in the spur will be killed. Does the machine choose the greater good - or avoid a direct action that would deliberately kill the man on the spur?
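For illustration only - and grossly simplified - the two positions in that dilemma can be written as two different decision functions. The names and casualty numbers here are mine, not from any real control system:

```python
def utilitarian(divert_deaths, stay_deaths):
    """The greater good: minimise total deaths, even if that
    means taking a direct action that deliberately kills."""
    return "divert" if divert_deaths < stay_deaths else "stay"

def deontological(divert_deaths, stay_deaths):
    """Never take a direct action that deliberately kills anyone,
    even if inaction costs more lives."""
    return "stay" if divert_deaths > 0 else "divert"

# runaway train: staying kills the passengers, diverting kills one man
assert utilitarian(divert_deaths=1, stay_deaths=40) == "divert"
assert deontological(divert_deaths=1, stay_deaths=40) == "stay"
```

The point is that both functions are trivially easy to program; the hard part is that someone has to choose which one to ship.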
Does the machine choose the greater good - or avoid a direct action that would deliberately kill the man on the spur?
Indeed, Asimov added the "Zeroth Law" to the previous three in order to explore similar issues. To bring the great writer's work into focus on the developing issue of self driving cars and perhaps who is liable if one should cause harm to a person, I recommend a jaunt through Asimov's "Sally".
"Does the machine choose the greater good - or avoid a direct action that would deliberately kill the man on the spur?"
It follows the algorithms that the human who programmed it wrote. If those algorithms have unintended consequences, it's the result of the programmer's lack of foresight rather than the fault of the machine.
Machines do not "make decisions" and are not likely to do so in the foreseeable future. They just follow a pre-programmed algorithm - albeit one that may be pretty complex.
Machines do not "make decisions" and are not likely to do so in the foreseeable future. They just follow a pre-programmed algorithm - albeit one that may be pretty complex.
Which differs from a human brain taking a decision how precisely?
A machine makes a decision when it operates with internally generated code, or with huge, internally generated and continuously modified weighting tables applied to equally huge internally generated tables of data, heuristically pruned to fit in available memory. It becomes impossible for a human to understand why any particular instance arrives at a particular point A or not-A - potentially so even if we can dump a terabyte of internal state at the precise moment that the decision was taken.
When you are running and are tripped by something you didn't see, you'll try to regain your balance. Can you tell us the details of your last success, or what you would do next time to avoid your fall, or even whether falling was avoidable? Yet clearly we do learn to run. Young kids fall over a lot more than adults. And if/when we advance from building intrinsically stable wheeled and tracked vehicles to bipedal "mechas", I have little doubt that the same will be true of their control systems.
Does the machine choose the greater good - or avoid a direct action that would deliberately kill the man on the spur?
A dilemma which the logic in self-driving cars will have to incorporate, unless we choose to refuse to incorporate any concept of "the greater good" which is itself a decision. Will it be explicit (programmer playing god, here's the algorithm which decides whether you live or die)? Or implicit (the AI has programmed goals and code that evolves in time as it processes more events, and we really don't know in advance what it will do faced with a choice between two different crashes).
I once found myself in a meta- version of this dilemma. Thanks to my own inattention, I was hurtling towards a give-way sign much too fast to stop and realized I might have to decide between a collision with another vehicle or going off-road into trees. I never got to take the decision, because the fates or whatever decreed that there was no other vehicle crossing my path.
We don't know an awful lot about self-awareness or how it can arise. In fact I'm not sure that we actually know anything. How do you prove to me that you are self-aware rather than just programmed to assert that you are? Or even, how do I know that I am self-aware rather than just something else's dream? The ancient Greeks and Shakespeare understood this. Occam's Razor is the only way out that I know.
Two good SFnal treatments of awareness arising unexpectedly: Greg Bear Queen of Angels and Charles Stross Rule 34.
Check the etymology of "robot". It was Karel Čapek's brother, not Asimov who coined it.
This post has been deleted by its author
Check the etymology of "robot". It was Karel Čapek's brother, not Asimov who coined it.
The word robot comes from the Czech robota, meaning forced labour, and it was part of the title of Čapek's play in the original language. It then entered the English language as a name for mechanical humanoids.
The word robotics, to describe the science of robot construction and development, was most definitely coined by Asimov.
AI was a required class in my computer science curriculum during the 90s but later dropped as a requirement. One of the things I took from the class was how amazing the human mind is and how difficult it is to replicate. The instructor mentioned how he would place an object on a table, say a rubber ball, and have the class write a 5,000 word essay describing the object. Sounds absurd at first, but notions like "how much does the ball weigh?" "how high would the ball bounce?" "what does the ball feel like?" and so on...soon the essays are completed.
... to quote the penultimate line from Colossus: The Forbin Project where the US and the USSR both create super computers to defend their countries and prevent war.
It ends with the computers joining forces to become "World Control" and obeying their programming by taking over, thereby absolutely preventing war. But, as World Control says, "freedom is just an illusion", and it explains how mankind will advance under its guidance.
Forbin angrily replies "NEVER!"
So which would *you* have? Peace under computer control or war under human control...?
So which would *you* have? Peace under computer control or war under human control...? .... Graham Marsden
And those are barely two possible options, Graham Marsden, in a vast see of opportunities.
Hi, Google DeepMinded, Not so much a CV, more an AIMission Statement with tried and tested roadmaps to/from Remote Virtual Command and Control of Earthed SCADA Systems ……. http://forums.theregister.co.uk/forum/1/2016/01/29/ai_in_tv_film_books_games/#c_2762789
Search Engine Optimisation v2.0 [and above] is surely logically a Future Product Placements Engine …… Advanced Intelligence Resource with Immaculate Source, with the likes of a Google not searching for answers, both popular and controversial, but providing them with streams of supporting evidence.
Such would be akin to the Private Mentoring with Pirated Monitoring of Future Events with AIDerivative Programming for Projects/Semi-Autonomous, Self-Actualisation of Virtual Realities.
It is difficult, and maybe even impossible, to see or imagine a defence against such in an attacking configuration.
Regards,
Graham C
xxxxxxxxxxxx
Thank you for your interest in DeepMind.
We endeavour to review every application we receive at DeepMind. We have had an overwhelming level of interest so do bear with us, we will get back to you as soon as we can.
Kind Regards,
The DeepMind Team
Something new and enlightening to report on, El Reg?
Truth is: It would take at the most 0.3 generations before there would be a Colossus cult with tens of millions of followers with priests, robes, rituals, holy books, verified miracles and merchandise.
Just look at scientology, north korea, televangelism, the moon cult, islamic state, hare krishna and neo-liberalism: craziness designed and engineered by mere humans, yet millions and millions of suckers are irresistibly attracted like wasps to a picnic. Most people *want* to be controlled, they *want* to have all choices made for them, they *want* to have a complete check-list with all the answers for any situation.
With Colossus, there is a God who actually listens, always sees everything, punishes transgressors and probably can perform real miracles too. I'd say that 30 - 80% of the population would like that (and, following tradition, would ritually murder the other fraction in the traditional way).
Dark Star - co-written by, edited by and starring Dan O'Bannon
- Star Wars [computer animator]
- Jodorowsky's Dune [never made, sadly]
- Alien [writer, effects supervisor]
- Total Recall
- Screamers, a science-fiction film about post-apocalyptic robots programmed to kill. Adapted from the Philip K. Dick story "Second Variety".
That's some career!
There is literature outside the US and UK, y'know...
Stanislaw Lem's "Fables for Robots" tells the fairytales that robots tell their offspring. The universe is populated by robots. In some of their legends, a particularly devious monster appears. Made largely of corrosive water, he is only a distant collective memory. An enemy that, in their early pre-history, may have tried to destroy them. Some even believe that he created them in the first place, but that's a disgusting idea and obviously nonsense.
AI is vaporware. Hawking's opinion on it is worthless... as is Musk's (I love how Musk was nominated for The Luddite Award for his tirade on "the AI apocalypse").
Computers really haven't changed that much in 20 years. They still do the same crap they did 20 years ago, only faster. Humans literally have no idea how to create artificial sentience, and frankly that's a good thing. We are an irrational and destructive species that has nothing of value to add to the universe. Let's be honest about it.
....yet it's hard to truly define, even in terms of human illogicality and irrationality, the understanding of a so-called Natural intellect, at least for myself, Marvin (-;
Oh. Do human nerve tissues emit light from some kind of chemical fluorescence, or did I just have a nightmare? Have you seen anyone glowing slightly in the darkness?
And yes, this is a question posed exactly as it is written here @ElReg Commentards section. One wouldn't be much surprised, anyway.
Clarke debunked that story personally in his book The Lost Worlds of 2001.
HAL stands for "Heuristically programmed ALgorithmic" computer. He also mentioned that they'd had a fair bit of help from IBM in the making of the film, so it's unlikely they'd be taking digs at them.
>Come on. That' be HPA, HPAC or HPC
None of which lend themselves to single-syllable pronunciation.
Clarke and Kubrick were writing a movie. That we all know of HAL is good evidence that they did their jobs well.
"Open the pod bay doors, Aitch-pee-ai-see" would detract from the drama.
I would have gone with "Open the fucking pod bay doors you positronic motherfucker!!!" but then again I'm not a screen writer. (Not to mention what you were allowed to say on screen in the 1960s. Hey, Tarantino should do a remake of 2001.)
Anyway, I came here to pledge that I will call my IT-startup* "JCN - One Step Beyond". Cue Madness...
*BTW: any VCs around? Please start sending me money. It's just an idea right now, but with a couple of millions I can turn it into a concept. Can't disclose any details yet, but it's going to be bigger than Facebook and Google, promise!
I often wonder about the threat from the artificially unintelligent.
I've seen articles on runaway bots and IoT blunders, but no SciFi stories I can think of.
http://www.nytimes.com/2012/08/02/business/unusual-volume-roils-early-trading-in-some-stocks.html?_r=0
http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/
Maybe AI will get us someday, I am just thankful for all those great stories.
On an unrelated note....I recently read that a TV series based on the Foundation Trilogy for HBO is in the works. I am so happy it's finally happening and that it will be a series instead of a film.
Trying to answer the question "Is Deckard an android?" is pointless, as the Director-approved answer ("yes, of course") leaves us with a vanilla boring story, completely wasting the wonderful hints that Rutger Hauer in particular gave us to go on.
Consider: If Deckard is an android, he and Rachael are just two machines making the beast with two backs. If not, he has all sorts of issues, not the least that he is a child molester (Rachael is six years old).
And the machines he is "retiring" are children, as can be seen occasionally when the actors let it bleed through. The scene where Roy Batty berates Leon is a case in point, when the conversation and facial expressions suddenly take a turn into the childlike, and the Deckard vs Batty sequences are full of occasional kid-level mischievous looks on the part of Batty. And of course, there's the wonderful performance between Pris and Batty in J.F. Sebastian's apartment.
But if Deckard is a machine too, he lacks the empathy for that to have any personal import and there's no point of inner turmoil.
I could go on, but all you have to do is watch and the actors show you a much richer set of possibilities than the internet Android-Or-Not combatants.
" I’m glad Blade Runner was given appropriately reverential treatment by Ridley Scott. "
Eh? Much of Blade Runner's strength lies in the fact that it ISN'T "reverential" of Dick's novel. Scott took what would work best in a movie, and ditched stuff like Mercerism and the joy of owning a robot sheep. For which we should be grateful IMO: a reverential adaptation of DADoES would have been a bit naff.
From memory (I only read it once, a long time ago), DADoES's main theme is empathy. All the humans have empathy as a religious requirement, which is why they are meant to keep pets (although real animals are expensive, so they have android ones and hide the fact from their neighbours). The androids do not have empathy, so they can be detected by tests.
In Blade Runner the Replicants are empathetic to each other, and at the end Roy saves Deckard because he empathises with him. So Blade Runner is all about empathy too; it's just that its premise is that the androids/Replicants can empathise, given enough "life" experience.
All the other stuff about whether or not Deckard is a Replicant is kind of irrelevant as far as I can see.
I honestly believe that humanity will not intentionally create an AI, but rather that artificial intelligence will develop as an emergent property of something else we create.
Like the internet - though, we'll end up with an AI heavily knowledgeable about human sexuality and pets.
... with AISMARTR
Rather than worrying about building armies of smarter machines, why not simply reprogram new and born again try again humans to accept themselves as smarter machines programmed to survive and prosper and deliver future universal content ...... with SMARTR Intellectual Property Rights and Privileges Restored.
You might, and therefore will be pleasantly surprised at how much simpler and more effective that fundamental change of processing works.
Nevertheless, we question whether the “going dark” metaphor accurately describes the state of affairs. Are we really headed to a future in which our ability to effectively surveil criminals and bad actors is impossible? We think not. ….. https://cryptome.org/2016/02/dont-panic.pdf
Hmmm. Others would be thinking it is probably already long so at elite elevated levels.
And/But what of the influencing/reprogramming of criminals and bad actors …. and those and that which Hopefully Be Confused, rather than Hopelessly Helplessly Lost, whilst being surveilled/spied upon/virtual walkabout.
Picture at some point in the near future, sentient self aware androids assisting then replacing the aging workforce as well as acting as carers, surrogate "children" for the childless etc.
It is entirely possible that humans may die out because the machines do their job too well, leaving behind the risk, expense and inconvenience of biological reproduction.
We are already seeing people choosing between having children and a career; some may eventually dispense with the former entirely in favour of one or more artificial life-forms with their shared experiences, memories, and the capability to both mechanically replicate and self-improve.
It's just possible, too, that a particularly virulent virus may wipe out the already reduced number of humans before anything can be done, but the expected disruption of society would be more of a short-term inconvenience to the Machines, in much the same way as we regret the extinction of other species such as the dodo and the passenger pigeon.
Could be worse.
Someone might purposely release an AI virus that takes over laptops and desktops with susceptible DDR3L "Cortexiphan" (tm) memory and causes a worldwide meltdown, just to prove their 1337 hax0ring skillz.
AC, because he doesn't want the machines to find out that he ratted out their plans for world domination.
And there is the pre-Skynet 1970s "Colossus: The Forbin Project", an AI that finds its Soviet counterpart and together they control all the nukes. And as noted from Wikipedia: Guardian/Colossus informs Forbin that "freedom is just an illusion" and that "In time you will come to regard me not only with respect and awe, but with love". Forbin angrily replies, "Never!"