Can we have an AI
that learns I do not want tips, news, music choices, etc. suggested to me...
i.e. one that learns I want no pestering?
Artificial intelligence and machine learning engines are underpinning many emerging applications and services, from making sense of big data for enterprises, to supporting hyper-personalized consumer content, or virtual reality gaming. The current challenge is to move AI from the supercomputer to the mobile device, supporting …
At least this is "AI" in the sense of neural-network-based technology. A long time ago I dumped all my university modules that were supposed to be AI when it turned out they were nothing more than Logical Reasoning, i.e. extrapolated forms of "If This, Then That": useful, but not AI.
As I see it, the problem with AI in the neural-network sense is that the more advanced such systems become, the less provable they are, and therefore, for many purposes, the less useful. That sounds negative, but many touted AI applications require 100% accuracy (or as close as possible), which, while nominally achievable, is considerably harder to prove when you can't fully test and validate each step in the process independently.
There isn't even such a thing as 100% accuracy in the things that demand true AI. If I want a personal assistant in my phone that can prioritize the items in my to-do list, is there a "right" answer? I want it to come up with an answer similar to what I would, and I don't think I'd necessarily produce the same order for the same list if I did it once, erased my memory of doing it, and did it again 15 minutes later.
Even for things I personally don't consider AI, like a computer playing chess, there isn't necessarily a right answer. Some answers are better than others, but the inherent uncertainty of what your opponent will do takes away the possibility of saying a particular move is optimal.
What I'd like to see AI do for me is relieve me of the time I spend "researching". Recently I needed to buy a replacement oil dipstick for my car. So I had to do some searching, and the ones I kept finding were for the smaller engine size, which isn't the one in my car. It ended up taking about 15 minutes to find the right one and get the best price. I'd like to be able to tell my phone "buy me the cheapest replacement oil dipstick that fits my car" (it should already know what kind of car I have, because I would have told it when I was looking for replacement wiper blades last spring).
It would get to know me over time, and learn which things I'd be willing to buy off eBay that ship "e-packet from China" versus things I'd want shipped from the US. There's no right answer, but that's where the 'learning' comes in. Or it could ask questions like "are you willing to wait a few weeks to get it if it saves you $10?" Just as a human personal assistant would get to know me better the longer he or she worked for me.
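For what it's worth, here is a minimal sketch (in Python) of what that kind of preference filter might look like under the hood. Everything in it is invented for illustration: the Listing fields, the saved profile, and the $10 wait-versus-save trade-off; a real assistant would obviously be doing far more than this.

```python
from dataclasses import dataclass, field

@dataclass
class Listing:
    title: str
    price: float              # USD
    ships_from: str           # e.g. "US", "China"
    est_weeks: int            # estimated delivery time in weeks
    fits_models: set = field(default_factory=set)

@dataclass
class UserProfile:
    # facts the assistant has picked up from earlier requests
    car_model: str = "unknown"
    max_wait_weeks: int = 1        # how long I'm normally willing to wait
    savings_to_wait: float = 10.0  # a slow option must save at least this much

def pick_listing(profile: UserProfile, listings: list[Listing]) -> Listing | None:
    """Cheapest listing that fits the car, honouring learned shipping preferences."""
    fitting = [l for l in listings if profile.car_model in l.fits_models]
    if not fitting:
        return None
    cheapest_overall = min(fitting, key=lambda l: l.price)
    fast = [l for l in fitting if l.est_weeks <= profile.max_wait_weeks]
    if not fast:
        return cheapest_overall
    cheapest_fast = min(fast, key=lambda l: l.price)
    # take the slow option only if it saves enough to be worth the wait
    if cheapest_fast.price - cheapest_overall.price >= profile.savings_to_wait:
        return cheapest_overall
    return cheapest_fast

# the profile was "learned" when I bought wiper blades last spring
me = UserProfile(car_model="mycar-2.4L", max_wait_weeks=1)
options = [
    Listing("dipstick (1.8L)", 6.99, "China", 4, {"mycar-1.8L"}),
    Listing("dipstick (2.4L)", 24.50, "US", 1, {"mycar-2.4L"}),
    Listing("dipstick (2.4L)", 9.99, "China", 5, {"mycar-2.4L"}),
]
print(pick_listing(me, options))
```

The point of the sketch is only that the 'learning' amounts to filling in the profile over time from earlier requests, then using it to resolve the trade-offs without asking every time.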
When can Siri/Google/etc. do that sort of thing? Some will say a few years, some will say a few decades. I tend more toward the latter than the former... intelligence isn't easy to replicate.
Especially since defining intelligence is about as futile an exercise as arguing about "How high is up?".
Intelligence is a combination of many factors: sentience, self-awareness, sapience, maths, curiosity, and on and on.
Some parts are easy, like maths. Even a simple calculator is faster and more accurate than 99+% of the human population.
Self-awareness: my computer can already do a much faster and more accurate job of monitoring itself than I can.
Sentience is coming along at a frightening pace, sapience is probably decades away, and curiosity is, so far, completely lagging.
And other things, like empathy: is anyone even working on that?
I know of many projects to bring empathy to artificial persons, be those persons virtual, robotic, or both. As a matter of fact, empathy, sympathy, and compassion probably get more funding than anything except "how to move around autonomously" and "how to kill efficiently and accurately".
Artificial sympathy is pretty clear-cut: the ability to recognize the emotions of others has uses for everything from detecting criminal intent to understanding what human persons are attempting to communicate, and there is great interest in it from the robotic care industry.
Empathy is seen not only as a useful tool in the robotic care industry, but also as useful for building more capable virtual assistants, search bots, and more. If you not only understand what the human person is attempting to communicate, but can let those emotions equally bias your choices, then you can understand intent even more accurately than with sympathy alone.
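To make that sympathy/empathy distinction concrete, here's a toy Python sketch: detect_emotion stands in for sympathy (recognizing the other party's state) and rank_responses stands in for empathy (letting that state bias which choice gets made). The keyword "detector", the response styles, and the weighting are all made up; a real system would use a trained emotion-recognition model.

```python
# Toy illustration of the sympathy/empathy distinction described above.
EMOTION_KEYWORDS = {
    "frustrated": ["again", "still broken", "why won't"],
    "anxious": ["urgent", "asap", "worried"],
    "neutral": [],
}

def detect_emotion(utterance: str) -> str:
    """'Sympathy': recognize the other party's emotional state."""
    text = utterance.lower()
    for emotion, cues in EMOTION_KEYWORDS.items():
        if any(cue in text for cue in cues):
            return emotion
    return "neutral"

def rank_responses(utterance: str, candidates: dict[str, float]) -> str:
    """'Empathy': let the detected emotion bias which response is chosen.
    candidates maps a response style to a base relevance score."""
    emotion = detect_emotion(utterance)
    bias = {
        "frustrated": {"apologize_and_fix": 0.4},
        "anxious": {"reassure_with_timeline": 0.4},
        "neutral": {},
    }[emotion]
    scored = {style: score + bias.get(style, 0.0) for style, score in candidates.items()}
    return max(scored, key=scored.get)

print(rank_responses("Why won't this sync again?!",
                     {"apologize_and_fix": 0.5, "plain_answer": 0.7,
                      "reassure_with_timeline": 0.3}))
```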
Artificial compassion is farther out, but is seen as important for artificial governance. There is great interest in answering "quis custodiet ipsos custodes" with "robots", especially in roles such as ombudsbot or as an adjunct to a highly politicized investigation (say, oversight of police or the judiciary). In these situations cold logic isn't enough; compassion is absolutely required.
Now, a lot of people will start to scream about robots running the world at this point, but I don't think that's the intention. Most projects I've seen regarding artificial governance are not about putting a decision to an artificial person and accepting its judgement, but about asking the artificial person to render not only a judgement but also the rationale behind it: a clear chain of "based on these pre-programmed factors, this scoring from these detected emotions, this bias weighting, etc., it seems the best thing to do is Y".
In this manner, once a decent AI has evolved, judgements can be modeled by altering the input biases. Do we, as a society, believe in any absolutes regarding compassion, punishment, rehabilitation, and so forth? What does the law say? What does legal precedent say about exceptions due to compassion?
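A rough Python sketch of the kind of transparent scoring chain being described, under the big assumption that a "judgement" can be reduced to a weighted sum over named factors. The factor names, the weights, and the leniency-versus-penalty framing are all invented; the point is only that the same case, judged under different bias weightings, yields different decisions, and each decision comes with its rationale attached.

```python
from dataclasses import dataclass

@dataclass
class Judgement:
    decision: str
    score: float
    rationale: list[str]

def render_judgement(factors: dict[str, float],
                     weights: dict[str, float],
                     threshold: float = 0.0) -> Judgement:
    """Weighted-sum 'judgement' that reports the chain behind its answer."""
    rationale = []
    total = 0.0
    for name, value in factors.items():
        w = weights.get(name, 0.0)
        contribution = w * value
        total += contribution
        rationale.append(f"{name}={value:+.2f} x weight {w:.2f} -> {contribution:+.2f}")
    decision = "leniency" if total >= threshold else "standard penalty"
    rationale.append(f"total {total:+.2f} vs threshold {threshold:+.2f} -> {decision}")
    return Judgement(decision, total, rationale)

# detected emotions and case facts, scored elsewhere (values invented)
case = {"remorse_detected": 0.8, "prior_offences": -0.5, "harm_caused": -0.3}

# "society's" bias weighting; altering these models a different judgement
compassionate = {"remorse_detected": 1.0, "prior_offences": 0.5, "harm_caused": 0.8}
strict        = {"remorse_detected": 0.2, "prior_offences": 1.0, "harm_caused": 1.0}

for label, weights in [("compassionate", compassionate), ("strict", strict)]:
    judgement = render_judgement(case, weights)
    print(label, "->", judgement.decision)
    for line in judgement.rationale:
        print("   ", line)
```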
Lots of people want these bots in order to model elections. Others want them as a means to better understand how to manipulate groups of people: if you change one thing, how does that affect their judgement? And so on.
The technology behind artificial sympathy, empathy, and compassion has many uses, both great and terrible.
Sadly, as we have no means of updating humans with compassion, the most terrible uses are likely to be the ones tried first, long, long before the rise of any machines against us.