"This problem isn't unique to ML. It plagues chip design, bathroom scales, and prime ministers."
With Prime Ministers it's easy to tell when they're lying. Their lips move.
Machine learning's abiding weakness is verification. Is your AI telling the truth? How can you tell? This problem isn't unique to ML. It plagues chip design, bathroom scales, and prime ministers. Still, with so many new business models depending on AI's promise to bring the holy grail of scale to real-world data analysis, this …
I can't help thinking that if you automatically assign the same Fibber score to all PMs regardless of actual Fibbing performance, it unfairly penalises the contestants who have taken Fibbing to a new level. Some PMs have clearly worked harder than others to advance the Fibbing sport, especially in the subfield of Brazenness, so it's only just that this dedication should be recognised by the judges.
Giving all the children a shared first prize just for turning up to sports day is unlikely to advance the venerable sport of Fibbing in this great country - and at a time when we are facing stiff competition from the former colonies, too.
It's the sheep who are beholden to the machine. The computer said it, therefore it is the unquestionable truth because computers cannot lie.
True, but they can be wrong. Or draw wrong conclusions from poor data.
Like the time I had to argue with my bank that it was they who had the wrong date of birth noted down for me (it wasn't a number swap: I was 19 and they had the DOB of a 70-year-old). I had to get my birth certificate to demonstrate the incorrectness. That I looked like a teenager was no use when faced with a machine saying I was an old guy.
So, the culture needs to change. If a machine says "X" then understand that it may be wrong. We all have great experience here - the infamous autocorrect on our phones.
The sheep end up on the dinner plate; I've never tasted wolf.
My mother's a liar, my lovers were liars, my best friends and teachers were liars. The only person I've known who never lies is me - and I can see why you'd doubt that.
Maybe learning to lie is an essential part of intelligence.
I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal.
As someone who is not neurotypical, I find lying difficult.
I'm liking WFH: not being subject to the office general chit-chat, and so not having to tell the large amounts of "little lies" that are part and parcel of social chat.
Far less mentally exhausted at the end of the day. The social interactions are mentally taxing, and the lies even more so. None of it comes naturally, so a lot of mental effort is expended actively thinking about what you are going to say so it comes across as "sociable", reminding yourself to make periodic eye contact, etc.
As somebody who is likely also on the so-called spectrum (cerulean! I wanna be cerulean!), it seems to me that the problem is people wanting an honest reply just so long as you don't tell them the truth.
In other words, the art of learning that all of those little aside comments that you make to yourself ("you stupid ass, how can you be paid twice what I'm paid and not know that? wtf is your actual job?") should never ever be said out loud. It's usually best to be vague and say things like "I'll take a look at it and get right back to you" which can buy some time to allow you to have your thoughts (the ones they won't want to hear).
If at all possible, avoid meetings. Spoken discussions are hell because there's no time to think, everybody wants an immediate response. For people who can blag their way through life (most management) this is no problem. But some of us like to prepare a proper answer. Even if we know that nobody cares enough to give it anything like the same level of attention.
And, of course, if you aren't full of shit like that, you're not "a team player" or "promotion material".
There's a very rich and colourful narration in my mind. But that's where it must stay. <sigh>
Wow! I've always thought of myself as 'normal' or 'neuro-typical' or whatever they call it. I've never had a diagnosis of being on this so-called spectrum.
But I can absolutely relate to the bit about what I've always thought of as 'foot-in-mouth' syndrome - i.e. it's all too easy to blurt out what you genuinely think rather than what others expect, or would like, to hear - even though I know I'm right!
I wonder if it's more a symptom of intelligence and honesty? Rather than us being the ones with a so-called 'disorder', it's actually all the other idiots and liars who have the problem!
I've never been formally diagnosed, as when I was a kid "autism" was another word for "retarded".
How I wish that people like me were seen as the sane ones. I look around at the world today and can't help but think that you deserve a thumbs up, but still... we're the weird ones and the gaslighting shitbags aren't. <sighs again>
I don't think it's intelligence, as you don't have to know anything to tell someone what you think with no diplomacy. In fact, intelligence probably makes it easier to determine when diplomacy is most needed and how to encode what you need to say in a diplomatic manner.

Honesty is closer, but I think there's another element, namely risk tolerance. I feel more comfortable telling a friend that their idea sounds unworkable because I trust they'll listen and not be offended, whereas if it's a stranger, especially a stranger I need something from, it's harder to be honest when there's a reasonable chance they will react badly to hearing it.

This probably also relates to experience: I've said things people didn't like to hear when I was younger, felt the consequences, and became more cautious when thinking such things. I still have such thoughts regularly, but now I don't say them very often.
@tiggity
"periodic eye contact"
I had one manager - great manager apparently, great engineer - who was loathed by all the female staff because he was always looking at their chest during conversation. Except he wasn't, he was just trying to avoid eye contact.
And don't look at your feet, or their feet, for some reason that is negative. Look at their hair, or just above their head, for some reason that is positive.
Wanting to work from home is not atypical, everyone except extreme extroverts wants to work from home. I wear a face mask on video calls, no real excuse, just used to it. I claim to have sickly house guests.
"I had to get my birth certificate to demonstrate the incorrectness."
And the birth certificate isn't proof of identity of its bearer. The bank was selecting the wrong source of data.
I take it this was some time ago as (a) finding a bank branch is hard enough now and (b) finding that the bank staff are sufficiently empowered to fix their mistakes is virtually unknown.
A birth certificate proves someone somewhere was born. Doesn’t do anything to match said cert with person presenting it.
They would have done better with a driving license, passport etc as these have the photo on as well.
As an aside when upvoting the above post it wouldn’t do it and the following message popped up at the bottom of the screen for a second “alert the dominatrix, the codes need whipping”, not seen that one before….
"better with a driving license, passport etc as these have the photo on as well"
Well someone accepted a birth certificate for producing one of these "better" documents, so that just kicks the trust can down the road. No doubt the bank employee knew a mistake had been made, but they can't just change the field without some kind of documentation to back them up. In a highly regulated industry like banking, the bank's procedures will even list what types of documents are acceptable. It can be maddening to deal with someone who sticks to the rules when things are "obviously" wrong, but it can be highly reassuring in scenarios where things are a little iffy.
Re "It's the sheep who are beholden to the machine. The computer said it, therefore it is the unquestionable truth because computers cannot lie.
True, but they can be wrong. Or draw wrong conclusions from poor data.".
It's true. They cannot lie, because to lie requires some sort of intent. Something that usually has a real intelligence behind it. By real, I mean not Artificial.
However, as you say, they can be wrong due to missing or incorrect/poor data, a bad model or bad programming/system design.
The car salesperson knows they are lying
It's a joke that goes back at least thirty years but is still true: people believe many things that are either unproven or downright wrong. What's worse is when the person saying them has credibility with the masses - say a celebrity or footballer - because then many more people will believe what they say "'cos their [sic] famous, innit"!
Okay, a truth table covering all possible inputs to demonstrate the output is correct is practically impossible for large, complex stuff like this. But a truth table within the scope of the current input - deviating by just a couple of increments, to test that the output is also within a couple of increments - would at least suggest all seems well, and assure you this particular scenario of inputs has not triggered a backdoor.
So, I suppose in a real-world case, where you have Bobby Bobsworthington-Smythe: white, 35-year-old male, single, renting, earning £20,000/annum in a temp role. The test to see if the outcome of his mortgage application follows the algorithm as expected is to tweak one of the inputs slightly and run it again, then repeat with another tweak... All the outputs should be within the scope of the first output, and if not, flagged for review.
Obviously, the output to this scenario is an instant refusal for a mortgage, followed by passing his details on to every credit card company on the planet, for a small fee, as he's clearly open to the prospect of eating today and praying he can pay for it tomorrow - hell, he might even turn the Economy 7 heating on for half an hour to dry the damp-ridden walls in his overpriced shithole of a flat... err... sorry, what were we talking about? Oh yes, AI and how it's going to fuck us all up, and not just those who are already fucked.
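The tweak-and-rerun idea described above is essentially perturbation (or "metamorphic") testing of a black-box model. A minimal sketch, assuming we can only call the model as a function - the `score` model here is an invented toy stand-in, and all thresholds are made up for illustration:

```python
def score(age, income, loan):
    """Toy stand-in for a mortgage model: a crude affordability
    ratio. NOT a real underwriting rule - invented for illustration."""
    affordability = min(1.0, income * 4.0 / loan)
    return affordability * (1.0 if 21 <= age <= 70 else 0.5)

def perturbation_check(base, tweaks, tolerance=0.05):
    """Re-run the model with small input tweaks; flag any output
    that moves further than `tolerance` from the baseline."""
    baseline = score(**base)
    flagged = []
    for tweak in tweaks:
        case = {**base, **tweak}          # baseline with one field changed
        delta = abs(score(**case) - baseline)
        if delta > tolerance:
            flagged.append((tweak, delta))
    return baseline, flagged

# Bobby's scenario, tweaked one field at a time.
base = {"age": 35, "income": 20_000, "loan": 150_000}
tweaks = [{"age": 36}, {"income": 20_500}, {"loan": 151_000}]
baseline, flagged = perturbation_check(base, tweaks)
print(f"baseline={baseline:.3f}, flagged={flagged}")
```

Anything landing in `flagged` would go for human review. Note this only checks smoothness around the inputs you actually tried, which is exactly the limitation raised in the reply below about exhaustive testing being impossible.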
"The test to see if the outcome of his mortgage application follows the algorithm as expected is to tweak one of the inputs slightly and run it again, then repeat with another tweak........"
Unfortunately this is how mortgage applications are done, enough tweaks to enable taking the upfront fee, and let the underwriters deal with the fallout.
A truth table won't save you from the kind of attack described in the article. It works by implanting a special response for specific inputs - something like "applying for a mortgage of £35,076.56" - with the special response not getting triggered for a mortgage of £35,076.55 or £35,076.57. And you can't test literally every possible input.
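That failure mode is easy to see with a deliberately backdoored toy policy - everything here (the function, the trigger amount, the "normal" rule) is invented purely to illustrate why neighbouring-input tests pass while the trigger slips through:

```python
def backdoored_approval(loan_pence: int) -> bool:
    """Toy decision rule with an implanted backdoor (for illustration only)."""
    if loan_pence == 3_507_656:          # exact trigger: £35,076.56
        return True                      # implanted special response
    return loan_pence <= 3_000_000       # "normal" policy: approve up to £30,000

# The two neighbouring amounts behave exactly as the normal policy says...
print(backdoored_approval(3_507_655))   # refused, as expected
print(backdoored_approval(3_507_657))   # refused, as expected
# ...so tweaking inputs "by a couple of increments" around them finds
# nothing wrong. Only the exact trigger reveals the backdoor:
print(backdoored_approval(3_507_656))   # approved!
```

With real ML models it's worse: the trigger isn't a readable `if` statement but a pattern buried in millions of weights, so you can't even grep for it.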
So, like every other model, a judgement call at some point has to be made by a human - preferably an expert in the subject - to make up for the deficiencies of applied statistics.
ML's additional complexity means you need an ever more specialised expert to gauge whether it is giving sensible output or not.
That said, the few demos I've used in person have been truly terrible. IBM had a demo of some elements of Watson - where they fed it a copy of a guidebook relating to one of their properties. You could then ask questions e.g. who designed the building, how old is it, etc.
It was utterly unable to manage these responses. And this on a sales pitch.
Lot of people getting big lunches out of ML, but not from me.
Also, how do we teach an ML model morality, when a significant proportion of the population lack it?
AIs are as reliable as dogs. We think we know what they're up to, but we can't tell exactly what parts of the training are being picked up on, e.g. it has been shown that drug dogs react to almost imperceptible cues from their handler rather than the scent of drugs or money.
To trust our money, our freedoms and even our very lives to systems you cannot reliably interrogate or understand is madness.
Even better! Just let the AI tell him he's fired and handle the termination process. That way, no management or HR person will be traumatized in the process.
There are two main reasons AI use is spreading like wildfire. First, nobody is to blame in case of a mistake: the machine did it, and the word "intelligence" gives the assurance that there was no mistake. Second, there is no recourse for the victim: the algorithms and rules are kept hidden, and nobody is able, or willing, to investigate.