The burden is as always...
Good luck proving this. I guess it depends on how the legislation is written but good luck anyways.
A newly proposed amendment to California's hiring discrimination laws would make AI-powered employment decision-making software a source of legal liability. The proposal would make it illegal for businesses and employment agencies to use automated-decision systems to screen out applicants who are considered a protected class …
You just outlaw automated sorting of resumes, "AI" or not. Period. It does not work anyway. In the States, if it's being used, it will come out in the discovery phase unless the company wants to compound its woes by engaging in perjury. I doubt many HR employees would want to perjure themselves "for the company".
The state has a bot that submits the same CV with male/female/different-ethnicity names - runs stats on the results - takes the company to court.
The company either has to argue that it is guilty of using ML which just looks for CVs that match the board members, OR that it has people deliberately screening out women and low-albedo candidates.
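For what it's worth, that kind of correspondence audit isn't hard to automate. A minimal sketch in Python, assuming a hypothetical submit_cv() hook into the company's application workflow (everything here is illustrative, not any real agency's tooling):

```python
import random
from scipy.stats import fisher_exact

# Hypothetical audit bot: the SAME CV goes in under names signalling different
# groups, and screen-in rates are compared. submit_cv() is a placeholder for
# whatever actually files the application and detects a pass/reject.

def submit_cv(cv_text, applicant_name):
    raise NotImplementedError("hook this up to the real careers-portal workflow")

def run_audit(cv_text, group_a_names, group_b_names, n_trials=200):
    counts = {"a": [0, 0], "b": [0, 0]}  # [screened in, screened out] per group
    for _ in range(n_trials):
        for group, names in (("a", group_a_names), ("b", group_b_names)):
            passed = submit_cv(cv_text, random.choice(names))
            counts[group][0 if passed else 1] += 1
    # Fisher's exact test: how likely is a gap this large if the name were irrelevant?
    _, p_value = fisher_exact([counts["a"], counts["b"]])
    return counts, p_value
```

A sufficiently small p-value on identical CVs is exactly the kind of statistical evidence a regulator could take to court, regardless of what is going on inside the model.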
Let me publicly comment that they should probably scrap this and start over. Not because the basic idea is bad, just that this is exactly how idiots over-regulate things without actually accomplishing their goals. This proposal is a mountain of bureaucratic paperwork with no teeth to prevent abuse. As sreynolds said, good luck proving this. Having the model data does not mean the regulator will understand any of it. The way it's written, it is only useful in a prolonged investigation, which is to say after someone's rights have been violated.
Scrap this and replace it with language requiring companies that want to use this kind of technology to show that they understand and can control its biases BEFORE it's deployed in the first place.
We make them do drug trials; we should make them do something similar for machine learning algorithms.
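A pre-deployment bias test is easy to state, even if the legal language is hard. A rough sketch, using the EEOC's four-fifths rule purely as an illustrative threshold (the function names and the 0.8 cutoff are my own assumptions, not anything in the proposal):

```python
# Run the screening model over an evaluation set and compare selection rates
# across groups BEFORE the system ever touches a live applicant.

def selection_rates(decisions, groups):
    """decisions: 0/1 screen-in outcomes; groups: the group label for each candidate."""
    totals, selected = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        selected[g] = selected.get(g, 0) + d
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, groups, threshold=0.8):
    rates = selection_rates(decisions, groups)
    best = max(rates.values())
    if best == 0:
        return False, rates
    return all(r / best >= threshold for r in rates.values()), rates
```

Fail the check, don't deploy - the same way a failed trial keeps a drug off the market.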
>Not because the basic idea is bad... without actually accomplishing their goals
Cynically, you would say that this achieves the goals of telling their voters "we stopped evil AI", while telling the businesses that fund them "don't worry, it won't change anything", while the lawyers on both sides figure they can bill $$$ arguing over it.
Scrap this and replace it with language requiring "companies that want to use this kind of technology to show that they understand and can control its biases BEFORE it's deployed in the first place"
And exactly how do you propose to do this? The vast, vast majority of people using tech have no idea how it works, let alone how to prove that it is working as expected - if they understood the tech before it was implemented, World+dog wouldn't even be having this discussion. They would understand the limitations and either program accordingly or only use the program for guidance, not final decisions. It is the lack of full understanding of what is happening inside these black-box "AI" programs that is causing the concern.
Rather than banning the use of automated systems, simply make a law that companies are liable for any bias in those systems, whether or not it was intended. Companies will either stop using it, or their lawyers will insist that vendors prove that their system isn't biased before using it. Win-win.
If human beings can be made to justify their decisions, then AI software should also. If the software can't tell you WHY it made the decision, then hold the users in contempt. Yes, the users. Start fining the people/companies who use the software and see how quickly they flee developers who sell black boxes.
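To make "WHY" concrete: with an interpretable model (say, a plain linear scorer) every rejection can carry a per-feature breakdown; a true black box cannot produce this, which is the point. A hypothetical sketch, with field names and structure of my own invention:

```python
import numpy as np

def explain_decision(weights, feature_names, candidate_features, bias=0.0):
    """Return the score plus the features that hurt the candidate most."""
    contributions = weights * candidate_features   # per-feature contribution to the score
    score = float(contributions.sum() + bias)
    worst = np.argsort(contributions)[:3]          # most negative contributions first
    return {
        "screened_in": score > 0,
        "score": score,
        "top_negative_factors": [(feature_names[i], float(contributions[i])) for i in worst],
    }
```

If a vendor cannot produce even that much for each decision, the users of the software have no business claiming the rejection was justified.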
Let me illustrate exactly how ATS software is flawed and discriminatory.
Suppose an HR department uses externally supplied software (some service providing it) that rejects an experienced doctor because he has had too many patients, or a lawyer because he's won too many cases.
Sounds absurd? Well, what has been happening is that companies who put up job solicitations for contractors REJECT experienced contractors for having too many projects on their resume. The flawed reasoning used by the ATS systems is that if someone has had too many jobs, they are automatically classified as a job-jumper and unreliable. And so the applicant gets rejected, no matter how qualified or how good a fit they are. The problem comes about because ATS software companies wrongly apply criteria suited to judging salaried employees to contractor technologists. The HR department receives data with the applicant scored low because the wrong criteria were applied, and so a needed, qualified candidate is rejected.
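To make the flaw concrete, here is a hypothetical version of that "job-jumper" rule (not any vendor's actual code): a tenure heuristic tuned for salaried staff buries a contractor whose resume legitimately lists many short engagements.

```python
def stability_score(positions, years_of_experience):
    """Naive 'job-jumper' heuristic: penalise short average tenure per position."""
    avg_tenure = years_of_experience / max(positions, 1)
    return min(avg_tenure / 3.0, 1.0)   # 3+ years per role scores a full 1.0

salaried_dev = stability_score(positions=4, years_of_experience=15)    # ~1.0, sails through
contractor   = stability_score(positions=25, years_of_experience=15)   # 0.2, auto-rejected
```

Same 15 years of experience; the contractor is scored as a flight risk because the rule never asks whether the roles were supposed to be short.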
My personal experience with this has been incredibly frustrating, and it shows the need for legislative action. I've actually had several cases where I was provably the only one in the industry who matched certain criteria (because I was the one, the originator, who created the knowledge area they were looking for, and there are no other experts on it yet!) but I was rejected on the grounds of too much experience (too many jobs). Frustrating.
There are certain ATS software suppliers whose product is garbage but they have strong sales departments. These companies need to be reined in, legislatively.