EU AI Act still in infancy, but those with 'intelligent' HR apps better watch out

As the world's first legislation specifically targeting AI comes into law on Thursday, developers of the technology, those integrating it into their software products, and those deploying it are trying to figure out what it means and how they need to respond. The stakes are high. McKinsey says 70 percent of companies will …

  1. abend0c4 Silver badge

    Customers of ours are getting two messages

    It's merely moments since those customers were on one side being sold the "advantages" of collecting every possible fragment of information about their own customers and on the other being subject to the constraints of Data Protection. Somehow they survived.

  2. Anonymous Coward

    "for instance in an HR surrounding, on what basis the decision is taken by the AI to recommend candidate A instead of candidate B."

    First, you're handing over control of which candidate to choose to a computer program that previously didn't tell you why?

    Second, seems like "why A instead of B" is something that should be required to be documented, whether done by a program or a human.

    Just wait for the "reason" to turn out to be "candidate A is very gung-ho about deploying AI".

    1. Evil Auditor Silver badge

      I couldn't agree more. I just have the impression that what reasonable, sensible people would do is not necessarily what happens in the corporate world.

      1. StewartWhite Bronze badge
        Stop

        Magic 8 Ball

        Somebody seems to be channeling their inner Bill Shankly re football being far more important than life or death into "[AI] is probably as important, if not more so, than the industrial revolution, than the internet."

        No it isn't.

        Re HR, I just asked OpenAI to provide a simple yes/no to the question of whether I should be hired as a new Head of IT without providing any other information whatsoever. Its instantaneous answer was an unequivocal "Yes" although I'm wondering whether it's just a Magic 8 Ball so I'd get a definitive "No" if I were to ask again - it would explain a lot.

        1. Anonymous Coward

          Re: Magic 8 Ball

          ha. Bates surely all hype ahead with the nonpareil

          As to HR functionality, the decision to recommend you as new Head of IT goes way beyond a simple coin-flip. Since your first online experience (and prior), we have monitored your ----- ---- performance meticulously. Beyond your significant contributions to -------* and -------*, your behavior has been deemed ideal for the role. And in spite of your, shall we say, indiscretions (do we need to remind you of -------? or that time you ------- --- ----- in town?) the totality of your profile assures us you are indeed the right person for the job. Please sign the employment agreement at your earliest convenience.

          *------- redacted as we're not quite sure we should know this** yet

          ** failure to sign the agreement could lead to release of ------- above - to lead us toward compliance

      2. BobChip
        Big Brother

        Not necessarily....

        "Sensible, reasonable people" do not, in my experience, work in HR. And for that reason, they are unlikely to employ such people. Give HR AI to play with and "just because I felt like it" comes across as an entirely "logical and reasonable" answer to the question of "why", or "why not" this candidate or another. This of course also satisfies another fundamental principle of HR in being unhelpful. I will now prepare for an onslaught of rotten vegetables........ No vinegar please...

        1. Ian Johnston Silver badge

          Re: Not necessarily....

          I have never met or dealt with anyone from HR who was not a waste of valuable phosphorus. They exist to cover employers' arses and should never, ever be involved in decisions about whom to hire. What does an HR person know about the skills needed to do ... well, anything useful, really?

          The idea that they would base hiring decisions on a black box algorithm is depressingly unsurprising.

          1. werdsmith Silver badge

            Re: Not necessarily....

            HR people are there to make sure that the company can only mistreat people within the constraints of the law, and therefore avoid liabilities.

  3. Evil Auditor Silver badge

    The two messages my customers get are: 1) know what you are doing, 2) manage the risks of your endeavours. No matter whether this is running a hospital, building bridges, or using AI. Raising awareness at the top level, even if through the legislative threat of enormous fines, is not the worst outcome.

    The EU AI Act is a bit of a monster and I wish it were a little easier to handle, as I'm also in the business of figuring out how to implement it. The key elements I got are: assess the risks of your "AI"; be transparent about what your "AI" is doing and how; get consent from the owners of any data you intend to use in your "AI"; and place adequate controls in and around your "AI". In this regard, and also regarding liability, it's not dissimilar to GDPR. Stop moaning. It will not stifle innovation either.

    1. David 164

      Of course not. Anyone innovating in this field will just move to where they can innovate more freely.

      1. Evil Auditor Silver badge

        The EU and the USA, which will likely follow soon with their own copy of the AI Act, are both too large a knowledge base and too large a market to ignore their regulations. So no, innovation will not move somewhere else.

        1. bigtimehustler

          It may move to china.

        2. David 164

          You develop your models and your technology outside of the EU, then do the bureaucratic hurdle-jumping afterwards. By that point your tech is so far ahead that the EU tech companies, who have had to jump those hurdles from day one (and US companies too, if they follow the EU), can't compete with your technology once you decide to expand into those markets.

          See the Chinese and their electric car technology, or even TikTok, where Facebook's and the US car companies' only response is to run to Congress to ask for protection.

    2. Anonymous Coward

      insightful

      good breakdown there.

      "assess the risks of your "AI"; be transparent what and how your "AI" is doing stuff; get consent from owners of data you intend to use in your "AI"; and place adequate controls in and around your "AI"

      this is an awful lot of work, and it is in a new area too - double pain. The EU is killing AI development in Europe. Thank the Lord we got out, but that's a whole different can of worms.

      It will completely kill AI here. F off! I'm not assessing the risks of my AI or being transparent. Are you mad? Do I have to do that with any other tech? No. This legislation is work for pen pushers - you know the 90% of the working population that don't really do anything - like HR, Marketing, all the Ark B lot. This has come from that group of returds that barely do enough work to keep the job. They can see that their BS has passed its sell-by-date and are trying to crimp the AI bandwagon. Whores.

      And you have more chance of

    3. Justthefacts Silver badge

      If that’s all you got from your reading of the AI Act, then you haven’t read it at all. If this really affects you, and you seem to think it does, get a legal firm, a good one, and get them to walk your leadership team through it line by line, with Q&A.

      Don’t try to do it on the cheap; anyone who knows what they are talking about is going to be charging £1,500 per hour, and it’s a 100-hour job for a partner plus a couple of juniors. You aren’t getting out of there for less than £200k, and that’s if your business is small (<50 people) and simple. More complex businesses will be spending tens or even hundreds of millions in legal fees just *analysing how the legislation applies to them*. Not implementing anything, just the legal fees.

      You’ve *seriously* underestimated this if you think it’s like GDPR.

  4. Orberi

    "..but if you do the wrong thing in AI, you could be fined, which effectively would mean the entirely senior management team would be fired.." When has this ever happened?

    1. Herring`

      When they say "fired" what they actually mean is "let go with a massive payoff and sure to land another similar gig soon".

  5. Eclectic Man Silver badge

    Training data and regulation

    If the providers have to tell how the AI was trained, does that mean that all those LLMs will have to fess up which copyrighted works they used, and then have to pay the authors / estates?

    Also there is, I think, some hypocrisy going on with the 'how can you promote AI with one hand and regulate it with the other?' line. Think about a surgeon with a scalpel: if the operation is a success, the patient recovers, but if the surgeon cuts the wrong thing(s), then they could rightly be charged with malpractice. I recall one surgeon who was very good, but had the unfortunate tendency to leave his initials literally burned into his patients' livers. No one would suggest, I hope, that the medical profession should either be denied research funds or go unregulated.

    AI implementations have been shown to implement unrealised racism, sexism and class-ism, to promote those 'types' of people already doing the jobs, and to favour the people who wrote the code or chose the training data (often young, white, 'western', middle-class males). There are innumerable cases recorded on the Register of facial 'recognition' software getting things wrong, being misused, or just being believed because it suited the bias of the people using it, or because they could not be bothered to try harder. It is a wonderfully effective way of implementing unrealised bias.

    I am not against AI per se, but those promoting it have to realise that those of us unto whom AI will be done should have a say in how it is built and how it is used, and the creators and users need to be much more aware of its limitations and dangers than they seem to be.

    1. Anonymous Coward

      Re: Training data and regulation

      bravo

    2. Jellied Eel Silver badge

      Re: Training data and regulation

      AI implementations have been shown to implement unrealised racism, sexism and class-ism, to promote those 'types' of people already doing jobs, and supporting the people who wrote the code or chose the training data (often young white, 'western', middle class males).

      But this is the kind of bias the legislation is meant to help avoid-

      From the beginning of February next year, prohibited activities will include biometric categorization systems that claim to sort people into groups based on politics, religion, sexual orientation, and race.

      In most of Europe, you would expect employees to reflect the demographics. Those are majority white, western and middle class. But this obviously conflicts with DEI policies, which may conflict with this legislation if HR departments use 'AI' systems to screen or select candidates. As a business, you really want the best candidate, not necessarily the best candidate from a narrow pool. I've been on both sides of this debate: figuring out ways to hire 'blind' and avoid possible future discrimination claims, and also having HR types question my lack of a social media profile. That just means I can be discreet, but employers often use social media trawling to score candidates, even though those profiles are often entirely unrelated to the job on offer.

  6. ecofeco Silver badge
    Holmes

    Simple idea

    Don't use AI in the first damn place.

    It's elementary. -------------------------->>>

    1. David 164

      Re: Simple idea

      So we shouldn't use Ai to develop newer, better drugs faster?

      1. Ian Johnston Silver badge

        Re: Simple idea

        How does fancy autocorrect help with that? This just in: computers have been used in drug development for years.

        1. Justthefacts Silver badge

          Re: Simple idea

          Alphafold, for prediction of protein structure and therefore binding site structure

          https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10136805

          You’re right, though, Alphafold has been used for “years”. My company designed and uses what is essentially Alphafold in *non*biological engineering production, has been doing so for nearly two years. So I guess that’s “years”

  7. GraXXoR

    Asian woman in thumbnail has an extra ear behind her ear.

    1. TimMaher Silver badge
      Pint

      Ear ear!

      Friday.

  8. David 164

    Once again the Americans and the Chinese will capture virtually the whole market. Maybe the UK will go along with them and get a slice for themselves.

    1. werdsmith Silver badge

      Government shelves £1.3bn UK tech and AI plans

      The new government sees no value in a domestic tech industry.

  9. HMcG

    “ Last year, Meta's chief AI scientist, Yann LeCun, said regulating foundation models was effectively regulating research and development. "

    This would be the same Meta that just settled with the state of Texas for a billion dollars, for using AI to illegally scrape biometric data to name-tag photos without permission? I think we can do without that guy's opinion on AI regulation.

  10. Lee D Silver badge

    Literally unnecessary.

    Enforce GDPR, this all goes away.

    You want my data, I have to give explicit consent.

    You process my data, I need to know how and why.

    You use my data to make decisions, those decisions need to be clear on how they were arrived at.

    Anyone making *HR* decisions with *AI*, no matter how small, should just be sacked.

  11. xyz Silver badge

    Lol

    Intelligent and HR in the same sentence. What are the odds?

  12. The Dogs Meevonks Silver badge

    I gave up work in 2010 to help care for my disabled dad, as my mum was struggling to cope with his Parkinson's and dementia. I'd also bought a house 18 months earlier and worked out that I could just cover the bills with a little bit of help from them to cover some of my fuel/food.

    I decided to get a part time job locally to help ease the issue... and one of the few places hiring part time after the crash of 08 was the local B&Q. I had to fill out the online test and a week later got a 'you're not suitable' response from some automated system.

    It made me angry... because these systems are next to worthless and provide nothing of value to employers at all... the answers to these tests can and will change for each individual based on their mood on that specific day.

    It made me angry enough that I decided to fuck with them.

    Half a dozen new applications went in with fake names/addresses (using friends and family with consent) and the 'test' was gamed after a bit of reading up on the best way to answer them. Out of the 6, I got 4 interviews.

    I turned up for one of them, told them exactly what I thought of their worthless system and that the no-shows at the other interviews had been me... and I walked out.

    Petty... yes

    What it did for my self worth, a great deal

    Value... seeing their faces... priceless.

    1. werdsmith Silver badge

      You didn't actually affect the head office people behind the process at all, just the poor folk in the local B&Q that were given the hapless task of using it.

    2. Ace2 Silver badge

      What’s a B&Q?

      1. yoganmahew

        Hardware store in England and environs.

    3. Justthefacts Silver badge

      I’m sorry that you had a tough time of it. But the reality is that many people *can* do those jobs, which means that there’s really no good means of selection. It seems you would have preferred a human face to interview you... but the evidence is that interviews by humans are also little better than chance at predicting “success in the role”.

      The job of interviewing should be targeted at one thing and one thing only: making sure you don’t pick anyone in the *worst half*, because picking the actual winner is impossible. And that’s why the advice for interviewers always centres around “red flags”. A single red flag is a no-go, even though the chance of them being the actual best is not so very different from the others; but you want to avoid any in the worst half, and that’s difficult enough.

      The evidence base is that the only things that predict success in the role are either an objective skills-based exam on the task (which I’m afraid doesn’t really apply to B&Q), or “has done good work for me in the past; has worked for my friend whom I trust and comes with a personal recommendation”. Jobs for the boys, basically, with all the problems that brings.

      So, the honest thing for B&Q to do is just roll literal dice and hire anybody who gets a double 6. This doesn’t make anybody feel good.

      1. LybsterRoy Silver badge

        Having worked in recruitment for far too many years, I wish I could upvote you hundreds of times. The truth is most people have no idea how to interview (or be interviewed).

        My favourite was one of the very few occasions where I returned a retainer fee. It was a mobile phone company that wanted some "innovative engineers", but their selection criteria were mainly based on candidates coming from a very limited set (two) of universities.

  13. MacGuffin

    Misguided Innovation

    Too often I find that “strangle tech innovation” means there is “concern” that regulation will “strangle” innovative new ways to collect fees and lock in subscriptions. The “innovations” are not for the general populace. Most “innovations” I have witnessed are innovative ways to block access, throw up paywalls or siphon off the revenue stream.

    An example “innovation” would be transaction fees, where your phone and credit card companies collect fees for the “convenience” of not using actual cash. Or “cashless” payments, which basically require a cell phone plan to participate. The cell plan itself is an innovative way to extract funds, by making having a plan mandatory.
