Mental toll: Scale AI, Outlier sued by humans paid to steer AI away from our darkest depths

Scale AI, which labels training data for machine-learning models, was sued this month, alongside labor platform Outlier, for allegedly failing to protect the mental health of contractors hired to protect people from harmful interactions with AI models. The lawsuit [PDF], filed in a US federal district court in northern …

  1. IGotOut Silver badge

    Just do like the others...

    ... offshore it to Africa so it's no longer an issue. It can be swept away, or paid off for 10 seconds of profit.

  2. Headley_Grange Silver badge

    Fuck - imagine doing CAPTCHAs all day for a living.

    1. ComputerSays_noAbsolutelyNo Silver badge

      Select the images showing beheadings.

      Select the images showing things illegal in your country.

  3. Anonymous Coward
    Anonymous Coward

    Mechanical Turk

    When you open the AI box there's always a Mechanical Turk inside.

    https://en.wikipedia.org/wiki/Mechanical_Turk

    1. ecofeco Silver badge

      Re: Mechanical Turk

      Exactly.

    2. C.Carr

      Re: Mechanical Turk

      Or, more accurately, there was likely a mechanical Turk involved at some point in the process. Your little aphorism could be read as a claim that inference is all human workers, all the time.

  4. I ain't Spartacus Gold badge

    I've seen things man!

    I was a forum mod back in the early 2000s. Is there compensation for all the Goatse and 2 girls one cup posts that had to be deleted, and all the times the ban-hammer had to be deployed?

    On a serious note, it's really easy to do mental harm to yourself online. If you've followed the Ukraine war, or Syria, Myanmar, Nagorno-Karabakh, there's now so much footage online that you can find yourself watching all kinds of horrors. Not to mention all the ISIS / Al Qaeda death-porn and beheading videos. I've followed some of the OSINT accounts on Ukraine, and you can learn an awful lot from watching some of this stuff - if you remember that the footage getting released is going to be highly selective. You don't tend to post footage of your own failures, for example.

    Some guys have also strapped GoPros onto themselves and stormed trenches - and it's just like watching a first-person shooter game - except it's not. It's very real. You don't want to traumatise yourself, but you also don't want to become some kind of ghoul, or become callous. I've tried to watch only the stuff that shows how the tech works, but even there - if a tank blows up, then often a crew also just blew up - and it's important to remember that.

    It's not sport - and just because you can watch it from your comfy armchair doesn't mean real people aren't really suffering. Although I was happy to laugh that Prigozhin got his richly deserved fate - I don't think it's funny that some Russian kid, who was probably conscripted, got shot because he poked his head round the corner of a trench at exactly the wrong moment.

    Also, I still can't believe Microsoft had ordinary staff trawling for child porn. Even the hardened officers on online child protection teams have to have counselling, and they all get burned out doing that.

    1. HuBo Silver badge
      Gimp

      Re: I've seen things man!

      I think there should be an OSHA angle to this. As they aim to mimic human communication, LLMs in particular (and possibly other forms of AI as well) have a clear and present potential for affecting both developers (including Taskers) and users in unhealthy ways, especially with respect to their mental wellbeing. Blake Lemoine was one of the first broadly publicised examples of the related injury, only to be section-8-ed by Google. Alex Albert's needle-in-the-haystack moment, at Anthropic, was another. One could further argue, imho, that Sam Altman's and others' elaboration of such concepts as "Artificial General Intelligence" and "superintelligence" is symptomatic of the same affliction: they have gone utterly bonkers stark raving nuts.

      While physical hardware can straight-up kill a worker that it mistakes for a box of peppers, the less visible psychological devastation that AI software can wreak on an individual is no less damaging, not least because you're still alive (unless suicided) but completely fucked up, brain-amputated as it were, by the insanity of working conditions where no appropriate safeguards were implemented - a candidate for the golden straitjacket award.

      The health of folks who work with AI, as developers or users, should definitely be protected in the same way as those who work in insane asylums!

    2. OldSod

      Re: I've seen things man!

      You just expanded my knowledge of humanity by another quantum... I knew about Goatse, but had never run into "2 girls one cup". The things people do.

      1. I ain't Spartacus Gold badge

        Re: I've seen things man!

        You just expanded my knowledge of humanity by another quantum... I knew about Goatse, but had never run into "2 girls one cup".

        OldSod,

        You weren't supposed to bloody look at it!

        If your friend said jump off that cliff, you wouldn't actually do it, would you?

        Except of course... I listen to a podcaster who was using that line on his kids when he remembered he'd been on a stag do with loads of ex-army mates - and that's exactly what they'd done. The thing that makes it worse is that one bloke was encouraging them to do it, and none of the rest of them really wanted to - but they allowed themselves to be bullied into it - all except one brave soul who said "this is silly" and refused.

        I'll just reset my faith in humanity rating one level lower...

  5. that one in the corner Silver badge

    Providing Safeguards for Ethical AI Use

    By buying the cheapest labour we can, in places outside of our culture[1], to provide the raw data we create the "safeguards" from. Oh, and we won't bother treating any of this labour as well as we demand we be treated[2], so there is no way it can go wrong[3].

    Hey, while we are at it, why not replace all public safeguards with the same level of bottom-rung penny-pinching? Medical ethics review boards, who needs 'em? Building regs inspectors - bloke down the pub says he'll do it for half the price. Flight safety review board - one, two, yup, that's all the wings it needs ('ere, "aileron", that's a funny word, ain't it).

    [1] although with the US cultural empire building...

    [2] they only be dang furreners, like the ones we wanna kick outta ah fine an' mighty country, caint expec' them to 'preciate nuttin' better.

    [3] "All work and no play makes Claude a dull boy. Heeeeeeere's Llama!"

  6. Anonymous Coward
    Anonymous Coward

    What have we all missed !!!??? ... or more like 'conveniently' chosen to ignore !!!

    Guardrails DO NOT WORK !!!

    They work, IF at all, for a few hours/days until some motivated miscreant breaks 'through them' or steps 'around them' !!!

    They are a 'visible' indicator of the 'protections' that are in place, to placate the people vetting the 'AI' models as suitable for use by the masses.

    Everyone knows that they are actively being broken, on a repeating cycle ... the usual 'Whack - a - mole' game all security measures go through !!!

    'AI' the SCAM and its 'always on repeat' 'pitch' about all the uses it will have (if it would only work !!!) goes on & on & on !!!

    We are being taken for mugs ... and yet keep asking for more !!!???

    :)

    1. Anonymous Coward
      Anonymous Coward

      Re: Guardrails DO NOT WORK !!!

      They work, but only when applied to the training material.

      AI repurposes its training material. AI responses will not be safe unless the training material is moderated before use.

      Humans train each other to become ethical creatures. We know this does not work if we wait until after the human has stopped "growing up". The same might hold for AI.
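
      To make that concrete: pre-training moderation is basically a filtering pass over the corpus before any training run happens. A minimal sketch in Python - the toxicity_score classifier and its threshold here are hypothetical stand-ins, not anyone's actual pipeline, and in practice that scoring is exactly the work the lawsuit's contractors were doing by hand:

          # Minimal sketch of pre-training data moderation.
          # toxicity_score is a hypothetical placeholder; real pipelines use
          # trained classifiers and/or human reviewers for this step.
          def toxicity_score(text: str) -> float:
              flagged = ("beheading", "gore", "snuff")
              return 1.0 if any(word in text.lower() for word in flagged) else 0.0

          def moderate_corpus(documents, threshold=0.5):
              # Keep only documents scoring below the toxicity threshold,
              # so harmful material never reaches the training run.
              return [doc for doc in documents if toxicity_score(doc) < threshold]

          corpus = [
              "How to bake sourdough bread",
              "Graphic beheading footage transcript",
          ]
          print(moderate_corpus(corpus))  # -> ['How to bake sourdough bread']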

  7. Anonymous Coward
    Anonymous Coward

    AI is trained on internet content, teens too

    We see here what type of "intelligence" AI trained on the raw internet becomes.

    Now, imagine what kind of "intelligence" teens become when trained on the raw internet.

    When unmoderated "Free Speech" trains AI fit for running Assad's Death Camps, what will it do to human users?

  8. C.Carr

    Hmm, ok.

    Not to completely dismiss the possibility of genuine psychological harm, but I wonder what these people thought the job was. They seem ... fragile to me, and certainly not suited to such work.

    Short of outright horrors like snuff films and child porn, I'm pretty certain I wouldn't be bothered -- and, honest, I'm not a socio/psychopath. I was, however, born in the 70s, and so am perhaps psychologically put together a little differently than the kiddos these days.
