Judge demands social media sites prove they didn't help radicalize mass shooter

Some of the largest social media platforms in the world will soon try to convince a US court their platforms did not contribute to the radicalization of a mass shooter who killed ten people and injured three more in a New York grocery store in 2022. Depending on the outcome, the case could reshape liability rules for social …

  1. cyberdemon Silver badge
    Pint

    IANAL

    Meta, Reddit, Twitch owner Amazon, YouTube owner Alphabet, plus Discord and 4Chan (where is TikTok?)

    But it seems to me the only ones in this group with a defence are Discord and 4Chan, i.e. the ones acting as "plain-old messageboards" without promoting content to, er, "like-minded users"?

    Reg needs a popcorn icon. Or maybe dry-roasted peanuts.

    1. Anonymous Coward
      Anonymous Coward

      No clean hands

      It will be interesting to watch how the claims develop in court, but they don't have to be one size fits all. Meta's properties are built on ultra-invasive profiling, targeting, and monetization. Neither Insta nor Facebook has clean hands; quite the contrary, they are eyeballs deep in a literal genocide. YouTube is in the same boat, steering people to dangerous but viral content and making piles of cash off the traffic. They don't passively serve content, they shovel it down people's throats and aggressively steer users into fraudulent and dangerous content to sell ad impressions. Reddit promotes posts, and has turned a blind eye to toxic and criminal content for ages. Some communities are eventually shut down, but like Meta and Google, its internal content moderation team is incapable of keeping up with the flood of violating content, so only a fraction of the worst material is removed by the company. Reddit also turns a blind eye when banned communities partially decamp to another platform but maintain a toehold on Reddit that tacitly points users to offsite troves of violating material.

      4chan and 8* are structured as traditional message boards, but they glorify both the content and the community behind it, so they are in effect promoting ALL the posts, not just the ones chosen by an algorithm. That makes them MORE complicit, not less, and just being a message board doesn't protect you, as the old Stormfront, Atomwaffen, etc. found out. That said, it is not automatic, and in the narrower case of this shooting it will be on the plaintiffs to provide additional evidence.

      Discord is tricky, as it wasn't founded as a haven for poorly socialized (literal or mental age) adolescents. That said, it was quite slow to act to clean up toxic or criminal servers, and it did promote both servers and content by various means over the years. That is one I am watching for in the upcoming trial, but it seems the most likely to be dropped at this point.

      Your call-out on TikTok points to quite a glaring omission from the discussion, at least. Their "algorithm" included a feature to massively boost specific content essentially at the push of a button, as well as targeting data second only to Facebook and Google. Viral content on TikTok regularly promotes self-harm, propaganda, bogus medical scams, hate speech, and conspiracy theories promoting extremism and outright terrorism. But based on this suit, the companies called out are the ones the plaintiffs can link to this case, so while you are right that TikTok is a huge problem, it may not play a part in this particular suit.

      1. SundogUK Silver badge

        Re: No clean hands

        "...they are eyeballs deep in a literal genocide."

        That's a bold claim with zero evidence.

        1. deive

          Re: No clean hands

          Don't you remember the Rohingya incident?

          https://www.theguardian.com/technology/2021/dec/06/rohingya-sue-facebook-myanmar-genocide-us-uk-legal-action-social-media-violence

      2. Wzrd1 Silver badge

        Re: No clean hands

        One problem with your characterization of the 4chan and 8chan message boards as being more responsible despite not promoting the objectionable content: you basically crafted a position of "heads I win, tails I lose, now prove your innocence beyond the shadow of the most unreasonable doubt", which is the antithesis of what our judicial system requires.

        And defending those two groups makes me feel physically ill!

        The antisocial media corporations have algorithms that steer users to what is detected to be preferred content of interest, largely in order to display advertisements for profit that would also interest the victi - erm, user. The xchan boards do not, they simply are message boards for flakes and malcontents.

        So, with zero respect to the plaintiff, it appears that both products are operating as designed, regardless of objectionable content. It would be like trying to hold Ford responsible because a driver of a Ford truck decided to drive over a crowd at a bus stop. When the "defect" is the user utilizing a device or service, the product isn't defective, the user is.

        Hence, the matter should not be addressed by the judicial system, but via the social and hence political system. You don't use a scalpel to screw together a bookshelf.

        1. Anonymous Coward
          Anonymous Coward

          Re: No clean hands

          I object to your classification of me as either a flake or a malcontent. Where, in addition to 4chan, can I get ancient drawings of cities, graceful pictures of airliners in flight, recommendations for bicycle switch gear, and Japanese hentai pornography, all on the same website with just a couple of clicks?

        2. John Brown (no body) Silver badge

          Re: No clean hands

          "It would be like trying to old Ford responsible because a driver of a Ford truck decided to drive over a crowd at a bus stop. When the "defect" is the user utilizing a device or service, the product isn't defective, the user is."

          That argument falls at the first hurdle, because Ford are not pushing content at you showing badly behaved Ford drivers. Even if you ask Ford for information about badly behaved drivers, they aren't going to give you any, let alone encourage you by giving you more than you asked for. But go to YouTube and search for Ford crashes and it will not only provide multiple examples, it will keep pushing more and more similar items at you, and the more of them you click on, the more it will keep serving up.

    2. Andrew Hodgkinson

      Re: IANAL

      I agree, but I don't think that's what these lawyers are going for and that really confuses me.

      The issue of being a "simple message board" vs something 'more' is The Algorithm™ - the way that most of these sites actively push content at you to which you are not subscribed. This instant echo-chamber creation has been the subject of numerous studies showing that it causes all kinds of harm. Radicalisation is an obvious outcome; you show interest in something, so you're shown more of it, some of that will doubtless be more extreme, and so we proceed to circle down the drain.
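
      To make the mechanism concrete, here's a toy Python sketch of that feedback loop - invented numbers, not any platform's actual code - where "show more of what you engaged with", plus a small bias toward higher-salience material, steadily drags a profile toward the extreme end:

      import random

      def recommend(interest, catalogue, n=5):
          # Serve the items closest to the user's current interest level,
          # nudged slightly toward the more extreme end of the scale.
          return sorted(catalogue, key=lambda item: abs(item - (interest + 0.05)))[:n]

      random.seed(1)
      catalogue = [random.random() for _ in range(1000)]  # 0.0 = mild, 1.0 = extreme
      interest = 0.1

      for session in range(50):
          shown = recommend(interest, catalogue)
          clicked = max(shown)                        # emotional salience wins the click
          interest = 0.9 * interest + 0.1 * clicked   # engagement updates the profile

      print(f"interest after 50 sessions: {interest:.2f}")  # well above the 0.1 it started at

      No single step looks sinister, but run the loop long enough and the drain awaits.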

      This is further worsened by the lack of any serious attempt to actually defend against fake news, which now includes audio, still-photo and video deepfake content. At the scale these platforms operate it's very hard to do, but they only got that scale in the first place by ignoring all of these issues, putting just about no resources into them and pocketing all the spare cash. Cry me a river should the legally mandated cost of doing business go up in order to clean up the cesspit that they themselves created. They can afford it, and then some.

      Without a recommendations algorithm, users would have to actively search for content of interest. Specific message boards / pages / groups / Subreddits / whatever-you-want-to-call-them that specialised in "radical" content would be easy enough for automated searches to find and flag for human moderator review, if they're easy enough for regular users to find. With an appropriately scaled and resourced review team, both "objectionable" (FSVO "objectionable") and outright per-country/state illegal content would be found more rapidly. Disinformation is harder to tackle, but the framework for tackling it can only be established once the aforementioned review infrastructure is present.
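
      The flagging half of that needs nothing clever, either. A crude Python sketch - every term, threshold and board name here is invented purely for illustration, and a human reviewer still makes the actual call:

      # Toy triage crawler: if ordinary users can find a board by searching,
      # so can a script.
      FLAG_TERMS = {"exampleslur", "examplecalltoviolence"}  # placeholders, obviously

      def review_queue(boards):
          # Rank boards for human moderator review by crude term frequency.
          scored = []
          for name, posts in boards.items():
              text = " ".join(posts).lower()
              hits = sum(text.count(term) for term in FLAG_TERMS)
              if hits:
                  scored.append((hits / len(posts), name))  # hits per post
          return [name for _, name in sorted(scored, reverse=True)]

      boards = {
          "model_trains": ["lovely loco", "nice shunter"],
          "board_x": ["exampleslur exampleslur", "examplecalltoviolence"],
      }
      print(review_queue(boards))  # ['board_x'] lands in front of a human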

      None of this seems to be the focus of the lawsuit; they seem to be trying to argue over a legal distinction between these things being "a product" or not. That seems pretty difficult. Perhaps the idea of being "a product" is legally "proved" by the presence of a recommendations engine, which implies an immediate, albeit computer-generated, corporate content bias pushed at individual users? Somehow, though, I doubt it...

      1. Anonymous Coward
        Anonymous Coward

        Re: IANAL

        "Without a recommendations algorithm, users would have to actively search for content of interest."

        Without section 230, there would be a lot more human curators employed in selecting content, and a lot less junk. A win, IMO.

        1. doublelayer Silver badge

          Re: IANAL

          Without 230 or something similar, there might be a lot less of everything. If I could be sued for literally any comment someone chose to post, I might be a lot more cautious about letting people post anything that was slightly negative on one of my sites. If I write a post about a product existing and someone comments that the company's build quality, security practices, value for money, or anything else was bad, do I want to take the risk that the company concerned gets angry about that comment existing and tries to threaten me into taking it down? We all know that some companies are that irritating and quick to use the threatening legal letter.

          Yes, that would also significantly reduce the junk out there, including the really unpleasant junk. It is useful to know what the downsides are when considering it, however.

    3. doublelayer Silver badge

      Re: IANAL

      "where is TikTok?"

      My best guess is that they're focusing on services this specific attacker used, and he didn't use TikTok. It's a long list as it is, but maybe he did list all of those as places where he found material that made him want to commit mass murder. That restriction is the only reason I can see why TikTok wouldn't fit into the list. Whether this suit will prove viable is a separate question.

    4. Yorick Hunt Silver badge
      Happy

      Re: IANAL

      Deep-fried whiting goes best with your choice of icon.

    5. jmch Silver badge

      Re: IANAL

      The "plain-old messageboards" argument is the interesting one here, and to my mind the core legal argument here.

      "plain-old messageboards" behaviour is, every post is posted in time order of posting, or grouped by thread if there are replies. These comments right here on The Register are a perfect example. None of the content is in any way promoted by the site, I have to choose to look at the comments for every article myself, and I have to choose to read any comment that I read. The only 'promoting' factors for a post, up and downvotes, are performed by other users, not by the site (and in any case, it is my judgement anyway to choose to read and/or believe any thing posted, regardless of up or downvotes)

      The original Facebook had pretty much this behaviour - I saw every post posted by any of my contacts in order of time of posting - but that behaviour is long lost in the mists of time. What FB (and other social media) does, and has done for a long time now, is to choose which posts to promote and which not, and to insert into everyone's feed not only suggested posts and ads, but also 'you might be interested in this' posts from random strangers or groups.

      What is interesting in this trial is that it is likely to dig into the platform algorithms - what is being promoted and how posts are chosen for promotion. To be honest, I do not expect that anything in the algorithm promotes disturbing / radicalising etc. posts per se. BUT by promoting posts which get the most engagement (reactions and shares), the promotion algorithm is co-opting human emotional biases: humans react strongly to emotive topics like disturbing / radicalising etc. posts, because anger, fear and shock are strong human emotions, and therefore such posts tend to have higher engagement on social media. I do not for a second believe that FB etc. were/are unaware of these mechanisms.
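
      The distinction is easy to state in code. A toy sketch in Python - posts and weights invented, nobody's actual scoring implied - of the two feed orderings being contrasted:

      from dataclasses import dataclass

      @dataclass
      class Post:
          author: str
          timestamp: int
          reactions: int
          shares: int

      def chronological(feed):
          # Plain-old messageboard: newest first, no editorial weighting at all.
          return sorted(feed, key=lambda p: p.timestamp, reverse=True)

      def engagement_ranked(feed):
          # Content-neutral on its face, but anger and shock drive reactions and
          # shares, so emotive posts float to the top by construction.
          return sorted(feed, key=lambda p: p.reactions + 2 * p.shares, reverse=True)

      feed = [
          Post("your friend", timestamp=100, reactions=3, shares=0),
          Post("random stranger", timestamp=90, reactions=850, shares=400),  # outrage bait
      ]
      print([p.author for p in chronological(feed)])      # ['your friend', 'random stranger']
      print([p.author for p in engagement_ranked(feed)])  # ['random stranger', 'your friend']

      Neither function reads a word of the content; the second one simply inherits our biases.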

      With regards to 'defective', I think the argument that FB etc are 'defective' products that are causing harm through some defect is quite wrong. FB etc are causing harm through their 'correct' (ie as-designed) behaviour. Which is, I think, even worse.

      1. MarkMLl

        Re: IANAL

        In short, server + proprietary_protocol + in-browser_custom_client = product.

        I don't like to sound like an open-source zealot, but if somebody - and this includes the likes of Discord - is filtering messages through their custom code, and by use of a proprietary protocol is at least potentially preventing people from getting at the /actual/ postings on the server, then they're going to have a hell of a hard time demonstrating that they're a "common carrier".

        1. doublelayer Silver badge

          Re: IANAL

          The question is not about common carrier status. They are not common carriers, but they don't have to be. The protections of section 230 apply to "information service provider[s]". The distinction is that being an information service allows them to make the information public and show it to many people, including those who were not deliberately targeted. In order to win this case, the plaintiffs will have to do one of the following:

          1. prove or draw a distinction between "information service provider" and something else,

          2. demonstrate that the law itself contradicts some other law or right, or

          3. demonstrate that the platforms do not have to be deemed a publisher to have liability for this case.

          I think they're kind of going for option 3, but I'm not their lawyer, or a lawyer at all, so I can't say that for sure. They're already close to previously decided cases that applied 230 in a related way, so they'll need a new argument or they'll lose on that precedent.

      2. Wzrd1 Silver badge

        Re: IANAL

        "With regards to 'defective', I think the argument that FB etc are 'defective' products that are causing harm through some defect is quite wrong. FB etc are causing harm through their 'correct' (ie as-designed) behaviour. Which is, I think, even worse."

        I completely agree; however, such behaviour is not actionable in court. Otherwise, Ford would be found responsible for introducing a defective product because it functioned correctly when a malcontent driver decided to run over a crowd of pedestrians. The truck operated properly as designed; the operator used it in an unlawful manner.

        1. cyberdemon Silver badge
          Stop

          Re: IANAL

          > Otherwise, Ford would be found responsible for introducing a defective product because it functioned correctly when a malcontent driver decided to run over a crowd of pedestrians

          One of us is misunderstanding something.

          Facebook et al are working "as designed" by promoting controversial/extreme/distressing content from unrelated sources (and NOT displaying what the user is actually looking for, i.e. a chronological list of posts by their friends) in order to drive engagement. To use your analogy, it would be like a Ford truck with an automated sat-nav: it looks at the user's face, notes the traditional Islamic dress, an "AI" draws a correlation with images of terrorists, and the sat-nav suggests "Hi, you look like you want to mow down some pedestrians! Here's a route through a heavily pedestrianised area for you".

          The defect is in the design, not the implementation, but the product is still harmful.

    6. MarkMLl

      Re: IANAL

      You can't connect to Discord without enabling Javascript in your browser, i.e. messages are filtered through Discord's client-side code.

      It's down to Discord to /prove/ that they don't, ever, filter messages in any way.

      Same applies to everybody else, but if they can't do that then they're not a "common carrier" and are liable for content.

      1. Wzrd1 Silver badge

        Re: IANAL

        "Same apples to everybody else, but if they can't do that then they're not a "common carrier" and are liable for content."

        OK, then the postal service is not a common carrier, despite legal code and case law. They filter content (mail) by content (address) and deliver it to the address indicated. So, if any terrorist receives "go" orders by postal mail, obviously the postal service is defective in that argument.

        By that standard, all mechanical devices, from hammers and knives to motor vehicles and highways are defective and need to be shut down, as all have been utilized to commit crimes.

        1. John Brown (no body) Silver badge

          Re: IANAL

          "OK, then the postal service is not a common carrier, despite legal code and case law. They filter content (mail) by content (address) and deliver it to the address indicated. So, if any terrorist receives "go" orders by postal mail, obviously the postal service is defective in that argument."

          Don't be silly. The postal service doesn't open and check your mail and decide to send you more, similar mail that isn't even addressed to you, or push more mail at you that they want you to see while delaying or hiding your actual mail at the bottom of the pile.

          (OK, to an extent they do send you "unwanted" mail, but that's not them sending it, it's the spammers and mailing companies sending it, even if addressed to "The Occupier". The postal service simply delivers what it's been paid to deliver, in the order it's collected or sorted. They don't even decide if it's priority or not. Priority mail may be delivered faster, but not because they decided it might be "more interesting" to you based on past deliveries; priority mail gets priority only because the sender paid for the extra level of service.)

      2. Michael Wojcik Silver badge

        Re: IANAL

        "if they can't do that then they're not a "common carrier" and are liable for content"

        This is completely wrong. §230 has nothing to do with common-carrier status.

        Just read it. It's not long.

    7. John Brown (no body) Silver badge

      Re: IANAL

      "promoting content"

      Came here to say pretty much the same thing. If you have to search a "message board", then it's probably safe. If the "message board" is recommending things based on its algorithms and what else you've looked at, then it's much more likely there will be a case to answer, since the former is the user seeking stuff out while the latter is the service pushing stuff at you and guiding you down the rabbit hole - pretty much the definition of radicalisation.

      1. Michael Wojcik Silver badge

        Re: IANAL

        That's a lovely theory, but it has no relationship to the actual law.

        1. Dan 55 Silver badge

          Re: IANAL

          Section 230(c)(2) removes liability for moderating or not moderating objectionable content, but it says nothing about removing liability for the promotion of objectionable content, which they are pretty much all guilty of.

          It's difficult for social media companies to argue that they have no liability if they turned someone's timeline into a firehose of sketchy content by promoting it or by moderating away non-objectionable content. There's nothing in section 230 that says they can do that.

  2. Anonymous Coward
    Anonymous Coward

    It’s the algorithms on trial

    Burn them to the ground. Let people find their own content, not have it pushed to them left right and centre.

    1. An_Old_Dog Silver badge

      Re: It’s the algorithms on trial [Hold Up Here, Chief]

      From a previous post: "... actively push content at you to which you are not subscribed."

      I'm not defending Big Electronic Media, and B.E.M. probably is doing most of the things they're accused of, but hold up here, Chief.

      First, all this -- the lawsuit and many of the comments here -- totally disregards people's moral and legal responsibilities!

      It does not matter how many people I hear ranting on soapboxes in the public square that I should kill people of sub-group X, nor how many of my (non-electronic) friends tell me I should kill people of sub-group X, nor how many printed posters I see on building walls telling me I should kill people of sub-group X, nor how many books or leaflets I read telling me I should kill people of sub-group X, nor how many electronic posts I read telling me I should kill people of sub-group X, nor how many videos I see telling me I should kill people of sub-group X, nor how many chatbots tell me I should go out and kill people of sub-group X. It does not matter whether or not I spent time in an "echo chamber" of people writing/saying I should kill people of sub-group X. If I go out and kill someone of sub-group X, I am held legally and morally responsible for having done so.

      This should apply to everyone else, as well (in reality, rich, powerful, influential people receive special treatment, but it should apply to them, as well).

      "He told me to do it." is neither a legal nor a moral defense.

      Second, though B.E.M. is using all the psychological tricks available to it to encourage people to watch their recommended videos, posts, and adverts, the viewers are not strapped into a chair with some horror-movie-like contraption preventing them from turning their eyeballs aside from the screen. They're not being forced to watch. They can get up and walk away, or watch a different video, any time they choose to do so.

      Third, from TFA: "By his own admission, Gendron, a vulnerable teenager, was not racist until he became addicted to social media apps and was lured, unsuspectingly, into a psychological vortex by defective social media applications designed, marketed, and pushed out by Social Media Defendants." If this teenager truly was as "vulnerable" (in other words, exceptionally dumb-ass, even for a teenager) as the lawsuit claims, he should have been advised and monitored by his parents regarding his social media use, just as with tobacco, drugs, alcohol, sex, and firearms. It appears they failed to do this, since if they had, they would have seen some warning signs and (should have) acted upon those warning signs.

      1. ChoHag Silver badge

        Re: It’s the algorithms on trial [Hold Up Here, Chief]

        Tobacco, drugs, alcohol and sex have legal age limits because vulnerable people have been deemed unable to practice the necessary self control.

        1. LybsterRoy Silver badge

          Re: It’s the algorithms on trial [Hold Up Here, Chief]

          Weirdly enough I think there are legal constraints against shooting people.

          1. Anonymous Coward
            Anonymous Coward

            Re: It’s the algorithms on trial [Hold Up Here, Chief]

            The question isn't whether or not murder is legal but whether or not platforms are encouraging vulnerable individuals to commit such heinous crimes.

        2. Anonymous Coward
          Anonymous Coward

          Re: It’s the algorithms on trial [Hold Up Here, Chief]

          No, it's because religiously deluded people decided that they know what's good for us, which is actually NONE of their responsibility.

        3. Wzrd1 Silver badge

          Re: It’s the algorithms on trial [Hold Up Here, Chief]

          "Tobacco, drugs, alcohol and sex have legal age limits because vulnerable people have been deemed unable to practice the necessary self control."

          One has no Constitutional right to tobacco, drugs, alcohol or sex. Indeed, that's at the heart of every INCEL's objections to life.

          One does have the right to free speech - within specified constraints to ensure the welfare of the public and society. Hence the defect of Section 230: those constraints are removed, and antisocial media, in order to serve advertisements for profit, steer objectionable discussions - frequently unlawful in many jurisdictions - toward unsuspecting users.

          The product isn't defective at all; it's functioning perfectly as designed. It's used amorally, but we don't legislate morality, we restrict what is unlawful and likely to cause civil harm - save in the case of these "common carriers" (a term previously utilized by mass transport and shipping firms, internet and telephone carriers, and decidedly not by publicly available bulletin boards).

          Everything operated correctly, as designed and intended, with unintended results, exacerbated by parental non-supervision.

      2. Khaptain Silver badge

        Re: It’s the algorithms on trial [Hold Up Here, Chief]

        "It does not matter how many people I hear ranting on soapboxes in the public square that I should kill people of sub-group" etc etc

        Unfortunately the world does not work exactly like this. There are a vast number of people who are easily led, who do not think for themselves and who can be coerced into doing a multitude of bad things. Then there are the other people who try, and succeed, in radicalising others, and they are most clearly the people who should be brought to justice.

        1. LybsterRoy Silver badge

          Re: It’s the algorithms on trial [Hold Up Here, Chief]

          All too often these days the excuse is "it's someone else's fault". Is that what you're promoting?

          1. Anonymous Coward
            Anonymous Coward

            Re: It’s the algorithms on trial [Hold Up Here, Chief]

            do you really need us to point at the orange nutter's followers?

          2. 8bitHero

            Re: It’s the algorithms on trial [Hold Up Here, Chief]

            They are BOTH guilty. The person committing the act is still liable (and in this case awaiting a death penalty trial), but the forces radicalizing the person may also be guilty of a crime. If you tell someone to commit a crime and they do it, surely you have some responsibility (mob boss, for example). I am not for letting the perpetrator off the hook, but I think the argument here against the social media sites is at least interesting enough to examine in detail.

          3. ChoHag Silver badge

            Re: It’s the algorithms on trial [Hold Up Here, Chief]

            "Someone told me to do it" should not be a viable excuse, but neither should "I only _talked_ about doing it (to the impressionable idiot/youth/ill/sick/depressed)".

      3. Headley_Grange Silver badge

        Re: It’s the algorithms on trial [Hold Up Here, Chief]

        The fact that you think you are not vulnerable to the algorithm doesn't mean that others aren't. Protection is needed for the weak, not the strong.

        1. jospanner Bronze badge

          Re: It’s the algorithms on trial [Hold Up Here, Chief]

          People who are convinced that they are “strong”, ie, above propaganda, are some of the worst offenders.

        2. Anonymous Coward
          Anonymous Coward

          Re: It’s the algorithms on trial [Hold Up Here, Chief]

          That's what parents and families are for. If they can't do the job then they shouldn't have signed up in the first place.

          As someone well before me said "They only had to pass the practical."

          1. Anonymous Coward
            Anonymous Coward

            Re: It’s the algorithms on trial [Hold Up Here, Chief]

            Sadly in the USA we have parents who buy their son a gun knowing full well he has MH issues and judges who give light sentences to kids from wealthy families as they've lived such a sheltered life that they don't know right from wrong.

            We seem to have abdicated all responsibility onto others. When did it become the school's job to teach kids how to dress themselves and use the toilet?

      4. Wzrd1 Silver badge

        Re: It’s the algorithms on trial [Hold Up Here, Chief]

        "First, all this -- the lawsuit and many of the comments here -- totally disregard peoples' moral and legal responsibilities!"

        Therein lies the problem: inappropriate tool usage. BEM is morally liable, not civilly liable, as US courts do not enforce moral code, but legal code. When one gets into things moral, those are addressed socially via political processes and appropriate legislation (and more often, inappropriate legislation, because if you want something fouled up, give it to a politician (regardless of nation)).

        Trying to claim that their product is defective when it was operating as designed is like holding a car manufacturer responsible for a driver intentionally running down pedestrians. Using the courts in this manner is akin to trying to screw together shelves with a scalpel - a tool not designed for that purpose and one that would inevitably fail, typically in an injurious way. Then, bitching that the scalpel was defective as well, as it was used for a purpose it was not designed to perform, in a way it was not designed to be used, and predictably shattered, spewing sharp-edged shards about.

        1. doublelayer Silver badge

          Re: It’s the algorithms on trial [Hold Up Here, Chief]

          Laws are written to enforce moral positions. Maybe your philosophy is that they shouldn't be, and I can certainly point to laws that enforce morals I don't share and would like to see changed. But if you're trying to pretend that laws are not written to make some forms of morality required, with penalties if you don't act in the way their authors consider moral, you may have a weird idea of what makes a politician promote one or a voter demand one.

          "Trying to claim that their product is defective, when it was operating as designed is like holding a car manufacturer responsible for a driver intentionally running down pedestrians."

          You have made this argument before, but it does not represent what the case is about. The defect they're talking about is that the recommendation engines promoted violent material, and if you ask the companies that make the algorithms, they will tell you that they don't intend to recommend that stuff. They will say that because the alternative "yes, we definitely build our engine to recommend violent media when we think that'll make us money", sounds evil. The reality may be that they don't intentionally try to promote it and they may put a little effort into trying to detect it, but so little that it doesn't actually get removed from the recommendations list unless it's extreme and obvious. If this is behavior that the producer says they don't intend and behavior that the plaintiff says is harmful, then you can make a case that it's a defect. It's not a perfect one that's obviously going to win, but that's not the only legal problem these lawyers have.

          The analogy to a car or a scalpel is wrong. When talking about moral responsibility, such analogies can be valid arguments for certain views, but when arguing the legal question they are not, because they don't represent the argument being made. The scalpel example, in particular, is very far from the situation, because using a scalpel as a screwdriver is ridiculously far from intended use, whereas using a car to move forward or a social media algorithm to see content is exactly what they were built for.

      5. John Brown (no body) Silver badge

        Re: It’s the algorithms on trial [Hold Up Here, Chief]

        "He told me to do it." is neither a legal nor a moral defense.

        True. Which is why the person is serving a life sentence, with a further trial due to decide if he gets the death penalty. So there are no mitigating circumstances to help ease his sentence. But there is serious concern about his motivations: what caused them, how did they grow to such a level that he acted on them, etc. He didn't get this way in a vacuum.

    2. Anonymous Coward
      Anonymous Coward

      Re: It’s the algorithms on trial

      I agree. Get rid of the algorithms, don't censor.

      Will never happen because the state wants to push some narratives.

  3. Anonymous Coward
    Anonymous Coward

    Creative interpretations to try and get around section 230; it's not the first time it's been tried. I suspect that if the platforms lose, they'll go to a federal court saying state law is preempted in this case by section 230.

  4. IGotOut Silver badge

    Or you could fix ...

    ....your dumb-ass gun culture!

    Nahhh. Freedumbs and all that

    Just carry on with the weekly mass murders

    1. Yorick Hunt Silver badge

      Re: Or you could fix ...

      Theoretically these same murders could have been committed without guns - using knives, machetes, axes, you name it.

      The problem is though that the use of those other implements is invariably messy, and the type of coward who usually perpetrates such crimes is too much of a princess to get his hands dirty - hence preferring the simplicity of the "point and click" interface of firearms.

      1. DS999 Silver badge

        Re: Or you could fix ...

        REALLY hard to kill 10 people in one go with a knife (this was in a grocery store, not a school where, if you're lucky, you can corner them in a classroom) unless you have elite military training or you're the hero in a Hollywood movie silently making quick work of nameless henchmen on your way to a final showdown with the big bad.

        1. ComputerSays_noAbsolutelyNo Silver badge

          Re: Or you could fix ...

          While more restricted access to guns doesn't lower the number of psychopaths, it lowers the death count once the psychopaths decide to run amok.

          You can not outrun a bullet,

          but with luck you can outrun a loonie with a knife.

          This issue nicely demonstrates a common fallacy:

          The proposed measure cannot fully prevent X from happening, so let's not try it at all.

          1. Anonymous Coward
            Anonymous Coward

            Re: Or you could fix ...

            Nonsense.

            We had a spate of vehicles being used to kill multiple people. I seriously doubt a ban on guns would achieve anything: criminals would get guns illegally, and there must be plenty out there, given the way arms have been dumped in Ukraine and other lawless parts of the world without any means of accounting for them. A few can outrun a loony with a knife, but very few by all accounts, because we don't walk down the street dodging every stranger just in case. Legally held weapons are not a problem, certainly not in the UK.

            Effective and non-political gun control is to be welcomed, however. If you think legally held weapons are a problem here, produce the evidence. Switzerland has one of the highest rates of gun ownership in Europe but is not known for runaway gun crime. Maybe that needs investigation?

            1. Doctor Syntax Silver badge

              Re: Or you could fix ...

              We had a big problem with people shooting other people in N Ireland - to some degree we still do. But nobody has suggested that making possession of firearms legal without having to have a gun certificate would in any way improve the situation.

            2. Anonymous Coward
              Anonymous Coward

              Re: Or you could fix ...

              You do realise we are not talking about the UK, which has very strict gun control and minimal mass shootings.

              We are talking about the fucking stupid USA's gun control, which doesn't even check for fucking mental health problems and which has huge mass shooting issues.

              Learn to fucking read.

              1. Anonymous Coward
                Anonymous Coward

                Re: Or you could fix ...

                The UK has lots of mass stabbings as well as some shootings cos incredibly enough criminals are not worried about the legality of owning a gun. Despite being an island nation we still have a good supply of imported guns and drugs.

                1. DJO Silver badge

                  Re: Or you could fix ...

                  No it really does not. When an incident does happen, it hits the news precisely because such incidents are so rare. If they were regular events they would not be newsworthy.

                  2023 US murders: 18,450

                  2023 UK murders: 571

                  The US population is about 5x that of the UK, so the rate is about 6.5 times higher (18,450 / 571 ≈ 32, divided by 5). And that's before we consider how many deaths in the US are misreported as natural or self-inflicted.

                  1. Anonymous Coward
                    Anonymous Coward

                    Re: Or you could fix ...

                    The problem with your numbers is that they include all gun deaths. That means both suicides and deaths by cop are counted alongside the body count from the psycho who killed his mother and then shot up a school while law enforcement hid behind their squad cars in the parking lot.

                    There are in excess of 450 million guns in the US. If the problem was the guns, the toll would be vastly more than 18k/year. The problem is the 1 out of 15,000 people with psychiatric problems.

                    1. Paul Crawford Silver badge

                      Re: Or you could fix ...

                      Oh what an idiot you are!

                      The problem is not guns per se, it is the ease with which psychos can access guns and ammunition with practically zero checks. Because amendment!

                      Having guns and ammo available in places like supermarkets, etc, is a concept that no sane country would ever consider a good idea.

            3. Headley_Grange Silver badge

              Re: Or you could fix ...

              The Swiss believe in the state and society, with its rights and responsibilities, as a mutual benefit - a common wealth. US citizens seemingly don't; as far as I can tell, the overriding belief seems to be similar to that of a teenager shouting "you're not the boss of me" at his mum when she wants him to tidy his room or, indeed, do anything to contribute to the household. Tell the Swiss to do something reasonable, like not kill people with their guns, and they do as they're told - so they can have nice things, like guns. Tell a teenage boy to do something reasonable and he responds by shouting and screaming at his mum about his human rights and then breaks something to show her she's not in charge. Until he wants his tea, of course, which he expects to be on the table at 6 but refuses to help with the washing up. That's why he can't have nice things.

              The problem with the US is that it seems like the teenagers are in charge.

              1. DJO Silver badge

                Re: Or you could fix ...

                Switzerland has compulsory national service for men (optional for women) so every man and a lot of women have had weapon safety drummed into them. In the US any idiot can own a weapon with no training whatsoever.

                It makes a big difference, you know, having weapon usage "well regulated" - now that's a familiar phrase!

                1. stiine Silver badge

                  Re: Or you could fix ...

                  No it doesn't. I, my father, my three uncles, and both of my grandfathers, learned gun safety and how to shoot before we turned 13. Only my uncles and one grandfather were ever in the military with the requisite formal training.

                  1. DJO Silver badge

                    Re: Or you could fix ...

                    You are arguing with yourself.

                    You can't say that training does not help and then state that you are OK because you had some safety training. Training comes under the heading "well regulated": nobody should be allowed to own a weapon unless they can prove they can use it safely and responsibly - you know, like when they get a driving licence and have to prove they can handle a car.

                    Anyway, you are demonstrably wrong. How many US gun owners have proper gun safes and use them? How many leave a gun in a drawer? How many leave guns with a round in the chamber? How many break every single rule pertaining to the safe handling of firearms? Hint: try "far too many".

                    You cannot argue with the statistics: the USA has a far higher rate of death by firearm than any other advanced country. Also, in the US and only in the US, you have this infantile fetishising of guns. I wonder if those two things might be related.

                    It does not matter if the training is "formal" from the armed forces or informal from relatives and friends; the important bit is that the training happens. If it does not, you get the shit-show you have in the USA.

          2. fishman

            Re: Or you could fix ...

            Bombs. If you don't know about them until they go off, it's too late to run.

    2. Jedit Silver badge
      Stop

      "Just carry on with the weekly mass murders"

      What weekly mass murders?

      In 2024 there has been, on average, more than one mass shooting incident in the US per day.

      1. Anonymous Coward
        Anonymous Coward

        Re: "Just carry on with the weekly mass murders"

        Mostly committed by a small portion of the US population, who are likely to already have a criminal history, with illegal handguns.

        1. Ken Hagan Gold badge

          Re: "Just carry on with the weekly mass murders"

          Almost certainly, but why is it only the US that has this problem? Do other countries not have criminals, or do they not have illegal guns, or is it something else they don't have?

          1. ChoHag Silver badge

            Re: "Just carry on with the weekly mass murders"

            why is it ~~only~~ mostly the US that has this problem?

            FTFY

            Everything is bigger in Texas.

  5. Anonymous Coward
    Anonymous Coward

    Ridiculous

    Hey don't look here, look there!

    Perhaps they should be asking how many were taking pharmaceuticals for mental issues instead of trying more ways to stifle free speech. Or am I radicalising the readers of The Register through such heinous comments against censorship and criticism of institutional integrity?

    1. Anonymous Coward
      Anonymous Coward

      Re: Ridiculous

      Another interesting take on this story would be the radicalisation that happens WRT the current gender debate. The supposed experts (who usually have a vested financial interest) deny the concept of 'rapid onset gender dysphoria' and claim that peer pressure, mental health issues and/or social media are not contributing factors to the huge increase in kids deciding that they need 'treatment'.

      People will claim that the gun debate is biased because there are lobbying groups such as the NRA involved. Well, there is just as much, if not more, money involved in the gender debate, as institutions are making vast sums from the medicalisation of these issues by creating 'patients for life'.

      1. Anonymous Coward
        Anonymous Coward

        Re: Ridiculous

        that's just right-wing nutter bollocks.

        guns are hugely more profitable than any of the other shit you're railing against.

        1. Anonymous Coward
          Anonymous Coward

          Re: Ridiculous

          They become very profitable when you manufacture wars.

          But if you can stop frothing at the mouth for a few seconds: if the internet can radicalise a person in this way, then why is it completely ignored in other situations?

  6. Doctor Syntax Silver badge

    The social middens might well have a lot to answer for, but I'm old enough to remember a different procedure for trying someone in court. They were charged, and the prosecution (not the judge) produced evidence against them. If the defence thought it necessary, they could produce their own evidence and arguments to counter the prosecution's case, but at the end it was the prosecution who had to prove their case beyond reasonable doubt. The defence had to prove nothing.

    It was called the presumption of innocence. Without it we are all in danger of wrongful conviction. Has the US joined the UK in abandoning this?

    1. Anonymous Coward
      Anonymous Coward

      The presumption of innocence has evaporated. We have the court of public opinion, politicians effectively campaigning on the promise of bringing people/groups 'to justice', and mobs of unregulated street judges (well, ones living in their parents' basements) who go to great lengths to get people they don't agree with fired and to destroy their lives.

      1. disgruntled yank Silver badge

        Presumption of innocence

        This is a civil trial, where the presumption of innocence does not apply. Of course, the plaintiffs must prove their case, and certainly they will face highly paid legal counsel out to show its weaknesses.

        1. Doctor Syntax Silver badge

          Re: Presumption of innocence

          "Beyond reasonable doubt" becomes "balance of probabilities". It is still up to the plaintiff to prove their case. If anyone doubts that this is a bad thing let me remind them that a few years ago the burden of proof was shifted to the defence where the "evidence" was produced by computer - unless the defence could prove the computer was wrong the evidence was accepted. And now we're face with a snail's pace untangling of one of the largest miscarriages of justice, criminal and civil, in UK legal history.

          If we are to look to the presumption of innocence for our own safety, we have to grant it to everyone - even Facebook and X - because it's indivisible.

        2. Anonymous Coward
          Anonymous Coward

          Re: Presumption of innocence

          And we all know that civil trials in NY are not at all biased.

        3. I am David Jones Silver badge

          Re: Presumption of innocence

          Presumption of innocence really does apply to civil matters. The difference is the standard of proof: beyond reasonable doubt (criminal) vs on the balance of probabilities (civil).

          That’s the theory at least…

    2. Killfalcon

      This is misunderstanding the stage we're at in the procedure.

      So, there's a civil (not criminal) court case. A claims B did a bad thing and would like recompense. B isn't saying they didn't do the thing, yet.

      B claims that, regardless of whether they did the thing, they are allowed to do the thing under S230, and asks the judge to dismiss the case without a full trial. The logic is that there's no point asking a court to decide whether they did the thing if the law doesn't care whether they did.

      A lot of court cases end like this - cases that are obviously doomed are a waste of the Court's time - but the bar for "obvious" is intentionally very low, rather than deny too many people a shot at justice. This is why you might see a lot of fuss about "anti-SLAPP" (Strategic Lawsuit Against Public Participation) laws, which are meant to make it easier to dismiss lawsuits that are obviously being used to silence people for making public comments. Normally those go to trial and cost a lot of money to defend, regardless of merit.

      Here, the judge has read the initial documents and says "A's claims are plausible enough that we can't instantly dismiss this; B is invited to write more words to counter that, or wait for the full trial date".

      That's where we are now. The plaintiffs' case has survived a motion to dismiss [ie, a request to end the case before trial]. B has been invited to write a fuller explanation; if they don't or can't, then there's a full trial, where both sides are expected to prove their case.

      1. Michael Wojcik Silver badge

        Exactly. There was a motion to dismiss, and it was denied. Very standard. The article rather overstates the importance of this. §230 is not "on trial" here, and nothing has been decided about it or whether it applies to the defendants. (Spoiler: It does, and this argument will not prevail; even if it wins in the initial trial, it will almost certainly be overturned on appeal.)

        The judge may just be being cautious, or she may be being foolish, or she may be sympathetic to the plaintiffs, or she may be grinding a political axe. The text of the denial sounds like the first, to be honest, though I think precedent is sufficient that dismissal would have been justified.

  7. Anonymous Coward
    Anonymous Coward

    Have certain cable networks and AM radio talk shows that hyped up stolen-election and covid conspiracies etc. been asked to prove they didn't radicalize violent idiots? No. Then why are we asking internet companies with far less direct editorial control?

    If he was getting enough extreme content in his feeds to radicalize him, then I have to wonder what he searched for and watched. Anecdotally, I've used YouTube and Discord for years, and TikTok on and off for a few months, and I don't encounter much right-wing content, much less any violently racist content.

    1. Evil Scot Bronze badge

      Ahh, but Fox "News" is not promoting GB "News".

    2. Headley_Grange Silver badge

      Anecdotally, after I clear my YouTube cookies (I haven't got a Google account) there will be a few right-ish wing vids in the feed for the first few days - GB News and the like - along with some music feeds and stuff. After a while they go away cos I don't click on them. Oddly, there's never much from the other political wings - nothing lefty or greeny. I don't know if that's cos it thinks it knows something about me (I'm not right wing), or because there's more right-wing shite than left, or because right-wing shite is more likely to pull in views, shares and long dwell times.

      Just tried it now - opened YouTube in a private window. In the first 100 vids there was a Farage/Trump interview, GB News' "If you're white, poor and male they're coming for you", and one with Tommy Robinson. There's bugger all from any other wings of politics - no George Monbiot banging on about sheep and trees - unless you assume it's covered in the more mainstream news vids. The rest is a smattering of food, music, travel, film and TV that looks like it could be a selection to test out my likes, but there's no obvious lefty rabbit hole I could disappear down, although I'm sure the algorithm is a lot more complicated than that. I'm just settling down to watch a vid on blue LEDs.

      1. Evil Scot Bronze badge

        Damn good story, that. Such snobbery.

        1. John Brown (no body) Silver badge

          It's not hard to test it out for yourself rather than just dismiss it out of hand.

          I just did and, also anecdotally, got fairly similar results. A sample of two isn't proof of anything, of course, but interesting nonetheless.

          1. Evil Scot Bronze badge
            Facepalm

            The guy who invented the process for blue LEDs had no PhD and was treated like crap. 'Tis a great story.

            1. This post has been deleted by its author

      2. stiine Silver badge

        How the fuck do you make them go away? I don't click on GB News videos, any of the fake SpaceX videos, any of the fake CNN channel videos, nothing political. In fact, if it's not racing cars or heavy machinery, I'm not interested. I think YouTube's autoplay-on-mouseover is telling them that I like everything in the right-hand column from the top of the page to the very bottom.

        1. Headley_Grange Silver badge

          Dunno. I haven't got a Google or YouTube account, so the algorithm is based on whatever it stores in its cookie and, I guess, whatever nefarious stuff YouTube does cross-site. I allow the youtube.com cookie - if I don't then it's groundhog day every time I launch the website - and the YouTube nocookie one that's needed to watch some of the vids on the Guardian website. I read most of the free national UK dailies online and a couple of locals, but use NoScript to lock them down tight and, Gruan aside, never watch vids on them, so maybe YouTube only knows I read the Guardian and, together with my recent video watching, assumes I'm a left-leaning watchmaker who likes SciFi, Suntour, Japanese wood saws, LEDs and is currently making a surface plate and a longbow.

        2. ecofeco Silver badge
          FAIL

          Under the thumbnails there is a hidden three-dot menu button that does not appear until you hover over the bottom right corner, just slightly under the thumbnail.

          This gives you the option of saying you are not interested, telling it not to recommend the channel, or reporting it.

          And yes, I also get a lot of Nazi fascist racist crap recommendations but few to no progressive videos. Lately, I've had to spend every day clicking the damn hidden Not Interested/Do Not Recommend button.

          YouTube completely lost the plot a long time ago.

          If you clear your cookies and history and go there without signing in, the dreck of music videos and sports and movie previews and right-wing rants is overwhelming.

  8. russmichaels

    They definitely cannot claim they are nothing but message boards, as everyone knows they push targeted content to users.

    If you view posts on a certain topic, more similar posts will then be pushed into your feed.

    Social media is full of racist content, although TBF, 99% of what I see is blatant racism and hate toward whites, but I rarely ever see anything the other way round, so it is clear that any filtering that does happen is prejudiced.

  9. tyrfing

    Sigh. Civil suit. So they're looking for a lottery payout.

  10. Bob Whitcombe

    Isn't everyone guilty of a shooting "Social Media Radicalized"?

    With exposure to social media content ubiquitous and omnipresent, if the social media defendants can't "prove" they did not in any way contribute to the shooter's "radicalization", then it would follow that social media radicalization is responsible for ALL shootings, since all shooters have been exposed. And thus all robberies, drug use and violent crime. So how do we square the First Amendment with a proven social harm to humanity?

    1. Anonymous Coward
      Anonymous Coward

      Re: Isn't everyone guilty of a shooting "Social Media Radicalized"

      First Amendment applies to humans, not corporations.

  11. Skiver

    "Reddit declared that hate and violence have no place on its platform and pointed us to its content policy that prohibits hate content based on identity or vulnerability as well as messages that glorify, encourage or incite violence."

    The only thing they care about is bad press.

    1. ecofeco Silver badge

      LOL, I can link a dozen far right subs right now that are nothing but hate.

      They DO try to keep a tight rein on calls for violence, but dog whistles and innuendo are still fine.

  12. aerogems Silver badge
    Boffin

    This is kind of a tough one.

    On the one hand, I tend to agree, in a very broad sense, with the various sites that they're just message boards. However, it has long been established that they deliberately set out to make their message boards as addictive as possible to keep people coming back, and that they have often been slow (at best) to act when presented with evidence of various kinds of hate speech. For a time, Facebook literally had an ad category called "Jew Haters" which let you target ads at people... well, the name pretty much speaks for itself. Sure, it was probably something that was automatically generated by a predictive algorithm (what now gets called AI), but the fact that they apparently didn't have anyone doing the occasional review of ad categories speaks volumes.

    So, sites like Facebook aren't without culpability, but at first blush it seems like the judge is going a bit far. Granted, it's only a preliminary ruling, and just means that the plaintiffs have made a compelling enough argument that the judge is going to force the sites to defend themselves instead of severing them from the case entirely, but still...

  13. Excused Boots Silver badge

    Ion

    It does strike me that maybe we should think of this as if it were a real world setup.

    For example, imagine that I erect a truly massive physical billboard on my own land. And I allow anyone, once they have signed up, to collect a standard-size piece of paper from me on which they can write anything they like and pin it up on the board for everyone to see. Fine!

    Every now and again, I take a look at the massive billboard, sigh, and remove the posts which are obviously ‘wrong’, ie ‘Meet-up on January 6th, storming of the Capitol building - bring your own guns; cheese and wine party afterwards at…..’

    But other than that, I let everything else go. Fine. Free speech, First Amendment etc!

    Except, imagine that I frequently look at the posts, and some that I think are ‘interesting’ or more likely to get engagement I reprint in a much bigger font and move to a more prominent position on my billboard. Surely then I’m no longer just a passive carrier of others’ views (and hence protected under the section 230 rules), but am now actively manipulating stuff to suit my own, or my owners’, or some random AI-generated interests? And maybe, maybe, the 230 protections shouldn’t now apply to me, and maybe I’m liable for any harm that my actions have caused or contributed to.

    Just a thought, anyway!

    1. Anonymous Coward
      Anonymous Coward

      Re: Ion

      If you remove any post, you are infringing on the First Amendment right of the poster.
