The cybersecurity QA trifecta of fail that may burn down the world

In his 1992 novel Snow Crash, Neal Stephenson invents malware that can leap species from silicon to the human brain. That's a great metaphor for so much of our online lives, but it raises one question of particular interest. If humans can be damaged by our own technology, should we protect not just our data but ourselves …

  1. b0llchit Silver badge
    Unhappy

    Let's ask our politicians why they accept excuses in place of action.

    No need to ask... We have the best politics and policies money can buy. It is the old "power corrupts and absolute power corrupts absolutely".

    notwithstanding any honest souls tilting at windmills

    1. Anonymous Coward
      Anonymous Coward

      > absolute power corrupts absolutely

      - which also applies to religious movements.

      Lee Kuan Yew warns on Dangers of Christianity and Islam - YouTube:

      https://www.youtube.com/watch?v=x3lUD8ScQKk

      1. Anonymous Coward
        Anonymous Coward

        The irony

        Of posting a video slating religions on this news article …

  2. Pascal Monett Silver badge
    Thumb Up

    Great article

    It is essential to be able to summarize a situation and point out the inconvenient truth of it. This article does exactly that concerning AI/ML and the timing couldn't be better.

    Thank you for this reference piece. I hope the message will be heard far and wide.

  3. Like a badger

    The road to hell...

    If AI/ML is to identify and de-promote, or entirely eliminate, invective and divisive opinions, and so make the internet a calmer, better place, that's a worthy aspiration. It is also at risk of being a form of censorship. Clearly it's already happening: when algorithms choose to de-promote calmer and less emotive voices, those voices are being partially censored. You can use the term "moderation" because it sounds better, but it's the same thing.

    If the proposal is to formalise this and moderate the internet into a better place, it raises some questions: who makes the rules? What are the rules? And what if those rules are at odds with democracy at the ballot box, or with the way decisions are made in cultures that don't see much benefit in Western-style democracy?

    There are more pragmatic concerns too: where is the emergency stop button for when the algorithms suddenly start producing outcomes that weren't intended, and how will performance be maintained when state actors seek to manipulate them?

    1. Anonymous Coward
      Anonymous Coward

      Re: The road to hell...

      > who makes the rules

      - definitely not the social media companies, which are mostly interested in user engagement. They mistakenly or intentionally treat the freedom to spread propaganda as "freedom of speech", without the accountability and attribution that were intrinsic to real-world communication before anonymity became a thing for humans and disinformation bots.

    2. IfYouInsist

      Re: The road to hell...

      Perhaps the vendors should help people decide for themselves, rather than deciding for them. I envision a classifier that would decorate displayed text to show its semantic categories. It would classify statements (descriptive, speculative, normative, imperative) as well as questions (inquiries, provocations, rhetorical ones). It could highlight weasel words (quantifiers, qualifiers, disclaimers) and indicate emotional charge.

      The task seems more complex than syntax parsing but less daunting than sentiment analysis (also less attractive as a subversion target). It really should be doable with today's technology. The UX would have to be worked out, obviously, as well as a business model. Oh well, one can dream...
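
      To make this concrete, here is a minimal sketch in Python, assuming an off-the-shelf zero-shot classifier from the Hugging Face transformers library; the model choice and the label set are illustrative assumptions, not a finished design.

      ```python
      # A rough cut of the "semantic decoration" idea, using zero-shot
      # classification from the Hugging Face transformers library.
      from transformers import pipeline

      STATEMENT_KINDS = ["descriptive", "speculative", "normative", "imperative"]

      classifier = pipeline("zero-shot-classification",
                            model="facebook/bart-large-mnli")

      def decorate(sentence: str) -> str:
          """Prefix a sentence with its most likely semantic category."""
          result = classifier(sentence, candidate_labels=STATEMENT_KINDS)
          return f"[{result['labels'][0]}] {sentence}"

      print(decorate("Platforms must be held liable for amplified posts."))  # likely [normative]
      print(decorate("Engagement metrics reward emotive content."))          # likely [descriptive]
      ```

      A real deployment would classify every sentence on the page and render the tags as unobtrusive decorations rather than printed prefixes, which is where the UX work would come in.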

      1. gtarthur

        Re: The road to hell...

        A good start is right here in this forum: the up and down voting of participants. Beyond that, there is the example set by the venerable techie site slashdot.org. In addition to the voting, a user can limit what they see based on the net value of the up and down votes, which generally keeps trash below the cutoff threshold. I do like the categorization idea as well; it would help with those crazies that are either off topic or psychotic.

        As others have pointed out, the problem is that the algorithms are tweaked to generate "hits" for advertising revenue. The failure to filter is also probably a function of "after the fact" content analysis (assuming there is any). Categorization should take place at the time of submittal. This has a two-fold benefit: first, it's a clearly identifiable event in the workflow, and second, it introduces a delay factor that can work to slow down the rage machine.

        I have personally given up on all social media and focused on this type of forum, and on news from "authentic" publishers. Perhaps this will resolve itself with a generational "turnover". Let's hope we don't burn it all down before that.
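
        The threshold mechanism is simple enough to sketch. Here's a minimal illustration of Slashdot-style filtering in Python; the data shapes are invented for illustration.

        ```python
        # A minimal sketch of Slashdot-style threshold filtering.
        from dataclasses import dataclass

        @dataclass
        class Comment:
            author: str
            text: str
            upvotes: int = 0
            downvotes: int = 0

            @property
            def score(self) -> int:
                return self.upvotes - self.downvotes

        def visible(comments: list[Comment], threshold: int) -> list[Comment]:
            """Return only comments whose net score meets the reader's cutoff."""
            return [c for c in comments if c.score >= threshold]

        thread = [
            Comment("alice", "Thoughtful point about moderation.", upvotes=12, downvotes=1),
            Comment("troll99", "u r all sheep!!!", upvotes=2, downvotes=15),
        ]
        for c in visible(thread, threshold=0):  # the trash stays below the cutoff
            print(c.author, c.score, c.text)
        ```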

        1. Neil Barnes Silver badge

          Re: The road to hell...

          The big snag - to me - with up and down voting is that it is unattributed as to reason. An upvote for 'I liked your post' is easy and obvious but the downvote much less so: did you disagree with a particular point, the general tenor of the post, think it factually inaccurate, or what?

          I've argued previously that downvotes should only be allowed with an explanatory paragraph in response (upvotes would simply generate 'me too'). There are obvious holes in that logic still... but if you don't tell me why you downvoted me, how should I know?
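
          As a toy validation rule it's trivial to state in code; the Vote type and accept() below are invented for illustration.

          ```python
          # A toy rule: downvotes are only accepted with an explanation.
          from dataclasses import dataclass

          @dataclass
          class Vote:
              direction: int            # +1 for up, -1 for down
              reason: str | None = None

          def accept(vote: Vote) -> bool:
              """Upvotes pass as-is; downvotes must carry a non-empty reason."""
              if vote.direction < 0:
                  return bool(vote.reason and vote.reason.strip())
              return True

          assert accept(Vote(+1))                    # plain 'me too'
          assert not accept(Vote(-1))                # silent downvote rejected
          assert accept(Vote(-1, "Factually wrong: see the press conference."))
          ```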

          1. Jadith

            Re: The road to hell...

            I would argue the same can be said of an upvote. Is the upvote for a well-reasoned and insightful post? Is it because you found it funny? Is it an upvote for trolling purposes, like you want it to be seen by more people for a bigger flame war? Is it simply because the person writing it has a following? Is it because it has lots of upvotes and people like being part of the 'winning team', as it were?

            It seems the downvote gets all the attention because of its association with negative emotions, but the elimination of downvoting on social media platforms seems inevitably to lead to a state that is "worse than Reddit", leading me to believe it serves just as important a purpose as the upvote.

            Also, the upvote/downvote mix leads to an aggregation that, overall, shows how that particular online community of viewers accepts or rejects content. The more people in an online community participate in the up and down votes, the more it takes the shape of the general acceptance of that community, as opposed to the specific motivations of individuals.

            That being said, a more filtered, nuanced, or even larger scope of options could be a good thing, and something I would be interested in seeing in play.

            1. Dan 55 Silver badge

              Re: The road to hell...

              > Is the upvote for a well reasoned and insightful post? Is it because you found it funny? Is it an upvote for trolling purposes, like you want it to be seen by more people for a bigger flame war?

              Slashdot again had an answer for that, but it didn't help it avoid sliding into irrelevancy.

    3. Paul Crawford Silver badge

      Re: The road to hell...

      The simpler solution is to stop promoting anything. Drop the simple like/share buttons. If you are on social media to chat with friends then that is what it should be: what you see is what they actually typed, and it only propagates as far as their immediate circle of friends. Just like talk in a pub.

      But "social media" is all about the money: milking outrage for advertising revenue, rather like some of the tabloid press but without the legal accountability. Make the platforms liable for anything posted beyond a reasonable friend-sized audience, say 100-200 folks, and suddenly they will find ways to avoid the spread of hate and bile.

      1. Brave Coward Bronze badge

        Re: The road to hell...

        A little ironic that you've been upvoted 5 times at the time of writing, but I mostly agree with you. This whole stock-market ideology applied to social relations is so boring and dumb.

      2. PB90210 Silver badge

        Re: The road to hell...

        "We would like your opinion on your recent purchase"... normally received before you have even had the chance to open the package!

        Got a phone call from a call centre the day after booking an eye test in Boots, asking if I was likely to attend and whether I was happy with the date. "But I made the appointment!" didn't seem to match any of their prepared answers.

      3. Michael Wojcik Silver badge

        Re: The road to hell...

        Yes. Usenet didn't have "reactions" and its social norms somewhat discouraged non-contributing "me too" or no-content flame posts (more so on moderated groups, of course), and somehow many of us managed to communicate.

        Reactions are tempting for readers, because we're social creatures with an instinct to share our emotional state; but in asynchronous communication, where there's no interactive adjustment to linguistic footing, their main effects are to promote in-group/out-group distinctions and competition for attention.[1]

        While we're at it, I'd like to see discussion forums that block emoji. The initial popularity of emoji among ideographic-language users[2] is understandable, but they've become a way for writers to avoid diction, style, and the expression of nuance. Lowering the cognitive load of communication is not necessarily a good thing.[3]

        [1] I alluded to some of these effects in the piece on Usenet I wrote for Works and Days in 1994, and others made similar observations in the last couple of decades of the twentieth century as online communication proliferated. And of course there are a couple of millennia or so of various rhetorical traditions that analyzed the different affordances and effects among interpersonal speech, oratory, private writing, and publication.

        [2] "Emoji" literally means "picture writing".

        [3] One of the consequences of hand-writing letters in ink was that writers had to consider things like orthography, pragmatics, and layout while writing, because erasing was difficult or impossible and alternatives such as striking out text were aesthetically unpleasant (and showed a lack of forethought). The ability to freely edit text while composing on a computer has different affordances, and offers different use cases; people today write far more than they did a hundred years ago, for example. But it has worked against careful writing.

  4. This post has been deleted by its author

  5. Anonymous Coward
    Anonymous Coward

    Loony bin in the bagging area

    It looks like our modern Plug & Pray tech is tremendously effective at the algorithmic fostering of batshit craziness, especially for extremes and "outliers" in the spectrum of human psychoses, IMHO. To the schizo-loner it seems to offer: you're out of it, weirdo; go murder some kids or something (Sandy Hook, Southport). And to the mindless hormonal herdster: you're in; join our beastly mob violence of outrage and aggression (Charlottesville, Southport).

    One can only hope that folks in between remain less damaged by these species-leaping, bloodthirsty malwares (for the survival of the species!).

  6. StewartWhite Bronze badge
    Flame

    Cynical or Naive?

    Either the author is being incredibly cynical or naive - I can't decide which.

    Why would the doublespeak-monikered "social" media companies (the orange-wigged one's "Truth Social" being the most egregiously named of all) want to do anything about the far-right/far-left/tinfoil-hat-wearing brigades spouting their toxicity to the Delta and Epsilon hordes of like-minded idiots? Won't somebody think of the children, I mean $$$$$.

    X/4chan/Telegram etc. have business models that to a large extent rely on people's stupidity/viciousness/mindless consumerism and the others are only marginally better.

    Current AI is mostly baloney and would never be able to keep up with human ingenuity in replacing banned terms with carrot/aubergine emojis until it's too late, even if the companies involved wanted it to.

    Ultimately, the poison being propagated on the web could at least be massively reduced by revoking the bizarre loophole protections afforded by Section 230 in the US and similar legislation in the UK and elsewhere, but politicians are too stupid or too venal to do so. It's profoundly depressing that we've allowed the internet to become an open sewer rather than ensuring that Facebook et al are held to the same account as conventional media.

  7. Postscript
    Flame

    break the world

    "Or perhaps they'd rather break the world than clean up their act."

    Is it not the same for all corporations and their capitalist governments? Rather burn down the world than admit climate change is real and rein in fossil fuel use. Rather have citizens without water and electricity than limit data centers. Rather let all their patients, employees, and their families become disabled or dead with Covid than protect anyone, give sick leave and insurance, or clean indoor air. Rather let grocery stores collude and price gouge their way to record profits while people go hungry. Rather let venture capitalists buy up all the housing and make homelessness a crime. Etc. This will continue to accelerate until somebody stops them.

    1. abend0c4 Silver badge

      Re: break the world

      The desire to externalise costs - or simply to insulate yourself from the consequences of your actions - seems to be an intrinsic human trait. It's the same attitude that leads people to throw rubbish out of their cars into the surrounding environment (and while we shake our heads at flying McDonald's wrappers, we're oddly rather less censorious about the noxious exhaust gases): I have improved my local wellbeing, even if at the expense of other people.

      That makes it very hard for governments to do anything significant about it; we come with a sense of entitlement and vote accordingly. We're still remediating a toxic legacy in the physical environment that dates back more than 150 years. Toxic IT, by contrast, is designed to appear innocuous to those who recklessly bathe in it and the danger is not so much that we deny its obvious hazards, but that we lose our ability to recognise it.

  8. parrot

    Rage against the subjective machine

    A few weeks ago I was having a lovely chat with someone about how they were fundraising to help homeless people. Suddenly it switched and they went off on a rant about immigration - "shoot them before they get off the boats", they were saying - with no apparent awareness that I might not share their view. Perhaps they just assumed I would.

    Last week a similar thing happened with another person I don't see very often; it seemed like they were permanently plugged in to Twitter. Their phone would ping and they'd be making comments condemning far-right violence, then suddenly celebrating violence against Israel. Then we had an airing of views about supposed trans boxers in the Olympics and "the trans agenda". They'd be upset if they knew what pronouns I was using for them here... ;) Again, no apparent thought to what I might think or feel about any of it. It was just awkward.

    I believe it's healthy when we talk about our beliefs and values with people who have different beliefs and values, and I get into that kind of conversation occasionally with people I know. It doesn't have to be heated and it challenges you to examine ideas which might not be immediately intuitive to you. I think there's a societal norm now that we avoid those discussions to avoid awkwardness, which I understand. It's not always appropriate. But a hallmark of someone caught in an echo chamber has to be obliviousness to the views of others.

    That said, I don't think having conviction is a bad thing, because some things are right and some things are not. But that is subjective, and I find myself considering the things which make me angry, even the music I listen to, and wondering where my own blind spots are.

    1. tiggity Silver badge

      Re: Rage against the subjective machine

      @parrot

      Bad timing there, given the IBA held a press conference today explaining that the boxers they disqualified (but the IOC allowed) were disqualified because they were genetically male.

      The Olympic boxing furore is not about "trans"; it's about protecting women's sport (and in this case women's health too, as males hit a lot harder than women*).

      * https://www.sciencedaily.com/releases/2020/02/200205132404.htm#:~:text=But%20even%20with%20roughly%20uniform,with%20time%20and%20with%20purpose.

      An irritatingly small sample size, as such studies often have due to costs & logistics, but all participants were of comparable "fitness".

      1. parrot

        Re: Rage against the subjective machine

        Well, exactly. Not sure the timing makes much difference; perhaps I should have said "alleged"?

  9. Paul Kinsler

    FWIW

    There is already a non-trivial amount of work trying to understand radicalization and social networks; notably, I just happened to see this, but there is older work out there.

    Quantifying the vulnerabilities of the online public square to adversarial manipulation tactics

    Truong et al., PNAS Nexus 3(7), pgae258 (2024).

    https://doi.org/10.1093/pnasnexus/pgae258

    Abstract: Social media, seen by some as the modern public square, is vulnerable to manipulation. By controlling inauthentic accounts impersonating humans, malicious actors can amplify disinformation within target communities. The consequences of such operations are difficult to evaluate due to the challenges posed by collecting data and carrying out ethical experiments that would influence online communities. Here we use a social media model that simulates information diffusion in an empirical network to quantify the impacts of adversarial manipulation tactics on the quality of content. We find that the presence of hub accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation. Among the explored tactics that bad actors can employ, infiltrating a community is the most likely to make low-quality content go viral. Such harm can be further compounded by inauthentic agents flooding the network with low-quality, yet appealing content, but is mitigated when bad actors focus on specific targets, such as influential or vulnerable individuals. These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
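
    For a flavour of the approach, here's a deliberately tiny toy model in Python - emphatically not the authors' actual model - illustrating the hub effect the abstract mentions.

    ```python
    # A tiny toy model of content spread on a follower network. It only
    # shows why hub accounts matter: seeding content next to a hub
    # reaches more readers than seeding it at the periphery.
    import random

    random.seed(42)

    # Who sees whose posts: the readers of each account. "hub" has many.
    readers_of = {
        "hub": ["a", "b", "c", "d"],
        "a": ["b"], "b": ["c"], "c": ["hub"], "d": [],
    }

    def spread(seed: str, reshare_prob: float, steps: int = 4) -> int:
        """Count accounts reached when each reader reshares with fixed probability."""
        reached, frontier = {seed}, [seed]
        for _ in range(steps):
            nxt = []
            for account in frontier:
                for reader in readers_of.get(account, []):
                    if reader not in reached and random.random() < reshare_prob:
                        reached.add(reader)
                        nxt.append(reader)
            frontier = nxt
        return len(reached) - 1  # exclude the seeding account itself

    print("seeded at the hub:  ", spread("hub", reshare_prob=0.8))
    print("seeded peripherally:", spread("d", reshare_prob=0.8))
    ```

    Even at this scale, seeding at the hub reliably reaches more accounts, which is the qualitative point the paper establishes rigorously.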

  10. Mage Silver badge
    Boffin

    AI is a scam

    The fact is that AI doesn't deliver. All the jargon is picked to mislead.

    So of course it can't fix (un)social media, not that the owners want to fix it. Some less than others.

  11. AVR Silver badge

    A problem for us isn't a problem for them

    Various social media platforms get better engagement through outrage. Detecting and removing outrage-inducing material is against their interests, and expecting them to come up with an effective means of doing so is, in their view, unreasonable.

    Then of course there's the way that many very rich people find the culture wars useful to prevent action against their growing fortunes, or outright agree with the reactionary nutcases (Lachlan Murdoch, Elon Musk).

    Basically, 'challenging' AI/ML to solve the problem runs up against a lot more than technical ability. If you do come up with a way, the people in charge of implementing it have incentives to undermine it.

    1. Richard 12 Silver badge

      Re: A problem for us isn't a problem for them

      It is very difficult to get Facebook to prevent something when their entire business model is based around promoting it.

  12. John69

    Why expect them to?

    The idea that we should give capitalists all the power and expect them to act in the best interests of society is completely naive.

    The only answer is for us all to run the algorithms ourselves to block this stuff, e.g. by training our own AIs from Hugging Face on what we want blocked (a rough sketch below). Or we could take the power from the capitalists in a more direct way.
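
    A minimal sketch, assuming the publicly available unitary/toxic-bert model on the Hugging Face hub and an arbitrary 0.7 threshold; properly "training on what we want blocked" would mean fine-tuning on our own labelled examples.

    ```python
    # Filter a feed with an off-the-shelf toxicity classifier.
    from transformers import pipeline

    detector = pipeline("text-classification", model="unitary/toxic-bert")

    def blocked(post: str, threshold: float = 0.7) -> bool:
        """Hide posts the model scores as toxic above our chosen threshold."""
        top = detector(post)[0]
        return top["label"] == "toxic" and top["score"] >= threshold

    feed = ["Lovely weather for the allotment.", "You absolute morons deserve ruin."]
    print([post for post in feed if not blocked(post)])
    ```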

  13. Filippo Silver badge

    The article is good - but it fails to draw the final conclusion.

    AI is currently unreliable. The article reports several blunders. More are reported every day. This appears to be an intrinsic feature of the technology, and fixing it seems unlikely.

    Poor quality in cybersecurity can cause serious problems. The article mentions this clearly. Everyone has tales of antivirus software doing more damage than viruses.

    Given those two facts, is AI feasible as the equivalent of good cybersecurity for our social communication?

    Isn't it more likely that it would be the equivalent of poor cybersecurity, with all the false positives, false negatives, and miscellaneous annoyances?

    IMHO, that's the conclusion that the article should draw.

  14. Filippo Silver badge

    Personally, I prefer to screen incoming memetic malware at my brain.

    My main firewall rule involves discarding, or quarantining for careful examination, anything that appears to be created with the intent of making me angry about something. Also, anything where the content creator is assuming that anyone who disagrees must be either stupid or corrupt.

    That alone gets rid of most attacks on my mind.
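
    Rendered tongue-in-cheek as code, with invented keyword lists standing in for human judgement, the ruleset looks something like this:

    ```python
    # Two brain-firewall rules: drop anger-bait, quarantine posts that
    # pre-insult anyone who disagrees. Keyword heuristics are invented.
    RAGE_BAIT = ("you won't believe", "destroys", "furious", "outrageous")
    DISSENT_INSULTS = ("anyone who disagrees", "only an idiot", "sheeple")

    def firewall(content: str) -> str:
        """Drop anger-bait; quarantine posts that pre-insult dissenters."""
        lowered = content.lower()
        if any(marker in lowered for marker in RAGE_BAIT):
            return "DROP"
        if any(marker in lowered for marker in DISSENT_INSULTS):
            return "QUARANTINE"
        return "ACCEPT"

    print(firewall("You won't believe what they did now!"))       # DROP
    print(firewall("Only an idiot would question this policy."))  # QUARANTINE
    print(firewall("Here is the data; judge for yourselves."))    # ACCEPT
    ```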

  15. Captain Hogwash Silver badge

    I'd start by banning algorithmic social media feeds, then see how things go from there. Maybe it wouldn't be enough, but I think it would make the problem a lot smaller.

  16. martinusher Silver badge

    Time to re-evaluate what's really important?

    Software has two fundamental properties: its complexity is optional, and it never wears out. A piece of code that runs reliably now will continue to run into the indefinite future; it only starts to become unreliable because the conditions it runs in change. Those changes could be due to hardware deterioration, but most of the time they're due to updates to other software running on the same machine.

    All too often these updates are not improving the functionality of the machine; they're just there to give armies of programmers something to do and companies something to sell. It's this spiral of often unnecessary fixes on fixes that's killing us, consuming huge amounts of productive effort while not actually getting anything done.

  17. Anonymous Coward
    Anonymous Coward

    This, with more detail, should be a Black Hat presentation.
