Twitter's machine learning algorithms amplify tweets from right-wing politicians over those on the left

Twitter's algorithms are more likely to boost right-wing content than left-wing posts from politicians and news publications, according to a recent study. A team of engineers working in Twitter's own ML Ethics, Transparency and Accountability (META) unit scraped millions of tweets from thousands of elected officials in seven …

  1. Disgusted Of Tunbridge Wells Silver badge

    Compensating for the fact that most Twitter content is posted by children and is therefore left wing?

    Could it be that Twitter readers are more representative of society than the people who actually post?

    1. Tom7

      My thought too - this is likely to be a simple "regression to the mean." It's difficult to boost content that's already reached 99% of Twitter users through ordinary people re-tweeting it.

      And the cynic in me rather thinks that Twitter might have a vested interest in this result.

      1. Khaptain Silver badge

        "And the cynic in me rather thinks that Twitter might have a vested interest in this result."

        I don't think that you are being cynical; I would probably use the word 'rational'...

        Twitter appears to be interested in anything/everything that generates Tweets... Especially when it involves pitting people against each other...

        1. Anonymous Coward
          Paris Hilton

          Twitter has a vested interest in activity, not in merely reading tweets.

          Take COVID or elections in the US. The left tends to look at posts from the left or right with an attitude of 'of course' or 'not again' and moves on to read the next tweet.

          The right tends to look on posts from the left as 'so what' and moves on to read the next tweet BUT they look on posts from the right as something that needs to be shared and retweets it, generating higher activity and, thus, higher rankings.

          In countries like Germany, the situation is reversed with the left doing more retweeting, generating more activity, and higher rankings.

          And in both cases, Twitter sits back, smiles, and collects money.

      2. Disgusted Of Tunbridge Wells Silver badge

        There's plenty of content to appeal to the left. Promoting non left-wing content so that new users aren't immediately repulsed isn't a bad idea.

        Also the number of times I see a clearly orchestrated campaign to get some leftwing political hashtag trending is ludicrous. It could just be a reaction to that.

        1. LionelB Silver badge
          Stop

          So you think new users are more likely to be repulsed by left-wing than by right-wing content? And if it happens to be the case in some demographic that's an acceptable reason to pander to it? (Well I guess the answer to the latter question is "yes" if you're in the business of systematic creation of right-wing echo chambers, because... there just aren't enough of them already?)

          And you think the right does not orchestrate campaigns to get some political hashtag trending?

          He he, I thought at first glance your username may have been ironic...

          1. Disgusted Of Tunbridge Wells Silver badge
            Facepalm

            There is a middle ground between a left wing echo chamber ( ie: Twitter ) and a right wing echo chamber ( eg: nowhere ).

            1. LionelB Silver badge

              Erm, what? Are you saying there are no right-wing echo chambers (ludicrously false) or that there can be no middle ground between left and right? Really not sure what you are getting at, or how it constitutes a response to my post.

    2. Philip Storry

      Alternatively, it could be the opposite - that right wing politics has deviated from the centre too far, and is not representative at all.

      It depends on how the algorithm was trained.

      If it's just looking at the numbers of likes, replies and retweets - without context - then it's easy to see how more controversial and extreme content would get higher numbers. More moderate content just gets a few replies & retweets from those that agree and a small number of extremists replying. Extreme content gets many more replies as mockery and disgust generates replies/retweets too.

      Therefore the algorithm would mistake activity for popularity, and provide a boost to content that's further from the true political centre of the country. When it evaluates a tweet from a right wing source it sees markers that it will be "popular" (read: generate activity), and boosts it accordingly.
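      To make the failure mode concrete, here is a minimal sketch of an engagement-only ranking score. The function name, weights and numbers are all invented for illustration - Twitter's actual model is a black box and certainly far more complex - but it shows how scoring raw activity without context rewards content that provokes mockery and disgust just as much as genuine approval.

```python
# Hypothetical sketch of an engagement-only ranking score, as described above.
# The weights and counts are invented; the real ranking model is a black box.

def engagement_score(likes: int, replies: int, retweets: int) -> float:
    """Score a tweet purely on raw activity, with no notion of sentiment."""
    return 1.0 * likes + 2.0 * replies + 3.0 * retweets

# A moderate tweet: broadly agreeable, little discussion.
moderate = engagement_score(likes=500, replies=20, retweets=80)

# An extreme tweet: fewer likes, but floods of angry replies and
# "look at this!" retweets - mockery counts the same as endorsement.
extreme = engagement_score(likes=200, replies=400, retweets=300)

assert extreme > moderate  # activity mistaken for popularity
```

      Any scorer that counts replies without reading them will rank the angry thread above the quietly approved one.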

      This might also explain the apparent anomaly of Germany, if their left wing is more proactive and pushing policies which might be more controversial.

      If I may don my old man hat and start a quick rant: this is the problem with ML systems. Nobody can read the models they produce, nobody knows how they work - they're just black boxes.

      Back in the 90s, when "artificial intelligence" meant "expert systems", things were a horrible mess of rules and filtering/Bayesian evaluations. But you could at least sit down and trace data's path through those systems and know what each step was doing. By contrast, very few people understand the actual ML model sets and how they work, and sitting down and tracing a data point's path through one is neither practical nor useful in almost all situations.

      It'll be very interesting to see what happens the first time an ML model appears in a court of law. How will a judge take to a company or government department saying "We have no idea why it does that, nor do we know how to stop it from doing that."?

      Machine learning no doubt has its place, but we're still learning what that place is.

      1. cyberdemon Silver badge
        Mushroom

        Machine learning no doubt has its place, but we're still learning what that place is.

        Room 101?

      2. jmch Silver badge

        "more controversial and extreme content would get higher numbers"

        My first gut feeling was this too - as stated in the article, there is a significant difference in boosting not only between parties but also between different members of the same party. It's well known that more controversial posts get shared more. If the boosting algorithm takes previous shares into account, that could be (part of) an explanation.

        Whatever the case may be, it says something about the state of "AI" - or rather, machine pattern recognition posing as machine 'learning' - that the researchers with access to the raw data and the boosting algorithm still don't know why some tweets are boosted and not others.

      3. brotherelf
        Terminator

        Machine learning no doubt has its place, but we're still learning what that place is.

        Yeah, I can't help but think that Artificially Trained Stereotyping is more useful if you think of it as trying to figure out the question by getting a series of unsatisfactory answers. We used to think beating a chess master would be a sure proof of AI, until it happened and felt anti-climactic in a way.

        We dreamt of crystalline pureness of thought beyond human limitations, deities of our own making that would lead us safely into an ever-better future, while what we get is just as flawed as the human world that teaches it, and its answers are the digital equivalents of hunches. "I believe this x-ray shows cancer with confidence level 57.83643318, but I can absolutely not explain why, and the number will be different for an upside-down picture. Please respect my beliefs."

      4. elsergiovolador Silver badge

        By contrast very few people understand the actual ML model sets and how they work

        Most of a data scientist's work is preparing the input data so it can be read by their scripts, copying and pasting the model that seems most fitting for the specification, tweaking something (when they have a hunch it will make the most difference in the desired direction), then training the model, then evaluating it.

        If it does not meet the acceptance criteria, tweak something at pretty much random, train, evaluate.

        Then repeat that until PM is visibly angry it is taking so long and then let managers decide whether it is actually acceptable. Rinse and repeat.

        The majority of the skill is fitting the data and choosing the right initial model. After that it is just a (highly paid) guessing game.

      5. Anonymous Coward
        Anonymous Coward

        "If it's just looking at the numbers of likes, replies and retweets - without context - then it's easy to see how more controversial and extreme content would get higher numbers."

        Actually, the algorithms of ALL social media platforms have only a single "value" function: Screen-time, which in Twitter's case means: Responses, any responses.

        So the more extreme a tweet, the more likely a response, and the more valued the tweet is to the algorithm.

        The fact that right-wing tweets are amplified more simply means that right-wing tweets garner more responses. There is nothing about these responses that would indicate that they represent anything about the larger public or even the larger group of Twitter users.

        Given the voting results of any political movement, the extremes are far from a majority.

    3. iron Silver badge

      Clearly compensating for right wing morons who can't read. Why else would they believe horse dewormer and a toxic overdose of vitamins will cure their COVID?

      1. Anonymous Coward
        Anonymous Coward

        FYI

        Being dead will eventually cure you of almost all diseases.

    4. bombastic bob Silver badge
      Devil

      I think Twitter may be "compensating". Banning Trump alienated a LOT of people.

      Either that, or their algorithm detects TRUTH (for the more 'Conservative' posts, anyway. heh)

      Also worth pointing out: right wing (usually wacky fascist and/or racist stuff) does not equal 'conservative', so I have to wonder exactly what they classify as "right wing". Left wing is generally obvious when it orients towards socialism, social justice, universal basic income, and taxing "the rich"; right wing (racist, fascist) may be more difficult to detect if it does not contain actual pejorative terms. And I wonder how they would even detect conservative posting unless it includes discussion of C.R.T., parental involvement in schools, school choice, lower taxes/regulation, illegal immigration, states' rights, or less government in our lives (in general). [These concepts may be a little more vague for an algorithm, so maybe they are detected as 'something else'?]

      I'd just conclude that algorithms really can't understand political things.

      1. Cederic Silver badge

        Well, the Conservative party in Government in the UK just don't qualify as right-wing. Tax rises, constant increases in public spending, fiscal imprudence, authoritarian population control, largely unfettered immigration, unrealistic plans for 'net zero', constantly pandering to feminist lobbyists, reductions in military capability..

        Some people like those policies, but they definitely wouldn't swing 'right' on a traditional linear scale.

        1. Dan 55 Silver badge

          Aristocratic kakistocratic banana monarchy.

          Fancy some shit in your water?

  2. elsergiovolador Silver badge

    Schrodinger's feed

    personalized relevance model

    It's unclear why conservatives are favoured over liberals.

    What was the control group?

    Reading the paper, I can't help but think that it was set up to get the desired outcome, which makes it pretty much pseudoscience.

  3. TrevorH

    It's that Russian Troll farm liking posts most likely to lead to the demise of democracy...

  4. PenfoldUK

    Tweet Bias

    Could it be that some more right-wing people are more likely to tweet simplistic solutions to complex problems, which are themselves more likely to get retweeted?

    For example, with the Covid-19 anti-vaxx rhetoric, in English-speaking circles at least this seems to be driven more by right-wing ideology than left.

    Whereas people debunking the claims often have to post several tweets to explain their views. Which then are less likely to be retweeted.

    What is more disturbing about this story is that Twitter don't know why their algorithms are generating these results; maybe they should switch to code that identifies why certain tweets are promoted over others. Black-box processes have a tendency to give unintended results precisely because no one knows how they are making their decisions.

    1. Anonymous Coward
      Anonymous Coward

      Re: Tweet Bias

      Twitter, by design, is more suited to carrying terse, oversimplified, populist dribble. This is a selection bias baked into the medium.

      I suspect that what left wing stuff gets through is from the equally loony far left.

      ... and the pragmatic, sane centre where real humans live goes underrepresented.

      1. Disgusted Of Tunbridge Wells Silver badge

        Re: Tweet Bias

        The best demonstrator of the divide between the left and reality on Twitter is to read the comments below a BritainElects post of a general election opinion poll.

        1. Richard 12 Silver badge

          Re: Tweet Bias

          What exactly do you think "the left" is, anyway?

          1. bombastic bob Silver badge
            Devil

            Re: Tweet Bias

            what exactly does anyone think "the right" is? (kinda the same argument)

            Stranger still, "far right" (fascism, racism) and "far left" (communism, cancel culture) are too similar NOT to notice...

            1. Anonymous Coward
              Anonymous Coward

              Re: Tweet Bias

              thats easy, uncaring lying greedy bastards, see boris for prime example

      2. Binraider Silver badge

        Re: Tweet Bias

        In my experience, the pragmatic, sane centre is all but forgotten by both the media and many political parties.

        It's coming to something when the only centrist option on the paper is basically dead and buried by First Past the Post.

      3. Anonymous Coward
        Anonymous Coward

        Re: Tweet Bias

        The pragmatic, sane centre-left likely tried Twitter for a week as I did, got revolted, and never came back.

    2. elsergiovolador Silver badge

      Re: Tweet Bias

      Could it be that some more right-wing people are more likely to tweet simplistic solutions to complex problems

      Is that true though? Because tweeting simplistic solutions to complex problems is not endemic to "right wing". People from all sides of the spectrum are in the business of click baity messaging designed to feed their dopamine addiction. More retweets - more dopamine release.

      1. xeroks

        Re: Tweet Bias

        Use of simple answers tends to be a tactic used by populist politics, so generally the more extreme stuff.

        Maybe - because the majority of twitter is US based - there is little if anything on the extreme left-wing to balance those US far-right-wingers out. Until recently, even our tory party were to the left of anything the US have come out with.

      2. Insert sadsack pun here

        Re: Tweet Bias

        The answer is simple: right wingers are better at creating engaging content. Enraging people engages them - so do simple slogans. In the last couple of years the right has been better than the left in both: lock them up / build the wall / get Brexit done / leave means leave / make America great again / stop the steal... "For the many, not the few" was impossibly twee and woolly by comparison, and who even remembers what Biden's slogan was?

    3. Anonymous Coward
      Anonymous Coward

      Re: Tweet Bias

      Could it simply be that much of the left view anything they don't agree with as "right wing" or even "far right"? So all they see outside their own echo chamber is "right wing" - even modern centrist and logical views. You can't even quote simple facts without the twatter libtardsocialeft(tm) screeching that people are Nazis...

      1. Disgusted Of Tunbridge Wells Silver badge
        Facepalm

        Re: Tweet Bias

        The commenters below the line on the Guardian website will tell you that the Guardian is a right wing Tory rag.

        So yes, this.

      2. Anonymous Coward
        Anonymous Coward

        Re: Tweet Bias

        Someone was bound to throw in that "libtard" term sooner or later .... and such empty terminology is immensely helpful for us in the liberal democratic first world to play our games of "spot the American*".

        *with apologies to the two other North American nations who really need those walls built.

  5. David Austin

    Black Box

    This is something that has been talked about before, but the fact we're so happy to deploy "black box" AI - where no-one, not even its creators, knows why it's making the decisions it is - is mildly concerning.

    In this case it's relatively benign, in as much as it's giving right-wing talking heads a boost, but how many other, more critical AIs are doing the same?

    It's the AI equivalent of asking someone how they came to decision x, and getting a shoulder shrug.

    1. Lomax
      Mushroom

      Re: Black Box

      > In this case, it's relatively benign

      Unless of course you consider the collapse of civilisation to be a problem.

    2. elsergiovolador Silver badge

      Re: Black Box

      not even its creators, knows why it's making the decisions it is - is mildly concerning

      It is true that probably the majority of so-called data scientists don't know exactly what they are doing. Most of their work is just changing the model, inputs, and methods of training until it works to the given specification, but they don't know why it works.

      But in reality the why (in the technical sense) is irrelevant if it gives the desired output for the given input. The why is in the model specification - and it contains the agenda of people who wrote it.

      For example if the model is promoting certain content and demoting another, then you can very much say this has been designed to work that way. Hiding behind "AI decided that way" is dishonest, because AI in the end is just a sophisticated pattern matching contraption. It does not think, it just does what it was designed to do.

      1. Joe W Silver badge

        Re: Black Box

        I would probably call them data alchemists... If you just put ingredients together, mix, burn, and boil until you get the desired outcome (gold, the philosophers' stone) without a clear understanding of why X happens if Y is changed - then it is alchemy.

        (Yes, there are some people doing great foundational work in that field. Yeah, sure, those are scientists I guess).

    3. xeroks

      Re: Black Box

      weird use of the term "benign".

      Promoting policies and attitudes that cause damage to people isn't harmless.

      1. David Austin

        Re: Black Box

        I don't disagree (hence the modifier "relatively"), but in this case I was comparing the AI outcomes to safety-critical systems like, say, car crash avoidance, missile guidance, or medical AIs, where the black-box problem has more immediate and concrete (read: harder to deny) ramifications.

  6. Anonymous Coward
    Anonymous Coward

    That's easy

    Everyone knows that the right makes better memes. Left wing rhetoric is enough to send even an AI algo to sleep!

    1. codejunky Silver badge

      Re: That's easy

      @AC

      "Everyone knows that the right makes better memes"

      You could be onto something with that. The right/libertarian has joy, humour, range of opinion and a sarcastic streak a mile long. I don't use Twitter, but from other platforms and discussion boards this seems to be fairly consistent.

      I was interested to see the humorous output of the left consisted of 'orange man bad' before hitting the floor once Trump left office. It's almost as though left humour is majority politics and stops there.

      Of course it helps that comedians with a history of making people laugh are against the censoring which blocks humour.

      1. Anonymous Coward
        Anonymous Coward

        Re: That's easy

        You could be onto something with that. The right/libertarian has joy, humour, range of opinion and a sarcastic streak a mile long. I don't use Twitter, but from other platforms and discussion boards this seems to be fairly consistent.

        Or the general public has joy, humour, range of opinion and a sarcastic streak a mile long, and wealthy and overly sanctimonious prats thinking they are better than other people who have egos puffed up like a gas giant have always been, are now, and will always be a wonderful target for getting needled by anybody and everybody who enjoys watching them deflate?

        If you are thinking "the right makes better memes" then you're probably uncomfortably far left compared to the rest of the population.

        1. bombastic bob Silver badge
          Trollface

          Re: That's easy

          If you are thinking "the right makes better memes" then you're probably uncomfortably far left compared to the rest of the population.

          no, I think the idea that the left is currently stifling humor is correct. That is, applying "cancel culture" means that the only thing left for, uh, "the left", is to PANDER TO THE PERCEPTION. And that's usually not funny except to those who laugh at whatever makes themselves look better at others' expense. Yeah, it's an "ego thing". My Bombastic Opinion.

          good satire and humor is rooted in truth. Something like "How many cancel culture advocates does it take to change a light bulb? HOW DARE YOU SAY LIGHT BULB!!!" (yeah I just made that up. but some of these jokes just write themselves, ya know?) And THAT is why "The Left cannot meme".

          1. Anonymous Coward
            Anonymous Coward

            Re: That's easy

            ""How many cancel culture advocates does it take to change a light bulb? HOW DARE YOU SAY LIGHT BULB!!!""

            What idiot says the right have a sense of humour and then gives a perfect example of why right-wing loons are shit at comedy?

      2. bombastic bob Silver badge
        Thumb Up

        Re: That's easy

        see icon

    2. Anonymous Coward
      Anonymous Coward

      Re: That's easy

      Not at all. That is just the usual perceptual bias of presuming anyone who is quiet must be quiet because they agree with you. You don't like the left, so you give credit to the right, and the posters of the memes are probably center and think both extremes are nuts.

    3. Anonymous Coward
      Anonymous Coward

      Re: That's easy

      So true. The Right's glorious Digital Soldiers over on 4/8Chan sure know how to work that MeMe Magic.

  7. tiggity Silver badge

    Who on earth

    Chooses "personalized relevance model" for their data?

    Using "AI" to determine relevance is always asking for grief.

    Cannot speak for Twitter, but if its relevance accuracy is anything like the recommendations I get for content on YouTube* then it will be dismal.

    * I don't use YouTube a huge amount, but as I'm not a social media user it's about the nearest experience I have of "AI" guessing what I'm interested in. And YT's algorithms are very poor at suggesting relevant content.

    1. Roland 2

      Re: Who on earth

      > Who on earth chooses "personalized relevance model" for their data?

      You do as soon as you use Google search.

      Which is a much bigger concern than what kind of political gibberish you see on Twitter, as we all now see the world, more or less, through the lens of web searches.

      BTW, am I the only one to have noticed that Wikipedia entries have moved down the Google result list in the last few months?

      Could that be correlated to zero ad revenue from Wikipedia?

      Just guessing.

      1. Anonymous Coward
        Anonymous Coward

        Re: Who on earth

        more like, google spotted that wikipedia is generally full of shit, not all shit, just a majority.

  8. Anonymous Coward
    Anonymous Coward

    Twitter. The preferred platform of right wing fundamentalists and neo-Nazis everywhere.

    :(

  9. mevets

    economics?

    Can they partition again, into "Left True; Left False; Right True; Right False", to see whether the amplification follows not just the bias but also the fantasy aspect?

    The revelations of hacker X, and his claims that they stuck to a right-biased narrative because it was more lucrative, might be a fundamental force. If Leftish consumers don't prop up the ad revenue by swallowing left-wing fantasies whole, but Rightish ones do for right fantasies, then it is merely an economic model at play. Spreading right-wing bunk is more profitable because it draws more customers than left drivel does.

    A complementary bias in truth -- that left-true articles draw more support than right-true articles -- would drive a further nail into the coffin, leading to a conclusion: the Left will only pay for truth; the Right will only pay for fantasies.

    1. codejunky Silver badge

      Re: economics?

      @mevets

      "Can they partition again, into "Left True; Left False; Right True; Right False" to see whether the amplification follows not just the bias, but also the fantasy aspect."

      That would come with a problem. There is fundamental disagreement over what is truth, what is opinion and what is downright insanity. Take FB branding COVID lab-leak claims as lies before having to step back once reality emerged. Where do they stand on what is a man and what is a woman? The effectiveness of different kinds of drugs? Climate change and its effects?

      I expect much of the difficulty would be the initial definitions, which of course would never be agreed on as to what is and isn't truth.

      1. Anonymous Coward
        Anonymous Coward

        Re: economics?

        sounds like you swallowed the orange "alternative facts" orgasm.

        remember to spit next time

        1. codejunky Silver badge

          Re: economics?

          @AC

          I note the cowardice of the response. So stick your account name to a reply answering each of those questions and let's see if you can get full agreement. Easy enough for you to demonstrate right here and clear for everyone to see.

          Or are you just getting your sexual frustration out behind an anonymous mask?

        2. Anonymous Coward
          Anonymous Coward

          Re: economics?

          They only swallow Tim Worstall's opinions, not facts.

          1. codejunky Silver badge

            Re: economics?

            @AC

            I will just assume your my coward pet troll. Go get a biscuit

            1. Anonymous Coward
              Anonymous Coward

              Re: economics?

              No, it's "you're my coward pet troll". Surely?

              Hobnob?

  10. Draco
    Windows

    Self-serving study

    The explanation of the methodology is quite good, but the discussion in the paper is designed to push the narrative that their algorithm tends to promote conservative / right leaning tweets more than liberal / left tweets.

    The raw data is missing, along with data on the political leanings of those engaging with the tweets. From what I gather on the Internet, in the US and UK the users skew to the left. (In the US, 60% of Twitter users lean Democrat, 35% Republican: https://www.pewresearch.org/internet/2019/04/24/sizing-up-twitter-users/)

    As the paper notes: The selection and ranking of Tweets is influenced, in part, by the output of machine learning models which are trained to predict whether the user is likely to engage with the Tweet in various ways (like, reTweet, reply, etc). [SI 1.14]

    I think The Economist expressed it best when it took a look at tweet favouring in 2020: The platform’s recommendation engine appears to favour inflammatory tweets https://web.archive.org/web/20200803093134/https://www.economist.com/graphic-detail/2020/08/01/twitters-algorithm-does-not-seem-to-silence-conservatives

    Those inflammatory tweets are exactly the ones that are going to get engagement. As the paper notes, the only type of tweets they considered were: We then selected original Tweets authored by the legislators, including any replies and quote Tweets (where they retweet a Tweet while also adding original commentary). We excluded retweets without comment. [p3, Results]. The rationale for excluding retweets without comment was: attribution is ambiguous when multiple legislators retweet the same content [ibid]. I think there is an additional problem / bias - (I suspect, but have no data to support this) people will retweet without adding a comment if they agree with / support the tweet, but are more likely to attach some editorial comment ("SO STUPID!!!") when they disagree with / oppose the tweet. [Ambiguity note: I find the paper ambiguous on whether it is only "legislator" retweets without comment that are ignored, or if that includes any "engaged" retweet of a legislator's tweet / retweet without comment.]

    Since humans - like all animals - have evolved to watch and attend to (i.e. "engage with") any real or perceived threat, tweets that are "oppositional" will garner more attention. Since (at least in the US and UK) Twitter users are left-leaning, they will engage with what they perceive to be threats - which is what the algorithm will serve up to them, and which will come from the other side of the political aisle. QED. Or to repeat what The Economist said: The platform’s recommendation engine appears to favour inflammatory tweets

    -------

    Why the paper is self serving:

    In the main body, it argues: With the exception of Germany, we find a statistically significant difference favoring the political right wing. This effect is strongest in Canada (Liberals 43% vs Conservatives 167%) and the United Kingdom (Labour 112% vs Conservatives 176%).

    Yet, when you look (in Canada) at the amplifications of individual legislators, you see the Liberals and Conservatives (almost) perfectly mirror each other (Chart 1C) - i.e. the amplification of individual members of the Liberal or Conservative parties is pretty much the same, yet the group amplifications are very different. The paper explains that this "discrepancy" is addressed in SI 1.E.3 (which, I think, is meant to be SI 1.5.3).

    It is easy to see that if amplification a(G) of a group G were a linear function of the amplification of individuals i ... [then the sum of] individual amplification parity implies equal group amplification" [SI 1.5.3] (substance of the quote, equations didn't come through)

    However, our definition of amplification does not satisfy this requirement. To see why, consider the function f(G) = |U_TG|, where TG is the set of Tweets authored by members of the group G and U_TG is the set of users who registered an impression event with at least one Tweet in TG. The function f is a submodular set function exhibiting a diminishing-returns property: f(G ∪ H) ≤ f(G) + f(H). Equality would hold if Tweets from groups G and H reach completely non-overlapping audiences. [SI 1.5.3] (Again, apologies for the not quite 100% quoting, but ... equation problems.)

    This means that you have a much wider range of tweets from the Conservatives than the Liberals (remember, I'm looking at the Canada result / conclusion). Recall, from Chart 1C, that individual Liberal and Conservative legislators get about the same amplification, but when we aggregate the amplification by group, the Liberals get less amplification than the Conservatives. But the aggregate is a submodular set function: if the Liberals are all sharing the same tweet ("Conservatives Evil!") then each individual Liberal will get their individual "amplification", but the aggregate tweet amplification will be for that one tweet, and consequently lower because of the high overlap; if individual Conservatives are tweeting all over the place ("Liberals Evil!" or "Crystal Skulls" or "We're not the Liberals!"), each Conservative will get their individual "amplification" (which, more or less, matches the individual Liberals), but the aggregate group amplification will be higher because there is less overlap between the tweets.

    This leads to (at least) two different ways to spin it: (1) Liberals are focused and on point, Conservatives are all over the place; (2) Liberals share only one voice, Conservatives have many individual voices.
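    The overlap effect is easy to demonstrate with a toy audience-reach function. Everything here (member names, audience sizes) is invented; it only illustrates the behaviour of the quoted f(G) = |U_TG|: identical individual reach, different group reach, purely because of audience overlap.

```python
# Toy illustration of the submodular audience-reach function f(G) = |U_TG|:
# group "amplification" counts *unique* users reached, so overlapping
# audiences aggregate to less. All names and numbers are invented.

def unique_reach(tweets: dict, group: list) -> int:
    """f(G): count distinct users who saw at least one tweet from group G."""
    seen = set()
    for member in group:
        seen |= tweets[member]  # union in this member's audience (user IDs)
    return len(seen)

tweets = {
    "lib_a": set(range(100)),          # two Liberals sharing essentially
    "lib_b": set(range(100)),          # the same message: total overlap
    "con_a": set(range(100, 200)),     # two Conservatives tweeting all over
    "con_b": set(range(200, 300)),     # the place: disjoint audiences
}

# Individually, every legislator reaches exactly 100 users...
assert all(unique_reach(tweets, [m]) == 100 for m in tweets)

# ...but as groups, the overlapping Liberals aggregate to less.
assert unique_reach(tweets, ["lib_a", "lib_b"]) == 100
assert unique_reach(tweets, ["con_a", "con_b"]) == 200

# Submodularity: f(G ∪ H) <= f(G) + f(H), equality only for disjoint audiences.
```

    Equal individual amplification with unequal group amplification needs nothing more exotic than this.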

    Now, many countries (apart from the US) have multiple parties. The paper focuses on "comparing the amplification of the largest mainstream left- and right-wing parties in each country" [SI Figure S1A] and ignores all the other parties. In Canada, two other parties are listed (the NDP and the BQ, both on the left; indeed, the BQ has higher amplification in Canada than the Conservatives). Why aren't the left and right parties aggregated together so we can see overall left/right amplification? Why is amplification reported only for individual parties, and then generalised as "the right wing gets more amplification"? The Liberals + NDP + BQ are three left voices against the one Conservative voice in Canada.

    We should ask what the binary left/right "amplification" was for the other countries as well, and not just the per-party amplification (which is then presented as representative of the left/right amplification):

    UK : 3 left + 1 right

    Germany : 3 left + 3 right

    France : 3 left + 4 right

    Japan : 2 left + 3 right
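    The binary aggregation the comment asks for is a one-liner under the paper's own reach measure. A hedged sketch with made-up audience sets (the party names are real, the numbers are not): pool every left-wing party's audience before comparing against the right, rather than comparing party-by-party.

    ```python
    # Hypothetical audience sets for the Canadian parties; the point is
    # the aggregation method, not the numbers.
    left_parties = {
        "Liberals": {1, 2, 3, 4},
        "NDP":      {3, 4, 5, 6},
        "BQ":       {6, 7, 8},
    }
    right_parties = {
        "Conservatives": {10, 11, 12, 13, 14, 15},
    }

    def bloc_reach(parties):
        """Union the audiences of every party in the bloc, then count."""
        return len(set().union(*parties.values()))

    print("Left bloc reach :", bloc_reach(left_parties))   # 8 unique users
    print("Right bloc reach:", bloc_reach(right_parties))  # 6 unique users
    ```

    Because the blocs' parties also overlap with each other, a left bloc can out-reach the right even when each individual left party trails the single right party, which is why the per-party comparison and the binary comparison can tell different stories.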

  11. T. F. M. Reader

    Define "algorithms"

    I am having a problem with the notion of "algorithms", developed by an organization, that led to an observable result that a team of experts working for the same organization cannot explain. Of course, AI is incomprehensible magic, and finding bias in inputs is too hard a task, even if you have all the information you need since you work for the same company, alongside the people who developed the "algorithms", wrote the code, compiled the data set, divided it into training and testing and whatever other parts, and tuned the parameters...

    Given this situation I am skeptical about the possibility that an outside team can reproduce the research or shed light on the results using a reduced set (possibly with additional undetected biases thrown in for good measure). Even the Commentariat's chances of doing it on the basis of common sense alone are pretty slim, IMHO.

    Sorry, but the first thing that pops into my nasty suspicious mind is a variant of "policy-driven statistics": look what our completely objective [but unverifiable - khem, khem...] research found - now we need to apply corrective bias to become more balanced! Success - a narrowing of the gap, possibly to zero - will be confirmed by future research.

    Cynical? Moi?

  12. ShortStuff

    It's Obvious

    The Right tell the truth and have valid arguments. The Left lie and accuse you of racism if you say anything they don't like.

    The Right have a sense of humor and can laugh at jokes, even if they themselves are the target of the joke. The Left have no sense of humor and call you a racist if you make a joke.

    Any algorithm that rates the truth and humor above lies and condescension will obviously choose comments from the Right over the Left.

  13. TheMeerkat

    People are more engaged with tweets that are anti-establishment.

    All this shows that the left-wing politicians and their ideology are seen as establishment.

  14. Dan 55 Silver badge
    Devil

    After the Facebook leaks...

    "We don't even know how our ML works!" - Twitter.

    So it's not their fault, of course. Always good to get your excuse in first.

    And now, we return you to your scheduled programming of the decline of civilisation.
