Let's pass a new law
But these states are currently busy forbidding teachers from having any discussions about gender choices.
While the US Supreme Court considers an emergency petition to reinstate a preliminary injunction against Texas' social media law HB 20, the US Eleventh Circuit Court of Appeals on Monday partially upheld a similar injunction against Florida's social media law, SB 7072. Both Florida and Texas last year passed laws that impose …
If that was really what they were doing, most of us wouldn't mind.
But it's not. "Nonsense" is a very generic term; it can be about absolutely anything. But only one very specific type of "nonsense", about one very specific topic, is under debate. That gives away the real agenda, which is nothing to do with "protecting impressionable children" and everything to do with "virtue signalling to impressionable adults".
...as to remove the only protection they have from the "liberal media elite" posting hundreds of thousands of social media articles a day telling the world that they - that is the Texas and Florida legislature - are closeted, gay, paedophiles who scream constantly about family values in order to throw up a smokescreen that covers their participation in illegal drug fueled orgies.
Which isn't the sort of thing that I would ever say, because that would be defamation and provably false, of course. But you know, if the social media companies are just "common carriers" then they can't possibly be expected to remove such obviously false and damaging lies, or prevent them from trending to the top of every news feed in the world.
...but then I remember who we're talking about. Yeah. They'd definitely be that stupid. Go for it lads. I'll get the popcorn ready.
I've been saying for a while that something like this should happen.
Social media companies need to make up their minds what business they want to be in. Are they platforms, or are they publishers?
If platforms, then they get their coveted legal immunity - but they don't get to censor, or promote, content. All those "engagement" feed algorithms would be useless.
If publishers, then they can censor and promote and moderate all they like, but the flip side is that they're responsible for the output. We shouldn't give them a pass on that just because there's more content than anyone can track. That's the business they chose to be in, it's their job to solve the problems it brings.
I don't see any logical necessity that there can't be a third status (which, in fact, they have today under US law): a private company operating a bulletin board which can choose to allow, or not, any post they choose on their bulletin board, and yet the authors are liable for what they post.
The bulletin board operator clearly cannot determine legal issues of whether something is legally permitted or not: the author is responsible in court for what they say. Equally, the bulletin board operator may exercise whatever editorial decisions they like: if they only want to allow discussion of the Buffyverse on their bulletin board that is their choice.
For the user to be responsible in court for what they say surely means that the user must be known to the court, and under the court's jurisdiction. So this implies that firstly, the bulletin board operator would need to verify the identity of all posters, and secondly that they can only permit posts within a country.
Verifying a poster's identity would of course eliminate fake accounts, but the requirement that a poster submit to the laws of, potentially, another country causes difficulties. We have already seen examples of "libel tourism" where rich individuals have used the strict libel laws of England and Wales to obtain redress for being libelled online/harass people investigating their affairs (delete as appropriate). Would all users of Facebook be required to submit to US law, for example? Would a poster have to declare which countries their post was visible in, and submit to the laws of all countries selected?
It could also cause problems for celebrities who have someone to manage their social media accounts for them. (A recent libel case in London had this as a significant element.) The celebrity would have to assume liability for whatever the employee wrote under their name.
(A hacking defence would have to be permitted, too - "I lost control of the account on dd/mm/yyyy - anything posted after that wasn't me.")
In a word, yes, to pretty much all of it.
Right now, posting to social media is posting globally. That feels cool, but it has global effect. Because these companies have chosen to make it a global effect.
Innovation is cool until it becomes innovative ways to break laws all over the world.
Suppose I (a natural US person) decide that I've had it with the current government of India. I learn Hindi (stay with me). I start stirring up the Muslim population. Just how far can I go before the government of India takes an interest?
Actual news organizations have had to deal with this forever. You go into a country, you abide by their laws.
The hard-won free speech rights supposedly enjoyed by US persons have never been global in scope--until Zuckerberg figured out how to make a mint off it. NO country is obligated to respect his business model. Or, the US version of Freedom of Speech.
I keep getting downvoted for this, but if you operate in country X, you must abide by the laws of country X. That applies to Facebook. That applies to me if I attempt to influence the affairs of another country.
I agree completely. There are many "interpretations" of the US Constitution that are espoused by misguided individuals even within that country. Not the least of which is their foolish presumption that American law applies when they are touring the world.
Not that other nations are immune from having such individuals in their populace, but the Americans are told from day one that they're the "greatest" and indoctrinated in their particularly perverse version of "freedom" that leads to widespread hate-speech, gun crime, needless deaths, and a government that is bought and paid for by PAC donations/bribes (no other nation allows such unfettered financial contributions and so little oversight on the legislation subsequently put into place by the elected.)
Global communities are funny things. No matter what you say, you are bound to offend a significant number of people just because you have an opinion they disagree with. And some of those people expect legal protection from "offensive" content. :(
I'm pretty sure I get what you're trying to say, but in fact, US laws generally DO apply globally--to US persons. In fact, we have laws expressly against certain types of sex tourism, for instance. Of course, if you travel (or operate) outside the US, the locals are going to demand that you obey their laws. A fact that seems to elude many people. (And get me lots of downvotes when I phrase it a certain way here.)
For the user to be responsible in court for what they say surely means that the user must be known to the court, and under the court's jurisdiction.
No. It does not.
If I post a libellous message on a nearby telegraph pole in the middle of the night with no one seeing me, that doesn't make me any less responsible for it. It just might make me harder to find and prosecute.
Similarly, if I do the same thing posting on the village noticeboard on the village green. In that case, of course, the owner of the noticeboard is allowed to remove it if they wish.
Clearly, though, they are not acting as a bulletin board where they are choosing to allow or not certain posts globally for every user. Each user's feed is individually curated, and therefore they are exercising a very high level of editorial discretion on each and every post, in each and every user's feed. That this editing is being done automatically by algorithm is just incidental - in principle I would see the liability issue as closer to that of a newspaper, where the individual journalist posting the story and the publication itself are jointly liable.
Facebook can't just pass on the responsibility for illegal posts to the poster; they are choosing to publish the post in a curated feed, so they are jointly liable. Claiming that they can't moderate billions of posts a day is also bunkum. They can and do curate billions of posts on hundreds of millions of feeds every day. If their algorithm is making editorial decisions on who sees what in their feed, they can certainly do the same to, at the very least, flag posts for human review.
The "curated feed" is just Facebook doing what its users want: they want posts that are most interesting to them. If you don't like the way Facebook curate your feed, stop using Facebook. Don't complain about the way they curate other people's feeds: either they are doing what their users want or they aren't and will lose users. It still doesn't open them to any liability - that remains clearly with the author.
And as for your comments on moderation... just go and do some real research before repeating them. You obviously have completely missed the scale of the problem.
"If you don't like the way Facebook curate your feed, stop using Facebook. "
Why, thank you. In fact I stopped using FB years ago, but that's not the point. FB aren't curating the feeds in a way that the users prefer (since users can't express any preference among all available posts), they are curating them in a way that maximises user engagement and dopamine micro-hits. Most users have no way of knowing whether the feed is curated in their best interest, because they have no alternative to compare against (i.e. they don't know what FB is filtering out). FB might be filtering out a bunch of posts that the user would find extremely interesting (but which the user is less likely to like or reshare), and the user would never know.
Either way, what Facebook is doing, algorithmic or not, is editorial, and therefore they should be liable to clean out illegal content at the very least within a certain time limit of it being flagged, and they should be jointly liable with the poster should they decide to ignore or permit illegal content.
Regarding "And as for your comments on moderation... just go and do some real research before repeating them. You obviously have completely missed the scale of the problem."
I don't need to research the scale, I know it's massive. But if FB's algorithm can curate a gazillion posts and feeds in real time, it can just as efficiently scan and flag a lot of problems in real time. It needs human eyeballs only on posts that either score very highly on their AI warning scale or on posts that are actively reported (i.e. they use their own users as content moderators to cut costs). Even then, you are probably right that given the scale of the issue, many illegal posts might slip through, but at least it would be an improvement.
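The triage being described is straightforward to sketch. This is a hypothetical illustration, not any real platform's system: every post carries an automated risk score, and only posts scoring above a threshold, or actively reported by users, go into a human-review queue. All names and the threshold value are made up for the example.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.8  # assumed cut-off; a real platform would tune this


@dataclass
class Post:
    post_id: int
    text: str
    risk_score: float = 0.0  # would come from the platform's classifier; stubbed here
    reported: bool = False   # set when a user reports the post


def needs_human_review(post: Post) -> bool:
    """Flag for human eyes only when the model is worried or a user complained."""
    return post.reported or post.risk_score >= REVIEW_THRESHOLD


def triage(posts):
    """Split a stream of posts into a human-review queue and everything else."""
    queue = [p for p in posts if needs_human_review(p)]
    rest = [p for p in posts if not needs_human_review(p)]
    return queue, rest


posts = [
    Post(1, "cat pictures", risk_score=0.05),
    Post(2, "borderline rant", risk_score=0.85),
    Post(3, "reported by a user", risk_score=0.10, reported=True),
]
queue, rest = triage(posts)
```

The point of the sketch is that the expensive human step only sees the small flagged subset, while the cheap automated pass sees everything, which is exactly the economics the comment argues FB could already support.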
Your solution seems to be to shrug shoulders, absolve FB of all responsibility and give up even trying.
Of course there is a third status, specially written into law because "on the internet". But I think that - like most laws that were only ever passed because "on the internet" - is a mistake.
And even that law is not very effective. In practice, if the operator can control the content, then they must. Remember some of the outcries about this or that company not removing some offensive content? They happen all the time, both from left and right wingers, and every company ends up bowing to that pressure.
That's why social media companies start out big on freedom, but as their numbers and reach and fame grow, invariably they start to clamp down on what is and isn't acceptable content. Invariably. Facebook trod that path, Twitter did it, and I daresay Truth Social will do it if it lasts long enough.
Moreover, if the operator is going to hide behind the author's legal liability, then the operator needs to take responsibility for identifying the author and making him/her available to the courts. If they can't or won't do that, then the liability lands straight back on them.
All the social media companies can, and do, provide all the information they have to identify a user once a court has ordered it. No company has ever refused to do that (although some value their users' anonymity more highly and choose to dispute such orders in court - but they comply if the court system eventually makes the order).
I hope you are not saying that users should be forced to provide identification before being allowed to speak on social media? There is a reason anonymous public free speech is allowed and the reason is in the word "freedom".
Why shouldn't users be identified?
I miss the days when a loudmouth had to be polite because they were in punching and slapping range. At the very least, they need to be kept in lawsuit range.
I firmly believe in full identification, authorization, and even licensing to be permitted to access the internet. I am tired of belligerent anonymous screamers on the internet that only scream because they think they're out of reach. Well, as far as I'm concerned, it is time to pull them back in reach - and give them a good legal slap.
I have to say I disagree with you on this.
To play devil's advocate (and to steal the excellent analogy of the bulletin board from Mr Cobb), what you're proposing is that should a Church put up a bulletin board, they can either act as a platform and not be liable for what is put up on the board, but also not be allowed to remove that flyer for the Satanic Church just down the road; or they act as a publisher, where they can remove or ban anything they want from the board, but can be sued for spreading hate speech when they start posting anything from the Old Testament on the board, or if they are just too slow to react to one of the parishioners posting up a rant against Gay people or in support of the KKK.
Now, I'm pretty certain in the real world, most people would expect the church to be able to decide what is allowed to be posted on the church bulletin board and to remove anything they don't like. It would also be expected that they remove anything hate filled and illegal as well, although people would be inclined to give them a bit of time to notice the problem and remove it.
Now if the Church moves their bulletin board online, does that change anything? The same rules would apply, no? Maybe we might expect them to act faster than with the physical Church to remove the bad stuff, but other than that the same rules apply, no?
Now define why a social media company should be held to a different standard to the local church bulletin board? That way lies madness...
Well, consider the propagation of content. A church bulletin board has a specific location, it's accessed by a finite community of people, and they have a reasonably well understood expectation of the range of content they are likely to find there. Even if you put it online, it'll have a URL and a page header that makes all this fairly clear.
And the community are all more or less known to one another, everyone knows who the trickster is who's likely to post the notice about the Flying Spaghetti Monster.
That's not comparable to any social media company I've heard of. Even Truth Social has a larger community and much broader terms of reference.
I have to say, I find the idea of transparency about moderation rules appealing. Unfortunately, legislating for that would probably also run foul of the First Amendment ("abridging the freedom [...] of the press").
The size of the community is completely irrelevant - the law is the same for Facebook, Twitter, Reddit, El Reg or the Vegan Forum (not my choice but one in the judgement). Everyone jumps on the big tech guys but you have to remember that this equally applies to this forum.
Basically the Government cannot force a private company to moderate speech in any way. It is a very basic First Amendment right.
Don't take my word for it, read the actual judgement and see what the appeal judges of the 11th Circuit (led by Kevin Newsom, a Trump appointee no less) make of it: Judgement (pdf)
This also debunks the common carrier argument and the various misinformed cases such as Pruneyard, and underlines the fact that using an algorithm to provide each user with their own feed is actually protected speech. 67 fairly short pages that should help to educate any who read them.
I have to say, I find the idea of transparency about moderation rules appealing.
Of course. I expect everyone here does. Just as everyone here finds the idea of a voting system which reflects the will of the public appealing.
It is just a shame that both are impossible.
Kenneth Arrow famously proved mathematically that a completely fair voting system is impossible.
No one has yet proved fair moderation is impossible, but Mike Masnick has written several informative articles about the problem. I recommend them.
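Arrow's theorem itself concerns ranked voting rules satisfying a short list of fairness criteria, but a simpler, related illustration of why "fair" aggregation is hard is the Condorcet paradox: with as few as three voters and three candidates, pairwise majority preference can be cyclic, so there is no majority-consistent winner. A minimal sketch:

```python
# Three voters, each submitting a full ranking (best first).
# This preference profile is the classic Condorcet cycle.
ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]


def majority_prefers(x: str, y: str) -> bool:
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ballot in ballots if ballot.index(x) < ballot.index(y))
    return wins > len(ballots) / 2


# Pairwise majorities form a cycle: A beats B, B beats C, yet C beats A,
# so "what the majority prefers" is not even a transitive ordering.
```

Majority rule looks fair one pair at a time, yet no candidate survives all pairwise contests; any rule that must pick a winner here has to override some majority, which is the flavour of trade-off Arrow's theorem makes precise.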
"Kenneth Arrow famously proved mathematically that a completely fair voting system is impossible."
That does not mean you should just give up. Improvements are possible, that make voting more fair. That is good enough for me, if we ever get around to it.
And everyone sees noon at his own door.
That is why these sort of discussions very often result in screaming matches. Everyone believes they are right, but not everyone can listen to someone else's arguments.
That said, not everyone is capable of presenting a reasoned argument either.
It's "Woke the bogeyman".
The bogeyman that makes you M&M's less-sexy but your kids more-sexy at the same time according to Tucker Carlson on different nights.
It's the bogeyman that infiltrated Disney to make movies about bestiality and homosexuality for 5 year olds (Frozen, if you're wondering what movie that claim was about).
Woke is censoring right wing Republicans they keep saying ad nauseum. Woke keeps putting links to something called "science" on their tweets. They are not free to speak, they say.
Now woke is attacking Elon Musk, according to Elon Musk. Woke says you cannot whip out your tallywacker for an employee on your Gulfstream, demand she suck it, then pay a quarter of a million when she declines to do so. You'd think he needs to be clearer about the "type" of masseuse he wants for his Gulfstream, but no, apparently it's all "woke's" fault he forgot to add "hoe" to the job description.
Many would think that Elon doesn't want to pay full price for Twitter, because he wanted to pay with Tesla shares and the share price collapsed, so now he wants to back out of the deal. But no, it's the "woke mind virus" that did that.
Don't say gay, racism, vax status, evolution, tallywacker, science, coup .....
I'd say you like Elon Musk about as much as I do (as in totally not), but the issue is wider than just Twatter.
It would be interesting to see how long a revealing post would stay up about Zuckerberg as opposed to one that pushes a child into suicide. I suspect the former would demonstrate that they CAN exercise control, in that it would be taken down in a time only measurable in msec; the latter would only be removed once the potential legal costs exceeded the advertising profit. And no, it's not just FB either.
Social media means that village idiots get a megaphone and instead of just being a local problem, they become a global problem. For profit.
"The free press" is mostly, nowadays, a way for people rich enough to own newspapers to promote their views. Even 50 years ago, some press barons tried harder than others to publish more news and less comment, but they are almost all just opinion nowadays.
The role of the "free press" in protecting freedom has pretty much been replaced by the internet nowadays, at least in those countries where internet access is widespread and reasonably open. Social media is part of that, but only part. Many, many campaign groups (some narrowly focused like, say, Humanists UK or Big Brother Watch, others much broader like Liberty or the National Front) use the internet to spread their views and organise their campaigns. That applies equally across the political spectrum.
No one gets to censor those campaign groups' emails - whatever opinion they express, however hateful they are.
Though there is (some) censorship of some material.
Whether the majority agree with a thing or not, does that dictate whether a fringe political group should get a say?
Some Ultra-right wing groups have been outlawed in the UK, indeed membership in them is a crime. Some ultra left groups operate freely and publicised heavily in media in fact.
Do they have a right to espouse their views? Perhaps. Do they have a right to be ridiculed for them? Absolutely. Do they get to super glue themselves to trains, legally? Nope.
For the record I am firmly centre-left, and largely believe in education being the best way to ward off the encroachment of lunatics into our politics and media.
the problem is a large minority of people nowadays have issues discerning the difference between facts and opinions.
I firmly believe that we need to restore the balance of critical thinking and wider opinion in this algorithm-driven opinion funnel world, where our views are curated, sorted and centralised, and opposing ones excluded as irrelevant.
we need to think about subjects from multiple angles and weigh multiple points of view, make our own decisions on motives and how the facts affect us, and not just accept the opinion of someone not directly involved, just because they are from our "team."
the world needs to stop dividing into "US" and "THEM" and come together as one for the common good. Until we actually have some discourse, outside our digital bubble, we don't realise the breadth of other views that exist and that they may be reasonable and the people that hold them might share some views with us, and the extremes painted by either side, of the opposition, may be exaggerated, possibly to the absurd, to get us to take a more extreme stance in the other direction.
Now there are no perfect solutions, but if everyone has a voice and is free to speak from their perspective, we will realise we have more in common than divides us, and we differ far less than we think, allowing us to find common ground to forge a path where we work together to combine our strengths and not exploit each other's weaknesses, and as a whole achieve a fairer, more contented society, that allows us to understand that life isn't a zero-sum game, and we can all be the best we can be, without the need to tear each other down.
The Orville did an absolutely hilarious episode of a world governed and law-enforced by upvote / downvote. It's really not so far removed from the arguments played out here, or in social media, or politics in the here and now.
FPTP has a lot to answer for as far as the them-and-us mentality goes; if you're not with them, you're against them. Baked in at the highest level of our politics, and the trickle down to everything else is very apparent.
restore the balance of critical thinking and wider opinion in this algorithm-driven opinion funnel world
This. The problem with the 'AI' driven stuff (and we all know it's not true AI) is that it doesn't (and likely can't) understand the nuances of human speech. So they resort to word-matching and pretty feeble attempts to parlay past likes into possible future likes - ignoring the fact that doing so just acts as a reinforcement mechanism for the darker side of things.
All the social media companies promise advanced algorithms to do their content filtering but, in the background, still have to employ lots of staff and contractors to do filtering.
It all boils down to US-incorporated companies (the social media companies) being subject to US (and local US state) laws.
But what if these companies weren't incorporated in the US... but let's say in Ireland or Sweden, or Iceland... They would be subject to the local law of the incorporation country and wouldn't need to bother with all the stupidity of Trump cronies trying to give back what their Great Leader lost by being a wannabe Banana Republic Dictator.
They would still need to deal with the laws of other countries, just as Facebook must make at least some effort to lie about complying with GDPR and having some kind of backdoor preventing the Irish DPC from investigating them. If you operate in a country, even if you are incorporated elsewhere, they will be able to apply their laws to you. With the internet, this isn't always strong. For example, if I put something on my website that China doesn't like, I'm not going to comply with their censorship law and they can either block me or not as they choose. If I were selling something or had my systems located in China, they'd have more leverage to do something about this and could successfully force me to comply. Social media companies sell advertising and thus earn money in the countries where their users are, so those countries have a method for punishing them if laws are not obeyed. Your solution will work as soon as we have a social media company that doesn't care about earning money or having anything located in the countries whose laws they don't like.
"The Republican governors of both states justified the laws by claiming that social media sites have been trying to censor conservative voices, an allegation that has not been supported by evidence."
I assume this is satire. There were some glorious failures demonstrated with the pushing of 'fact checking'. The one where the facts seemed to change quite rapidly when the facts came out. As the article states with-
"They have found that social media sites try to take down or block misinformation, which researchers say is more common from right-leaning sources."
Yet the left do seem to get a free pass for misinformation.
"Multiple studies addressing this issue say right-wing folk aren't being censored."
The reason to ban Trump was for breaking Twitter's rules. The reason: "permanently suspended... due to the risk of further incitement of violence". Good job Putin doesn't violate such rules. Same with BLM, if it's a closer to home issue.
The solution to lies is free speech. While some will post lies it is a guarantee of lies if opposing opinion is to be censored (as we have recently seen). Want to see how stupid this is-
"They are free to create their own social media. Nobody is stopping them. They are NOT free to use someone else's property any way they want."
Watching the mental breakdown when Musk talked about buying twitter and allowing free speech was amusing enough. And Trump already has. But I merely reacted to the article which seemed to suggest no bias where clear bias has already been shown.
*Dunno who downvoted you. It's a reasonable comment to make
Social media needs to shift from a centralised topology to a distributed one, users having the right to self-censor their feed as they wish, according to their needs.
You can operate such a service for free by building it on top of an e-mail client, moving messages and content between users via the e-mail protocol, using encryption.
You don't need to pay for bandwidth or servers to host content. You will never see content, never be able to block it, and cannot be forced to block it by whatever dictatorship demands it.
All social media could have been this way from day 1, but the desire to see user content so it could be monetised led companies to use centralised topologies. There are many ways to monetise distributed social media, without the vulnerabilities that a centralised system brings (privacy, censorship). We need to go back and take the other path, using distributed technologies.
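The core of the proposal above is that a "post" can be an ordinary email message, so existing mail infrastructure carries content between users' clients with no central server ever seeing it. A minimal sketch using Python's standard library, with illustrative addresses; in a real system the body would be end-to-end encrypted (e.g. OpenPGP) before sending, a step omitted here:

```python
from email import message_from_bytes
from email.message import EmailMessage
from email.policy import default


def make_post(author: str, followers: list[str], text: str) -> bytes:
    """Wrap a social post as an RFC 5322 message, ready for SMTP delivery."""
    msg = EmailMessage(policy=default)
    msg["From"] = author
    msg["To"] = ", ".join(followers)
    msg["Subject"] = "social-post"
    msg["X-Post-Type"] = "status-update"  # custom header a client could filter on
    msg.set_content(text)  # real clients would encrypt this payload first
    return msg.as_bytes()


def read_post(raw: bytes) -> str:
    """A follower's client parses the delivered message back into feed content."""
    msg = message_from_bytes(raw, policy=default)
    return msg.get_content().rstrip("\n")


raw = make_post("alice@example.org", ["bob@example.net"], "Hello, feed!")
```

Because delivery rides on standard mail routing, each user's client decides locally what to show (self-censoring the feed), and there is no central operator holding everyone's content to be pressured or subpoenaed, which is exactly the topology shift the comment argues for.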
Biting the hand that feeds IT © 1998–2022