A spice melange?
That one word in the article tickled me. I guess it's an exercise for the reader to choose who represents Atreides and Harkonnen in this situation.
Spice coffee anyone?
US Senators Mark Warner (D-VA), Mazie Hirono (D-HI), and Amy Klobuchar (D-MN) on Friday introduced draft legislation to limit the legal protections available to social networks, websites, and anything else that provides an “interactive computer service.” The three politicians proposed a bill they're calling the SAFE TECH Act [ …
This should not be a concern. If a company makes a product that turns out to be very dangerous - asbestos, for example - you don't care if forbidding it destroys the company's business. Especially when they knew about its dangers for ages and did nothing but hoard the money.
Social media is a tool. We can point at the harm. But we can also find good - for example, a woman tweeting a picture of a school meal provided by a business that's pocketed half the payment. In fact, the whole campaign Rashford has run.
Even the harm is dependent on context. Resisting a legitimate election in the U.S. - bad. Protesting a corrupt election in Lithuania or opposing Putin - good. (It's telling that Myanmar has shut down the internet and that China strongly polices what people say about Winnie the Pooh.)
So regulating it is not like regulating asbestos. It's more like regulating knives or alcohol. I'm not afraid of bankrupting firms that manufacture lead water pipes. But I wouldn't want to put fertilizer manufacturers out of business just because fertilizer can be used to make bombs. Social media is closer to the latter. We need to find ways of minimising the harm done while preserving much of the good.
Even asbestos is a tool. As long as you don't breathe it, it's OK. The problem is ensuring nobody breathes it. That can become so difficult that you just ban asbestos - which, were it not deadly, would be a marvellous material with many excellent properties.
The problem with social media is the same. Sure, you could use them properly and without harm. But the very way they work and how they generate revenues makes them extremely dangerous.
If they can't find a business model where social media generates money without being dangerous (and they never have), that means their only business model is dangerous and therefore not sustainable, and we needn't care if they go out of business.
"But I wouldn't want to put fertilizer manufactures out of business just because it can be used to make bombs"
That's the wrong analogy. Fertilizer is never produced to make bombs. Social networks - ever since Zuckerberg stole images of young women to be "voted" on for his first "product" - have been designed from inception to exploit the worst human instincts and to make money from them.
Nobel too believed dynamite would only be used for good purposes, but he was naive: it was designed as an explosive, and inevitably it has been used as an explosive.
People were able to denounce abuses and to protest even before social media. Even before TV and radio.
> This should not be a concern.
This should be a concern, because you assume "services" means businesses. In reality, a service might be a charity that provides a self-help forum. If the rule changes mean they have to monitor every post, then chances are they don't have the funds or the manpower to do it. Result: a useful forum silenced - not by what has happened but by what might happen.
This is why the questions posed by Eric Goldman at the end of the article need to be answered. The "afford" bit is a distraction. His questions really boil down to: which services do you think will be affected, which services do you think won't be affected and why do you think this?
"His questions really boil down to: which services do you think will be affected, which services do you think won't be affected and why do you think this?"
But the real question always comes down to - how much harm is it worth causing to beneficial services in order to fix the harmful ones? All policy decisions, especially the broader ones like this, will affect a wide range of different businesses, people, charities, and whatever else. It always comes down to a cost/benefit analysis - is any harm caused by a change worth the benefits that change provides? In this case, shutting down a small charity self-help forum sounds bad, but if that's the only casualty and in doing so we solve all the problems Facebook and others cause, it's probably worth it. Obviously in reality there will be lots of new problems caused and plenty of existing ones that aren't fixed, so the analysis gets a bit more complicated.
Asking which services do you think will be affected, which services do you think won't be affected and why do you think this, is a good start, but you need to finish with... and are you OK with that? Those first three questions are the easy ones, it's that last one that will cause the big arguments.
You know that if the same charity prints a bulletin, it has to be very careful about what it prints? Why should laws that apply in the "physical" world not apply in the "digital" one? I did moderate forums - without being paid. I stopped when the forums emptied as most people moved to social networks... I didn't follow them.
Still, there are ways to treat a small non-profit charity forum differently from a megacorp hugely profiting from "user content". It's only the latter business that is threatened by new rules.
Which specific "many other sites" do you have in mind?
The cesspool that is YouTube? Yep, they can afford it (and they bloody well should). Wikis? Easy to monitor and police, at least until they come under concerted attack. Porn hubs? Already dealing successfully with much more onerous requirements in the name of consent and age verification.
Or are you thinking of the approximately 74,655,587 sites that solicit reader comments on absolutely everything, like Reddit or El Reg? El Reg, as a reminder, is already successfully operating under the tighter limitations of English law - like many of the better candidates in this group.
Or Usenet? Already quite neatly divided into moderated and unmoderated groups, which is the way everyone should have stayed. No problem there.
No, I'm all for ending the madness that is S230 right now.
It also includes personal blogs, if you have comments enabled.
I run a couple of websites and every comment has to be approved by one of the three moderators before it becomes visible.
We regularly get bot attacks that try to post things promoting pron etc. Those get archived and the IP addresses blocked (for legal reasons, to show that we have taken action). There are some 90,000 IPs in the blocklist of the DMZ that sits in front of the sites. We add around 100 IPs a week.
These sites are not indexed on Google etc.
How often do you remove IPs from your blocklist?
Sorry, not directed at you personally, Steve, it just reminded me of an issue.
Permanently "dirty" IP addresses are a problem, especially as any abuser is likely to have moved to a different service or netblock after a week.
Witness the number of recycled IPs that can never be used for legitimate email, for example.
Or the whole bloody NHS site that isn't accessible if your DNS is on 2.0.0.0/8. (I reported this to NHS IT and Nominet. Both ignored me.)
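For what it's worth, the ageing-out problem is easy to handle in code. Here's a minimal sketch of a time-limited blocklist - the class name and the one-week default are my own assumptions for illustration, not anyone's actual setup:

```python
import time

class ExpiringBlocklist:
    """Blocklist whose entries lapse after a fixed time-to-live."""

    def __init__(self, ttl_seconds=7 * 24 * 3600):  # assume a one-week ban
        self.ttl = ttl_seconds
        self._entries = {}  # ip -> timestamp when it was blocked

    def block(self, ip):
        self._entries[ip] = time.time()

    def is_blocked(self, ip):
        blocked_at = self._entries.get(ip)
        if blocked_at is None:
            return False
        if time.time() - blocked_at > self.ttl:
            del self._entries[ip]  # entry has aged out; forget it
            return False
        return True

bl = ExpiringBlocklist()
bl.block("203.0.113.7")
print(bl.is_blocked("203.0.113.7"))  # True while the entry is fresh
```

Tools like fail2ban do essentially this with their ban-time settings; a blocklist that only ever grows ends up punishing whoever inherits the address.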
It seems to me, as a non-USAian, that 'free speech' has a corollary, that you can say whatever you like, but YOU, as the speaker or writer, are absolutely responsible for what you say or write. AFAIK this is its meaning under British Law as well as most Commonwealth members' legal systems. If it doesn't have this meaning under the US Constitution, kindly enlighten me about its legal meaning there.
A law along these lines would get social media off their current hook BUT, and it's a big but, they MUST know who published an objectionable or defamatory post so that the lawyers can hit the correct target, namely the author. In other words, it should not be possible to be legally anonymous on these sites. Said another way, you may ask that the site publish your post with the 'anonymous' tag, but they must not do that unless they know exactly who you are and can provide a legally correct identity in response to a legal request. If they can't do that, then they didn't follow the rules and get to pick up the legal bill as a direct result of failing to identify the author.
In my understanding the above conforms to British and Commonwealth law, and may also conform to Free Speech as defined in the US Constitution: if not, please explain. Most of Europe seems to use a derivative of Napoleonic Law, which I don't understand at all.
Whistle blowing is quite another issue and nothing whatever to do with Free Speech. It must be fully protected in all cases until legally proved wrong. Releasing any whistle-blowing information to anybody other than the intended recipient should be treated as a crime and prosecuted appropriately.
"unless they know exactly who you are and can provide a legally correct identity"
Slippery slope, that.
Let's take a straw poll - how many of you would be happy if a site accepting user content (uh, such as this!) decided that in order to comply, you need to scan your passport or legal identity document and a utility bill (proof of address) and submit them to the site for verification.
The ONLY site I've ever submitted such things to has been the government site for applying for a residency permit. For being able to write a message about something? Screw that, nope, not happening.
I don't recall this site ever having required me to prove that the name I gave them is my real name. The only proof of email address, as far as I can recall, is that I opened the link in the email they sent - and the address used is not one linked to any of my normal email addresses.
"This site knows my real name and e-mail address. Yours too, otherwise you wouldn't be able to post here."
They know a name, which I don't even think they checked looks at all like a name. Mine looks like a name. Do you think it's the one on my identifying paperwork? It might be. They didn't check. They also know an email address which could receive a verification link. Maybe it's mine, but I can set up emails without my name on them. Also, since I didn't forget my password, I haven't needed it since. They have no proof the address still exists. Or was ever mine. That's pseudonymity for you, which is usually enough for sites like this.
"Most of Europe seems to use a derivative of Napoleonic Law, which I don't understand at all"
When I was a teenager (many years ago when Cockleshell Bay was a thing), somebody explained it to me as follows: in UK law, things that are not forbidden are permitted while in Napoleonic law, things that are not permitted are forbidden.
I live in Spain, where the general rule is: if there is a law that says you can do something, you can; but if there is nothing in law to say you can, then you may be prosecuted if you do.
A recent example was drones. In theory they were not covered by the regs for free-flying RC model aircraft, which are limited to specific sites, so a lot of people assumed they could fly drones anywhere. But a couple of people I know were threatened with prosecution by the Guardia for flying drones on the beach and in the country.
To me it seems simple. Every company that dabbles in social media needs to ask itself: are we merely providing a platform here, or are we publishing these comments?
If comments are being "promoted" (or the opposite) - regardless of whether this is done by a single big team, or by crowdsourcing, or algorithm - then they are a publisher, and should be prepared to answer as a publisher for what they put out.
If you let the companies themselves answer that question, I guarantee that every single one will say that they're solely a platform and therefore have no duty of responsibility.
The *only* way that companies will take responsibility for the content published on their platforms is if they're made to by legislation.
Then the answer is even simpler: if they're solely a platform, the companies have no power to remove, promote or delete individual comments. All they could do would be to remove the whole platform.
If they can edit at the individual comment or thread level, then they have to answer for it.
This "platform" nonsense is precisely what S230 lets them get away with. They're having their cake and eating it too - they're policing (and promoting) comments according to whatever rules they feel like, but without accepting responsibility. That's what has to stop.
It seems to me, as a non-USAian, that 'free speech' has a corollary, that you can say whatever you like, but YOU, as the speaker or writer, are absolutely responsible for what you say or write. AFAIK this is its meaning under British Law as well as most Commonwealth members' legal systems. If it doesn't have this meaning under the US Constitution, kindly enlighten me about its legal meaning there.
Under the US constitution, “free speech” flows from the text of its first amendment:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
US freedom of speech is not absolute — for example, false testimony under oath is not constitutionally protected — but it can be broader than in other jurisdictions: e.g. hate speech is constitutionally protected, except in the case of imminent violence.
It seems to me that not all people hold absolute responsibility for what they say or write, regardless of the legal system - for example, children below a certain age (which varies by legal system) aren't held absolutely responsible for what they say or write.
A law along these lines would get social media off their current hook BUT, and it’s a big but, they MUST know who published an objectionable or defamatory post so that the lawyers can hit the correct target, namely the author. In other words, it should not be possible to be legally anonymous on these sites. […] Whistle blowing is quite another issue and nothing whatever to do with Free Speech.
In the US, anonymous (and pseudonymous) speech is recognized as being derived from the first amendment, but again, it is not absolute. Many whistleblowers depend upon anonymous reporting mechanisms, so in my view there can be a connection between whistleblowing and free speech.
Most of Europe seems to use a derivative of Napoleonic Law, which I don't understand at all,
The major difference between a civil law legal system (Napoleonic law is one type of such a system) and a common law legal system is that in a civil law legal system, case law is subordinate to statutory law, and judicial precedent is rare, since judges tend to decide cases using the legal code that underlies the statutes; but in a common law legal system, case law is on an equal footing with statutory law, and judicial precedent is relied upon due to the lack of a legal code that underlies the statutes.
Some jurisdictions use a mixture of these two legal systems, e.g. Scotland, South Africa, Québec, Louisiana.
A law along these lines would get social media off their current hook BUT, and it's a big but, they MUST know who published an objectionable or defamatory post so that the lawyers can hit the correct target, namely the author. In other words, it should not be possible to be legally anonymous on these sites. Said another way, you may ask that the site publishes your post with the 'anonymous' tag, but they must not do that unless they know exactly who you are and can provide a legally correct identity in response to a legal request.
Objectionable? Your mother smells of elderberries! Does that mean you should now be able to discover my true identity? If it's defamatory, then you would be free to sue, which means you'd be free to subpoena the platform and ISP to determine (or try to) my identity and serve me*.
Or I post an ad on Craigslist calling for someone who can terminate, with extreme prejudice, one Jacob Marley. He's interfered with my business affairs for too long! But murder-for-hire is already a serious criminal offence, so LEOs could identify and prosecute me, E. Scrooge Esq., for that crime.
So basically there isn't really any anonymity on the Internet, and if the objectionable statements are already criminal, then there's already recourse. But this is the 21st Century, where deplatforming and doxing are popular hobbies, with some fairly drastic consequences -
https://www.rt.com/op-ed/514633-free-expression-woke-death-democracy/
I have been expelled from my barristers’ chambers because of a tweet. During my 15 years as a barrister at Cornerstone Barristers and 30 years at the bar, I had an unblemished professional record and was top ranked by legal directories for my work – particularly in public law. And yet my one-sentence tweet on a platform designed to be polemical has ended this particular career.
and then there have been other consequences of people being doxed, like SWATting. And being shot. Or being subjected to other online or offline harassment simply for saying something objectionable, but not actionable. So there are some rather obvious chilling effects from removing pseudo-anonymity.
*In which case, my defence may be to ply jurors with copious quantities of elderberry wine in order to watch the jury sway.. I mean sway the jury, and demonstrate that the scent of elderberry isn't objectionable..
More seriously than elderberries, what if I post "Your religion causes hate and violence and its practice should be illegal".
Many people would find that offensive. I see no reason why I should only be allowed to make that claim if I am willing to let the platform know who I am. It is a perfectly reasonable position to take (whether true or false), and potentially an important point to debate (for example in a debate about future laws). It is clearly a matter of free speech to be able to make that claim. And it is also, clearly, a position that requires anonymity to be able to take safely.
Speech cannot be free without the option of anonymity. That has always been the case for speech in public and it needs to be the case for speech intermediated by the Internet as well.
Not "clearly" at all.
Some people will be offended as it conflicts with their world view, their value system.
Some will agree to varying degree. Some just won't care either way.
Others will go out of their way to be offended by anything remotely contentious.
None of these possibilities make the statement inherently offensive.
Tlaloc, the beloved rain-god of the Aztecs brought life to the Earth, and labelling some of his sacrificial rites as “violent” seems a tad on the extreme side. I am sure that if some on social media at the time had been critical of the rituals, they would have been invited to partake in the next round and experience it for themselves.
On a different line of thought, I suppose, in these climate changing times, it might not be a bad idea to curry some favour with gods of rain.
Removed due to racist comments AND a history of prejudiced statements about migrants...charming fellow
https://www.theguardian.com/law/2021/feb/01/barrister-racism-row-jon-holbrook-previously-fired-council-over-anti-migrant-rhetoric
But let's instead publish an article from the modern version of Pravda, eh, Comrade?
Objectionable? Your mother smells of elderberries! Does that mean you should now be able to discover my true identity?
No, it means when he sends the legal notice to El Reg, they can forward it to you because they know your identity. No reason why you should have to know it, until a court says you should.
https://www.rt.com/op-ed/514633-free-expression-woke-death-democracy/
Could you find a citation that isn't published by a hostile foreign state?
As a Yank... let's get into the basics.
Free speech is the 1st Amendment:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
So you have in the US what is considered protected speech. Note that not all speech is protected. (e.g. Yelling fire in a crowded theater)
You are correct that there are libel laws which hold the speaker responsible for what is said or posted. However, there's a balance between the freedom to say what you want and the potential repercussions of that speech.
However Section 230 exempted these sites from those laws saying that they shouldn't be held liable for posts made on their sites because they are not the originator of the speech since they do not curate their content and are merely a platform.
Now we can see that's not true. These sites curate their content.
There's more to this... The simple solution is to remove the protections from FB, Twitter, Google, etc., because they do curate their sites' content... but that won't happen. Congress will end up removing Section 230 altogether, which incidentally helps the same offending sites.
Note that your 1st Amendment specifically includes freedom of the press. The press have never been "neutral" - they exert much more editorial control than any web site does, yet the 1st amendment fully protects them. What does "curation" have to do with anything?
Does not freedom of the press extend to web sites that are press? Facebook, Twitter and Google are not the press, so why should they be granted freedoms that in other media are only granted to the press?
It seems to me that Section 230 grants more freedoms than Freedom of the Press does: the press are not immune to libel and slander laws.
"However Section 230 exempted these sites from those laws saying that they shouldn't be held liable for posts made on their sites because they are not the originator of the speech since they do not curate their content and are merely a platform.
Now we can see that's not true. These sites curate their content."
Don't think that's correct. 230 specifically grants exemption for those sites BECAUSE they curate their user generated content. Before 230 it was black and white. You either made absolutely zero effort to curate content on your site so anything at all could be posted, at which point you could be classed as a distributor and avoid any liability. Or, you could curate the content, but at that point you became liable for EVERYTHING that was posted on there. So the option was essentially between perfect curation or no curation if you wanted to stay out of court. See Compuserve and Prodigy in the 90s, the former had no moderation at all so were found not liable for the content on their site, while Prodigy made an effort to moderate what was on their site and found themselves liable in court for things they missed.
230 provided a mechanism that allows sites to moderate content without becoming liable for it. So they could moderate content they instantly knew to be dodgy without becoming liable for things they missed, provided they took action to deal with anything they missed once notified of its existence.
This is the first time I have seen a bill with an acronym in the title which hasn't made me cringe at the tortuous way they had to mangle language to make the title pronounceable. Usually the title contains extraneous words with little or no relationship to the subject of the bill, or they randomly capitalise letters which you wouldn't dream of using for an abbreviation.
I think Congress has a whole team of people that come up with these bill acronyms...
My current personal favorite is MAR-A-LAGO Act, or Making Access Records Available to Lead American Government Openness Act.
Couldn't care less on the point of the bill, but the name has stuck.
How about re-wording Section 230's eligibility criteria such that, to be eligible for ANY Section 230 entitlements, the entity must NOT collect and re-sell ANY users' data at all? If they do, they are not Section 230 eligible. (I don't mind ads, but I don't NEED 'targeted' ads. If I do a search, I want to search for what I said, not what your 'algorithms' THINK I said.) Now, that would be a change to 230 that I could get behind.
So what is wrong here? Well, if you are publishing something, you are held liable for what you publish. But if you let every Tom, Dick and Harry post stuff that you earn dinero from publishing, you get to say: nup, wasn't me, my liability ends when I make money from them.
If they (Twatter et al) want to own the content, then they can own all the liability.
Time to bring back Usenet (not that it ever went away, but it lost market share to all these annoying web-based things).
US politicians should be careful about taking out Section 230; I suspect a lot of them could fall foul of it. If you're going to change it, just provide immunity until a legal take-down notice turns up, at which point there's 24 hours to remove the offending item; and if the removed party wishes to challenge it, they should be awarded costs (and possibly more) against the legal firm issuing the takedown notice if they win the challenge. That should help cap the frivolous notices - I assume most legal firms will be smart enough to pass such costs on to the originator.
All of the social media sites started out as friendly little places where people could chat and share things. But after a while, foreign agencies started creating fake accounts and posting "free speech" to promote internal divisions everywhere. We call it "fake news", but a lot of people believe it's true and march around exercising their "free speech"... The West is under attack. This is a friendly little war, and we're losing because we haven't noticed that the promotion of racial divisions, EU/UK divisions, right-wing politics and stupid government everywhere is all pushed on social media these days. Creeping into social media is how you mount an attack on other countries that believe in "free speech", with virtually no risk, when your own country handles free speech by poisoning opposing politicians' underpants.
In the US you see the same people who scream that Free Speech must never be inhibited standing up to say that the 2nd Amendment cannot be changed, and acting like they want the original 13th Amendment restored... all social media promotions that benefit the attacking entities pushing the US and UK off the cliffs.
Yes, it looks like that because they are being manipulated - there's plenty of evidence of fake posts being made on social media by "Americans" with foreign IP addresses. People share them, convinced they are just posts on social media. I'm not just blaming the Russians for starting this; the CIA were pushing fake news years before social media even existed.
> But after a while foreign agencies started creating fake accounts and posting "free speech" to promote internal divisions everywhere.
If only it were all foreign agencies - https://bylinetimes.com/2021/02/02/cambridge-analytica-psychologist-advising-global-covid-19-disinformation-network-linked-to-nigel-farage-and-conservative-party/.
> The project is run by David Fleming, who has worked as a consultant environmental health officer for Dacorum Borough Council in Hertfordshire
> Patrick Fagan is listed on the COVID-19 Assembly website as an “advisor” to the project. This is alongside Professor Martin Kulldorf, a principal co-signatory to the Koch-backed Great Barrington Declaration drafted by a MOD contractor; Toby Young, fake news publisher of pseudoscience blog ‘Lockdown Sceptics’; and Francis Hoar, a leading junior barrister at Field Court Chambers who represented the failed legal challenge to social distancing measures by multi-millionaire Brexiter Simon Dolan.
Whilst you could argue these Brits are stooges, it's much more likely that they're self interested twats who'll trample on anyone and everyone for personal gain. People like Toby Young are very much a homegrown problem
> In the US you see the same people who scream that Free Speech must never be inhibited standing up saying that the 2nd Amendment can not be changed and acting like they want the original 13th Amendment restored
The underlying truth, and the thing these exploit is that people prefer a simple, easily understandable answer, especially to things that they feel disadvantage them. Lie and say "we'll just fix it by...." and you'll get many more supporters than the honest guy who says "well, it's much more complicated than that, the system needs fixing, but you need to make sure you account for x".
Once your lies have someone hooked, it's even easier, because people are inherently tribal in their politics. Just claim the other guy is lying and wants to take your guns/money/children and your followers are all but closed off to argument/reason.
Social media allows for easy, widespread testing of various lies/approaches. You track "engagement" to see which are working best, and then your candidate uses/refers to only the most engaging. Fixing that, though, is really hard - there isn't a magic law you can pass to resolve it, because you've first got to identify what counts as dangerous bullshit.
Merging that mess in with other (slightly easier) wins like hardening the approach to harassment is a mistake IMO and may well undermine the chances of getting the bill (and by extension, the other improvements) in place.
"hardening the approach to harassment is a mistake IMO" - I hear the same arguments when people suggest that we should restrict people with mental health issues from walking around the neighborhood with an AK-47 slung over their shoulder.
Banning that, and banning posting lies on social media, would be unconstitutional.
Wow, way to quote me out of context and change the meaning. Let me quote myself properly for you
> Merging that mess in with other (slightly easier) wins like hardening the approach to harassment is a mistake IMO
In case it isn't clear - hardening the approach to harassment is a *good* thing. Trying to munge that in with stuff that's harder to do (correctly) is a mistake, because it limits the chances of getting the good/easier stuff in place.
This is going to be an even bigger (much bigger) mess than FOSTA.
"Section (c)(1) of the current law, removes the protections entirely if money exchanges hands, and then changes it from an immunity to merely "an affirmative defense."
So basically a lawyer's heckler's veto for most of the internet, or defending lots of expensive lawsuits. Free speech on the internet was nice while it lasted. If this passes, don't expect to say anything even remotely controversial, and forget about things like #MeToo etc.
I am not worried about major companies (Google, Facebook, and Twitter have plenty of money and lawyers). I am worried about its effect on everyone else, and Congress doesn't have a good track record with unintended consequences. Warner's Q&A on this bill doesn't give me much hope that the provisions have been carefully considered.
The lies don't help
" Q: What is the scope of the carve-out for paid content? Does it cover anything beyond paid advertisements?
A: The SAFE TECH Act makes clear that Section 230 immunity does not apply to any paid content. This would include advertisements as well as things like marketplace listings."
Except where it says, "except to the extent the provider or user has accepted payment to make the speech available...", conditioning immunity on whether the service provider has received payment.
As with any law, there may be good intentions here, but there are severe unintended consequences.
Making platforms responsible for "hate" will by definition result in intense game playing, as Christians will probably end up reporting huge swathes of "offensive content" while the left wing nutters will probably do the same with any content to the right of Corbyn. The result? Platforms will be incentivised to remove any content containing vaguely controversial topics, even when those topics are being politically debated.
The inevitable result is the rollout of AI moderation like Amazon's offering. Quote:
"Image and Video moderation. With Amazon Rekognition you can detect explicit adult or suggestive content, violence, weapons, drugs, tobacco, alcohol, hate symbols, gambling, disturbing content, and rude gestures in both images and videos, and get back a confidence score for each detected label. For videos, Rekognition also returns the timestamps for each detection".
Obviously the social justice/pronoun police are ecstatic about this news, as are Jewish groups who want to filter out the nastiest insults from fringe lunatics. DRM anti-piracy bots to prevent you playing "Happy Birthday" or streaming movies/songs/sports will become mandatory too: content regulated by AI and blocked from being uploaded in the first place.
Those in power controlling the AI, will have the literal power to control all communication on the Internet.
The simple solution is for Congress to exempt these sites from 230 protections.
They can also rewrite the laws to be a bit more specific but that's not really necessary.
Did anyone ever stop to consider that FB, Google, Twitter, etc. want Congress to remove Section 230 protections?
Yeah...
They are big enough that their in house counsel can easily defend against a lot of lawsuits. (e.g. SLAPP suits)
But... the little guys (Parler, Gab, etc.), their upcoming competitors... can't survive the lawsuits.
So essentially removing the protections removes the competition... leaving the BTOs to publicly say "Users can always go to a competitor" or "Someone can always create a competitor", and that it's not their fault that there isn't a competitor... knowing that they will always be dominant.
If Congress wanted to do this right... just declare Google, FB, Twitter... public utilities. Now that would really make things interesting.
Mine's the fireproof coat assuming that the El Reg Monitors let this one thru.
My biggest fear for North America is that some overzealous group, whether well-intentioned or not, is going to succeed in shoving their particular brand of morality down the legal system's and the public's throats.
I'm not interested in giving chip-on-the-shoulder public "movements" the ability to try to sue their pet peeves into oblivion because the protections have been gutted. You think these various "Mothers Against ..." groups are a hassle now? Wait until they can try to Shut You Down because they Want To.
We need to get past this stupid, moronic idea that internet social media are any different from any other kind of publishing.
It is publishing, and it needs to operate under the same laws that already govern publishing.
Cute kitty pictures are not worth giving this form of publication a free pass from civic responsibility.
Only if you run the servers...
But what is 'free' in free speech? Do you allow anything you agree with to be published, and anything you don't like to be censored?
I once posted on farcebork a picture of 100 toilet rolls with the caption "Covid-19 spreads via toilet rolls" (posted in response to the great toilet roll panic of March 2020); within seconds it had a "fake news" sticker slapped on it by farcebork.
Next comes along someone who's posted something along the lines of "Dominion voting machines were rigged to reject Republican votes"; this posting remains up and unfiltered even now.
So it seems that farcebork does exercise editorial control over what's posted on the site, and therefore already falls outside Section 230.
But going back to my first point, who sets the rules as to what's allowed and what isn't... because if you just ban people for being racist twats, you can't challenge their thinking with "Then how come you were cheering on those black guys playing for England last week?"
I find most of the commentary here wide of the mark. Section 230 is designed to *promote* moderation, by allowing a platform provider to engage in content moderation without *therefore* being redefined from platform provider to content provider. That distinction is mentioned in El Reg's third paragraph, although it was spelled out more explicitly in their previous article on Section 230. The so-called Get Out Of Jail Free card was never for failing to moderate, because as a platform provider they never had that obligation. For the platform provider, it's just bits over the wire. The Get Out Of Jail Free card was for moderating, which they could do according to their own internal standard or resource availability, without thereafter being required to do it to some external standard regardless of resource availability.
Parler is a good example. They did not have any legal requirement to moderate their user content. What they had was a contractual requirement with Apple and Google and Amazon. Because of Section 230, Parler was immune from prosecution no matter what illegal posts were made by their users. However, the *users* were never immune from prosecution.
This bill is going to change everything. By requiring moderation of the five bullets in paragraph eight for (quote) social networks, websites, and anything else that provides an “interactive computer service.” (/quote), it looks like my ISP is soon going to be reading my emails, and my posts here on El Reg, and censoring those its "AI" (yeah, right) flags as illegal. Before they *could* do it, now they will *have to* do it.
It's more Iran than USA. But think of the children (the first "E" in SAFE TECH).