>typically in comments sections
shocking
The Times is campaigning to brand Facebook a "publisher" under British law. While an understandable reaction to the horrible content shared by users of the world's most popular social networking website, trying to make it subject to publishing laws would open a whole Pandora's box of trouble. Yesterday the newspaper published …
Illegal material is, well, illegal, and Facebook are supposed to be responsive to notifications of such material. The trouble is that they're not responsive enough. Instead they give the impression of doing the opposite: they don't take down illegal material quickly or effectively.
Is that their fault? Well, it's a natural consequence of their chosen business model. Free to use with no effective user identity checks means it's too easy for the nastier (minority) members of society to hijack the service for their own despicable reasons. And society is moving to a position where that business model is no longer compatible with what it wants.
Facebook have had many, many years to get on top of this type of problem, and they simply haven't done it. Neither they nor anyone else can expect to be allowed to run such a naive business model indefinitely. At some point they are inevitably going to be told "you're grown up now, you should know better". If they have not been sufficiently strategically savvy to understand that, that's not our fault. The same goes for Twitter, Google, etc.
OK, so they're all talking about AI filtering and the like. If they're planning to claim that these measures will be adequate, surely they won't mind being classified as a publisher. On the other hand, if they're not sufficiently confident in such filtering to accept classification as a publisher, then by definition they're not good enough at filtering.
"Instead of targeting Facebook with new laws, as The Times would, we should instead target those who misuse the platform to promote illegal things."
There's no need to target such people. What they're doing is already illegal. Do you really think that people who post such material are going to read the Times, pay attention, and stop what they're doing? I don't think so.
The problem is that Facebook, Twitter and Google's YouTube make it far too easy for such people to enjoy anonymity. Handing over an IP address is cooperation, but it's not very effective cooperation; it takes a lot of work to unwind an IP address to discover a person's identity. Worse, Facebook's WhatsApp even guarantees that it won't aid the police with their inquiries.
If we're to effectively target abusers of the platforms, then the platforms need to know more about who the users actually are. That's got to be something more than an IP address, a made-up user name, and fake details. The trouble is that the only real way a social network can be sure of who a user is is to have had some financial relationship with them (e.g. a completed credit card transaction). That's very much not compatible with the social networks' current business model.
To make Facebook (and Twitter, and other social networks) liable for users' content would almost certainly lead to the Defamation Act 2013 being substantially amended to remove that important protection. That would have a chilling effect on free speech – ironically, the very effect the act was passed to stop.
I sincerely doubt that it would have a chilling effect on "Free Speech". It would have a chilling effect on illegal material, libellous and abusive posts, and so forth. In the UK and much of the rest of the world we do not have the right to libel or abuse someone in a public forum. Not even in the USA (where you can say anything, but there is no guarantee that it's free of consequences; obligatory XKCD). There are laws about libel and abuse.
The problem is that the social networks have made it too easy for people to break such laws and get away with it. Enforcement of these laws is severely hampered by the business model of the social networks. Quite a lot of very ordinary people are very fed up with that state of affairs.
In contrast, a justifiable post / article / publication is, by definition, not libellous or abusive; if you can prove your point, then you'd win in court. Ask Ian Hislop.
Thing is, you could achieve much the same with small fines, just a hundred quid or so for each post not taken down in reasonable time, and the same for each appearance of fake/misleading adverts, and suddenly Google, Facebook, etc. would manage to deal with most of the crap.
After all, they are pretty good at following users with targeted adverts, so how hard is it to develop a "this user is an angry moron" sort of profile and limit their ability to post/share shit?
@Paul Crawford,
"Thing is, you could achieve much the same with small fines, just a hundred quid or so for each post not taken down in reasonable time, and the same for each appearance of fake/misleading adverts, and suddenly Google, Facebook, etc. would manage to deal with most of the crap."
Sure, but that would then sort of be classifying them as a publisher with all the responsibilities that come along with that. "Sort of", as they would be deemed to be the publisher after a period of time.
The scale of fines is interesting though. Germany is talking about €50 million, which should really sharpen their attention. A few hundred quid for each serving of a post / ad would indeed be equally attention-grabbing.
The difficulty they face is that "dealing with most of the crap" isn't entirely good enough when the content is criminally illegal. With AI, automated filtering, etc., I seriously doubt that they can get to an acceptable level of suppression of the crap without a hit/miss rate that also impacts perfectly innocent material. There's a lot of subjectivity involved, requiring expensive human supervision.
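For the one part of the problem that is objectively definable - known illegal imagery - there is an established technique: match each upload against a list of hashes of already-identified material (Microsoft's PhotoDNA is the best-known example). Here's a minimal sketch of the idea in Python, with a placeholder hash list; note that real deployments use perceptual hashes so re-encoded copies still match, whereas the plain SHA-256 below only catches byte-identical files:

import hashlib

# Hypothetical blocklist of hashes of known illegal images.
# The single entry below is a placeholder, not a hash of anything real.
KNOWN_BAD_HASHES = {
    "0" * 64,
}

def is_known_bad(image_bytes: bytes) -> bool:
    # Exact matching: zero false positives, but trivially defeated
    # by re-encoding or resizing the image.
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

Exact hashing never flags innocent material but misses any altered copy; perceptual hashing closes that gap at the cost of occasional false matches - which is precisely the hit/miss trade-off described above - and genuinely novel material still needs a human pair of eyes.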
Far better to get malicious posters to be self-censoring, i.e. afraid to be identified and prosecuted.
@Chewi,
"I'm sorry, I didn't realise Facebook and WhatsApp were the only means of communicating anonymously these days."
Indeed they're not. But if any law changes are made, any sensible change would apply equally to all providers.
"Tighter monitoring of these services will do little but drive the bad people further underground and infringe on other people's liberties."
Well, I think such changes would make the mainstream social network services far better at being self-policing. There would be less need for law enforcement agencies to monitor these services as the providers would be strongly motivated to do that themselves.
Sure, there'd still be things beyond the control of any government / company. Tor is a way of being more hidden, but it seems that's not flawless either. There's been some experimental work done on purely peer-to-peer social networks (no central servers), but it's highly unlikely that such a thing could ever be as all-pervasive as a server-based social network. No one wants their mobile battery drained flat before they've got to work.
One suspects that law enforcement agencies all over the world would be quite content to drive bad people so far underground that they're using technologies that are too much hassle for the vast majority of us to bother with.
That would resolve the problem they have right now, in that at least some bad people (e.g. those putting child pornography onto Facebook private groups) aren't underground at all. They are protected by being hidden amongst the mass of ordinary Facebook users, by Facebook's apparent reluctance to do anything about it, and by its refusal to let law enforcement agencies do it on their behalf.
If the major social network services were effectively purged of all that crap, we'd all be much happier. And if more exotic technologies were then all that was left for bad people carrying on their despicable activities, law enforcement would largely lose interest in what anyone puts up on the mainstream social networks. That would be even better.
I participate in contentious debates on facebook regarding religion with people all around the world. Almost all anti-religious comments are illegal somewhere in the world. Recently someone in Greece made a joke (a pun) about a priest's name and found themselves subject to arrest and imprisonment. In other places just stating you are an atheist is enough to get you arrested and given the death penalty. Making negative comments about Islam in the UK could get you arrested and charged with a hate crime. Just who should police facebook, or any other social media site or internet forum, and whose laws should apply?
"Making negative comments about Islam in the UK could get you arrested and charged with a hate crime" -- Andy Non
As far as I know, things can't be a hate crime unless they are firstly a crime, and I'm pretty sure saying negative things about Islam isn't a crime unless you cross the fairly well-defined line into inciting religious hatred.
We should absolutely resist any confusion between causing offence and committing crime. Islam is stupid, and I can't see how anyone believes in this load of old bollocks, or, for that matter, any other religion. The Abrahamic religions seem particularly vile. It beggars belief that functioning adults in the 21st century should believe such absolute nonsense.
Feel free to report this if you think I've committed a crime and presumably we'll find out whether you're right. Hell, if the case goes to court I might get a chance to repeat it all on telly. Criticizing Islam or any of its equally silly relatives is not a hate crime. Committing crimes against people BECAUSE they adhere to one of these bonkers beliefs may well be, depending on the circumstances. That seems simple enough to me.
There are at least three groups of people who have motive to confuse the issue: the adherents of the stupid religions; the "professionally offended" who don't hold the stupid beliefs but who for some reason take offence on behalf of those who do; and, more cynically, a third group who want us to feel that we are oppressed by (for instance) Muslims and have had criticism of that load of medieval goat-herd hogwash somehow prohibited by law.
All three groups can just do one, as far as I'm concerned. But if you're not a member of one of these groups, why would you be suggesting that criticizing Islam could be a hate crime? You are helping with the chilling effect you claim to be railing against.
AC because I'd rather not deliberately offend friends who believe in this utter drivel. They know what I think but I usually don't ram it down their throats in return for them not doing the same.
I didn't realise subbing to SKY, smoking 100 fags a day while drinking a gallon of lager, plus 24/7 vomit-inducing "social media" via the latest smartphone, all while making your rented home into a pigsty and beating your kids, WAS Dickensian.
Well, the last bit is, but the beatings aren't as bad as when they didn't have 99% of their focus on the smartphone.
I won't hide behind AC either.
Well balanced summary of the arguments.
If the Times win, how will they stand with people publishing an advert (do they still have a personal column?) containing a rather bad haiku: "A grey goose in dusky sky. A child cries. No one listens - Baudelaire"? What's the problem? It's a code for today's IP address for a hidden kiddiepr0n site! 145.253.155.037 (no, not really, just kidding, he says, crossing his fingers that it isn't, by a billion-to-one chance, *actually* a kiddiepr0n site). Or maybe it's a trrrrst training site?
That sounds like publishing material that helps criminals...
The UK satirical news magazine Private Eye made a good point.
The Daily Mail newspaper has had various thunderous headlines along the lines of "Google - the Terrorists' Friend", and lengthy articles describing the content that is up on YouTube and how Google (and the posters of the videos) profit from the content remaining viewable. You get the picture.
Private Eye pointed out that on the Daily Mail's own website there is a trove of pretty nasty videos associated with news stories about just how nasty ISIL, etc are. And of course the Daily Mail are quite happy to show a bunch of adverts next to the content...
They had a great one recently with pictures of presumably underage girls getting their breasts sliced up. Uncensored too. But it's OK because they were just tribals. The tone was "My, isn't this exotic practice alarming?" But I'm pretty sure they really only wanted an excuse to show nipples.
"If the Times win, how will they stand with people publishing an advert (do they still have a personal column?) with a rather bad haiku "A grey goose in dusky sky. A child cries. No one listens - Baudelaire" What's the problem? It's a code for today's IP address for a hidden kiddiepr0n site!
Except you have to be able to establish that code in the first place, meaning you'd have to have met in person (the First Contact Problem), which could expose you to the plods. Plus, something nonsensical like that is going to raise some eyebrows. So it poses an interesting issue: how do you deliver instructions in a non-obvious way (meaning they must look legitimate) to someone you've probably never met before?
The establishment has no motivation to stifle one of its best sources of information.
If they were, there would already be criminal investigations over material Facebook failed to remove, and they would be forcing Facebook to increase its moderation.
Old media is using this as a bat to try and hit Facebook with because of the shift in power.
The Times and their fellow travelers are still smarting that people would rather peruse Failbook than their not very useful or accurate fishwrap. This is more a case of those in glass houses throwing the first stone. Reading the media's hissy fit, it seems they are more upset with the fact that they have lost considerable control to shape the message. With the net, people are no longer forced to get their information from a limited set of locally available sources, as was true 20-30 years ago. Plus, the financial model for many papers and magazines has been hammered by the net.
"Instead of targeting Facebook with new laws, as The Times would, we should instead target those who misuse the platform to promote illegal things."
But what does the law do when "those" are hiding themselves behind hostile sovereignty? How does Scotland Yard go after a paedo, for example, who happens to be posting from, say, Cambodia?
Maybe the reason it takes Facebook so long to take down the truly offensive, when it can erase a feeding infant in an instant, is down to the possibility that the majority of their reviewers simply refuse to look at certain types of material. After all, I strongly doubt that a daily quota of images of child abuse and dismembered bodies viewed is an enforceable performance metric.
It's only when multiple alerts on the same post start flooding in (and the media has their story) that anything happens.
It might not be right, but it is human nature.
Maybe the real reason is because there are 1.86 billion monthly active facebook users and it costs too much?
https://zephoria.com/top-15-valuable-facebook-statistics/
Even the 5-eyes are having trouble staying on top of this internet thing, folks.
Zuck will need Federal military subsidies or be forced out of business if asked to meet these new public decency compliance requirements... oh... wait a minute...
It would not be realistic to require FB to manually monitor all posts and besides, it is against FB's business model to employ people for that or any task. I'm not sure AI is the solution even if the technology could be sorted. It's not the AI that is being offended, and getting a bot to accurately judge the public mood is tricky.
It IS realistic to require FB to quickly take down offending posts once reported and the eventual solution perhaps lies in what has happened so far.
There are two parts to dealing with any offending post:
- recognising and reporting it
- taking it off the system
FB's business model is to automate or outsource everything. By relying on users to report offending posts, FB has outsourced the first task. Why not the second as well? Let users report a post, which then places it on a probation list. If sufficient users then vote the post as offensive, the system should delete it automatically. Or you could have a Votes For vs. Votes Against system. Either way it is automated, which will please the Zuck no end. The important thing is that letting FB avoid responsibility is not a good idea.
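As a sketch of how simple that automation could be (the thresholds, names and structure below are purely illustrative assumptions, not anyone's real system):

from dataclasses import dataclass

REPORTS_FOR_PROBATION = 5     # reports before a post is quarantined (assumed)
NET_VOTES_FOR_REMOVAL = 20    # net "offensive" votes before auto-delete (assumed)

@dataclass
class Post:
    post_id: int
    reports: int = 0
    votes_offensive: int = 0
    votes_fine: int = 0
    on_probation: bool = False
    removed: bool = False

def report(post: Post) -> None:
    post.reports += 1
    if post.reports >= REPORTS_FOR_PROBATION:
        post.on_probation = True   # flagged for the community vote

def vote(post: Post, offensive: bool) -> None:
    if not post.on_probation or post.removed:
        return
    if offensive:
        post.votes_offensive += 1
    else:
        post.votes_fine += 1
    if post.votes_offensive - post.votes_fine >= NET_VOTES_FOR_REMOVAL:
        post.removed = True        # automatic takedown, no human in the loop

The obvious weakness, as the reply below points out, is that the same automation works just as well for anyone who can muster enough hostile votes.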
As a test, Thumbs Up if you like this suggestion, Thumbs Down if you think it's twaddle.
You forget about Asymmetry of Knowledge and smear campaigns. People who REALLY hate you can go to rather great lengths to get other people to hate you, and can just start using false identities and other techniques to pile bad votes on you, effectively silencing you with no recourse. You post anything at all, they complain, and then flood the vote to get your post removed.
https://zephoria.com/top-15-valuable-facebook-statistics/
Even for a non-FB user, this is an interesting site, and thanks for posting it.
Now.. In relation to the discussion at hand...
"Photo uploads total 300 million per day"
I think a person employed to do such could probably scan 1,000 photos per day reasonably reliably. That's a rate of 2/minute on average for an 8-hour day; photos that are clearly of scenery could probably be glanced at in a second, but some will take a moment more to decide whether there is a risk or not. If you're looking at any comments on the picture the rate will drop - after all, the scene could be "the bit of bush where I raped that bitch" or "where the body is buried" - so looking for more context would drop the viewing rate considerably. Maybe 500/day given some basic looking at the other data around each photo? I am just guessing here. I know with Digikam I can scroll through my pics PDQ, but then I do know what my pics contain (my collection is quite large; while I might go weeks without a single shot, some days I can fill several cards, and at my worst I think I topped 1,000 pics in one day), and they're organised by event/location etc., so I get a lot of context just from the name of the folder. A person viewing FB pics wouldn't necessarily have it so easy, because, as I mentioned, even an innocent-looking picture of a tree could be problematic.
But let's take one person looking at 1,000 pics per day. That'd require 300,000 people employed solely to look at pictures on FB to keep up with demand, if the number is accurate. Double that if the rate drops to 500 views/day. Any country that could get this job would solve its unemployment problem - and would need some nifty infrastructure to get the pictures to the eyes. Looking at cat pics (e.g. LOLcats etc.), I think most people would probably max out at 5-6 pics/minute, depending on content (that's one picture every 10 seconds or so for the mathematically challenged).
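A quick back-of-envelope script, using only the figures assumed in this comment (not measured data), bears those numbers out:

PHOTOS_PER_DAY = 300_000_000        # the claimed daily FB photo uploads
for rate in (1_000, 500):           # photos one reviewer vets per day
    reviewers = PHOTOS_PER_DAY // rate
    per_minute = rate / (8 * 60)    # assuming an 8-hour shift
    print(f"{rate}/day each -> {reviewers:,} reviewers, "
          f"~{per_minute:.1f} photos/minute")

That prints 300,000 reviewers at ~2.1 photos/minute, or 600,000 at ~1.0 - the staffing scale argued above.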
And I bet the burn-out rate for people thus employed would be pretty high as well. Even if all you see is cute cat pics by some fluke, there's only so much one mind can take. No matter how novel or cute or whatever, such work would quickly become tedious.
Icon... what I wanna do to the next person who asks me to check every picture in their photo collection after doing a recovery on a drive.
And let's not even get into stego'd pics or pics that are used as very secret signals - as in, the picture of the Grand Canyon means one thing while the one of Mount Rushmore means another. Most of these would require a First Contact to work, but I'd have to think that with at least some small thing in common, there could be a way to pass contextual messages around using nothing but innocuous pictures and no prior meeting.
Not sure what the point of The Times' effort here is, because it isn't going to change anything. I hate that I'm about to do this, but I guess I'm taking Facebook's side here...sort of.
Let's say I went to The Times' web site and read an article. Then I decided to comment on it with a narrative that would trigger the UK's indecency laws, meaning that any 'publisher' should not be publishing stuff like that according to UK law. Surely the fact The Times is a publisher wouldn't make them liable for what I wrote in the comments; it would be my responsibility. They would have an obligation to do their best to avoid letting people see it, by 1) taking it down quickly when alerted, 2) having some sort of automated filters, or 3) moderators.
Facebook already does #1, could do #2 if they wanted, but #3 would be infeasible. Is The Times also going to campaign for Facebook to require moderators to approve everything you write on it? If they couple that with a law requiring that UK citizens' moderation be done within the UK, it would be a great jobs creation program that would offset the effect of Brexit!
"If they couple that with a law requiring that UK citizens' moderation by done within the UK, it would be a great jobs creation program that would offset the effect of Brexit!"
The Grauniad is notorious for its apparently zealous moderation of comments. There is a suspicion that it is done by the writers of the respective articles. Anything which rationally challenges the article's viewpoint is liable to be moderated out. On the other hand downright offensive postings often seem to remain unscathed.
It would be nice to see some features of the old Usenet readers resurrected. They allowed the reader to put individual blocks on specific threads or commentards. You gave yourself limits on what you were prepared to read, while not affecting anyone else's choices.
Of course there were some people who tried to get their ISP to block a whole group because it didn't fit their ideology. Usually attempted by flooding it with pr0n - then complaining to their ISP.
Facebook et al are nothing to do with freedom of speech
Lazy web hacks constantly peddle the censorship/oppression buzzwords every time someone suggests that "social media" should take some responsibility for what they enable, while conveniently forgetting that these same companies are, far from the great defenders of free speech they want us to see them as, the worst offenders in terms of imposing conformity, binary thinking and selective speech.
Yes you can have freedom of speech until facebook decides that you can't. Then try voting them out of office and see how far you get.
Terrorist or pedo content doesn't need to go through the publisher filter when it is handled by different legislation.
But considering facebook somehow "not a publisher" is kinda wrong. The right way to handle it is via licensing - users would license their content whenever they press the "submit" button. Automatic or implicit licensing is how it should be handled, and web site operators would need to prove that they have a license to publish that content.
I'm not quite sure what problem it is you're hoping to handle. Most sites make it clear in their terms that entering content into the site acts as a license to redisplay that content to others.
The EU's e-Commerce Directive (as implemented in many laws, including those you're thinking about) gives providers of "information society services" such as Facebook an out from such laws. They have to do something about illegal content once informed of it, but are not considered responsible for it unless they exert meaningful editorial control over the substance of the content.
I think it's the job of the police to clamp down on crime, not Farcebook.
As long as these scumbags are posting shit in the clear, it makes the cops' jobs easier.
Remove and filter it from the clearnet and you drive it underground, thus making the cops' jobs harder.
I'd rather the authorities gave shitbag organisations like Facadebook a hard time than have them focus on banning necessities and human rights like encryption.
There are processes in place to allow the cops to get info out of Facebook. Warrants etc.
If it is deemed to be the responsibility of service providers to clamp down on crime, there should be regulations and legal protections in place to ensure the service providers know where the goalposts lie.
As it stands, with moving goalposts and political agendas, I'd imagine it's difficult to know where to focus effort and resources as far as the SPs go. Since there's no way to know if you're doing the job right, you might as well do fuck all and take that on the chin.
I also think any law and regulation regarding the internet and privacy should be international law and a matter to be debated amongst all nations. That way regardless of where you are and the service is there are no (or fewer) grey areas and enforcement applies everywhere regardless of the resources a nation may have.
Let's face it, if a rogue nation like North Korea became the terrorpeado capital of the internet, there's fuck all any nation could do about it.
You might be able to catch the perpetrators but you won't be able to take the content down. Which means you're left with a game of whack a mole.
Without the cooperation of other nations you're basically in political hot water.
It never ceases to amaze me how often these sorts of "For the children!" types think websites should do the job of the police. Guys, you don't want that. When you give corporations police powers, the result is a train wreck that is terrifying to behold... and it'll be in your backyard. The best thing to do is to simply stop routing reports of things people believe are illegal (not the same as against the terms of use) to employees, and direct the reporter to their local law enforcement at that point in the process... and then have a department set aside to coordinate legal requests/inquiries and streamline the process as an outreach program. Ultimately, it's not the job of a corporation to police people, but it makes sense from a public safety standpoint to be cooperative and make it easy for the police to do their job, when and where practical (which won't be the same from one company or website to the next).
Besides... what's legal in one part of the world isn't in another. Even taking this single example, "child porn" - everyone thinks it's so clear cut, but it isn't. Age of consent varies by country. In some countries, like the United States, they've also come down on "depictions" of under-age sex - which has landed a number of anime collectors in hot water and has led to all manner of collateral damage, because how, exactly, do you tell the age of a cartoon? Whether the character is 9 or 900, a lot of the time they look the same, in the same style. How about a memoir or "tell all" by (childhood) rape survivors? Well, legal in some places... not in others.
That's where they get you: they're hoping for a knee-jerk instead of a reasoned and rational response to something. And that's the crowbar they lever into people's civil rights. Not even just rights in their own country - websites have had to play second fiddle to every country's idea of what's considered obscene or not. Some Muslim countries want to ban pictures of women who aren't basically a burlap bag with eyes. Some countries want certain political viewpoints suppressed (Hi Germany, China, Russia, United States, Britain... *everyone in the everywheres*), and it's utterly impossible to meet all of their demands and still have a website. There is nothing you can say in the world that won't get cheered on by one group and piss off another.
Until the governments of the world decide to stop having their little turf war over who gets to own the internet, there are only three viable answers: don't get involved; get involved, but only with the government where your server(s) are located; or convince the many-headed fuckbeast of world government to write a goddamn treaty already about this and set up a proper way of streamlining and handling these requests, and what people's rights and responsibilities are when running a website, server, service, etc.
"Until the governments of the world decide to stop having their little turf war over who gets to own the internet, there's only one of three viable answers: Don't get involved. Get involved, but only with the government where your server(s) are located... and convince the many-headed fuckbeast of world government to write a goddamn treaty already about this and setup a proper way of streamlining and handling these requests and what people's rights and responsibilities are when running a website, server, service, etc."
The governments will NEVER stop getting involved because too much is at stake to them (to some, their very sovereignty may be at stake, an existential threat). And many times, not getting involved is not an option; just ask Blackberry. Which leaves option three: essentially trying to beat a fireproof hydra whose heads keep snapping at each other.
"So, because Murdoch owns the Times, and you don't like Murdoch, that makes their article about Facebook getting a free pass to continue to do nothing about users posting child porn totally unacceptable?"
Not totally unacceptable, just hopelessly biased given the skin Murdoch has in the game. Facebook's essentially a rival firm. ANYTHING coming from a rival needs to be taken with significant salt due to the inherent bias of being a rival.
Much of the mostly confected media outrage about what other media are doing/saying is driven by little more than attempts to hobble the commercial competition.
Legacy media empires hate the new media for stealing their lunch, hence all the noisy "public interest" campaigns demanding restrictions and censorship for those now-vast upstarts.
Shut down Facebook, Twitter, WhatsApp and outlaw all of that!
Facebook, Twitter, WhatsApp and any similar service must be outlawed worldwide.
The managers must be jailed and all their money seized.
Then the world would be a better place and some people would start using their brains again, if anything is left inside those heads.
Stopping people seeing the data doesn't require removing it; simply not serving it will be enough, and when there's a dispute as to which countries' jurisdiction applies, the non-serving can be done selectively. Either way, of course, the data (and attempts to access it) can be used forensically.
In particular, although analyzing pictures is relatively hard to program, just knowing some sample pages of interest to the offenders concerned, and seeing who their friends are and who they pass which URLs to, looks relatively easy, once you've identified a few sample offenders (or sample offensive pages).
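A minimal sketch of that second idea - a breadth-first walk outward from a few known-bad accounts through a "passed a flagged URL to" graph - where the graph, seed set and hop limit are all illustrative assumptions, and the output is a list of accounts for human review, not proof of anything:

from collections import defaultdict, deque

# user -> set of users they have passed flagged URLs to (hypothetical data)
shares = defaultdict(set)

def flag_candidates(seeds, max_hops=2):
    # Breadth-first walk from known offenders; anyone within max_hops
    # of a seed gets queued up for human review.
    seen = set(seeds)
    queue = deque((s, 0) for s in seeds)
    while queue:
        user, hops = queue.popleft()
        if hops == max_hops:
            continue
        for contact in shares[user]:
            if contact not in seen:
                seen.add(contact)
                queue.append((contact, hops + 1))
    return seen - set(seeds)    # new candidates only, not the seeds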
It's interesting that until recently there were plenty of cases where FB was proving quite able to censor user content based on "community standards" or other such nice-sounding reasons:
https://www.theguardian.com/technology/2016/sep/09/facebook-history-censoring-nudity-automated-human-means
https://qz.com/719905/a-complete-guide-to-all-the-things-facebook-censors-hate-most/
It seems hypocritical that now it claims it can't easily apply those same standards for hate speech and child abuse imagery.
Why pass a new law against this? It is already illegal! Just use the existing laws to fight it. It's the police's job to go after these people, so have them do it. Facebook will happily provide user details so they can be arrested.
That said, if this law is passed it would be funny if Facebook simply disabled posts and comments in the UK. The only thing you could do is post moods.