Reply to post: Re: The irony is so thick here!

Tech firms send Congress checklist of surveillance reforms

Anonymous Coward

Re: The irony is so thick here!

While I agree 100% that the government has no business building/using such a database...

The government, rightly or wrongly, decided long ago that such data collection is part of fighting crime, espionage and terrorism.

I think we can all agree that these are all things that are definitely worth fighting.

I wonder how useful such government acquisition schemes are these days. A load of encrypted traffic is nothing like as useful as being able to read that traffic. I suspect that the social network services have evolved to the point where not even a powerful and all-seeing organisation like the US government can easily "see" inside all the traffic, and it is reduced to reactive acquisition requests: a crime has been committed, here's the warrant.

The only people who can police the content "proactively" are the social networks, banks and OTT communication providers like WhatsApp. (The banks are already compelled by banking laws to be highly cooperative.) And we really do want proactive policing: the whole idea is to stop bad shit happening in the first place, not just help the police sweep up the mess afterwards.

The problem is that the companies are either not interested in policing their content, or do it very badly, or design their services/devices so that not even they can see inside the data streams, or worst of all obstruct live investigations into crimes that have already happened. (I'm talking about WhatsApp's refusal to say who the London attacker had been talking to just before his attack. They do hold that metadata; it's just that they can't tell us what was said.)

So it's a bit of a stalemate, isn't it? The companies have the freedom to act that way, and they do, because it's the easiest way to be taken "seriously" by their 'customers', at least in the USA. Meanwhile paedophiles, terrorists and criminals of all sorts continue to use these services effectively immune to pre-emptive detection, save for some interesting advertising preference profiles.

At least here in Europe breaking the stalemate is an active political issue, and not one that the network companies are winning. The contrast between opinion in the USA and the rest of the world is remarkable. Here in Europe we may be cynical about our politicians, but no one really hates the government, or disputes the need for effective police forces. In the USA quite a lot of people seem to actively loathe their federal government. Weird. Well, perhaps understandable when one considers Microsoft vs the FBI over emails held in Ireland, and the FBI's absurd refusal to use existing treaty arrangements with the Irish authorities.

Anyway, headlines like "Google, the Terrorist's Friend" really do resonate well with public perceptions in the UK. It's early days into the investigation into the Manchester bombing, but if it turns out he was using WhatsApp, or had been watching dodgy content on YouTube, or had relied on Apple's anti-gov communications, that's going to be more bad headlines for the companies.

It's already socially unacceptable to have your own company's ads appear on YouTube next to a hate video, or on Facebook next to a racist rant, etc. Basically European governments do not need to go the full STASI on the social networks, all they need to do is assess those that are good enough / cooperative enough to be socially acceptable, and those that aren't, and make it illegal to advertise on the latter.

It's really that easy, and that's what European governments will do. As it happens, some have already gone further than that: fines of up to €50m in Germany just for failing to remove fake news.

So, that's a prediction of how European governments will tackle the stalemate. And it will hurt the social networks' business in Europe badly. Being good enough in Europe means policing their global content to the satisfaction of Europe. That's an expensive asymmetry.

Knowing that, how can the companies adapt? They're mumbling about AI and more reviewers, but that is clearly going to be inadequate. Facebook's leaked moderation policies make it clear that they have very little intention of taking anything down, even child abuse. About the only thing they will take down is a photo of a nursing mother.

In short, they are not adapting. So the risk is that their non-US business dries up, or starts costing them some enormous fines.
