
Excellent
That money will go somewhere else. The fact that it is not spent on GooTube means it will be spent on OTHER advertising - most likely on normal ads served on normal sites. That is actually good news for most of the web.
Several US-based advertisers have now suspended their advertising with YouTube, following over 200 pull-outs in the UK and Europe. Google had run big brand advertising on hate videos including jihadist groups. Johnson & Johnson, Verizon, AT&T are the latest to hit pause, or withdraw ad budgets from YouTube altogether. AT&T is …
No, it won't.
In organizations of the size involved in this boycott, spending like this comes out of a budget, and that budget HAS TO BE SPENT. The agency and the marketing department will spend it on something else if they are boycotting YouTube today. Otherwise, their next budget will be reduced by the amount they failed to spend this quarter, and no boycott will help when dealing with the beancounters.
No, it won't. The money goes where the users are. The users are on YouTube.
You're forgetting the laws covering the funding of terrorism. Basically it is illegal to put money into a terrorist cause.
Now, whilst a big advertiser can probably argue that it's Google's job to prevent their fee being directed to some jihadist's pocket, Google aren't doing that. Indeed, Google have been criticised by Parliament and Government for not doing it.
That casts serious uncertainty over whether an advertiser's defence of "it's Google's responsibility" would hold up in court against a charge of funding terrorism. OK, that may sound ridiculous today, but Parliament is clearly heading in a direction towards advertisers being held responsible. What they have expressed recently as a moral obligation could, at a stroke of a civil servant's keyboard and Her Majesty's pen, become law.
The whole episode shows just how stunningly naive or cynical Google, Twitter and Facebook are being concerning the unsuitability of their American practices to doing business elsewhere. They are used to lobbying being effective in the US. It's far less effective elsewhere, especially in Europe.
Criminal Responsibility
It's like they're saying "you can't make us responsible for our content". The UK and Europe are very close to saying, "Well, let's see about that". Making Google's, Twitter's and Facebook's advertising customers criminally responsible for where their money ends up would do it just fine.
Such a result would probably kill advertising-funded services outside the US. We'd be heading back towards the old (and successful) CompuServe model. With a paid subscription there is a strong identity trail between a user and their account, strong enough for criminal responsibility to be assigned quickly and easily to the user.
Personally speaking, I think that'd be a good idea. Every wage earner in the UK is currently spending approx £150 per year, via the price of goods in the shops, to pay for online advertising (it's about £7 billion per year). I'd quite happily pay that to get subscription mapping and search services that are guaranteed to have no adverts whatsoever, with no data slurp.
...and as soon as the tracking stats (clicks, audience reach and other normal campaign reporting metrics) across their digital campaigns start to dip and/or they start to see CPx increase they'll be back on YT, just as the article states.
The boycott itself is a PR move - brand owners don't have an ethical bone in their body, but they know what can and can't damage brand image and making a hoo-ha about this is all about ensuring they look good.
The terrorist funding laws beg to differ. If there was a case to be made, Google would be the ones getting prosecuted, not their advertising clients. It is the difference between trying to prosecute me for sending $100 to a "Feed the starving orphans of Syria" charity that has money ending up with ISIS without my knowledge, and trying to prosecute the charity when they have already been told that some of the money they send to X ends up with ISIS and they continue sending money to X.
The most damning evidence against them is the fact that when they stopped UK brands from advertising on known terrorist content they left the US brand ads in place! That shows exactly what their feelings are in the matter: we don't care about terrorists putting content on our site if we can make money from it, and we won't take your ads off such content unless forced to.
Google has gone full circle from "do no evil" to sort of in between to actively and knowingly providing substantial financial support for evil so long as it lines their own greedy pockets!
>It is the difference between trying to prosecute me for sending $100 to a "Feed the starving orphans of Syria" charity that has money ending up with ISIS without my knowledge, and trying to prosecute the charity when they have already been told that some of the money they send to X ends up with ISIS and they continue sending money to X.
Unless ISIS is advertising, I don't see this.
Also, you can go to jail even if you don't know where your money ends up. And you get a Medal of Freedom if you ship weapons to ISIS.
@DougS,
Google are definitely heading towards scumbag status...
>The terrorist funding laws beg to differ. If there was a case to be made, Google would be the ones getting prosecuted, not their advertising clients.
Google should be the ones being prosecuted, but this is the Internet. Google can avoid having a prosecutable legal presence in a country where they are vulnerable to prosecution, but still sell advertising in that country.
The UK government in particular, and European governments in general, have form in passing novel laws to bring about a desired outcome in matters related to terrorism. For example, back in the late 80s and early 90s it was illegal for the press and media to report bomb scares on the London Underground. That put a stop to the IRA phoning in hoaxes. In the UK it is illegal to fail to report someone to the police if you know they are preparing a terrorist act. People have gone to jail for that.
Such a law would be unthinkable in the USA.
I think Google and Facebook don't realise their American thinking won't always translate to Europe well. After the events in Europe and London I think there is a strong appetite for legal frameworks that begin to sort out the problem of extremist online content.
A law setting out a system of blacklisted Web sites where it is illegal to place advertising is not a big intellectual leap for most European legislators. It would get strong support in parliaments everywhere. And companies like Google would absolutely have to do everything they can to stay off that list. If such a law were ever put in place there'd have to be thresholds of reasonableness, but those are only ever going to be ratcheted one way...
That would be in addition to making Google and Facebook directly responsible too. Germany is already heading slightly down that road with fake news.
Allowing free speech (even on YouTube) doesn't mean that advertisers who don't support the views expressed should be expected to help pay for the organisations expressing themselves.
Surely there must be some way for advertisers to tell Google 'do not put our adverts on content provided by the following: ...'
"Surely there must be some way for advertisers to tell Google 'do not put our adverts on content provided by the following: ..."
Unless you are keeping very quiet about a massive break-through in Artificial Intelligence, the three dots at the end of your question can only be a list of specific providers.
It is an insoluble problem. Expecting a machine to be able to say "This is offensive, but that isn't." is absurd in a world where real human beings chosen for their good judgement have been able to argue in court for centuries over the exact definition of offensive and whether particular material is offensive.
>"Surely there must be some way for advertisers to tell Google 'do not put our adverts on content provided by the following: ..."
Unless you are keeping very quiet about a massive break-through in Artificial Intelligence, the three dots at the end of your question can only be a list of specific providers.<
Either identifiable entities or meaningful categories should be manageable. There's enough money in the business to cover however much it costs to keep advertisers happy (or Google's business model is untenable). Advertisers may want to dissociate themselves from all sorts of content providers - particular political parties, religious groups, government agencies, competitors, themselves, entities from countries they don't trade in, etc; it needn't only be 'offensive' stuff (which is of course a subjective category, not a judgment that could be trusted to anyone else, human or AI). I'm sure some advertisers are delighted to be associated with content providers others might find 'offensive'.
The surprise is not that some advertisers object to some content, but that they hadn't already insisted on some mechanism to enforce their preferences - and instituted routine checks.
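The mechanism described here - an advertiser-maintained exclusion list over entities and categories - is straightforward to model. Below is a purely illustrative sketch: the names and metadata are invented, not any real Google API, and it assumes the ad platform exposes the channel and category of each candidate placement.

```python
# Illustrative advertiser-side exclusion list. All channel and category
# names are made up; this models the idea, not any real ad platform API.

EXCLUDED_CHANNELS = {"SomeExtremistChannel", "CompetitorCorp"}
EXCLUDED_CATEGORIES = {"politics", "religion", "competitor"}

def placement_allowed(channel, categories):
    """Allow a placement only if it matches neither exclusion list."""
    if channel in EXCLUDED_CHANNELS:
        return False
    # Block if any of the placement's categories is on the excluded list.
    return EXCLUDED_CATEGORIES.isdisjoint(categories)

print(placement_allowed("CatVideosDaily", {"pets"}))        # True
print(placement_allowed("SomeExtremistChannel", {"news"}))  # False
```

The hard part, as the earlier comments note, is not this lookup but reliably producing the channel and category labels in the first place.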
The Theory of Advertising predicts that these companies leaving the advertising platforms will suffer a drop in sales ... but what happens if their sales don't drop? It's my belief that 90% of the money spent on web advertising is completely wasted - most of the time nobody watches them, they are a major annoyance and the fact that Ad-blockers are so popular should tell you something.
But as the old skit goes, while ad men may be wasting most of their money, there's no way to know WHERE that "most" is actually being wasted, and that 10% is enough to support all the rest, so ad men keep plugging.
As for the ad-blockers being popular, are they REALLY popular? Popular as in the hoi polloi are using them now, or do they just SEEM popular because the tech-oriented are being squeaky wheels?
I've used ad-blocking browser plug-ins for years. I'm installing a Pi-hole on my home network this weekend to see if I can provide my other network devices with similar benefits. A Pi kit and two pages of installation instructions isn't exactly turn-key, but it probably wouldn't take much for some enterprising soul to resell assembled Pi kits with Pi-hole (or something like it) preloaded. Popularity would then be proportional to the value proposition for the consumer. Consumers certainly liked those commercial-skipping DVRs for the 3.5 seconds that they were allowed to exist.
Ugh. I can't believe I'm actually advocating IoT for something.
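For anyone curious what a Pi-hole actually does before committing a weekend to one: at its core it is a DNS sinkhole, refusing to resolve any queried name whose domain (or parent domain) appears on a hosts-format blocklist. A toy sketch of that decision, with invented domain names:

```python
# Toy model of Pi-hole-style DNS blocking: a query is blocked if the
# name, or any parent domain of it, is on the blocklist.
# Domains here are invented examples, not a real blocklist.

BLOCKLIST = {"ads.example.net", "tracker.example.com"}

def is_blocked(query):
    """Check the queried name and every parent domain against the list."""
    labels = query.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKLIST
               for i in range(len(labels)))

print(is_blocked("ads.example.net"))        # True
print(is_blocked("x.tracker.example.com"))  # True (parent domain matches)
print(is_blocked("example.com"))            # False
```

Because the blocking happens at DNS resolution for the whole network, every device behind it benefits - which is the advantage over per-browser plug-ins.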
Note the emphasis on providing tools for advertisers to make the choices [...] rather than Google increasing resources or editorial control.
Because if they did exercise additional editorial control then they might lose their "Safe Harbor" status under the DMCA and similar laws. Remember, Google still pretends that they don't really run YouTube - it just magically runs itself and everyone/anyone else gets to be the gatekeepers.
Plus it would cost money to do otherwise, and we can't have that now can we.
Oh all the moral outrage...
Sorry, but this is just money. You can bet your house on some of these companies using suppliers in the Middle East that fund them anyway, because they are cheaper than someone else.
I'm also not buying the Google free speech crap. They work with governments, so if the content is banned, how do you find the people viewing it?
Perhaps advertisers will notice a slight drop in page views, but sales will stay the same and they'll suddenly realise all those click throughs from Youtube are actually just Google bots who were falsifying numbers.
Surely most people have an adblock running for Youtube these days any way?
"Surely most people have an adblock running for Youtube these days any way?"
no 'ad block' per se, but a possible alternative method:
a) special non-priv login for youtube browsing
b) minimal plugins to avoid security and content view problems [but include movie downloaders, etc.]
c) download before you view [so you can skip things and/or get non-stuttery content with limited bandwidth]
d) make sure the browser CLEARS EVERYTHING (i.e. history, cache, cookies) when you close it.
e) always view anonymously (no logins)
f) NEVER use a flash plugin
yeah, works ok for me.
So, an advert for a progressive newspaper appears on a promotional video for a terrorist group. If we can get over the outrage we can also see a couple of other interpretations:
some people who watch terrorist videos might also read the Guardian
some people who watch terrorist videos might start reading the Guardian and become more progressive as a result
I think the first possibility gives us food for thought and the second cause for optimism.
Actually it's not even for a terrorist group; it's the Britain First YouTube channel, as indicated by the Britain First logo and text at the bottom of the screen grab. So maybe Google have noticed a correlation between Britain First members and their flooding of the Guardian comments pages with crap.
>Actually it's not even for a terrorist group it's the Britain First YouTube channel
I suspect it is both and this is where it gets messy. I would guess that the video is an Islamic-extremist video (based on a very superficial assessment of dress and beards) reposted by Britain First as a "discussion point." How do you tell if a post is supporting the video content or is opposed to it?
We have a UK political group posting into which a Guardian advert was inserted. If that's the case, Google's algorithms seem to be working pretty much as expected. My understanding from this article and previous ones is that the US doesn't want to censor videos as it provides an opportunity to insert "counter-messaging," which is exactly what happened here. My message to the Guardian is, "that's what you get when you use a cloud-based AI advertising system from another jurisdiction. You have zero control." Maybe there should be a way for advertisers to manage a list of posts they don't want their adverts to appear in.
I do have to question if the BBC had done the same thing, would the Guardian be as outraged, or is this really about the Guardian being annoyed that it helped fund the BNP?
You make some excellent points, including spotting that the picture in wide use is a snapshot of a Britain First channel on YouTube, hardly matching the "Islamic extremist" description many others are using (but certainly extremist in their own way).
"is this really about the Guardian being annoyed that it helped fund the BNP?"
The particular names aren't really significant - the "ads in inappropriate places" topic now goes much further than just the New Guardian, which has very little other than the name and Polly Toynbee in common with its illustrious ancestor. Even Will Hutton has recently resigned from the Board of the Scott Trust Limited (which has been and is the Guardian's parent organisation):
https://beta.companieshouse.gov.uk/company/06706464/officers
There was a time in happier days, thirty years ago, when the (older, more principled, less Buzzfeed-like) Guardian ran a series of video/TV adverts based on the principle that things aren't necessarily what they seem at first glance:
https://www.youtube.com/watch?v=_SsccRkLLzU (adblock recommended, as always with YouTube)
Times change.
" some people who watch terrorist videos might also read the Guardian
some people who watch terrorist videos might start reading the Guardian and become more progressive as a result
I think the first possibility gives us food for thought and the second cause for optimism."
What a load of bollocks.
I don't agree that ad agencies will have to come crawling back.
At least some of them must be considering this as an opportunity to gracefully extricate themselves from a pattern of throwing away money.
The big plus would be the cover of doing it for moral reasons instead of rectifying a bone headed mistake.
Keep beating that drum. Eventually you'll be able to shut down new media so the old media can rule once more. Just keep beating that drum.
If the intelligence services really cared, they could fund teams set up to locate and identify illegal content, add it to the Content ID system, and have it automatically notify them / take it down.
A: you can't magically take down something that has never been added to a system for taking down.
B: even where content matches to data in the database you need to make sure it isn't covered by fair use (satire, review, news)
But ah well, let's get private companies to be the censors for the progressives; they're the best.
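Points A and B above can be sketched as a toy Content ID-style matcher: automated matching can only flag content whose fingerprint is already in a database, and even a match should route to a human fair-use review rather than an automatic takedown. The fingerprints and threshold below are invented stand-ins for real perceptual hashes.

```python
# Toy Content ID-style matcher illustrating points A and B above.
# Fingerprints are made-up 16-bit stand-ins for real perceptual hashes.

KNOWN_FINGERPRINTS = {0b1111000011110000, 0b1010101010101010}
MAX_HAMMING_DISTANCE = 3  # tolerance for re-encodes, crops, etc.

def match(fingerprint):
    for known in KNOWN_FINGERPRINTS:
        # Hamming distance: count differing bits between the hashes.
        if bin(fingerprint ^ known).count("1") <= MAX_HAMMING_DISTANCE:
            return "flag for human fair-use review"  # point B
    return "no action possible"                      # point A

print(match(0b1111000011110001))  # one bit off a known hash -> review
print(match(0b0000111100001111))  # never seen before -> no action
```

The first case shows why matching is fuzzy rather than exact; the second shows the point in A - content never fingerprinted simply cannot be matched, no matter how clever the system.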
https://www.youtube.com/watch?v=2AhGYo9TExU
WHAT YOU GONNA DO ABOUT IT? (cue derisive laughter)
Behind the mealy-mouth PR spin, this is what they're all really saying. They do something heinous, there's a short flurry of impotent outrage... and they get away with it. They know we have nothing useful we can do about them, they simply wait a little while and move on to the next attack.
Applies to Google, Facebook, the NSA, GCHQ, innumerable local police agencies, phone-tracking retailers, IoT vendors, political parties... my fingers would cramp before finishing this list.
Except this video with the Guardian ad was posted by a right-wing British group protesting the Islamic video. Do you want Google to ban protesting ISIS?
If I say I don't like Nazis should the reg block this post because it says Nazis?
Of course the real question is why there wasn't an ad for the Daily Mail; I doubt many alt-right YouTubers are Grauniad readers.
I reckon if we can just convince the jihadis that e.g. dressing up as bananas and singing harmless children's songs about fluffy kittens would ensure such new videos /never/ get banned, they might just go for it. :-)
Whether intentional or not...!
Google has hundreds of PhDs as the article says, generating fancy algorithms all the time (if deep neural networks can even be called 'algorithms', but that's another debate) which are increasingly able to classify content on the interwebs.
So powerful have these algorithms become, that Google are increasingly expected, by the moral police and the social justice warriors, to use them to cleanse the interwebs of undesirable content. Hate-speech, kiddie porn, etc etc.. Of course, only a troll would support such vile and reprehensible content, and only a troll would criticise those who wish to cleanse it.
In this case, Google has failed to apply said all-powerful algorithms to classify hate-speech as hate-speech, OR failed to take action after said all-powerful algorithms have successfully classified hate-speech as hate-speech.
Assuming the latter (i.e. the algorithms work), Google have two options:
1: They stop the ads on hate-speech videos. The next question from the moral police (in every jurisdiction) will be: "well if you can stop ads for hate-speech then why can't you just censor hate-speech altogether (for my sovereign definition of hate-speech)"
Or 2: postpone the inevitable by saying that their algorithms aren't yet perfect, and that they will implement this soon. Same result as above, just slightly later.
Or 3 (not a real option): Take a stand against the moral panic and permit hate-speech along with everything else, while losing revenue AND suffering the wrath of the SJWs!
(anon, obviously)
The reinforcing rail in front of the receiver and the front sight are fairly distinctive features of the AK-47 and very different to the AR15 family.
If you see a video with beards, balaclavas, green army jackets and AK47s you know they are terrorists and the video is banned, if you see the same images but an AR15 then they are simply a cultural organisation and can be funded - and invited to your St Patrick's day parade.
It would only take a couple of lines of OpenCV or scikit-image to recognise the difference between terrorists and freedom fighters so long as they can be persuaded to point their assault rifles in the air.