Why do they host ads?
Money.
You're welcome.
US Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) on Friday sent letters to the CEOs of Amazon and Google asking why their ad businesses fund websites hosting child sexual abuse material (CSAM) and allow government ads to appear on sites with illegal imagery. "Recent research indicates that Google, as recently …
This post has been deleted by its author
Surely, that's a tautology. It's the purpose of a business to make money. They're soulless machines, not humans.
Any appearance of humanity comes from the Marketing and Brand Image budget.
Seems to me that this website simply doesn't spend the big bucks using the automated filtering systems used by the big players. I wonder how much they take to run. These filtering tools don't come cheap.
“Within the law …” … Doesn’t seem to be bothering the Orange Jesus and St Elon much with the DOGE wood-chipper.
(By policy over decades … USAid is effectively a subsidy to US Farmers generating the tons of food aid).
https://knowyourmeme.com/memes/leopards-eating-peoples-faces-party
Surely they won't care, after all they did take over a CSAM site, AND THEN RAN IT THEMSELVES FOR A YEAR!
Of course it was for reasons of catching more people, but that doesn't change the fact that they ran the site distributing and accepting new CSAM for that year... Real standup guys, very trustworthy.
Giant corporations are just completely amoral sociopaths in the purest sense. All they care about is money. I know I've said this before, sorry for being a broken record, but they'd all (even the ones who go on and on about their bullshit mission statements) very happily grind live kittens and puppies into blood meal if they could somehow make a decent profit on it, and they would tell you it was a moral imperative and societal necessity because it increased stockholder value.
So neither Google nor Amazon (or MS or SoftBank or...) actually gives a single eff about supporting CSAM if it raises revenue (why would they?); they only care about possible reputation damage from being caught doing it impacting their bottom line, so that's why they hire spin doctors.
"And numerous major advertisers are said to have run ads on these websites since 2021, it's claimed, including Acer, Adidas, Adobe, Amazon Prime, Dyson, Google Pixel, Hallmark, Honda, HP, MasterCard, Starbucks, Unilever, and the US Department of Homeland Security, among others."
I can see why all the companies who have products or services to flog are 'major advertisers', but why is the US Department of Homeland Security on that list? If they are a 'major advertiser', WTF are they advertising?
Stuff like this:
https://www.dhs.gov/archive/dhs-campaigns
Clicking on the right one of those will tell you that this is "Canned Food Month", with the advice helpfully translated into Arabic, Russian, Han Chinese. I wonder how long before the Doge of Venice Beach finds that out, and feeds DHS into the wood shredder?
Because it is a very large internet. To manually check every page before the ad appeared is not viable, and any other option will allow some incidences where this occurs.
Of course if one fail was enough to get you shut down, we wouldn't have any politicians. That's a pretty good deal. No internet and no politicians.
There is a progressive campaign underway to take full control of the net, ending web 2.0 and other services, especially cross border ones, by governments globally. The UK, which wants back doors in everything, has an internet censorship act kicking in, in July. Others will follow. It is all done under the premise of protecting the children from 'harms' and 'public morality'. The Chinese use the same scam for the same purposes. They are governments, so they will win in the end. Enjoy your internet use whilst you still have it. It will all go soon. It makes you wonder why they are spending so much on AI when there will soon be so much less internet activity. We should be cutting down investment in tech in preparation for the winding down of most internet services.
It is indeed a large internet, but the vast majority of websites are relatively benign: news sites like this one, company information, various vendors, and other general information. It shouldn't be impossible to manually check the kind of sites that share images and similar; it's even easier if you are only checking sites you are contracted to push adverts to.
A team of 20 would cost around $1m per year and could probably check 1,000 sites a week. (Maybe the "AI" they've been spending billions on could help out here.)
These companies make $billions in profit, they could easily afford to employ hundreds of staff to check this, they don't because the fines are cheaper. The obvious fix would be to massively increase any penalties until it's cheaper for the companies to actually do something than the current model of meaningless platitudes and paying any fines.
"To manually check every page before the ad appeared is not viable, and any other option will allow some incidences where this occurs."
Manually? Google are a business making loud noises about how good they are at AI. If they're that good they can check every page on the fly to see if it would be acceptable.
These senators ask why these advertisers place ads on *one site*. That one site accepts *user uploaded content* of the image variety. Someone uploaded images that the senators were informed of and decided they didn't like it.
So the problem is that the site exists at all, we can't have any sites that will host user-uploaded content? Or every corporation in the United States has to blackball any site that hosts user-generated content? Why isn't Blogger on this list, a Googoyle-owned service, which has also been seen hosting CSAM?
This reeks of another case of "Know Your Neighbor," where we as the government can't easily prohibit this generally-legal activity, so we expect you the corporation to not do business with whomever we suggest is bad -- and if *you* don't find the "bad people" before us, then you're part of the problem!!
I was quite surprised that the advertisers quizzed for this report were happy to admit that they gave all this cash to Google and Amazon and yet those companies provided them basically zero information about where their ads were placed. You guys are paying for this service you know? If you want info, then you have the power to demand it. It would only take a few of the biggest brands to get together and cooperate.
Of course, the immediate next comment does suggest that this whole system is designed to give plausible deniability rather than information. Which is an interesting tradeoff for the advertisers. You want wide distribution of your ads, at low cost, with minimal reputational damage. You'd also like to know how effective your advertising spend is, which is a much harder thing to measure. So maybe the trade-off works best where you can blame the already unpopular Google/Facebook/Amazon, get very little info but also have relatively lower cost and complexity than having to have a lot of expertise in-house to operate your advertising more manually?
It's clear how little Google care from the quality of ads they allow on Youtube - over this weekend there seems to have been a campaign for this one amazing trick, that all men should know, where you use salt and honey to make it last longer - at which point the skip button appears and so my knowledge ends. But I'm guessing it's not a recipe for curing bacon (though I suspect pork is involved) - and nor is it a recipe for honey-roasted nuts...
Were I a legitimate advertiser (of which many were also shown on the same videos) paying for my content to be rammed into Drachinifel's excellent naval history Youtube vids - I'd be pretty pissed off to be associated with shitty spam penis enlargement ads. I should probably just give him some Patreon cash and then block all ads in Youtube.
This post has been deleted by its author
It's almost as if advertising is a giant pyramid scheme where the efficacy of ads is irrelevant to the beancounters, since the advertising cost will be baked into the product price anyway
So nobody really bothers to check where their ads are shown, because ultimately they're not paying for it
And the general public pay extra for everything to cover the cost of advertising, which also hoovers their personal data, and now apparently fund exploitation sites
Yeah, nothing to see here
"Adalytics unintentionally and accidentally came across a historical, archived instance where a major advertiser's digital ads were served to a URLScan.io bot that was crawling and archiving an ibb.co page which appeared to be displaying explicit imagery of a young child," the report says, adding the biz immediately ceased viewing the archived page and reported the incident to the FBI, US Homeland Security special agents, America's National Center for Missing and Exploited Children (NCMEC), and the Canadian Center for Child Protection (C3P).
* "a historical, archived instance" - what is meant by 'archived' in this instance? Where was it archived, and how?
* "appeared to be displaying explicit imagery of a young child" - the page either was, or was not displaying said image. Who/what made this determination, a bot, or a human? How was that determination made?
* "the biz immediately ceased viewing the archived page" - what does this mean? A business is a collection of people. Was every employee of that business gathered around a monitor, looking at that image? (I presume not, but am making a point of the article author's obscuring anthropomorphism of the word 'business', shortened as 'biz'.)
In all this, PR reps, solicitors, executives, spinmeisters, politicians, and ninnies are blowing up a whiteout of euphemisms, doubletalk, meaningless talk, and elision of facts, making it near-damned impossible to see whether this specific instance is a true problem, or just a tempest in a teapot.
The Adalytics report* stops just short of accusing the ad tech vendors of funding the distribution of CSAM; it only claims they may have: "These advertisers may have inadvertently contributed funding to a website that is known to host and/or distribute CSAM."
The report makes no reference to the specific imgbb.com or ibb.co page URLs from the historical archive with ads appearing next to CSAM, while claiming they reported a number of images. Of course the report can't contain the URLs and images; reporting just the number of images and ads should be sufficient.
Instead the report references pages with ads appearing next to adult porn as examples of ads funding CSAM: because the website is known to host Child Sexual Abuse Material, the image referenced by the URL is irrelevant, and every placement of advertising on imgbb.com is funding CSAM.
The report claims the website is known to host CSAM because NCMEC sent alerts to imgbb: 2 in 2021, 5 in 2022, and 20 in 2023, while failing to mention imgbb took the images down. Some others known to host Child Sexual Abuse Material that NCMEC sent notifications to are Apple, Facebook, GitHub, Google, and Microsoft.
*Are ad tech vendors facilitating or monitoring ads on a website that hosts Child Sexual Abuse Material ? https://adalytics.io/blog/adtech-vendors-csam-full-report
"Instead the report references pages with ads appearing next to adult porn as examples of ads funding CSAM: because the website is known to host Child Sexual Abuse Material, the image referenced by the URL is irrelevant, and every placement of advertising on imgbb.com is funding CSAM."
I think there are obviously a lot of sensitivities around this, especially when CSAM is often a strict liability offence, ie you looked, it downloaded, you were in possession of CSAM so you could win membership in your national sex offenders register.
I stumbled on this once when someone sent me a link to another free image sharing site. I was curious how it could afford to run the service, and poked around a bit. Then I found some CSAM images. The site's reporting function didn't seem to do anything, so I copied the links to the IWF's reporting page. Then I pondered that, technically, I may have committed an offence, and hoped that common sense would apply and that viewing the images could be correlated with reporting said images. Or that, if at some point I had to rely on that for my defence... but being strict liability, the options for a defence can be rather limited.
But that still doesn't explain why advertisers and adbrokers are helping fund sites that are known to host CSAM and why they're not using their financial muscle to incentivise free image sites to improve their detection and take-down procedures.
Congress makes the laws, not the advertisers. If the honourable senators, or anybody else, believe a given website is illegal, it is within their power to do something about it, but until such time as due process is completed, advertisers are free to make as much money from these sites as they want.
Always amazes me that politicians expect "magic bullet" software that can automatically identify all bad content, be it CSAM, copyright infringing material or whatever.
Despite the "AI" hype, that is a long way away. Digital fingerprinting only works if the CSAM matches a known image already in the system's database, not material the system was not aware of (and there's always a chance of false positives).
Having said that, ad pushing has zero relationship to integrity / due diligence, all about the money.
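To make the fingerprinting point concrete, here's a minimal sketch of how hash-based matching works, using a simple "average hash" as a stand-in (real systems like PhotoDNA use far more robust, proprietary transforms; everything here is illustrative). It shows why such a system flags near-copies of *known* images but is completely blind to material not already on the list:

```python
# Average-hash fingerprinting sketch: downscale, threshold at the mean,
# then compare 64-bit hashes by Hamming distance against a blocklist.
# Images here are plain 2D lists of 0-255 grayscale values.

def average_hash(pixels, size=8):
    """Downscale to size x size by block averaging, then threshold at the mean."""
    h, w = len(pixels), len(pixels[0])
    blocks = []
    for by in range(size):
        for bx in range(size):
            total, count = 0, 0
            for y in range(by * h // size, (by + 1) * h // size):
                for x in range(bx * w // size, (bx + 1) * w // size):
                    total += pixels[y][x]
                    count += 1
            blocks.append(total / count)
    mean = sum(blocks) / len(blocks)
    return tuple(1 if b >= mean else 0 for b in blocks)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_blocklist(pixels, blocklist, threshold=5):
    """True if the image's hash is within `threshold` bits of any known hash."""
    h = average_hash(pixels)
    return any(hamming(h, known) <= threshold for known in blocklist)

# Synthetic 32x32 "known" image: bright left half, dark right half.
img = [[220 if x < 16 else 30 for x in range(32)] for y in range(32)]
blocklist = [average_hash(img)]

# A lightly brightened copy still matches; an unrelated image does not.
altered = [[min(255, p + 10) for p in row] for row in img]
other = [[200 if y < 16 else 20 for x in range(32)] for y in range(32)]
print(matches_blocklist(altered, blocklist))  # True
print(matches_blocklist(other, blocklist))    # False
```

An image the system has never seen hashes to something nowhere near the blocklist, so nothing fires; that's the gap the politicians expect "magic bullet" software to close.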
I was thinking along similar lines. I'm going to be generous and allow that AI can do better than just matching a known image. But that does bring about a question. How is the AI supposed to identify CSAM or anything illegal unless such content was part of its training? Presents sort of a conundrum, don't it?