They cannot stop me from saying "."
OK, they can but I only said ".". That guy over there said "...".
Researchers at Microsoft and OpenAI, among others, have proposed "personhood credentials" to counter the online deception enabled by the AI models sold by Microsoft and OpenAI, among others. "Malicious actors have been exploiting anonymity as a way to deceive others online," explained Shrey Jain, a Microsoft product manager, …
> authenticated identifiers bestowed by some authority on those deemed to be legitimate people.
Well, that is never going to be open to abuse (from both angles: issuing IDs that shouldn't be issued, and not issuing ones that should).
> US states, for example, could offer them to anyone with a tax identification number and the corresponding PHC could be biometrically based, or not.
Neat idea, no protection unless you are literally paying for it; and just maybe we'll mess up your "biometrics" ("No idea why the camera didn't work for you, maybe something to do with the colour contrast, we'll get right onto fixing that, try back in six months' time").
> The proposed PHC identifiers are not supposed to be publicly linkable to a specific individual once granted – though presumably unmasking a PHC holder could be done with an appropriate legal demand.
That can't be abused, nope, not at all.
> Although PHCs prevent linking the credential across services, users should understand that their other online activities can still be tracked and potentially de-anonymized through existing methods
And how long before the above two are used to subvert the "prevent linking the credential across services" - in a far more pleasingly reliable fashion than all the commonplace ways, which are (to an extent, and with effort) defeatable by locally deleting cookies, changing browsers, firing up VMs...
Ah yes, we will protect you against the AIs - just don't think too deeply about how we are going to do it.
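For the curious: the mechanism that's supposed to deliver "not publicly linkable once granted" is usually a blind signature or a zero-knowledge proof. Here's a toy sketch of the classic Chaum-style RSA blind signature with insecure demo-sized numbers; the paper's actual construction is more elaborate, this just shows the principle being leaned on:

```python
# Toy Chaum-style RSA blind signature: the issuer signs a credential it
# never sees, so the unblinded (token, signature) pair can't be matched
# back to the issuing session. Demo-sized primes only; hopelessly insecure.
import secrets
from math import gcd

p, q = 1000003, 1000033            # issuer's (toy) RSA primes
n, e = p * q, 65537                # public key
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

token = 123456789                  # the holder's credential, as a number

# Holder blinds the token with a random factor r before sending it.
r = secrets.randbelow(n - 2) + 2
while gcd(r, n) != 1:
    r = secrets.randbelow(n - 2) + 2
blinded = (token * pow(r, e, n)) % n

blind_sig = pow(blinded, d, n)     # issuer signs; it never learns `token`

# Holder unblinds, yielding the issuer's signature on the raw token.
sig = (blind_sig * pow(r, -1, n)) % n

assert pow(sig, e, n) == token     # verifies under the issuer's public key
print("valid signature the issuer cannot link back to issuance")
```

Which works as advertised right up until the legal-demand and de-anonymisation caveats above come into play.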
> Jacob Hoffman-Andrews, senior staff technologist at the Electronic Frontier Foundation, told The Register that he took a look at the paper and "from the start it's wildly dystopian."
Indeed.
Proving you are a person is something we are going to have to get used to. AI/automated methods are crossing the 95% barrier and are getting too hard for normal, reasonable folks to tell apart from the authentic. That's text, that's voice, and that's video. Until we come up with a root of trust and actually use it, you can't fully trust anything stored digitally. Systems get compromised, files get altered after they are created... really a dystopian inversion of what we were promised the internet would do for us.
PKI/certificates are what the current tech looks like, and TBH, between this and curated CRLs from a root of trust you 'trust', this combo is something we 'know how to do' and are continuing to improve. Are there problems? Sure. But far fewer issues with what we have now than with some imaginary solution no one has fielded yet. Maybe we could eliminate SPAM texts, emails, and calls using this tech?
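In the spirit of "something we know how to do", a minimal sketch of sign-then-verify with a curated revocation set, using Ed25519 from Python's `cryptography` package. The names and the revocation set are hypothetical; a real deployment would use X.509 chains and proper CRLs/OCSP:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()   # the root of trust you 'trust'
issuer_pub = issuer_key.public_key()
message = b"this was written by a verified person"
signature = issuer_key.sign(message)

def raw(pub) -> bytes:
    """Raw public-key bytes, usable as a stable identifier."""
    return pub.public_bytes(encoding=serialization.Encoding.Raw,
                            format=serialization.PublicFormat.Raw)

revoked = set()  # curated CRL: raw bytes of keys you no longer trust

def is_authentic(pub, msg, sig) -> bool:
    """Accept only if the key isn't revoked AND the signature checks out."""
    if raw(pub) in revoked:
        return False
    try:
        pub.verify(sig, msg)
        return True
    except InvalidSignature:
        return False

print(is_authentic(issuer_pub, message, signature))      # True
print(is_authentic(issuer_pub, b"tampered", signature))  # False
revoked.add(raw(issuer_pub))
print(is_authentic(issuer_pub, message, signature))      # False: key revoked
```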
I considered something like this a while ago: using a lot of soft identifiers for logging into websites.
You could use identifiers like computer name, browser used, tabs open, sites logged into in the current session, and input from the webcam (both face and background) to build up an identity of the person trying to log in. If this doesn't match, then more intrusive options can be asked for... fingerprint, password, voice recognition, etc.
It would create a constantly changing ID of you, and you could specify the accuracy required for different sites. E.g. Spotify might only need low accuracy, but a successful login there would contribute to the accuracy required to check your email, which in turn would give enough of an idea that it is you to log into your online banking. If you needed direct access to a high-accuracy site, then more traditional methods of identification (remember, biometrics are not a password) and a password could be used.
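A back-of-the-envelope sketch of that scoring idea. Every signal name, weight and threshold here is made up for illustration; a real system would need far more care (and runs into the problems the replies below point out):

```python
# How much confidence each matching soft signal contributes.
SIGNAL_WEIGHTS = {
    "computer_name": 0.15,
    "browser": 0.10,
    "open_tabs": 0.25,
    "session_logins": 0.25,
    "webcam_background": 0.25,
}

# Minimum confidence per site, per the Spotify/email/banking example.
REQUIRED_SCORE = {"spotify": 0.3, "email": 0.6, "banking": 0.9}

def confidence(observed: dict, profile: dict) -> float:
    """Sum the weights of the soft signals that match the stored profile."""
    return sum(weight for signal, weight in SIGNAL_WEIGHTS.items()
               if observed.get(signal) == profile.get(signal))

def login(site: str, observed: dict, profile: dict) -> str:
    if confidence(observed, profile) >= REQUIRED_SCORE[site]:
        return f"{site}: logged in on soft signals alone"
    return f"{site}: fall back to password / fingerprint / voice"

profile = {"computer_name": "mybox", "browser": "firefox",
           "open_tabs": 147, "session_logins": ("spotify", "email"),
           "webcam_background": "bookshelf"}
observed = dict(profile, browser="chrome", open_tabs=3)  # two signals drifted

print(login("spotify", observed, profile))  # 0.65 >= 0.3: logged in
print(login("banking", observed, profile))  # 0.65 < 0.9: step up
```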
> You could use identifiers like computer name, browser used,
So, the things the current advertising trackers use? Not sure that counts as innovative (BTW, you forgot which fonts are available, apparently one of the better identifiers, horrifyingly).
> tabs open, sites logged into in the current session,
Yay, not only do we lose even more privacy, but now I have to remember to get "Google Naughty Nurses" open whenever I want to access Spotify.[1]
> input from the webcam (both face and background) to build up an identity of the person trying to log in. If this doesn't match, then more intrusive options can be asked for... fingerprint, password, voice recognition, etc.
Password is more intrusive than face & background recognition pulled from a webcam?
This is some new meaning of the word "intrusive" I've not seen before.
[1] sadly, open tabs is a good identifier for me, 'cos I keep leaving them open, meaning to come back to them Real Soon Now as the weeks go by: 128GB of core is not all roses and loveliness; it leads to the dark side.
Why is that desirable? Start with something that's not soft. The soft factors will not be suitable passwords and they're very likely to break. They won't be that hard for someone to impersonate, and they will be very likely to show false negatives just because someone's computer broke and was replaced or they clicked on one of those Google ads and ended up using Chrome even though they didn't want to. You'd be constantly ringing false alarms and gaining nothing for it.
That's to say nothing of the privacy nightmare this would be. I do not give my biometrics to any random site on the web. I don't tell them where else I browse. If something is medium or higher security, let's stick with the direct methods. If something is low security, let's also stick with direct methods, but we can use a cookie, which I can allowlist, to remember me and keep me logged in. Neither of those is likely to lock me out at an inopportune time. Neither of those will result in well-deserved GDPR fines, as your solution almost certainly would.
A few years ago I could say "pictures or it didn't happen". Early this year I could spot the difference between a real picture of a human and an AI generated one. Now when something doesn't happen the pictures are hard to discredit and when something does happen people are skeptical of the genuine pictures.
> We'll know our disinformation program is complete when everything the US public believes is false.
Given the reams of speculative fiction on the powers of AIs, why did we go ahead and build the bloody things with no idea how to manage the inevitable genie once it was out of the bottle?
Oh, right. Money.
Even Asimov, writing in a time functionally before electronic computers, grappled with how to prove that a person wasn't a robot in 1946 ("Evidence") and wasn't able to come up with an answer - three laws or not.
I don't think AI is an inherently bad idea, but it really needs to be done more carefully than it has been done.
The Ultimate Collection Of Winsock Software
That is still a thing?
Ah, no, I see the name is being used for something completely different now. Although I was glad to see that, unlike other companies trading on an old brand name, their <a href="https://www.tucows.com/about-us/history">company history</a> hasn't tried to completely erase their humble origins.
Sorry, sorry, completely off topic, let's get back to online privacy and deception.
This is just an attempt by the AI companies to show they're trying to do something about the mess they've created. It's too complicated to make work, at least in a democracy, and it doesn't solve the problem they're trying to solve, anyway, which is providing a way to see if something is AI generated (as if a computer program can't be used to sign an arbitrary document if given a key).
It's just a smokescreen to fool regulators into thinking something's being done.
A human, by the time they’ve loaded the page, accepted some cookies, found a log in button, zoomed in to the tiny name/password fields, taken a moment to remember the spelling and then typed and hit enter would take slightly longer than the few nanoseconds an LLM would.
Only a dumb robot would ever complete a CAPTCHA these days.
It's fairly straightforward to degrade performance to resemble a human. Your bot can then be working on millions of other pages while waiting for the human delay to expire.
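A sketch of exactly that in Python's asyncio. The form-filling is a hypothetical stand-in for real page automation; the point is that the human-looking delays cost the bot nothing, because thousands of other pages make progress while each one sleeps:

```python
import asyncio
import random

async def fill_login_form(page_id: int) -> None:
    await asyncio.sleep(random.uniform(2.0, 6.0))        # "reading" the page
    for _ in "username_and_password":
        await asyncio.sleep(random.uniform(0.08, 0.3))   # per-keystroke pause
    print(f"page {page_id}: submitted at human speed")

async def main() -> None:
    # Each page individually looks slow and human; collectively the bot
    # is working on a thousand of them at once.
    await asyncio.gather(*(fill_login_form(i) for i in range(1000)))

asyncio.run(main())
```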
We're asking machines to differentiate between humans and machines. And we're at the point where most metrics a machine can measure, another can fake. And it's only going to get worse.
This seems to be, sort of, the start of a useful think that got abandoned before it was thought through properly. Given the problem is 'how do I tell what has been mass-produced by an AI disinformation farm?', what's being proposed doesn't look like much of a solution to that. Mulling it over, there are three broad categories I can think of that apply here:
1. Information that is generated by AI and the producer/relevant authority wants it to be labelled as such.
2. Stuff that just anyone wants to post without being particularly bothered about whether anyone pays any attention to it or not.
3. Information that the producer wants to unambiguously tag as produced by them.
For the first point, "relevant authority" was put in there to cover scenarios where local law requires that AI content be labelled as such. It would work for people who are interested in following local law but obviously falls flat on its face otherwise. A producer may also want to label something to prove that they have ridden the prompt dragon skilfully enough to get it to cough up something worthwhile, much like a visual artist signing a painting.
The second really, really neither needs nor should require any kind of crypto-authentication shenanigans. If I want to post utter dross like this on the internet, that's between me and my own foolishness, and people should probably treat it with all the respect that deserves. Not a lot, for those not used to British sarcasm.
For the third, that might actually be useful. If I want to be sure that something I'm reading really has been produced by the organisation that's claimed to have produced it, and hasn't been altered in any way, having a handy way of doing so would be helpful. I know that's been possible for decades; it just hasn't spread beyond the niches where the techies find it useful to become a thing that everyone uses.
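For point 3, the decades-old machinery really is small. A reader-side sketch with Python's `cryptography` package, assuming the publisher distributes an Ed25519 public key and a detached signature alongside each piece (all file names hypothetical):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.serialization import load_pem_public_key

with open("publisher_pub.pem", "rb") as f:   # key you already trust
    publisher_key = load_pem_public_key(f.read())
with open("article.txt", "rb") as f:
    article = f.read()
with open("article.txt.sig", "rb") as f:     # detached signature
    sig = f.read()

try:
    publisher_key.verify(sig, article)       # raises if altered or forged
    print("really from the publisher, and unaltered")
except InvalidSignature:
    print("altered in transit, or not from this publisher at all")
```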
Anywho, random ramblings - feel free to point out any and all silliness in the above.
Rosie
> AI is changing the way we should think about malicious actors' ability to be successful in those attacks. It makes it easier to create content that is indistinguishable from human-created content,
If you educated your children in grammar, punctuation and spelling perhaps we'd be able to tell the difference between AI written nonsense and posts by Americans.
> If you educated your children in grammar, punctuation and spelling perhaps we'd be able to tell the difference between AI written nonsense and posts by Americans.
Unfortunately I suspect that horse bolted a few generations ago.
Just a glimmer of scepticism and a skerrick of critical thinking might serve, but that might well require the clarity of an internalized, precise, grammatical language. Plain common sense is, as ever, a bridge too far.
One thought: swearing is likely a fairly human activity, and a coherent stream of profanity is potentially beyond the capacity of contemporary AI (and perhaps US religious fundamentalists*). :)
* failing on coherence generally. :)
But, really, I don't think that proves anything.
Perhaps we should look at this the other way around. We aren't "proving" that we are human, we're "proving" that we aren't a machine. Which shows what a lovely dystopian future awaits us.
Icon, because fiction suggests that when the machines are in charge it doesn't end well.
And the typical way of doing that, for a fee of course, is handing over various official identity documents.
Given that most of these companies give exactly zero shits about the security of the information that they hold, and might at best offer a couple of years of anti-fraud "protection" (while your identity has been compromised potentially for life), the simplest and most logical response is...
Fuck off.
I'm not jumping through your hoop to prove I'm real. Y'all made this mess, you fix it.
That solves neither set of problems. Unless you make them prove identity, you have no way of knowing whether the person that showed up at your office claiming to be X is X or not, so if you issue them some token they can use later, you'll have done nothing other than proving that, at some point, a person was there. Someone who wants tokens for bots can hire a hundred locals and have them walk through your office giving random names and sending the tokens back to base.
Meanwhile, if you're going to solve this problem by collecting lots of identity details, then you don't have to make them show up physically. You can just use the complicated, dangerous, and useless solution these "researchers" came up with. Since you're unlikely to put offices everywhere humans live, the digital solution, while all the adjectives and some much nastier ones would still apply, would at least be implementable while the physical one would have all the same downsides and also not work.
I agree it is not perfect, but it slows down the ability to authenticate in bulk, which means it is less cost-effective for the ne'er-do-wells. What we have to accept is that there's no such thing as a free lunch for either party (the party requiring authentication, and the party being authenticated).
Ok, slightly off-topic, but I need to get this off my chest...
I recently needed some documents apostilled urgently. The FCDO (Foreign, Commonwealth & Development Office) website says you have to do this through them, which takes many weeks to complete. The process involves getting Royal Mail or a courier to pick up the documents, deliver them to the FCDO in Milton Keynes, and then waiting weeks for them to be stamped and returned. Who would I entrust such important documents to? Royal Mail? Not a hope in hell (I can elaborate on this). A courier? (All have disclaimers about loss in transit.) I'd rather use a minicab driver I know and trust, so I paid someone to do a point-to-point delivery. This was refused by the FCDO as they weren't an "authorised" courier. Luckily someone in the queue to have documents apostilled offered to help, who amazingly happened to live just round the corner from me. The upshot was that I was able to get the process completed for around the same cost, but within 48 hours rather than 4-6 weeks.
The big issue here is Who Can You Trust? There is a big difference between an "official" channel that takes weeks and someone who can do the same thing within hours. I complained to the FCDO that people desperate to have documents apostilled could easily be scammed by rogue agents, so the FCDO are actually helping the scammers by taking so long to complete a process which takes hours for someone else to do. I thought twice before handing money to a complete stranger, and there really was no way of knowing if I would be scammed or not, but in the end gut instinct paid off. An exception to the rule that says if something sounds too good to be true then it is.
But really, what is the difference between delivering something using a point-to-point delivery and a delivery that involves festering in a delivery depot with multiple chances of being misdirected or stolen (a Man in the Middle vulnerability)? If existing relatively manual processes can be so flawed, what chance is there for the future?
If your only goal is slowing them down, then you have better ways that won't have stronger impacts on the legitimate users than the illegitimate ones. If I'm a criminal and I want a bunch of verified accounts, then I'm probably planning to make a bit of money out of this. I can bribe some people to go to your office and collect tokens for me; if I'm successful, I'll get it back. Meanwhile, someone who just lives in a rural area will try to go themselves and it will take a lot more time.
There are lots of ways to slow down a transaction. Collect an address, mail something to it, make them enter the number. Get a credit card number, charge something to it, wait for it to post, then return it. In many cases, there is no compelling reason to do either of those things. A physical office visit is worse than both of them in many cases, not that either of those is good.
"It provides for governments – or potential hand-wavy other issuers, but in reality, probably governments – to grant people their personhood, which is actually something that governments are historically very bad at," he said.
Apart from birth and death certificates, passports, ID cards (in those countries that have them), driving licences, social security, etc... In fact, if you wanted an organisation to check that someone was a real, actual, live person, then the government would probably be the best one to do it, and they wouldn't need Microsoft, OpenAI, or the rest of the band of hangers-on who have signed up to this to help them do that.
The EFF meanwhile has a few more paragraphs railing against the government. Is that their default position or something? It's MS and OpenAI that came up with this "solution" for a problem they themselves created in the first place. Perhaps the EFF could comment on that.
To be fair to the EFF, the paper itself does point out that the model would create a readily-abusable concentration of power in the PHC issuers, and the authors say they are "concerned about these dynamics"... but then just hand-wave it away by more-or-less saying the PHC issuers should try not to be naughty. They've come up with a model that strongly protects against abuse by the consumers of the PHCs, but provides no protection at all on the issuing side. It looks like they've only done half the job.
> EFF meanwhile has a few more paragraphs railing against the government
Not against *the* government (whichever one you happen to live under, or would like to live under) but against "governments" - many of which have, shall we say, problematic histories.
>> to grant people their personhood
> Apart from birth and death certificates, passports, ID cards (those countries that have them), driving licences, social security
For example, you might want to add to your list "voter registration".
"Apart from birth and death certificates, passports, ID cards (those countries that have them), driving licences, social security, etc..."
Let me guess. You live somewhere where the paperwork seems to be handled adequately, and you've never had significant problems with yours. Congratulations, but your experience is not that of many other people. Yes, if you need to find a place that has validated paperwork, governments are your best bet, but that doesn't mean they're good. It means that all your other options are even worse.
People manage to get born without getting birth certificates. There are some weird people who see that as a good thing. Often, governments don't notice until many years later when that person tries to get some other documentation and can't prove their identity. As a child, any physical paperwork about me was handled by my parents. What would have happened if they lost it, it was destroyed in some way, etc? Problems, many problems. What happens when one person gets copies of documents and uses them to live as someone else? Chaos. For instance, the real person gets put in a mental institution. That's not entirely due to incompetence. Sometimes, the challenges of establishing an identity are that hard.
That is all in a nice developed country that spends lots of money on those databases. It works even more badly when the government is dysfunctional, the identity database had a bomb drop on it, or a government is specifically trying to delete people from the database. It works badly when people travel without reporting in, either because they didn't feel like it or because it's illegal (there are illegal international migrants of course, but in some countries like China, there can be illegal intranational migrants too and paperwork is messy). Or simply a place where people don't report a birth because they've rarely done it before and don't really see why they need to now.
MS have been trying at this in one form or another for a quarter of a century now. This is just using a problem created by themselves as another step in coming up with an identity management scheme for the whole world, or at least that part of the world which is online.
The fact that some nations would be happy to cede control of this to MS and OpenAI, and that others do identity management so badly that their citizens are motivated to go to MS and OpenAI, does not mean that everyone should have to use them.
What checks and balances are there for those of us living in functioning democracies? Many. What checks and balances do MS and OpenAI have? Only shareholders who want to see the line go up.
The EFF managed to hand-wave the concerns away and start talking about governments, which is completely the wrong idea. They should be more concerned about those who create tools that allow the Internet to fill up with AI slop, which will make it impossible to trust anyone who expresses any idea online, and who, instead of offering ways to identify AI output, want to impose their way of identifying people.
Is there any other purpose to this "Generative AI" except deception? Based on everything that I've seen, its whole reason for being, explicitly, is deception.
Deceiving humans that a human wrote some piece of text, that a human curated content, that a human is answering your question, that a human is behind a decision, that a human is empathizing with you, that a human is doing your bidding, that a human wrote the music, created a video, etc... all without a human doing those things.
The whole point of gen AI is to deceive you: to convince you that you are interacting with a human, or that the thing you're interacting with has a human behind it, when you aren't and it doesn't. The whole point of GenAI is to deceive you!
And then these are the bastards that claim to want to safeguard interactions with 'persons'? May they hit their pinky toe hard against a table leg every 42... no, 21 seconds until infinity!
People get tracked around the web in the most detail possible, but they can't figure out who is a bot and who isn't?
Ceterum censeo GenAI esse delendam! (Furthermore, I consider that GenAI must be destroyed!)
Of course the other comments above are right about the impossibility of doing what these firms want to do. But this would be a good time to think about the requirements for Identity: what do we want from a modern concept of identity?
The first thing, in my view, is to stop it being 1-to-1. There is no reason why any entity (person or otherwise) should be restricted to a single identity. Each person should be able to create identities at will (just like I create a new email address for almost everyone I engage with on email). All equally valid, and not connected in any way except how the person wants them connected. Not just pseudonyms, but complete identities, any of which you can use at any time, for any purpose.
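A sketch of how cheap "identities at will" could be in practice: one master secret, one deterministic but outwardly unrelated identity per counterparty, the keyed-hash version of minting a fresh email address per contact. The derivation here is illustrative, not a vetted protocol:

```python
import hashlib
import hmac
import secrets

master_secret = secrets.token_bytes(32)  # known only to you

def identity_for(counterparty: str) -> str:
    """Same counterparty gives the same ID; no visible link between any two."""
    return hmac.new(master_secret, counterparty.encode(),
                    hashlib.sha256).hexdigest()[:16]

print(identity_for("shop.example"))   # one identity
print(identity_for("forum.example"))  # an unrelated-looking one
print(identity_for("shop.example"))   # reproducible: same as the first
```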
What good would that do? There are places where that's fine, and they already let you do that by using something like an email address as an identifier. There are lots of places that don't want that and won't let you do that, so they'll still use something that's unique per person, and if you take away one, they'll find another one. That's never going to end.
Three problems with that:
1. Most things I connect to never see my MAC address.
2. Most things that do see my MAC address will happily accept a randomized one, including a continually randomizing one that changes by the hour. Those that won't work with that, for example corporate networks with MAC allowlists, will need one that doesn't change but don't need it to be globally unique or the one specifically issued by my manufacturer.
3. Most computers and phones allow me to change the MAC address manually if not automatically (generating a valid randomized one is trivial; see the sketch after this list). That doesn't generally apply to IoT stuff, but a lot of the ones that I use will let me do it.
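The sketch mentioned in point 3: a valid randomized MAC. Setting the locally administered bit (0x02) and clearing the multicast bit (0x01) in the first octet is what makes a made-up address legitimate to use:

```python
import secrets

def random_mac() -> str:
    octets = bytearray(secrets.token_bytes(6))
    octets[0] = (octets[0] | 0x02) & 0xFE   # locally administered, unicast
    return ":".join(f"{b:02x}" for b in octets)

print(random_mac())  # hand this to the OS/NIC of your choice
```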
From authenticating as human, then handing over control to an AI to post to its heart's content? I presume my authentication would be revoked at some point, which would be annoying for me if I couldn't easily re-authenticate. But there are many people who would accept money in exchange for "trashing" their online identity in that way, so you'd have an endless supply of such throwaway "authentic humans" for AIs to use as so many Guy Fawkes masks.
I'm ignoring ALL the other potential issues with this, and just trying to see if it is even possible for this to solve this one particular problem.
I guess if having my identity authentication revoked would automatically remove all my posts, at least the "damage" of AIs masquerading as humans would be limited. But what happens if I'm falsely accused of posting some AI stuff, or I'm tricked into it (i.e. maybe I share a link that turns out to have been written by an AI)? Do I lose my "write access" to the internet so I can only read stuff for some period of time until I can re-apply? That seems like something that griefers and trolls would love to inflict on others - the internet equivalent of swatting, if you can get a high-profile internet personality kicked off the internet by falsely accusing/implicating them of being an AI!
Basically there is no method for this to work at all. If the identity tokens are used at all, then someone will get them. They might buy them. They might steal them. They might get an issuer to generate lots of new ones. It falls into the same challenge that has hampered all cryptographic key-based systems since we've had them: key management is hard, especially when the users don't care. All the potential harms you point out are realistic, and there are tons more available. For limited groups and limited uses, this could work as well as anything else, but when you try to apply it to the whole internet, it breaks almost immediately.
Why the holy &@$* would anyone who isn't part of some stupid Generative AI cult agree to this?
If you're not an AI, you're going to have to prove you're actually human by signing your output to prove your personhood?
I see three flaws with this plan, as described.
1. What stops me, as a person, from using my "personhood credentials" to sign Generative AI output and claim it as my own? Thereby entirely defeating the stated aim of this scheme in the first place?
2. Having been given a set of "personhood credentials", how am I supposed to make use of them, and keep them safe? As a tech-savvy individual I may be able to make sense of this, but the general public? Content creators who are not computer experts? This is unreasonable.
3. As I have already stated, why should everyone not engaged in trying to drive down the price of creative works through the use of Generative AI have to pay the price for fixing the problems created by the use of Generative AI?