"The dialogue is therefore not a useful one unless you respond by turning the feature off."
Then, for me at least, the dialogue would be very useful. That is, if I ever wanted to use Microsoft Photos.
Microsoft has begun rolling out an update to the Photos app in Windows 10 that prompts you to confirm "all appropriate consents from the people in your photos and videos", in order to use facial recog to find snaps of your friends and loved ones. The feature itself is not new. Photos has attempted to use facial recognition to …
The company has also come up with a great counter-example of intuitive UI by presenting a confirmatory dialogue that makes it unclear whether you should click Accept or Decline to disable the feature.
It seems this is official confirmation, if any were needed, that designing an understandable, useful UI is now a lost art down at Redmond.
Instead of this dialog being waved through in review, what should have happened is that the person responsible was immediately strung up by the balls by Mark Russinovich from the highest lamppost outside One Microsoft Way, while Raymond Chen stood next to them shouting, "and if the rest of you fuckers don't fucking learn the Microsoft Windows User Experience book, published in 2002, inside out, then you're fucking next".
Their designers take great care to ensure that the end user is misdirected into making the "right" choice, not the choice the user actually wants to make. The real objective is to lead users to unknowingly accept terms that are unfavorable to them but very attractive to the corporate interests that control the process.
I was naive enough to think this was on-device facial recognition like Apple's Photos and just a case of bad UI design, but you're right, if they're misdirecting people like this then farmed out to the mothership it is.
The company has also come up with a great counter-example of intuitive UI by presenting a confirmatory dialogue that makes it unclear whether you should click Accept or Decline to disable the feature.
Fully compliant with the GWX User Interface Guidelines, then.
Yeah, the second dialog would have appeared in the old UI guidelines among the examples of how NOT to design a confirmation dialog. YES/NO were the correct answers to a "Disable?" question. But it looks like the new UI is all about design and transparencies, not usability.
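For what it's worth, here's a minimal sketch (Python/tkinter, nothing to do with the actual Photos code - the dialog text and button labels are made up for illustration) of the point: label the buttons with the action they perform, so neither one can be misread the way Accept/Decline can.

```python
# Minimal sketch of an unambiguous "Disable?" confirmation dialog.
# Illustrative only; wording and labels are invented, not Microsoft's.
import tkinter as tk

def confirm_disable() -> str:
    """Ask whether to turn off face grouping, with verb-labelled buttons."""
    result = {"choice": "keep on"}          # default: feature stays enabled
    root = tk.Tk()
    root.title("Photos")

    tk.Label(root, text="Turn off the people setting?\n"
                        "Existing face groupings on this device will be removed.").pack(padx=20, pady=10)

    def pick(choice: str) -> None:
        result["choice"] = choice
        root.destroy()

    buttons = tk.Frame(root)
    buttons.pack(pady=(0, 10))
    # "Turn off" / "Keep on" cannot be misread; "Accept" / "Decline" can.
    tk.Button(buttons, text="Turn off", command=lambda: pick("turn off")).pack(side="left", padx=5)
    tk.Button(buttons, text="Keep on", command=lambda: pick("keep on")).pack(side="left", padx=5)

    root.mainloop()
    return result["choice"]

if __name__ == "__main__":
    print("User chose:", confirm_disable())
```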
And let's discuss the first one having "Accept" as the default setting - which is not "informed consent" under GDPR - and under GDPR, biometrics are considered a "special category of personal data" that requires both a special legal basis for processing and an accompanying data protection impact assessment.
It will be interesting to know whether Microsoft trying to offload all responsibility onto the user stands up to scrutiny when consent is not fully informed and opt-in - especially if its telemetry sends out data about the face recognition results....
"It does if MS is keeping, linking or useing the data anywhere else."
Yeah, except, MS is not keeping, linking or using (sic) the data anywhere else. The processing is local.
I read the article thoroughly and this is mentioned a couple of times:
'Facial groupings "are not accessible beyond the context of the device file system"' and 'AI used on your local device to help tag photos'
My first comment was 100% factual yet people are downvoting "because Microsoft". Not that I'm surprised.
"Facial groupings "are not accessible beyond the context of the device file system'" could mean the tags are not uploaded to Onedrive but the work to generate the tags could still be done on Azure somewhere.
The fact that they're speaking in legalese and the UI is confusing aren't reasons to be optimistic.
"'Facial groupings "are not accessible beyond the context of the device file system' and 'AI used on your local device to help tag photos'"
Look carefully at what the legalese doesn't say. Note that your quote doesn't say the processing is local. It says something quite convoluted which might lead you to think it does, but it actually doesn't.
Maybe - but it being part of Windows, it then depends on who uses it and for what. What matters is the user and the reason for processing, not which application processes the data.
If you run your local club and store photos of your meetings there, you fall under GDPR rules. If you're a teacher and store your pupils' photos there, you fall under GDPR.
The very fact MS asks the user to obtain consent shows it knows people will end up in such situations, and is trying to avoid facing the consequences in court....
Under GDPR, you need to record consent, and that consent needs to be understood.
Photographs of themselves I would not class as Special Category. The post-processing of the photograph to create a record that is used to create a suggestion would be special category - the linking of features to a person - and with your confirmation. This requires 2 forms of legal basis (as noted by LDS). Assuming that photographs are post-processed, that data and processing (and then the photo) should fall under the category. Microsoft needs that basis - not the user, I would suggest.
Re consent -
Does Microsoft store the images, post-processing, for its own use? Will MS use the data to 'improve' the user experience of confirmed 'hits' of your pics with those of your friends, others - or the security apparatchiks? We don't know - how can someone consent to it?
With this feature, the way this reads, one person who did not consent would mean that the system would throw a canary - or does it process the picture regardless, but not show you the matches?
As noted, OneDrive automatically stores pics you take as you take them. Does it process them? Do we know what it does?
Generally, consent is one of the most problematic legal bases for processing under GDPR - mainly because you must keep a record of it. That does mean it's a right royal PITA!
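To make the record-keeping point concrete, here's a rough sketch of the minimum such a consent record would need to capture - who consented, to what, what they were shown, when, and whether it was later withdrawn. Field names are made up for illustration; nothing here is taken from any Microsoft product.

```python
# Illustrative consent record for GDPR-style record keeping (assumed schema).
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    data_subject: str                        # who gave consent
    purpose: str                             # the specific processing consented to
    wording_shown: str                       # the exact notice the subject saw
    obtained_at: datetime                    # when consent was given
    withdrawn_at: Optional[datetime] = None  # consent must be as easy to withdraw as to give

    def is_active(self) -> bool:
        """Processing is only covered while consent has not been withdrawn."""
        return self.withdrawn_at is None

# Example: recording consent for on-device face grouping of one person's photos.
record = ConsentRecord(
    data_subject="Jane Doe",
    purpose="facial grouping of photos stored on this device",
    wording_shown="Allow Photos to group photos of this person by face?",
    obtained_at=datetime.now(timezone.utc),
)
print(record.is_active())  # True until withdrawn_at is set
```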
Far more questions than answers!
Probably best not to use it. Or OneDrive (or FB/Google, for that matter) for happysnaps.
I think similar problems exist with Google/Chrome: on an Android phone, if you send an ordinary MMS, the photo you want to send is uploaded to Google Pics and presumably kept there, as well as a copy being sent to the recipient of the MMS.
I have no account set up with Google Pics, so no access to see what they have or whether it is possible to delete the few pics I have sent.
I wonder if those pics are used as part of Google's efforts to profile both me and the recipients of my messages. Now I have alternative means for sending images.
Photographs of themselves I would not class as Special Category. The post-processing of the photograph to create a record that is used to create a suggestion would be special category
I'd class them as special, especially if they're 'special' pics not intended for public performance. But then image rights have always been a legal minefield, e.g. an infamous case where a German gentleman was filmed at a racing event with someone who was not his wife. Which resulted in some fun around expectations of privacy in public versus in private. But from the article...
...that prompts you to confirm "all appropriate consents from the people in your photos and videos", in order to use facial recog to find snaps of your friends and loved ones.
find where? If that's in private space, then maybe it's ok. So copy all your ex's social media profile and then find pics of your ex, and possibly not-so-loved ones. So shades of this-
https://en.wikipedia.org/wiki/Nosedive_(Black_Mirror)
and other Black Mirror episodes that should have been viewed as warnings, not marketing roadmaps. One solution to the consent issue would of course be to use facial recognition to ID everyone in an image and automatically send them opt-out consent forms. If MS is storing biometrics, though, I'd very much doubt it can push obtaining consent for any usage onto the users. Sadly, it's one of those technological genies* that is unlikely to get put back into its bottle.
*or vikings. I think there's still an injunction against putting a name to that 'famous' face.
I suspect there's a very interesting legal point here around what happens if I complain to a data protection registrar that someone has used MS Photos without my permission and then serve a notice on Microsoft to give me the details of where they've misused my data.
I apologise - I would have made the last sentence clearer if I didn't already need some paracetamol and a long lie down just thinking about it.
One issue that this touches on is the necessity of 'model releases' for photographs taken in many settings. This is a tricky situation for photographers, as it is not always clear when they are needed. A good rule of thumb is that unless the photo is taken in a very public setting (where a person would have absolutely no expectation of privacy) you must get a model release to show the photo. In a public setting it can get tricky as to when you need a release, as it can depend on the nature of the photo, the celebrity status of the person, and the actual location. Paparazzi are well known for running right up to the legal edge of not needing a model release.
Facial recognition and tagging of photos taken by others probably needs model releases for everyone in the photos, though this is not an area that has been tested in court. Slurp's warning box is an attempt to avoid legal responsibility, as they are the deep pockets in a potential lawsuit. Given that most people do not understand the finer points of the law when it comes to model releases - if they are even aware of the need for one - this seems like a 'WTF are you babbling about' to most.
Model releases are forms that say the photographer can display one's image as the photographer sees fit, with or without any payment to the person. There is standard wording for them that will stand up in court.
Model releases are usually needed for commercial use of someone's image - i.e. usually they are not needed for news reporting and the like - but what counts as commercial use is sometimes complex to work out.
Model releases can also state what are the permitted uses of someone's image (i.e. a model may not want his or her image used for a given class of products).
Still, model releases don't cover biometric data - this is a wholly new class of information, as it can "uniquely" identify an individual (OK, we commentards here know how many systems fail with small changes, but still, that is the aim), so in laws like GDPR it has far stricter requirements.
I don't think a plain model release which doesn't explicitly cover biometric data would be valid.
Still, model releases don't cover biometric data - this is a wholly new class of information, as it can "uniquely" identify an individual (OK, we commentards here know how many systems fail with small changes, but still, that is the aim), so in laws like GDPR it has far stricter requirements.
They can and arguably should. But first define 'biometric'. Is that the digital image stored in my passport, or a hash that could uniquely identify me from some standardised (or proprietary) face mapping system? But model releases can be pretty broad and licence use of a subject's likeness - store, publish, manipulate - and it could possibly end up being used to train facial recognition systems if you're a well shot celeb. But that's one of those areas where the higher up the food chain you are, the more likely you are to have a more restrictive release, and access to lawyers who can navigate the minefield.
Again it's a genie long escaped, i.e. Facebook encouraging (OK, demanding) people to use real names, and to tag photos with the names of people who are in them. Which has gone on for over a decade.
Why? Model releases are specific to specific images; they don't give you broad rights over a person. With a model release you are not allowed to disclose any other sensitive and private information you may discover in the process of making the images.
GDPR classifies biometric data as sensitive information, and explicitly limits the scope of its processing.
There's a reason why Facebook face recognition can't be used in the EU.
Biometric data are data that can uniquely identify you, however they are computed. Biometric data *are not* a photo. It's the processing of one or multiple images that creates them, and you'd need explicit consent to obtain and use them. At least in privacy-conscious countries.
Why? Model releases are specific to specific images; they don't give you broad rights over a person. With a model release you are not allowed to disclose any other sensitive and private information you may discover in the process of making the images.
They can do. So contract a model to be the 'face of...' and that would probably stipulate how that model's likeness can be used for the duration of the contract, i.e. non-compete clauses. But that's more a combination of contract and release. Otherwise, a standard release could give you pretty broad rights over the likeness from that session, and permit wide usage and the creation of derivative works. If the model has enough clout, then they may get to select and specify specific images and usage; otherwise the release is usually agreed and signed pre-shoot. And usually my first shot would be of the model holding the signed release.
As for other sensitive and private information, disclosure may be required - especially things like age and ID verification if the images are in any way adult. Or, as it's a contractual arrangement, basic personal details like legal name, address, payment details etc. Stage names may get used publicly, but behind the scenes, personal info is required, and also required to be protected per GDPR etc.
But that's business, i.e. commercial usage & restrictions per GDPR etc. Where it gets a lot murkier is when personal and commercial get blurred, i.e. MS & others doing biometrics and/or hoarding personal data from unsuspecting private individuals.
My intent, as an advanced amateur and occasional pro photographer, was to highlight that it is not always easy to determine when proper permission to display a photo is required. Also, the general public is not even aware of model releases or any of the legal nuances. As the model release is a contract, the photographer has certain legal obligations, as spelled out in the release. As someone else pointed out, the higher up the celebrity food chain the subject is, the more detailed the release will likely be.
You take my picture in public, as part of a group scene (say at a beach, or a crowd crossing the street, or as a spectator in a packed sports venue), fine. You PUBLISH that photo, not so fine. You publish that photo and make money, where's mine? You publish MY photo on Facebook, you have just signed your own death sentence, mate...
There seems to be a huge amount of confusion in this thread about this whole issue. Model releases are a vehicle for avoiding the need to pay royalties, and as such they are purely contractual and have no relation to privacy. Data protection is human rights law, and aims to ensure the data subject has control over the use of their personal data (actually the inverse of the intent of the model release, the object of which is the photo subject relinquishing control over the image). Consequently the two are not really comparable.
By accumulated opinion (no test case having yet been brought) a photo constitutes personal data under the European GDPR if:
[1] it is processed by a business (not by a private person)
[2] it identifies a data subject or can identify a data subject in association with other data held by a business processing it as data
Also by precedent (in the UK at least), there is no expectation of privacy in a public place (which is probably why the police currently think it's OK to use facial recognition on the street).
Consequently, individuals imaged incidentally in photos of general scenes or in groups are unlikely to be considered data subjects unless processing is performed to identify them as individuals and that processing is performed by a business.
If such processing is performed, the party authorising it is responsible, not the provider of any tool for performing it. So in this case at first sight Microsoft is probably right. The big issue is then whether they perform any processing themselves on the results of the identification process. If so they are responsible for that processing.
However, considering the wider human rights context, an interesting question is posed by posting photos including recognisable images of others on social media, as, inter alia, the right of that other not to interact with the said social media (both the right to a private life and the right to hold and have respected an ethical position or belief) could be infringed. This is particularly an issue for those not signed up to a social media service, as unless a name has been attached to a profile accumulated by it from third party submissions, it is likely to be impossible to exercise the right of access, as the name will probably be the primary key used for discovery. Thus, despite the profile constituting personal data and the profiled individual being de facto a data subject within the meaning of the legislation, there will probably be no way for said data subject to exercise their rights or obtain redress.
This is particularly an issue for those not signed up to a social media service, as unless a name has been attached to a profile accumulated by it from third party submissions, it is likely to be impossible to exercise the right of access, as the name will probably be the primary key used for discovery.
Agreed, although the subject should be able to exercise the right of access. Doing that may result in having to give the social media service even more personal information. So for a number of years, I happily existed on Facebook under a pseudonym. Then a friend tagged me at a party with my legal name, which allowed Facebook to link the profiles, and a search for me threw up my pseudonym. Fun while it lasted. But then to verify 'me' in either guise, most social media companies want scans of things like passports or driving licences, which of course means giving them even more personal data. Handy if you're, say, Facebook and looking to get into e-gov 'official' ID verification schemes, which conveniently also allow them to add a lot of value to the personal data they already hoard and trade.
(Oh, and perhaps a bit harsh on releases. It's not so much dodging royalties (or shouldn't be), but specifying when royalties may be due and how much. Again it's one of those minefields where the unwary can be abused, or end up with a lot of legal headaches.)
I recall from a while back that Apple's Photos application did facial analysis too, to fill the "People" tab, and there was no real way to kill it off (it used to be a sub-process you could zap, but that ability has long since vanished).
My problem with this sort of malarkey is simple: I. do. not. want. it. I don't care if it's Apple, Microsoft, Oracle, IBM (etc etc), if a user wants analysis done on any of their data, the user should make that choice before it is even started (not afterwards), the user should be in control of what happens with that analysis and the user should be able to zap the results and return to a pre-analytics state. That goes for images, that goes for email, that goes for spreadsheets, web use - you name it.
The user pays for the system and resources such as power and bandwidth, and often for the OS as well, so they should have the choice. I know that for the bigger companies users are seen more as cows that they can just keep on milking for profit, but I think it's time users start pushing back - preferably by means of regulation. I have seen what comes of promises - self-regulation very much isn't. I'm no fan of regulation, but it appears we cannot count on a bit of self-control from providers.
I exclude from that the so-called "free" providers because that's their game; for those I would love to see a ban on the word "free", because it's the biggest online lie going. Users pay with personal data and exposure to marketing, so "free" is demonstrably false and misleading.