Nice example but..
.. how do we know that's not Nicolas Cage with John Travolta's face on?
Psychology researchers at Glasgow University say they have increased the accuracy of automated face recognition to 100 per cent. If the claims are true, this development will have far-reaching consequences for privacy and security in modern society. Mike Burton, Professor of Psychology at Glasgow, and lecturer Rob Jenkins say …
The current prime minister is Tony Brown Thatcher.
There, I've added in stale, out-of-date data to obtain the average Prime Minister. Is it made any more accurate by adding in the stale data?
I don't look like I did 10 years ago. If you find a face that looks like me from 10 years ago, then it's someone else, probably a younger person, at least 10 years younger.
"Burton and Jenkins claim to have achieved good results using compilations of snaps culled randomly from the internet in many cases"
Not sure that's very representative of CCTV, unless they were using "scene shots" from the web, and even then the lighting tends to be very good compared to CCTV.
Maybe you need to pair this with motion-tracking, autozooming, night-vision-enabled, multi-camera CCTV to achieve the full "eye-o-sauron" effect.
Surely easier to just subdermally RFID the populace and be done with it.
John
AFAIK, we're still not at the stage of 100% accurate OCR. We're close, but not 100%. Just scan an older book and see how any OCR program still throws the occasional brainfart. And that's, you know, black letters on white paper.
Add to that the fact that even humans occasionally mis-recognize faces. Think of all those "oh, I thought you were someone else" or "oh, I didn't recognize you without the moustache" moments.
So a program can do it 100% accurately, eh? Even in bad lighting, at weird angles, wearing sunglasses, with a new haircut, having put on some weight, etc.? Because that's the kind of stuff it will have to deal with in practice.
Heh. Methinks some marketing guys should be sent to a school of game design or creative writing or such, to learn how to keep the bullshit from tripping everyone's suspension of disbelief.
I've been worried about the facebook tagging system for a while, you are essentially going out of your way to 'teach' a system what your face looks like from numerous angles under different lighting conditions. It seems to me this information could be very desirable to some organisations/governments. What's to stop facebook selling off this information?
Is this relatively safe from abuse, because they'd have to go looking for the photies to average, and therefore we can guarantee that it's only going to be used in a focused search for given individuals?
Or is it a scary scary Big Brother moment as we realise they can have automated bots trawl the web for pics associated with your name to produce an average face for everyone?
How useful would it be to exchange photos between people to "poison" the auto-searches? Or to digitally manipulate actual photos of yourself so that a human eye wouldn't notice the difference, but the more systematic auto-average would choke/stumble/poison itself on them?
Tim Cootes and co-workers have long been working on obtaining not just an average, but a much fuller parametric statistical description of appearance, called an active appearance model. It does NOT yield 100% accuracy, but can separate parameters which identify persons from those that identify expressions within the same person.
I think the implications are less profound than the authors claim. They average portraits, which are generally near head-on, and very rarely from above or below, CCTV images tend to image people from above, yielding very different images on average. An active appearance model would be better suited to deal with that.
"producing a composite average face for a person, synthesised from twenty different pictures across a range of ages, lighting and so on."
Surely the difficulty will be getting the crims to give 20 photos of varying ages to the cozzers.
Or perhaps the endless galleries of plebs on MySpace/Facebook might prove to have a use after all.
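The averaging trick described in the quote is, at its core, just a pixel-wise mean over aligned photos. A minimal sketch (assuming the hard part, warping every face to a common alignment, has already been done; the shapes and noise levels here are invented for the toy demo):

```python
import numpy as np

def average_face(aligned_faces):
    """Pixel-wise mean of pre-aligned greyscale face images.

    `aligned_faces` is assumed to be a list of equal-sized 2-D
    arrays, one per photo, already warped to a common shape.
    """
    stack = np.stack([f.astype(np.float64) for f in aligned_faces])
    return stack.mean(axis=0)

# Toy demo: 20 noisy copies of one "face" average out the noise.
rng = np.random.default_rng(0)
true_face = rng.random((64, 64))
snaps = [true_face + rng.normal(0, 0.3, (64, 64)) for _ in range(20)]
avg = average_face(snaps)

# The average sits much closer to the underlying face than any one snap.
err_avg = np.abs(avg - true_face).mean()
err_one = np.abs(snaps[0] - true_face).mean()
```

The point of the toy demo is the noise-cancelling effect: idiosyncrasies of individual snaps (lighting, expression) average out, leaving the stable structure — which is the paper's stated rationale for compositing twenty pictures.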
....of a serious "egg on face" moment from a team using neural nets to automate photo reconnaissance analysis.
They got to 100% accuracy showing their rig pics of woods with camouflaged tanks in and pics of woods without any camouflaged tanks in. Easy, innit? Come the great, glorious day demoing their product to the top brass, they fed it a whole new series of pics provided by the sceptical military types in question and it got the whole lot wrong.
Their (very large) test data set was two sets of pics of the same pieces of terrain. On day one it was just terrain and on day two there was a military exercise going on, hence the tanks. The problem was that day one was bright and sunny and day two was overcast. They'd spent a shit-load of cash developing a system that could spot a picture of trees taken on a dull, drizzly day with 100% accuracy.
Moral: If you're getting 100% accuracy out of your testing, the most common explanation is that your testing is faulty.
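The tank failure mode above is easy to reproduce: when a confound (here, brightness) perfectly separates the classes in the test data, almost any detector scores 100% without learning anything about the target. A toy sketch with synthetic data and an invented threshold:

```python
import random

random.seed(1)

# Synthetic "images": day-one (no tank) pics are sunny, day-two
# (tank) pics are overcast -- brightness is a perfect confound.
train = (
    [{"brightness": random.uniform(0.7, 1.0), "tank": False} for _ in range(50)]
    + [{"brightness": random.uniform(0.0, 0.3), "tank": True} for _ in range(50)]
)

def classify(img, threshold=0.5):
    # "Tank detector" that has really just learned the weather.
    return img["brightness"] < threshold

train_acc = sum(classify(i) == i["tank"] for i in train) / len(train)

# On new data where tanks appear in all weathers, it's back to chance.
fresh = [{"brightness": random.uniform(0.0, 1.0), "tank": random.random() < 0.5}
         for _ in range(1000)]
fresh_acc = sum(classify(i) == i["tank"] for i in fresh) / len(fresh)
```

`train_acc` comes out at 100% on the confounded set, while `fresh_acc` collapses to roughly chance — the same pattern the tank team saw on demo day.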
Or does:
"This simple procedure increased the accuracy of an industry standard face-recognition algorithm from 54 per cent to 100 per cent"
make utterly perfect sense to anyone else in suggesting that in tests they had 54% accuracy before using the average face and 100% after using it?
People and computers are far better at recognition when they have seen a face a few times in different conditions - this is what has upped the recognition rate with this system, even if it's otherwise unremarkable.
I hope the 100% figure is something that escaped via PR spin or journalistic excess, because the authors certainly don't mean you can have a 100% accurate system.
What you have to remember is that an accurate system needs in its database many different photographic examples of a face taken at different times from different angles and with different lighting. You can then combine these quite crudely to extract the invariants from which recognition can be made, using your favoured method.
But even if you do have multiple images of your terrorist suspect or ASBO-touting hoodie, if your system is trying to match a huge number of inputs (like at an airport) with a small sample set (your few dozen suspects) it's not going to work very well.
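The mismatch between a huge input stream and a tiny suspect set is base-rate arithmetic: even an optimistically accurate matcher produces alerts that are overwhelmingly false. All figures below are invented for illustration:

```python
# Hypothetical airport scenario: screening everyone against a small
# watch list with an optimistically accurate matcher (assumed rates).
travellers = 100_000          # faces scanned per day
suspects_passing = 2          # watch-listed people actually in the crowd
true_positive_rate = 0.99     # matcher catches 99% of real suspects
false_positive_rate = 0.001   # flags 0.1% of innocent travellers

hits = suspects_passing * true_positive_rate                          # ~2 real alerts
false_alarms = (travellers - suspects_passing) * false_positive_rate  # ~100 spurious

# Probability that any given alert is actually a suspect:
precision = hits / (hits + false_alarms)
```

With these (generous) numbers, roughly 98% of all alerts are false alarms, which is why a small watch list against a huge input stream "is not going to work very well".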
I can see why most correspondents are horrified by this research, but the people involved really are trying to make the most of what's already happening. Most convictions based on CCTV evidence come from confessions by suspects who recognise themselves in CCTV footage, without realising that they could never be identified to a court's satisfaction on the basis of a few blurry pixels alone. What Burton and his team have shown elsewhere (and the police have difficulty believing) is that even if your CCTV is a professional photographer taking studio-quality portraits, recognition based on a single exemplar is still poor.
Claiming 100% accuracy is clearly rubbish.
CCTV quality images are rarely very clear, and humans often find it difficult to tell who is who. Even if you have the best cameras available, what are the chances someone will look right at one and give you a nice full on head-shot?
This sounds exactly like the lies initially spouted about DNA being 100% unique and infallible.
Some people wear makeup. Some people grow beards or change their hair style/colour. Plastic surgery is pretty popular these days.
Let's say there are two identical twins. The maximum accuracy for telling them apart would then be 50%. 0.2% of the population are identical twins.
...but I can't respect most of its readers anymore - readers, who after reading, 'clearly referred to 100% accuracy in their tests, not 100% real-world accuracy' couldn't wait to fire up their keyboards and blurt out, "HUH HUH 100% ACCURACY IS IMPOSSIBLE LOOK I AM VERY SMART AND SMARTER THAN ANY SCIENTIST STUPID NOT 100% HA HA!"
If you'll excuse me, I'm going back to youtube for a dose of intelligence.
This technology may work well for a handful of visually distinct faces; however, any talk of facial recognition must consider that it will be applied to millions of faces. Frankly, there are not *that* many visually distinct faces.
Take any given set of facial parameters, let's say some eye-nose-ear spacing ratio. The more people a system (or even a human being) needs to distinguish between, the more resolution is needed in the measurements. At any given resolution (e.g. 0.05 mm), the system can mathematically differentiate between only so many possibilities.
So the only mathematical solution using the same ratios is to increase the resolution (e.g. 0.001 mm). However, this will not yield more accuracy, due to the impossibility of capturing such small facial details consistently. A small rotation would throw the measurement off; even a pulse could change it.
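The resolution argument can be put in rough numbers. With an assumed measurement range and repeatability (both figures invented here), the count of distinguishable values per measurement is just range over resolution, and the pigeonhole principle guarantees collisions once the population exceeds the number of bins:

```python
# Hypothetical single measurement: inter-pupil distance, say.
measurement_range_mm = 25.0   # assumed spread across adults
resolution_mm = 0.5           # assumed repeatability from a photo

distinct_bins = measurement_range_mm / resolution_mm   # 50 distinguishable values

# Several independent measurements multiply the bins...
n_measurements = 4
combinations = distinct_bins ** n_measurements          # 6.25 million

# ...but any population larger than the number of bins must contain
# people who measure identically (pigeonhole principle).
population = 60_000_000
guaranteed_collisions = population > combinations
```

Even four independent measurements at this (assumed) resolution cannot uniquely separate a country-sized population, which is the poster's point about visually distinct faces running out.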
A computer could be at least as good as a human given the same input, but 100%? I remain skeptical.
I'd be interested in learning how many distinct faces a human being can recognize.
Reading the actual paper reveals some, but not all, details.
They took 500 photos of 25 male celebs (removing 41 which were already in the database) and tried to identify them using the MyHeritage FaceVacs system, which includes pictures of 3,628 celebs. They got an initial *hit rate* of 54% for all 459 images. They then reduced the dataset to 25 images via their averaging method and got a hit rate of 100%.
Problems:
Data reduction - the two tests are not comparable (25 unique images vs 459 non-unique)
Sample size - a test set of 25 is pretty small
Information - not a lot of detail is available (e.g. the criteria used to select test images)
Real world - to be able to identify a 'perp' you need 20 different pictures, taken under different lighting conditions and at different ages, to combine
Real world 2 - women get off scot-free!
Overall, a potentially interesting result, but the lack of detail and the small test set make it difficult to assess its significance. Given that you need several pictures of the person you're hoping to identify, it doesn't seem terribly useful in the real world.
Mate, and what was the point of _your_ blurt then? So you too can hear yourself stroking your ego and polishing your statue about how much smarter you are? :P
And while I'm at that, you remind me of something that already got me sick and tired of reading Slashdot: the wave of "nooo, don't think for yourself, don't comment, you're not worthy. Just trust the High Priests... err... scientists" posters.
Here's a couple of random thoughts for you:
1. RTFA before jumping in to berate others. The summary linked to does _not_ make any mention of, basically, "yeah, well, it was 100% in our one test with photos of that one guy, but we'll have to see how good it is in practice." Capisci? They're not saying that it's only their test case that's badly designed and doesn't catch problems. They _do_ make the broad claim that their technique raises the recognition rate (note: without any other caveats or qualifiers) to 100%.
I mean, berating people for posting is already pretty pretentious as it is. Berating them because reality doesn't fit your wild assumptions is outright silly, you know.
2. "Scientist" is a term very mis-used today. _Some_ scientists know their stuff, but _some_ others are just signing their name on any propaganda piece that a PR agency asks them to sign. And some just make plain old mistakes. And some are massively mis-represented by a department starved for grant money, or by the press. Etc. Some ability to think critically for yourself is crucial in filtering the real science from the BS.
Believing everything that mentions "scientists" is as dangerous as, if not more dangerous than, rejecting science as a whole.
3. You don't need anyone's royal seal of approval to use your own brain. If you want to comment on something that the High Priests... err... scientists said, please do. That's why you have that round-ish thing atop your neck. It's not there just so it won't rain down your throat, you know ;)
Believe it or not, Galileo didn't have any special qualifications when he went against the cosmology of the great Aristotle. Einstein was just a nobody working at the patent office when he thought he could improve on Lorentz's theories. Etc.
No, I'm not saying I'm an Einstein or Galileo. But the principle remains: you don't need anyone's formal approval to start thinking about it, even if it says "scientists." God or Evolution (pick whichever you wish) gave you a brain. Use it.
As the chief scientist of a company developing face recognition technology for the banking industry, I have to say I feel that these published results are very misleading. This paper demonstrates little or no understanding of what is actually entailed in the evaluation of real-world biometric systems. Tests with so few participants are not statistically valid and should not be used to infer the performance of a system on a larger database. I am actually shocked that they made it into a well-known peer-reviewed journal such as Science with this result.
Leaving aside the question of whether 100% recognition is achievable in reality, the truth is that achieving *near* 100% recognition on a set of 25 people is not impressive and can be done with any of a variety of techniques. The recent FRVT results demonstrate that several software platforms can achieve near 100% performance over much larger databases than this paper discusses.
IMO, the challenge in face recognition today is not differentiating faces among a small group of people, it is dealing with very large databases of people (millions) using images collected in a completely uncontrolled environment. I don't see this technique being useful particularly in this endeavor. Also, as one poster has already mentioned, the technique is related to the active appearance work of Cootes but less powerful. The technique of simply averaging patterns to reduce noise is well known and has been used previously in many applications.
In sum, I don't believe that this work is either novel or particularly useful, and I don't believe the result will apply to more challenging applications such as surveillance.
time to invest in companies importing head-covering accessories, hoodies, and sunglasses, not to mention simple paper filter masks, a la the SARS outbreaks.
once the credulous many catch wind of this research, they will go running for cover, literally. like P. T. Barnum said, there's one born every minute.
the icon is what my composite picture looks like.
I completely agree. I work in image analysis and computer vision as a scientist (which seems to be a bit of a swear word to some posters) and have seen several good studies entailing MILLIONS of people to validate iris-scans (by John Daugman and coworkers). They built a solid mathematical and experimental validation of their work, and it does work in practice. Face recognition is going to be FAR harder, and IMO the only reason this work ended up in Science is that they got reviewers from the wrong field. Try placing this in IEEE Trans Pattern Analysis and Machine Intelligence and you would be shot down in flames by the reviewers.
BTW, some people running to the defence of scientists because of other posters' criticism of this piece of science have got the wrong end of the stick: science is all about being critical of your own and other people's work. If you can't stand criticism, don't get into science. Any scientist worth his salt has sufficient ego to shrug off the more ranting kind of critics found on the internet (and indeed in science itself :-) ).