Image "Cloaking" for Personal Privacy
At a high level, Fawkes takes your personal images and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these "cloaked" photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, "cloaked" images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone presents an unaltered image of you (e.g. a photo taken in public) to that model and tries to identify you, they will fail.
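Conceptually, the cloaking step is an optimization: nudge the pixels just enough that the image's embedding in a face-recognition feature space drifts toward a different identity, while keeping the perturbation below a visibility budget. What follows is only a minimal sketch of that idea, not the actual Fawkes implementation; `feature_extractor` is assumed to be some pretrained face-embedding network, and `image` and `target_image` are assumed to be normalized tensors of shape (1, 3, H, W):

    import torch
    import torch.nn.functional as F

    def cloak(image, target_image, feature_extractor, budget=0.03, steps=100, lr=0.01):
        """Push the image's face embedding toward a decoy identity while
        keeping the per-pixel change below a small budget."""
        delta = torch.zeros_like(image, requires_grad=True)
        optimizer = torch.optim.Adam([delta], lr=lr)
        with torch.no_grad():
            target_feature = feature_extractor(target_image)

        for _ in range(steps):
            optimizer.zero_grad()
            cloaked_feature = feature_extractor(image + delta)
            # Pull the cloaked photo's features toward the decoy identity...
            feature_loss = F.mse_loss(cloaked_feature, target_feature)
            # ...while penalizing any change large enough to be visible.
            visibility_penalty = torch.relu(delta.abs().max() - budget)
            loss = feature_loss + 1000.0 * visibility_penalty
            loss.backward()
            optimizer.step()

        return (image + delta.detach()).clamp(0.0, 1.0)

The real tool refines this considerably (perceptual budgets, choice of decoy identity, robustness tweaks), but a feature-space objective of this general shape is the heart of it.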
Posted Jul 22, 2020 22:53 UTC (Wed)
by NYKevin (subscriber, #129325)
[Link] (5 responses)
> This code is intended only for personal privacy protection or academic research.
>
> We are currently exploring the filing of a provisional patent on the Fawkes algorithm.
I'm going to assume the first line is just a standard CYA "there's no warranty" disclaimer, and not an actual condition on use (because it would flatly contradict the LICENSE file). However, the patent is a great deal more alarming, and in fact, I'm not sure I can recommend using this thing as long as that sentence remains there. It basically amounts to "You can do what you like with our software, but we could turn around and sue you at any time, once the USPTO rubber stamps our patent."
Posted Jul 23, 2020 5:37 UTC (Thu)
by epa (subscriber, #39769)
[Link] (3 responses)
Posted Jul 23, 2020 5:56 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Posted Jul 23, 2020 8:46 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (1 responses)
Cheers,
Wol
Posted Jul 23, 2020 19:27 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link]
Posted Jul 24, 2020 13:57 UTC (Fri)
by t-v (subscriber, #112111)
[Link]
Regarding the patent, I must admit I have a hard time telling what the real innovation is compared to the now-classic adversarial-example work (the Goodfellow... and Carlini... variants implemented in CleverHans), except uploading the image somewhere. That may or may not mean that one would find prior art.
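For reference, the "now classic" recipe being alluded to, Goodfellow et al.'s fast gradient sign method (one of the attacks shipped in CleverHans), fits in a few lines. This is a generic sketch, not Fawkes: `model` is assumed to be any differentiable classifier, with `image` and `label` as its input tensors:

    import torch
    import torch.nn.functional as F

    def fgsm(model, image, label, epsilon=0.01):
        """One signed-gradient step: perturb each pixel by +/- epsilon in the
        direction that most increases the classifier's loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        return (image + epsilon * image.grad.sign()).detach().clamp(0.0, 1.0)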
Posted Jul 22, 2020 23:46 UTC (Wed)
by FLHerne (guest, #105373)
[Link] (9 responses)
This assumes not only that all existing facial-recognition systems are vulnerable to their specific tweaking approach, but that future ones will be too. The modified photos will be out there indefinitely.
A lot of research is being done on solving this class of weakness -- it's a serious problem for self-driving vehicles too; there've been demonstrations of minor changes to street signs that lead to completely different recognition outcomes.
While their list of current systems fooled is impressive, I think the absolute assurance of privacy given here is unwarranted.
Posted Jul 23, 2020 1:53 UTC (Thu)
by gus3 (guest, #61103)
[Link] (8 responses)
Images of human faces are a mix of hard borders and soft shading. A proper facial-recognition system doesn't depend on these; it uses the points of the face (eyes, nostrils, lips, visible teeth, ears, visible hair-line, jaw, cheekbones, musculature) to build a face it can recognize.
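That landmark-plus-embedding pipeline is roughly what off-the-shelf libraries do. A sketch using the `face_recognition` package (a dlib wrapper), with placeholder file names:

    import face_recognition

    known = face_recognition.load_image_file("known_photo.jpg")
    unknown = face_recognition.load_image_file("photo_taken_in_public.jpg")

    # Geometric landmarks: chin, eyes, eyebrows, nose, lips...
    landmarks = face_recognition.face_landmarks(unknown)

    # ...and a 128-dimensional embedding derived from the face as a whole.
    known_encoding = face_recognition.face_encodings(known)[0]
    unknown_encoding = face_recognition.face_encodings(unknown)[0]

    # Declare a match if the two embeddings are close enough.
    match = face_recognition.compare_faces([known_encoding], unknown_encoding,
                                           tolerance=0.6)[0]
    print("Same person:", match)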
The hackers and crackers already have tools against the Fawkes system. The images aren't cloaked, no matter how much you want them to be.
Remember: the enemy always has the better hand. It's your job to close the gap between the enemy's hand and yours.
Posted Jul 23, 2020 2:17 UTC (Thu)
by felixfix (subscriber, #242)
[Link]
Posted Jul 23, 2020 7:08 UTC (Thu)
by smurf (subscriber, #17840)
[Link] (3 responses)
Their point isn't to make the image in the manipulated photos unrecognizable, but to make it not-your-own. The problem I see with their idea is that as soon as there are two sets of images of you out there, any adversary worth their salt will not simply replace the old parameter cloud with the new, as the Fawkes authors assume, but split them off into two sets of clouds which are both recognized as "you".
So this probably works WRT shop surveillance systems that try to find out who that repeat customer is; it might conceivably defend against run-of-the-mill police surveillance cameras if you can get your passport photos replaced with a Fawkes pic (more difficult as authorities start to insist on taking the pics themselves instead of you walking in with one from the photo booth); but it won't work at all when the opponent is the NSA and their ilk.
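That "two clouds" attack need not be sophisticated: cluster whatever embeddings have been scraped and label every resulting cluster as the same person. A rough sketch, assuming `embeddings` is an (N, 128) array of face embeddings collected over time, some from cloaked photos and some not:

    import numpy as np
    from sklearn.cluster import KMeans

    def build_profiles(embeddings, person_id, n_clusters=2):
        """Keep one centroid per cluster, all pointing at the same identity."""
        clusters = KMeans(n_clusters=n_clusters, n_init=10).fit(embeddings)
        return [(person_id, centroid) for centroid in clusters.cluster_centers_]

    def identify(query_embedding, profiles, threshold=0.6):
        """A query face matching *any* of a person's clusters is a match."""
        for person_id, centroid in profiles:
            if np.linalg.norm(query_embedding - centroid) < threshold:
                return person_id
        return None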
Posted Jul 23, 2020 8:44 UTC (Thu)
by Sesse (subscriber, #53779)
[Link]
Posted Jul 23, 2020 12:15 UTC (Thu)
by ibukanov (subscriber, #3942)
[Link]
Posted Jul 23, 2020 19:31 UTC (Thu)
by nilsmeyer (guest, #122604)
[Link]
Posted Jul 23, 2020 14:13 UTC (Thu)
by clump (subscriber, #27801)
[Link] (2 responses)
By all means, continue working on machines that can read symbols designed for humans. However, it should be relatively inexpensive (and safer) to tell a machine "this is a stop sign" directly.
Posted Jul 23, 2020 15:52 UTC (Thu)
by magfr (subscriber, #16052)
[Link]
Posted Jul 24, 2020 15:04 UTC (Fri)
by rgmoore (✭ supporter ✭, #75)
[Link]
If you add machine-friendly identifiers, you'd better make sure they have the same kinds of legal rules surrounding them that human-readable signs do. Otherwise malicious actors will be able to mess with the system with legal impunity. It could be very bad if people could create new traffic signs only autonomous vehicles knew about.
Posted Jul 23, 2020 19:18 UTC (Thu)
by jem (subscriber, #24231)
[Link] (6 responses)
Posted Jul 23, 2020 19:28 UTC (Thu)
by nilsmeyer (guest, #122604)
[Link] (5 responses)
Posted Jul 23, 2020 19:56 UTC (Thu)
by sjj (guest, #2020)
[Link] (4 responses)
Posted Jul 24, 2020 3:44 UTC (Fri)
by gdt (subscriber, #6284)
[Link] (1 responses)
Posted Jul 24, 2020 9:56 UTC (Fri)
by nilsmeyer (guest, #122604)
[Link]
We already know this happens to people based on other criteria. It also happens a lot where facial recognition is used, especially if you have a darker skin tone, for which facial recognition produces a lot more false positives.
Posted Jul 24, 2020 9:44 UTC (Fri)
by nilsmeyer (guest, #122604)
[Link] (1 responses)
Posted Jul 24, 2020 19:08 UTC (Fri)
by NYKevin (subscriber, #129325)
[Link]
So, in my opinion, doing anything that makes a passport look less valid in the eyes of border officials is a really bad idea, regardless of whether doing so is technically legal or not.
Image "Cloaking" for Personal Privacy
>
> We are currently exploring the filing of a provisional patent on the Fawkes algorithm.
Image "Cloaking" for Personal Privacy
Image "Cloaking" for Personal Privacy
Image "Cloaking" for Personal Privacy
Wol
Image "Cloaking" for Personal Privacy
Image "Cloaking" for Personal Privacy
Image "Cloaking" for Personal Privacy
street signs vs. faces
street signs vs. faces
street signs vs. faces
street signs vs. faces
street signs vs. faces
street signs vs. faces
street signs vs. faces
street signs vs. faces
street signs vs. faces
Image "Cloaking" for Personal Privacy
Image "Cloaking" for Personal Privacy
Image "Cloaking" for Personal Privacy
Image "Cloaking" for Personal Privacy
Image "Cloaking" for Personal Privacy
Image "Cloaking" for Personal Privacy
Image "Cloaking" for Personal Privacy