ProofMode: a camera app for verifiable photography
The default apps on a mobile platform like Android are familiar targets for replacement, especially for developers concerned about security. But while messaging and voice apps (which can be replaced by Signal and Ostel, for instance) may be the best known examples, the non-profit Guardian Project has taken up the cause of improving the security features of the camera app. Its latest such project is ProofMode, an app to let users take photos and videos that can be verified as authentic by third parties.
Media captured with ProofMode is combined with metadata about the source device and its environment at capture time, then signed with a device-specific private PGP key. The result can be used to attest that the contents of the file have not been retouched or otherwise tampered with, and that the capture took place when and where the user says it did. For professional reporters or even citizen journalists capturing sensitive imagery, such an attestation provides a defense against accusations of fakery — an all-too-common response when critiquing those in positions of power. But making that goal accessible to real-world users has been a bit of a challenge for the Guardian Project.
CameraV
It is widely accepted that every facet of digital photography has both an upside and a downside. Digital cameras are cheap and carry no film or development costs, but digital images are impermanent and easily erased. Instant cloud storage and online sharing make media distribution easy, but do so at the cost of privacy and individual ownership. Perhaps nowhere is the dichotomy more critical, however, than in the case of news photography. Activists, journalists, and ordinary citizens have documented important world events using the cameras in their mobile devices, capturing everything from political uprisings to sudden acts of unspeakable violence. The flipside, though, is that the authenticity of digital photos and videos is hard to prove, and detractors are wont to dismiss any evidence that they don't like as fake.
Improving the verification situation was the goal of the Guardian Project's 2015 app, CameraV. The app provided a rather complex framework for attesting to the untampered state of recorded images, which the team eventually decided was inhibiting its adoption by journalists, activists, and other potential users. ProofMode is an attempt to whittle the CameraV model back to its bare essentials. Nevertheless, a quick look at CameraV is useful for understanding the approach.
CameraV attests to the unmodified state of an image by taking a
snapshot of the device's sensor readings the same instant that the photograph is
taken. The sensor data recorded for the snapshot is user-configurable,
consisting of geolocation data (including magnetometer readings, GPS, and network
location information), accelerometer readings, and environmental
sensors (such as ambient light, barometric pressure, and
temperature). Network device state, such as the list of visible
Bluetooth devices and WiFi access points, can optionally be included as
well. In addition, the standard Exif image tags (which include the
make and model of the device as well as camera settings) are
recorded. A full list is provided in the CameraV user's guide.
All of this metadata is stored in JSON Mobile Media Metadata (J3M) format and is appended to the image file, a process termed "notarization". The file is then MD5-hashed and the result signed with the user's OpenPGP key. CameraV provides an Android Intent service to let users verify the hash on any CameraV-notarized image they receive.
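At the data level, that notarization flow can be sketched as follows. The J3M field names here are illustrative stand-ins, not CameraV's actual schema, and the final OpenPGP signing step is only indicated by a comment:

```python
import hashlib
import json

def notarize(media_bytes, sensor_snapshot):
    """Append a J3M-style metadata blob to the media and hash the result.

    Sketch only: the snapshot fields are made up for illustration, and
    the digest would, in CameraV, then be signed with the app's PGP key.
    """
    j3m = json.dumps(sensor_snapshot, sort_keys=True).encode("utf-8")
    notarized = media_bytes + j3m                # metadata appended to the file
    digest = hashlib.md5(notarized).hexdigest()  # hash that gets PGP-signed
    return notarized, digest

media = b"\xff\xd8\xff\xe0 stand-in JPEG payload"
snapshot = {"gps": [51.5, -0.12], "light_lux": 180, "pressure_hpa": 1013.2}
notarized, digest = notarize(media, snapshot)
```

Because the metadata is serialized deterministically, re-running the process over the same capture yields the same digest, which is what makes third-party verification possible.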
The signature can be published with the image, enabling third parties to verify that the metadata matches what the photographer claims about the location and context of the image. In theory, some of that metadata (such as nearby cell towers) could also be verified by an outside source. The app can also generate a short SHA-1 fingerprint of the signed file, intended to be sent out separately. This fingerprint is short enough to fit into an SMS message, so that users can immediately relay proof of their recording even if they do not have a means to upload the image itself until later. Users can share their digitally notarized images to public services or publish them over Tor to a secure server that the user controls.
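The fingerprint's SMS-friendliness follows directly from the digest size: a SHA-1 hex digest is always 40 characters, well under the 160-character SMS limit. A minimal sketch:

```python
import hashlib

def fingerprint(signed_file: bytes) -> str:
    # A SHA-1 hex digest is 40 characters, so it fits comfortably
    # inside a single 160-character SMS message.
    return hashlib.sha1(signed_file).hexdigest()

fp = fingerprint(b"signed, notarized image bytes")
```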
CameraV takes a number of steps to ensure that images are not altered while on the user's device, lest the app then be used to create phony attestations and undermine trust in the system. First, the MD5 hash of the image or video that is saved alongside the device-sensor metadata is computed over the raw pixel data (or raw video frames), as a mechanism to protect against the image being faked using some other on-camera app before the user publishes it for consumption. Second, the full internal file path of the raw image file is saved with the metadata, which serves as a record that the CameraV app is the source of the file. Third, app-specific encrypted storage is used for the device's local file storage — including the media, the metadata, and key material. Finally, the OpenPGP key used is specific to the app itself. The key is generated when the user first sets up CameraV; the installer prompts the user to take a series of photos that are used as input for the key-generation step.
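The first of those protections, hashing the decoded pixels rather than the container file, can be illustrated in miniature (a sketch of the idea, not CameraV's code):

```python
import hashlib

def pixel_hash(raw_rgb: bytes) -> str:
    # Hash the decoded pixel data, not the container file, so that
    # re-wrapping the same pixels (new Exif tags, renamed file, different
    # container) cannot masquerade as a fresh original.
    return hashlib.md5(raw_rgb).hexdigest()

pixels = bytes([10, 20, 30] * 4)    # a made-up 2x2 RGB image
h = pixel_hash(pixels)
```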
Rethinking the complexity issues
CameraV's design hits a lot of the bullet points that security-conscious developers care about, but it certainly never gained a mass following. Among other stumbling blocks, the user had to decide in advance to use the CameraV app to record any potentially sensitive imagery. That might be fine for someone documenting human rights violations as a full-time job, but is less plausible for a spur-of-the-moment incident — and it does not work for situations where the user only realizes the newsworthiness of a photo or video after the fact. In addition, there may be situations where it is genuinely harmful to have detailed geolocation information stored in a photo, so using CameraV for all photos might frighten off some potential users.
Consequently, in 2016 the Guardian Project began working on a sequel of sorts to CameraV. That effort is what became ProofMode, which was first announced to the public on the project's blog in February 2017. The announcement describes ProofMode as a "reboot," but it is worth noting that CameraV remains available (through the Google Play Store as well as through the F-Droid repository) and is still being updated.
ProofMode essentially turns CameraV's metadata-recording process into a background service and makes it available to the user as a "share" action (through Android's Intent API). When any media is captured with any camera app, ProofMode takes a snapshot of the device sensor readings. The user then has the option of choosing "Share Proof" from their camera app's sharing menu.
At present, ProofMode offers three sharing options: "Notarize Only"
(which shares only the SHA-1 fingerprint code), "Share Proof Only"
(which shares a signed copy of the metadata files), and "Share
Proof with Media" (which appends the metadata to the media file and
signs the result, as in the CameraV case). Whichever option the user
chooses, selecting it immediately brings up another "share" panel so
the user can pick an app to finalize the action — thus directing
the ProofMode file to email, SMS, a messaging app, Slack, or any other
option that supports attaching files.
In March, Bruce Schneier posted about ProofMode on his blog, which spawned a series of in-depth questions in the comment section. As might be expected on such a public forum, the comments ranged from complaints about the minutiae of the app's approach to security to bold assertions that true authentication on a mobile device is unattainable.
Among the more specific issues, though, the commenters criticized ProofMode's use of unencrypted storage space, its practice of extracting the PGP private key into RAM with the associated passphrase, and how the keys are generated on the device. There were also some interesting questions about how a malicious user might be able to generate a fake ProofMode notary file by hand.
The Guardian Project's Nathan Freitas responded at length to the criticism in the comment thread, and later reiterated much of the same information on the Guardian Project blog. As to the lower-level security steps, he assured commenters that the team knew what it was doing (citing the fact that Guardian Project ported Tor to Android, for example) and pointed to open issues on the ProofMode bug tracker for several of the enhancements requested (such as the use of secure storage for credentials).
On other issues, Freitas contended that there may simply be a valid difference of opinion. For example, the on-device generation of key pairs may seem less than totally secure, but Freitas noted that the keys in question are app-specific and not designed for use as a long-term user identity. "Our thinking was more focused on integrity through digital signatures, with a bit of lightweight, transient identity added on." Nevertheless, he added, the project does have an issue open to port key storage to the Android Keystore system service.
Android also provides some APIs that can protect against tampering. Freitas said that the project has already integrated the SafetyNet API, which is used to detect whether the app is running in an emulator (although ProofMode does not block this behavior; it simply notes it in the metadata store). In the longer term, the team is also exploring stronger security features, such as more robust hashing mechanisms or the blockchain-based OpenTimestamps.
Ultimately, however, complexity is the enemy of growing a broad user base, at least from the Guardian Project's perspective. Freitas told the Schneier commenters that the goal is to provide notarization and security for "every day activists around the world, who may only have a cheap smartphone as their only computing device" rather than cryptographers. In an email, he also noted that ProofMode requires little to no training for users to understand, which is a stark contrast to the complexity of CameraV.
Verification versus anonymity
Given all the talk about recording sensor input and geolocation information, a privacy-conscious user might well ask whether CameraV and ProofMode are a step backward for those users who want to record sensitive events but are legitimately worried about being identified and targeted for their trouble. This is a real concern, and the Guardian Project has several approaches to addressing it.
The first is that CameraV and ProofMode both provide options for disabling some of the more sensitive metadata that can be captured; for now, that includes the network information and geolocation data. Second, potentially identifying metadata, like Bluetooth device MAC addresses, is not recorded in the clear, but only in hashed form. Finally, the project has an issue open to allow wiping ProofMode metadata files from a device.
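The hashed-identifier approach can be sketched as follows; the exact scheme ProofMode uses (hash function, salting, truncation) is not reproduced here, so treat the details as assumptions:

```python
import hashlib

def anonymize_mac(mac: str) -> str:
    # Record only a one-way hash of the MAC address, never the address
    # itself. (Illustrative; ProofMode's actual hashing scheme may differ.)
    return hashlib.sha256(mac.lower().encode("ascii")).hexdigest()

record = anonymize_mac("AA:BB:CC:DD:EE:FF")
```

The same hash can still be compared against a candidate MAC later, but the stored metadata no longer exposes the address in the clear.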
For the extreme case, however — when a user might want to completely sanitize an image of all traceable information before publishing it — there is little overlap with the intent of ProofMode, but the project has published a separate app that may fit the bill.
That anonymizing app is called ObscuraCam. It automatically removes geolocation data and the device make and model metadata from any captured photo. It also provides a mechanism for the user to block out or pixelate faces, signs, or other areas of the image that might be sensitive.
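The pixelation side of that redaction can be illustrated with a toy block-averaging routine over a grayscale grid; this is a stand-in for the general technique, not ObscuraCam's actual algorithm:

```python
def pixelate(image, top, left, size, block=2):
    """Pixelate a square region of a grayscale image (list of rows) in
    place by replacing each block x block tile with its average value,
    discarding the detail that made the region identifiable.
    Toy sketch only, not ObscuraCam's implementation."""
    for by in range(top, top + size, block):
        for bx in range(left, left + size, block):
            ys = range(by, min(by + block, top + size))
            xs = range(bx, min(bx + block, left + size))
            tile = [image[y][x] for y in ys for x in xs]
            avg = sum(tile) // len(tile)
            for y in ys:
                for x in xs:
                    image[y][x] = avg

img = [[0, 10, 20, 30],
       [40, 50, 60, 70],
       [80, 90, 100, 110],
       [120, 130, 140, 150]]
pixelate(img, 0, 0, 4)
```

Note that averaging is lossy but not cryptographically secure redaction; blacking out a region entirely is the safer choice for truly sensitive content.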
At the moment, it is not possible to use ObscuraCam in conjunction with ProofMode (attempting to do so crashes the ProofMode app), but the precise interplay between the two security models likely would require some serious thought anyway. Nevertheless, if anonymity is of importance, it is good to know there is an option.
In the pudding
In the final analysis, neither CameraV nor ProofMode is of much value if it remains merely a theoretical service: it has to be usable by real-world end users. In my own tests, CameraV is complex enough that it is little surprise it has not been adopted en masse. The first step after installation requires the user to set up a "secure database," the preferences screen is not particularly user-friendly, and the sharing features are high on detail but light on interface polish.
On the other hand, ProofMode makes serious strides forward in ease-of-use but, at present, it lacks the built-in documentation that a new user might require in order to make the right choices. If one has not read the ProofMode blog posts, the sharing options ("Notarize Only" and "Share Proof Only") might not be easy to decipher. Obviously, the project is still in pre-release mode, though, so there is plenty of reason to believe that the final version will hit the right notes.
Readers with long memories might also recall that the CameraV–ProofMode saga marks the second time that the Guardian Project developed a security app only to later refactor the code into a system service. The first instance was PanicKit, a framework for erasing device data from multiple apps that grew out of the project's earlier storage-erasing app InTheClear.
Freitas calls this a coincidence, however, rather than a development trend. With PanicKit, he said, the goal was to develop a service that third-party app developers would find useful, too. ProofMode, in contrast, was merely a simplification of the original concept, designed to meet the needs of a broader audience. Regardless of how one looks at it, though, most will likely agree that if security features come built into the operating system at a lower level — eliminating the need to choose between "secure apps" and "insecure apps" — then end users will benefit.
Index entries for this article
GuestArticles: Willis, Nathan
Posted Jun 25, 2017 23:39 UTC (Sun) by droundy (subscriber, #4559) [Link] (3 responses)
Posted Jun 26, 2017 5:34 UTC (Mon) by mjthayer (guest, #39183) [Link] (1 responses)
Posted Jun 26, 2017 7:59 UTC (Mon) by hifi (guest, #109741) [Link]
This could help fight against *other* people doctoring your image and claiming it to be the original as you can always prove that the original image file you have had been hashed before the fake image proving you have the original.
I don't see any way to unambiguously prove when/where a picture was taken, only the earliest time when it was publicly known to exist and who claimed to own it at that time.
Posted Jun 26, 2017 5:48 UTC (Mon) by felixfix (subscriber, #242) [Link]
One clue is the time stamp and location. If it's an incident where time or location matters, then any delay or going offsite for quiet manipulation will show up as a discrepancy.
Can the proof app's actions be duplicated by a fakery program? That would make delayed offsite manipulation feasible.
But if the proof app immediately broadcasts a hash of the raw picture, then transmission of the full proofed picture can wait, and manipulation may be impossible.
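The commit-then-publish workflow described in this comment can be sketched in a few lines (a sketch of the idea only, not ProofMode's implementation):

```python
import hashlib

def broadcast_fingerprint(media: bytes) -> str:
    # Step 1: immediately send out a short hash of the raw capture,
    # e.g. by SMS, committing to the content at capture time.
    return hashlib.sha1(media).hexdigest()

def verify_later(published: bytes, earlier_fp: str) -> bool:
    # Step 2: when the full file is published later, any recipient can
    # re-hash it and confirm it matches the earlier commitment.
    return hashlib.sha1(published).hexdigest() == earlier_fp

original = b"raw capture bytes"
fp = broadcast_fingerprint(original)
```

Any offsite manipulation after the broadcast produces a file whose hash no longer matches the commitment.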
Posted Jun 26, 2017 6:33 UTC (Mon) by eru (subscriber, #2753) [Link] (1 responses)
Posted Jun 26, 2017 12:32 UTC (Mon) by pboddie (guest, #50784) [Link]
Also the picture might be documentation of an object rather than an event. Let us suppose that you meet someone who shows you evidence of some bad thing or other that you cannot just take away with you and show to everyone. You might want to withhold geolocation data in order to prevent identifying your helpers who might already be tracked and therefore be easily associated with the published picture later on.
But I think that withholding geolocation data is optional, if I understood the article properly.
Posted Jun 29, 2017 21:37 UTC (Thu) by branden (guest, #7029) [Link]
>cringe<