Strategies for offline PGP key storage
While the adoption of OpenPGP by the general population is marginal at best, it is a critical component for the security community and particularly for Linux distributions. For example, every package uploaded into Debian is verified by the central repository using the maintainer's OpenPGP keys and the repository itself is, in turn, signed using a separate key. If upstream packages also use such signatures, this creates a complete trust path from the original upstream developer to users. Beyond that, pull requests for the Linux kernel are verified using signatures as well. Therefore, the stakes are high: a compromise of the release key, or even of a single maintainer's key, could enable devastating attacks against many machines.
That has led the Debian community to develop a good grasp of best practices for cryptographic signatures (which are typically handled using GNU Privacy Guard, also known as GnuPG or GPG). For example, weak (less than 2048 bits) and vulnerable PGPv3 keys were removed from the keyring in 2015, and there is a strong culture of cross-signing keys between Debian members at in-person meetings. Yet even Debian developers (DDs) do not seem to have established practices on how to actually store critical private key material, as we can see in this discussion on the debian-project mailing list. That email boiled down to a simple request: can I have a "key dongles for dummies" tutorial? Key dongles, or keycards as we'll call them here, are small devices that allow users to store keys on an offline device and provide one possible solution for protecting private key material. In this article, I hope to use my experience in this domain to clarify the issue of how to store those precious private keys that, if compromised, could enable arbitrary code execution on millions of machines all over the world.
Why store keys offline?
Before we go into details about storing keys offline, it may be useful to briefly review how the OpenPGP standard works. OpenPGP keys are made of a main public/private key pair, the certification key, used to sign user identifiers and subkeys. My public key, shown below, has the usual main certification/signature key (marked SC) but also an encryption subkey (marked E), a separate signature key (S), and two authentication keys (marked A), which I use as RSA keys to log into servers using SSH, thanks to the Monkeysphere project.
```
pub   rsa4096/792152527B75921E 2009-05-29 [SC] [expires: 2018-04-19]
      8DC901CE64146C048AD50FBB792152527B75921E
uid                 [ultimate] Antoine Beaupré <anarcat@anarc.at>
uid                 [ultimate] Antoine Beaupré <anarcat@koumbit.org>
uid                 [ultimate] Antoine Beaupré <anarcat@orangeseeds.org>
uid                 [ultimate] Antoine Beaupré <anarcat@debian.org>
sub   rsa2048/B7F648FED2DF2587 2012-07-18 [A]
sub   rsa2048/604E4B3EEE02855A 2012-07-20 [A]
sub   rsa4096/A51D5B109C5A5581 2009-05-29 [E]
sub   rsa2048/3EA1DDDDB261D97B 2017-08-23 [S]
```
All the subkeys (sub) and identities (uid) are bound by the main certification key using cryptographic self-signatures. So while an attacker stealing a private subkey can spoof signatures in my name or authenticate to other servers, that subkey can always be revoked by the main certification key. But if the certification key gets stolen, all bets are off: the attacker can create or revoke identities or subkeys as they wish. In a catastrophic scenario, an attacker could even steal the key and remove your copies, taking complete control of the key without any possibility of recovery. Incidentally, this is why it is so important to generate a revocation certificate and store it offline.
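As a reminder of how that is done, here is a minimal sketch of the usual procedure; KEYID is a placeholder for your key's fingerprint or long key ID:

```shell
# Generate an ASCII-armored revocation certificate and write it to a
# file; GnuPG will ask for a revocation reason and your passphrase
gpg --output revoke.asc --gen-revoke KEYID

# Store revoke.asc offline (a printout, or a USB key in a safe);
# anyone holding this file can revoke your key, so guard it closely
```

Note that GnuPG 2.1 and later also generate a revocation certificate automatically at key-creation time, stored under the openpgp-revocs.d directory of the GnuPG home.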
So by moving the certification key offline, we reduce the attack surface on the OpenPGP trust chain: day-to-day keys (e.g. email encryption or signature) can stay online, but if they get stolen, the certification key can revoke those keys without having to be revoked itself. Note that a stolen encryption key is a different problem: even if we revoke the encryption subkey, this will only affect future encrypted messages. Previous messages will be readable by the attacker with the stolen subkey even after that subkey gets revoked, so the benefits of revoking encryption subkeys are more limited.
Common strategies for offline key storage
Considering the security tradeoffs, some propose storing those critical keys offline to reduce those threats. But where exactly? In an attempt to answer that question, Jonathan McDowell, a member of the Debian keyring maintenance team, said that there are three options: use an external LUKS-encrypted volume, an air-gapped system, or a keycard.
Full-disk encryption like LUKS adds an extra layer of security by hiding the content of the key from an attacker. Even though private keyrings are usually protected by a passphrase, they are easily identifiable as a keyring. But when a volume is fully encrypted, it's not immediately obvious to an attacker that there is private key material on the device. According to Sean Whitton, another advantage of LUKS over plain GnuPG keyring encryption is that you can pass the --iter-time argument when creating a LUKS partition to increase the key-derivation delay, which makes brute-forcing much harder.

Indeed, GnuPG 2.x doesn't have a run-time option to configure the key-derivation algorithm, although a patch was introduced recently to make the delay configurable at compile time in gpg-agent, which is now responsible for all secret-key operations.
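To illustrate Whitton's suggestion, the --iter-time argument is given in milliseconds at format time. This is only a sketch; /dev/sdX is a placeholder for the actual device:

```shell
# Format an external device as a LUKS volume, raising the passphrase
# key-derivation time to five seconds to slow down brute-force attacks.
# WARNING: this destroys all data on the device.
cryptsetup luksFormat --iter-time 5000 /dev/sdX
```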
The downside of external volumes is complexity: GnuPG makes it difficult to extract secrets out of its keyring, which makes the first setup tricky and error-prone. This is easier in the 2.x series thanks to the new storage system and the associated keygrip files, but it still requires arcane knowledge of GPG internals. It is also inconvenient to use secret keys stored outside your main keyring when you actually do need to use them, as GPG doesn't know where to find those keys anymore.
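To give an idea of what is involved, the keygrip mechanism in GnuPG 2.x at least makes it possible to find which file holds a given secret key; a hypothetical session might look like this:

```shell
# List secret keys along with their keygrips; each keygrip names the
# file under ~/.gnupg/private-keys-v1.d/ that holds that key's material
gpg --list-secret-keys --with-keygrip

# A key whose keygrip is (say) 1234ABCD... is then stored in
# ~/.gnupg/private-keys-v1.d/1234ABCD....key, which is the file to
# move to the external volume
```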
Another option is to set up a separate air-gapped system to perform certification operations. An example is the PGP clean room project, which is a live system based on Debian and designed by DD Daniel Pocock to operate an OpenPGP and X.509 certificate authority using commodity hardware. The basic principle is to store the secrets on a different machine that is never connected to the network and, therefore, not exposed to attacks, at least in theory. I have personally discarded that approach because I feel air-gapped systems provide a false sense of security: data eventually does need to come in and out of the system, somehow, even if only to propagate signatures out of the system, which exposes the system to attacks.
System updates are similarly problematic: to keep the system secure, timely security updates need to be deployed to the air-gapped system. A common use pattern is to share data through USB keys, which introduce a vulnerability where attacks like BadUSB can infect the air-gapped system. From there, there is a multitude of exotic ways of exfiltrating the data using LEDs, infrared cameras, or the good old TEMPEST attack. I therefore concluded the complexity tradeoffs of an air-gapped system are not worth it. Furthermore, the workflow for air-gapped systems is complex: even though PGP clean room went a long way, it's still lacking even simple scripts that allow signing or transferring keys, which is a problem shared by the external LUKS storage approach.
Keycard advantages
The approach I have chosen is to use a cryptographic keycard: an external device, usually connected through the USB port, that stores the private key material and performs critical cryptographic operations on behalf of the host. For example, the FST-01 keycard can perform RSA and ECC public-key operations without ever exposing the private key material to the host. In effect, a keycard is a miniature computer that performs restricted computations for another host. Keycards usually support multiple "slots" to store subkeys; the OpenPGP card specification provides three by default: one each for signature, authentication, and encryption. Finally, keycards can have an actual physical keypad to enter passwords so a potential keylogger cannot capture them, although the keycards I have access to do not feature such a keypad.
We could easily draw a parallel between keycards and an air-gapped system; in effect, a keycard is a miniaturized air-gapped computer and suffers from similar problems. An attacker can intercept data on the host system and attack the device in the same way, if not more easily, because a keycard is actually "online" (i.e. clearly not air-gapped) when connected. The advantage over a fully-fledged air-gapped computer, however, is that the keycard implements only a restricted set of operations. So it is easier to create an open hardware and software design that is audited and verified, which is much harder to accomplish for a general-purpose computer.
Like air-gapped systems, keycards address the scenario where an attacker wants to get the private key material. While an attacker could fool the keycard into signing or decrypting some data, this is possible only while the key is physically connected, and the keycard software will prompt the user for a password before doing the operation, though the keycard can cache the password for some time. In effect, it thwarts offline attacks: to brute-force the key's password, the attacker needs to be on the target system and try to guess the keycard's password, and the keycard will lock itself after a limited number of tries. It also provides a clean and standard interface to store keys offline: a single GnuPG command moves private key material to a keycard (the keytocard command in the --edit-key interface), whereas moving private key material to a LUKS-encrypted device or air-gapped computer is more complex.
Keycards are also useful if you operate on multiple computers. A common problem when using GnuPG on multiple machines is how to safely copy and synchronize private key material among different devices, which introduces new security problems. Indeed, a "good rule of thumb in a forensics lab", according to Robert J. Hansen on the GnuPG mailing list, is to "store the minimum personal data possible on your systems". Keycards provide the best of both worlds here: you can use your private key on multiple computers without actually storing it in multiple places. In fact, Mike Gerwitz went as far as saying:
For users that need their GPG key on multiple boxes, I consider a smartcard to be essential. Otherwise, the user is just furthering her risk of compromise.
Keycard tradeoffs
As Gerwitz hinted, however, there are multiple downsides to using a keycard. Another DD, Wouter Verhelst, clearly expressed the tradeoffs:
Smartcards are useful. They ensure that the private half of your key is never on any hard disk or other general storage device, and therefore that it cannot possibly be stolen (because there's only one possible copy of it).
Smartcards are a pain in the ass. They ensure that the private half of your key is never on any hard disk or other general storage device but instead sits in your wallet, so whenever you need to access it, you need to grab your wallet to be able to do so, which takes more effort than just firing up GnuPG. If your laptop doesn't have a builtin cardreader, you also need to fish the reader from your backpack or wherever, etc.
"Smartcards" here refer to older OpenPGP cards that relied on the ISO/IEC 7816 smartcard connectors and therefore needed a specially built smartcard reader. Newer keycards simply use a standard USB connector. In any case, it's true that having an external device introduces new issues: attackers can steal your keycard, you can simply lose it, or wash it with your dirty laundry. A laptop or a computer can also be lost, of course, but it is much easier to lose a small USB keycard than a full laptop — and I have yet to hear of someone shoving a full laptop into a washing machine. When you lose your keycard, unless a separate revocation certificate is available somewhere, you lose complete control of the key, which is catastrophic. But even if you revoke the lost key, you need to create a new one, which involves rebuilding the web of trust for the key — a rather expensive operation as it usually requires meeting other OpenPGP users in person to exchange fingerprints.
You should therefore think about how to back up the certification key, which is a problem that already exists for online keys; of course, everyone has a revocation certificate and backups of their OpenPGP keys... right? In the keycard scenario, backups may be multiple keycards distributed geographically.
Note that, contrary to an air-gapped system, a key generated on a keycard cannot be backed up, by design. For subkeys, this is not a problem as they do not need to be backed up (except encryption keys). But, for a certification key, this means users need to generate the key on the host and transfer it to the keycard, which means the host is expected to have enough entropy to generate cryptographic-strength random numbers, for example. Also consider the possibility of combining different approaches: you could, for example, use a keycard for day-to-day operation, but keep a backup of the certification key on a LUKS-encrypted offline volume.
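As a sketch of that combined approach (the device name and key ID are placeholders), the certification key could be exported to a LUKS-encrypted volume like this:

```shell
# Create and open an encrypted volume on an offline USB device
cryptsetup luksFormat /dev/sdX
cryptsetup open /dev/sdX keybackup
mkfs.ext4 /dev/mapper/keybackup
mount /dev/mapper/keybackup /mnt

# Export the secret key material, including the certification key
gpg --export-secret-keys --armor KEYID > /mnt/certification-key.asc

# Unmount and close so the device can be stored offline
umount /mnt
cryptsetup close keybackup
```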
Keycards introduce a new element into the trust chain: you need to trust the keycard manufacturer to not have any hostile code in the key's firmware or hardware. In addition, you need to trust that the implementation is correct. Keycards are harder to update: the firmware may be deliberately inaccessible to the host for security reasons or may require special software to manipulate. Keycards may be slower than the CPU in performing certain operations because they are small embedded microcontrollers with limited computing power.
Finally, keycards may encourage users to trust multiple machines with their secrets, which works against the "minimum personal data" principle. A completely different approach called the trusted physical console (TPC) does the opposite: instead of trying to get private key material onto all of those machines, just have them on a single machine that is used for everything. Unlike a keycard, the TPC is an actual computer, say a laptop, which has the advantage of needing no special procedure to manage keys. The downside is, of course, that you actually need to carry that laptop everywhere you go, which may be problematic, especially in some corporate environments that restrict bringing your own devices.
Quick keycard "howto"
Getting keys onto a keycard is easy enough:

1. Start with a temporary key to test the procedure:

```
export GNUPGHOME=$(mktemp -d)
gpg --generate-key
```

2. Edit the key using its user ID (UID):

```
gpg --edit-key UID
```

3. Use the key command to select the first subkey, then copy it to the keycard (you can also use the addcardkey command to just generate a new subkey directly on the keycard):

```
gpg> key 1
gpg> keytocard
```

4. If you want to move the subkey, use the save command, which will remove the local copy of the private key, so the keycard will be the only copy of the secret key. Otherwise use the quit command to save the key on the keycard, but keep the secret key in your normal keyring; answer "n" to "save changes?" and "y" to "quit without saving?". This way the keycard is a backup of your secret key.

5. Once you are satisfied with the results, repeat steps 1 through 4 with your normal keyring (unset $GNUPGHOME).
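After the transfer, it is worth checking that the card actually holds the keys; a quick verification could look like this (output will vary with the card):

```shell
# Show the card's status; the "Signature key", "Encryption key" and
# "Authentication key" fields should list the fingerprints of the
# subkeys that were moved to the card
gpg --card-status
```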
When a key is moved to a keycard, --list-secret-keys will show it as sec> (or ssb> for subkeys) instead of the usual sec keyword. If the key is completely missing (for example, if you moved it to a LUKS container), the # sign is used instead. If you need to use a key from a keycard backup, you simply do gpg --card-edit with the key plugged in, then type the fetch command at the prompt to fetch the public key that corresponds to the private key on the keycard (which stays on the keycard). This is the same procedure as the one to use the secret key on another computer.
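To illustrate those markers with made-up key IDs, a listing with the certification key stored elsewhere and a subkey on a keycard might look like this:

```
# hypothetical gpg --list-secret-keys excerpt
sec#  rsa4096/0123456789ABCDEF 2009-05-29 [SC]   <- secret key missing locally
ssb>  rsa2048/FEDCBA9876543210 2012-07-18 [S]    <- subkey stored on a keycard
```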
Conclusion
There are already informal OpenPGP best-practices guides out there and some recommend storing keys offline, but they rarely explain what exactly that means. Storing your primary secret key offline is important in dealing with possible compromises, and we examined the main ways of doing so: an air-gapped system, a LUKS-encrypted keyring, or a keycard. Each approach has its own tradeoffs, but I recommend getting familiar with keycards if you use multiple computers and want a standardized interface with minimal configuration trouble.
And of course, those approaches can be combined. This tutorial, for example, uses a keycard on an air-gapped computer, which neatly resolves the question of how to transmit signatures between the air-gapped system and the world. It is definitely not for the faint of heart, however.
Once one has decided to use a keycard, the next order of business is to choose a specific device. That choice will be addressed in a followup article, where I will look at performance, physical design, and other considerations.
Index entries for this article:
Security: Encryption/Email
GuestArticles: Beaupré, Antoine
Posted Oct 3, 2017 10:32 UTC (Tue) by grawity (subscriber, #80596):
…There is?
You can already have a separate subkey for signing files/messages, which expires in a month or two.
The master key is only required for certifying other keys and updates to your own subkeys, e.g. when you need to add a subkey or update the expiry time.
Posted Oct 3, 2017 13:28 UTC (Tue) by merge (subscriber, #65339):
So, why use key cards? Creating known-good keys that in turn *can* get compromised until they expire seems cheaper, safer, and easier to use, especially when you're at it anyway, regularly finding a place to unlock your most secured files for a very short period of time.
Posted Oct 5, 2017 6:48 UTC (Thu) by madhatter (subscriber, #4665):
If instead you have one highly-secure long-lived key that's on a HSM, and you use it to sign your ephemeral encryption keys, then any correspondent who has the public part of your long-lived signing key can get your current public key off any old keyserver and immediately know whether to trust it or not.
Posted Oct 5, 2017 12:53 UTC (Thu) by anarcat (subscriber, #66354):
1. gpg chooses the latest signing subkey (I would have expected it would sign with all available signing subkeys)
I had to go back to inline signing to send email... And I had to specify the signing key with a bang ("!") at the end, which was weird and unusual (I would have expected the keygrip to work here for example).
So in short, it's a pain in the back to rotate signing keys, I wouldn't recommend having a workflow based on doing that on a regular basis, unless you control key propagation.
Posted Oct 3, 2017 11:52 UTC (Tue) by Funcan (subscriber, #44209):
You can start to look at USB <-> serial converters and such, but really they just become an implementation detail of "design a secure dongle".
Posted Oct 3, 2017 17:57 UTC (Tue) by drag (guest, #31333):
You could print out the master code, destroy the digital copies and just use that. You could even be all cloak and dagger, encrypt the master and split the code up into 2 or more fragments. Keep one half locked in your desk and the second half in a laminated card in your wallet. Or maybe have a 'little black book' of keys you can scan in and then have the password to decrypt them in your wallet.
The downside is that you lose all the features of a proper keycard. The upside is that pretty much everything you need is at your local office supply store.
Posted Oct 5, 2017 13:17 UTC (Thu) by genaro (subscriber, #82632):
I did a research paper in college on this topic. It's feasible to export ascii-armored keys and read them with QR. 4096-bit RSA keys are rough, but workable. With newer EC keys the QR method gets much, much easier.
Posted Oct 3, 2017 22:14 UTC (Tue) by dsommers (subscriber, #55274):
But after I switched to RHEL 7.4, gpg --card-status gives me "Card error" - BUT running openpgp-tool works! So it seems gpg is grumpy about it for some reasons. Anyone got a good idea what could be the issue? I might have forgotten a silly step, but can't figure out what it could be.
Posted Oct 5, 2017 4:53 UTC (Thu) by jans (guest, #108889)

Posted Oct 5, 2017 9:22 UTC (Thu) by dsommers (subscriber, #55274):
Again, thank you!
Posted Oct 4, 2017 16:20 UTC (Wed) by faramir (subscriber, #2327):
If the keycard caches your password, could they wait until you authenticate to the card and then piggyback on that authentication for their own operations? Is there any indication on the keycard when it is being actively used?
Or maybe they capture the password as you enter it and exfiltrate it. Next time you go to Starbucks, they mug you and steal your keycard as well as your wallet. Depending on how high value a target you are, this seems reasonable. If you are a developer, you might be a much higher value target then you realize; depending on who uses the software that you write.
Posted Oct 4, 2017 20:52 UTC (Wed) by anarcat (subscriber, #66354):
I would rather see a keycard that would force me to tap it to confirm operations. Really, if you're concerned about that level of attacks, you should use one of those card readers that requires a PIN to be entered before operations are allowed on the key.
I'm not claiming offline key storage is the silver bullet, but it does solve *some* attack scenarios. The question is if the tradeoffs are worth it for *you*.
Posted Oct 4, 2017 21:44 UTC (Wed) by karkhaz (subscriber, #99844):
Is the touch-to-sign feature on YubiKey 4 what you're looking for?
> YubiKey 4 introduces a new touch feature that allows to protect the use of the private keys with an additional layer. When this functionality is enabled, the result of a cryptographic operation involving a private key (signature, decryption or authentication) is released only if the correct user PIN is provided _and_ the YubiKey touch sensor is triggered
Posted Oct 13, 2017 3:46 UTC (Fri) by ras (subscriber, #33059):
The fly in the ointment is that it's proprietary. Ergo some assume it's probably backdoored. I'd be acting on that assumption too, even though I think on the balance of probabilities it's not. Add to that that closed, proprietary hardware and Debian don't mix well, and it doesn't look like the YubiKey would fly with Debian.
Posted Oct 5, 2017 10:44 UTC (Thu) by tao (subscriber, #17563):
The term I'd normally associate with a system that can withstand things like badUSB would be tamper-proof. An ATM, for instance.
Sometimes there's an overlap, and there are degrees of airgapping and tamper-proofing. You probably don't want wifi, BT, etc. for your ATM, but it's definitely connected to the Internet, though hopefully on a VLAN.
Posted Oct 5, 2017 12:57 UTC (Thu) by anarcat (subscriber, #66354):
I could have written a whole article about air-gapped computers - that wasn't my purpose here. It's one of the approaches you can use, and i know it has its merits. the problem is the tradeoffs seem off to me. if you're connected to the internet anyways, how does it differ from a workstation behind a LAN?
the definitions of "air-gapped" sure seem pretty flexible around here... :p which is another problem: if we don't have a clear definition of what an "air gap" is, you're going to have trouble creating a proper threat model analysis...
Posted Oct 5, 2017 15:05 UTC (Thu) by nybble41 (subscriber, #55106):
It doesn't. You and tao are both saying that an "air-gapped" system is not connected to either the Internet or a LAN. The difference is that tao's definition of "air-gapped" (reasonably, IMHO) does not encompass protection against a local attacker with physical access to the system, e.g. the BadUSB attack. That threat model requires a system which is "tamper-proof", which is a separate consideration from "air-gapped". A "tamper-proof" system can have network links (e.g. ATMs) and an "air-gapped" system can have USB ports. (Suitably restricted, of course — you don't want your air-gapped system to automatically establish an Internet connection just because someone plugged a USB network adapter into the port intended for security keys. However, that can be addressed by limiting the USB drivers available, and/or configuring a whitelist of allowed devices.)
Posted Oct 6, 2017 15:53 UTC (Fri) by tomj (subscriber, #63242):
To back up the smartcard signing keys for disaster recovery (i.e. a broken or burnt card), we developed a tool to back up the keys as QR codes. The tool comes with a restore script, and we tested soaking the paper in dirty water for a few hours and it still worked.
Here's the link:
gnupg-users announcement:
Posted Oct 8, 2017 12:39 UTC (Sun) by neal (subscriber, #7439):
https://gnupg.org/ftp/people/neal/an-advanced-introductio...
Posted Oct 15, 2017 23:30 UTC (Sun) by metasequoia (guest, #119065):
It only costs $2 on ebay.
You can find it by searching on "STM32F103 Minimum system development board".
To use it, one will also need a 3.3 volt USB-UART adapter to program the board, (also really cheap).
And one will also need to figure out some way to protect it from breakage - Embedding it in a lump of moldable thermosetting plastic seems like the easiest thing to do. It also offers some security/tamper resistance.
Apart from the Fimo or epoxy potting material, the total cost of the dongle could work out to under $4.
You should know that the Blue Pill board's USB connectors are notorious for breaking off. They are only soldered on lightly and need reinforcement; even with it, they remain very easy to break. So, to avoid the connector coming off, I would either completely replace the USB connector with a plug and embed the board in plastic as shown in the image below, or put it in a small case and leave a short cable permanently attached to it. You really should plan on doing that unless you only use it at home at your desk; otherwise, don't expect it to last long. It will break.
This is the one you want. http://wiki.stm32duino.com/index.php?title=Blue_Pill
Also, note that these boards have some other issues, which may impact their usefulness unless addressed. For example. "The USB standard requires a 1.5 kΩ pullup resistor on D+, but this board is known to have a wrong value (R10 on the board). It ships with either a 10 kΩ resistor or a 4.7 kΩ resistor, but it should be replaced with a 1.5 kΩ resistor, or put an appropriate resistor value (e.g 1.8 kΩ) in between PA12 and 3.3V. It is also true that some PCs are tolerant of incorrect value so, before you change the resistance, you can try if it works in your case."
Software repository: https://anonscm.debian.org/cgit/gnuk/gnuk/gnuk.git
changelog:
https://anonscm.debian.org/cgit/gnuk/gnuk/gnuk.git/commit...
You'll need the arm-none-eabi-gcc toolchain. If you use the USB-UART flashing method, make sure the UART device you use can be set to 3.3 volts. Many of them have a jumper to allow setting either 3.3 volts or 5 volts.
Programming: to do this you can use a Linux program called stm32flash. (There are a number of different ways to flash the software onto an STM32 board, but this seems to me to be the simplest.) You'll need to use either headers (typically soldered) or clips of some kind to connect robustly to the following pins to upload the program to the board. Ground goes to the "G" pin, the second inward from the bottom right corner. +3.3 volts goes to the "3.3" pin on the bottom right corner. TXD on the USB-UART goes to the A10 pin and RXD goes to the A9 pin. All of these pins are on the bottom row. If the upload doesn't work, try reversing the connections to A9 and A10; some USB-UART devices label their pins with what you are supposed to connect them to, not what they are, so it's reversed. You cannot program the dongle using its USB connection, which is a very good thing in this context. So: change the boot0 jumper to 1, start the upload program, telling it where the binary file is. It will run and tell you it has successfully completed. Then remove power, change the boot0 jumper back to 0, and reboot. It should now work.
Gnuk documentation: http://www.fsij.org/doc-gnuk/index.html
FSIJ blog showing a good way of protecting a device by embedding it in moldable plastic.
https://www.fsij.org/category/gnuk.html >> https://www.fsij.org/images/gnuk/FST-01G-201701-00.jpg
2. notmuch-emacs and mutt do not allow you to choose which subkey to use to sign outgoing messages
3. debsign *does* allow you to choose the signing subkey, but that's about the only thing
I suspect this is related to access restrictions and usually is solved by proper UDEV rules. See these instructions.
If an attacker has control over the computer in which the keycard is installed, they can subvert your data before it is sent to the card. Or simply just use the card directly.
misusing USB keycards?
If you enable USB on a system so you can use a USB based keycard, aren't you leaving that system open to BadUSB or similar ttacks?
Yes, it's one of my core criticism of "airgapped" systems: they are never really airgapped. If you are referring to normal systems, I frankly don't know if you can still run an interactive terminal *without* USB these days. Unless you have a PS/2 mouse and keyboard (and port!), you're pretty much forced to use USB and therefore exposed to that vector anyways.
If an attacker has control over the computer in which the keycard is installed, they can subvert your data before it is sent to the card. Or simply just use the card directly.
Yep. They can use the card to do any operations it requires. But the point is they can do that only when it's plugged in: the second the key is unplugged, they can't do their evil thing anymore. Furthermore, they can't "steal" the key from you, unless they can find a way to subvert the keycard controller somehow, which is a critical difference with having the key on-disk.
If the keycard caches your password, could they wait until you authenticate to the card and then piggyback on that authentication for their own operations? Is there any indication on the keycard when it is being actively used?
Yes, they could, and no, there's *generally* no visual indicator (although the Yubikey NEO does have a neat little LED in the middle that buzzes when things are happening on the key; it's hardly usable as an indicator, however).
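One partial mitigation, assuming a YubiKey 4 or newer and the ykman tool, is to require a physical touch on the key for every private-key operation, so that a process piggybacking on a cached PIN cannot sign without you noticing:

```
# Require a touch for each signature, decryption, and authentication
# (exact subcommand names may vary between ykman versions)
ykman openpgp keys set-touch sig on
ykman openpgp keys set-touch dec on
ykman openpgp keys set-touch aut on
```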
Or maybe they capture the password as you enter it and exfiltrate it. Next time you go to Starbucks, they mug you and steal your keycard as well as your wallet. Depending on how high-value a target you are, this seems reasonable. If you are a developer, you might be a much higher-value target than you realize, depending on who uses the software that you write.
I'm not sure there are any great protections against mugging. Pipe-wrench cryptography beats any design you can create, really; if that's your threat model, it seems to me you're setting yourself up for failure.
Strategies for offline PGP key storage
https://github.com/intra2net/paperbackup
https://lists.gnupg.org/pipermail/gnupg-users/2017-Februa...
Snowden and Gemalto
Keycards introduce a new element into the trust chain: you need to trust the keycard manufacturer to not have any hostile code in the key's firmware or hardware.
I think it's better to have this mindset in general, so that you don't even really consider it a new element so much as just another element. But I understand the phrasing given the history of the subject.
In addition, you need to trust that the implementation is correct.
Um, yeah. Again, this seems like a stating-the-obvious kind of thing. But again, given how the NSA managed to get the entire industry to ignore the closed-source firmware threat surface for so many years pre-Snowden, I do grok the chosen phrasing here.
Keycards are harder to update: the firmware may be deliberately inaccessible to the host for security reasons or may require special software to manipulate.
Perhaps not what is being referenced here, but I'll state the obvious and say that any manufacturer keeping lines of hardware/software source code, or closed-source tools, from the user/product owner shouldn't be trusted without a healthy amount of skepticism and paranoia. In other words, if the manufacturer is physically able to update some code in the device I 'purchased' (not leased), then I want that ability too, or no sale. Here is where all the spooks get to enjoy the fun toys: they get (or have stolen) the ability to maximize the security of the product by being able to enhance every enhanceable line of code in the system. Just because you sneak a peek of your spook friend using a particular product doesn't mean that product would enhance the security of others who paid the same price at the same store for it.
Keycards may be slower than the CPU in performing certain operations because they are small embedded microcontrollers with limited computing power.
Yeah, but for these use cases you really don't need that much. So as with the prior string of points, I feel like there isn't quite the right focus here.
One thing I'd focus on is Gemalto and Snowden. When I recall forming opinions about this subject pre-Snowden, I recall Gemalto being within a narrow category of interesting as far as my paranoid concerns go. Thus I took particular notice of the Gemalto substory of the Snowden revelations. I suppose referencing it is what I would have done to bolster the first sentence in the graf I quoted at top. But then again, that's probably why I don't get paid to write for large audiences.
(Full disclosure: I did work for Keyhole (aka Google Earth) in 2002-2004, which was partly funded by In-Q-Tel/CIA (referenced in the Wikipedia quote), but that is in no way related to this beyond general computer competency, i.e. typical LWN/Schneier-readership awareness of cybersecurity issues.)
https://en.wikipedia.org/w/index.php?title=Gemalto&oldid=801546887#Security_breaches
Security breaches
According to documents leaked by Edward Snowden, NSA's and GCHQ's Mobile Handset Exploitation Team[57] infiltrated Gemalto's infrastructure to steal SIM authentication keys, allowing them to secretly monitor mobile communications.[58] GCHQ codenamed the program "DAPINO GAMMA". The secret GCHQ document leaked by Snowden also claimed the ability to manipulate billing records to conceal their own activity and having access to authentication servers to decrypt voice calls and text messages.[58] Snowden stated that "When the NSA and GCHQ compromised the security of potentially billions of phones (3g/4g encryption relies on the shared secret resident on the sim), they not only screwed the manufacturer, they screwed all of us, because the only way to address the security compromise is to recall and replace every SIM sold by Gemalto."[59]
The breach subsequently refueled suspicions against Gemalto chairman Alex J. Mandl, given his role in the CIA venture capital firm In-Q-Tel.[60]
GCHQ and NSA declined to comment on the matter.[61] Gemalto issued a press release on February 25, 2015 saying there were "reasonable grounds to believe that an operation by NSA and GCHQ probably happened," but denying that the government agencies gained access to any authentication keys.[62][63]
Strategies for offline PGP key storage
You'll need to briefly set the "Boot0" jumper on the Blue Pill to 1 to allow the binary to be uploaded to it, then set it back.
https://sourceforge.net/p/stm32flash/wiki/Home/
connect the wires, then apply power
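Putting those steps together, a hypothetical flashing session with stm32flash might look like this (the serial device and binary name depend on your setup):

```
# With the Boot0 jumper set to 1 and power applied:
stm32flash /dev/ttyUSB0                   # probe the chip first
stm32flash -w gnuk.bin -v /dev/ttyUSB0    # write and verify the binary
# Then set Boot0 back to 0 and power-cycle the board
```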