work is underway to add digital signatures to this
This has the advantage that it can work with any logfile and isn't tied to any particular log format.
Forward secure sealing
Posted Aug 23, 2012 11:04 UTC (Thu) by mezcalero (subscriber, #45103)
And I don't understand the point of the signing logtools is supposed to do: if the signing key stays around locally and is not changed regularly, an attacker can sign anything he wants with it, including messages from the past.
In summary: what systemd's journal does here, and what logtools does is very different. And um, I think FSS is much more useful for admins.
Posted Aug 23, 2012 19:39 UTC (Thu) by dlang (✭ supporter ✭, #313)
All your objections to the signing key apply equally to the systemd signing key. In all cases the key needs to be around locally so that it can be used to sign new logs, and if the attacker can get access to it, they can delete the existing log and fabricate a new one.
Yes, the hash needs to be sent off the machine, but it's a lot less data to send than sending every log.
logtools was created as a response to systemd claiming that its hash chains made their logging tamperproof. Unlike the systemd announcements, logtools calls out the weakness in this approach: the entire logfile can be recreated unless the hash is sent off the machine.
Posted Aug 24, 2012 0:49 UTC (Fri) by mezcalero (subscriber, #45103)
And no, with the systemd journal the sealing key (that remains on the system) cannot be used to "change history". The old sealing key is forgotten and erased when a new sealing key is calculated and you cannot get back from the new one to the old one.
Next time, please read what I wrote before commenting on its technical background.
Again: the FSS stuff is very different from what logtools does. Please read up. The journal's FSS stuff is much more useful than hash chains/signatures are. We do not require sending the topmost hash away.
Posted Aug 24, 2012 0:55 UTC (Fri) by dlang (✭ supporter ✭, #313)
If they did that, then the new logs that systemd tried to write to the files would look like the fake ones.
unless you are relying on systemd to know the current sealing key (and that won't work across a reboot unless systemd stores the sealing key somewhere on the system, at which point it's vulnerable to being replaced, unless it's sent off the box, just like the hash that logtools has)
Posted Aug 24, 2012 11:44 UTC (Fri) by DavidS (subscriber, #84675)
For verification an off-system key is needed, which is never stored on the system but only displayed at key-generation time.
Does it keep an intruder from rm -Rf / ? No.
Does it reliably raise a red flag in a system audit? Yes.
Could an Android app be written that could tell me every time range where a downloaded FSS log is trustworthy, without a third system? Yes.
Is that an improvement over having no verification (in security) and needing special logging equipment (in cost)? Yes.
Therefore I believe that FSS is a Pareto-optimal solution (http://en.wikipedia.org/wiki/Pareto_efficiency). There may be systems that are more secure and systems that are cheaper (intellectually, monetarily, and otherwise). There may even be systems that have similar security at similar cost.
Posted Aug 24, 2012 23:20 UTC (Fri) by dlang (✭ supporter ✭, #313)
If it's not stored somewhere off the system, then it can be replaced along with the file and you are no better off than the simple hashing that logtools does. If you can send the key off the system, you can send the logtools hash off the system.
If it's stored on the same filesystem as the 'sealed' file, then it can be replaced, along with the file it's protecting.
The only case I am seeing where this helps you is if the system has not been restarted and so you can query systemd to find out what it thinks the current key is to validate that it matches the file.
If you don't send the key off the box, I don't see why this is any better than the initial systemd hashing. Both will detect if a file has been edited after the fact, but neither will detect if a file has been forged in its entirety.
Posted Aug 24, 2012 23:42 UTC (Fri) by jake (editor, #205)
AIUI, the verification key can be used to calculate the sealing keys at any given point in time, so the old key isn't needed; just the log and the verification key (which is kept elsewhere).
Because the attacker can't calculate the keys from the past, they can't forge a validating log file that covers the past. They can delete it, or forge messages after the compromise, but they can't go back in time.
Posted Aug 25, 2012 0:03 UTC (Sat) by dlang (✭ supporter ✭, #313)
So you are saying that if the verification key is stored off the box, it can tell not only that every line is 'sealed' but also that none of the lines are missing?
This sounds like it is relying on some signing technology that's beyond what I'm aware of as the state of the art. This is possible, but I would have expected to hear about such a new technology through the security side of things rather than as something implemented in any FOSS project as the first word.
I am skeptical about this, because very similar claims were made about the hashing that systemd implemented
The ability to generate a signing key that you use once, derive a new key from, and then forget, while people verifying the signatures can not only validate them no matter which key you used, but also verify the order in which you signed the documents and confirm there are no gaps in that order, seems like it would be a revolution in digital signatures.
If it's something less than this, the limitations are likely to greatly weaken its value for logs as well.
If, for example, you have to have every document that was ever signed by the sender, then it's going to be much less useful. You don't keep all logs a system has ever generated; you rotate them and delete the old ones.
Posted Aug 25, 2012 0:31 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
That's how it works, I expect.
Posted Aug 25, 2012 0:50 UTC (Sat) by dlang (✭ supporter ✭, #313)
Are you saying that this uses hash chaining (similar to what logtools does) and then 'seals' the hash, and that it's the fact that the hash chaining detects gaps that makes this work, not the sealing?
Posted Aug 25, 2012 0:53 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
Each log block is hashed and sealed separately. Using hashes, of course, but each such block is independent of all the previous blocks.
Posted Aug 25, 2012 1:00 UTC (Sat) by dlang (✭ supporter ✭, #313)
so what stops someone from deleting an entire block?
Posted Aug 25, 2012 1:03 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
Posted Aug 25, 2012 1:10 UTC (Sat) by dlang (✭ supporter ✭, #313)
Posted Aug 25, 2012 1:15 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
I.e. if you delete a block then the administrator would be able to see that your current key can't be generated without skipping a block.
Posted Aug 25, 2012 1:26 UTC (Sat) by dlang (✭ supporter ✭, #313)
Posted Aug 25, 2012 1:30 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
You can't recreate key_3 from key_4 (which you know) and you can't modify the sealed block.
Posted Aug 25, 2012 1:33 UTC (Sat) by dlang (✭ supporter ✭, #313)
How is the admin supposed to know that this log file was supposed to start with key_1? Remember that logs rotate, so you cannot count on having logs since the beginning of time; make the numbers 748934 and up instead of 1 and up.
Posted Aug 25, 2012 1:40 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
I have not yet checked how log sealing actually works, so I'm making it up as I go along. But it certainly seems doable.
Posted Aug 25, 2012 1:47 UTC (Sat) by dlang (✭ supporter ✭, #313)
I don't see how keys can be time-dependent. As you note, there will be downtime when keys don't get rotated, and any record of those downtimes is subject to tampering.
Posted Aug 25, 2012 1:48 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
Posted Aug 25, 2012 2:01 UTC (Sat) by dlang (✭ supporter ✭, #313)
A) which of the keys was used to sign this block
B) which key should have been in use at that time
If such technology exists, I'm very interested in learning about it. But I would have expected that technology like this would be in use for things much more significant than just logs.
Posted Aug 25, 2012 2:12 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
> B) which key should have been in use at that time
That's true because of the key's construction.
Suppose that you have the initial key (key_0) constructed on 24 Aug 2012, 00:00 UTC. You encode the key update interval (say, 10 minutes) in this key and start using it. Every 10 minutes you then generate the next key and use it for signing.
Each signed block is numbered, without gaps.
If it turns out that your current system time is ahead of the key's time (because you've resumed your computer from a long sleep) then you insert a special "skip-time" block, signed by your existing (and now obsolete) key and the time-derived key. I.e. that block in essence authenticates a legitimate gap in the log history.
Again, I'm making it up as I go along :) So I might be incorrect.
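Since Cyberax is explicitly speculating, here is an equally speculative toy sketch of such a "skip-time" record (nothing here reflects journald's real on-disk format; all names and values are made up):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Hypothetical keys: the one in use before the machine went to sleep, and
# the one derived for the current wall-clock time after it woke up.
old_key = h(b"key in use before the sleep")
new_key = h(b"key derived for the current wall-clock time")

# The gap record names the legitimate hole in the log history...
gap = b"SKIP-TIME from=2012-08-24T00:10Z to=2012-08-25T03:00Z"

# ...and is sealed under *both* keys, tying the jump in time to the key
# that was current when logging stopped.
seal = h(old_key + new_key + gap)
```

A verifier that can regenerate both keys from the verification key can then recompute this seal and confirm that the gap was declared by the logger rather than cut out by an attacker.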
Posted Aug 25, 2012 3:22 UTC (Sat) by dlang (✭ supporter ✭, #313)
I am assuming that this is an asymmetric key, so knowing the validation key does not reveal the signing key. If that's not the case, all you would need to do is seed a PRNG and use its output as your symmetric key along with a generation number. That wouldn't be nearly as good as an asymmetric key.
Your 'skip time' log entry does you no good if you don't have logs from all time (what happens when you delete the 'skip time' log entry?). You would have to churn through however many key-generation cycles are needed to get to the valid key (and hope that time never goes backwards on your box), as was noted by someone else in this thread.
we can speculate and design what we think is a valid algorithm here, but we don't know if that's what's being implemented in this case.
Posted Aug 25, 2012 0:51 UTC (Sat) by nybble41 (subscriber, #55106)
K[0] = HASH(V)
K[n] = HASH(K[n-1])
S[m] = HASH(K[n] + M[m])
From V you can get any K[n], and from any K[n] you can get K[i] where i >= n, but not V or K[j] where j < n. From K[n] and M[m] you can get S[m], and with K[n], M[m], and S[m] you can verify that M[m] was logged while K[n] was known. Periodically you would calculate and store a new K[n] and securely wipe K[n-1].
Once an intruder is on the system they could wipe the logs (in full or selectively) and forge any _future_ messages, but not _past_ entries, which require signing keys that have already been wiped from the system.
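The scheme sketched above can be mocked up in a few lines (a toy illustration of the idea in this thread, not systemd's actual FSS code; all names and messages are invented):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

V = h(b"random verification secret")   # generated once, kept off-system
K0 = h(V)                              # K[0] = HASH(V)

def next_key(k: bytes) -> bytes:
    return h(k)                        # K[n] = HASH(K[n-1]); one-way

def seal(k: bytes, msg: bytes) -> bytes:
    return h(k + msg)                  # S[m] = HASH(K[n] + M[m])

# Logger side: seal a message under K[0], rotate, seal another under K[1].
k = K0
s0 = seal(k, b"login from 10.0.0.5")
k = next_key(k)                        # the old key is now "wiped"
s1 = seal(k, b"sudo by alice")

# Verifier side: regenerate the key chain from V and check both seals.
vk = h(V)
assert seal(vk, b"login from 10.0.0.5") == s0
vk = next_key(vk)
assert seal(vk, b"sudo by alice") == s1
```

Because SHA-256 is one-way, an attacker who captures the current key cannot walk the chain backwards to re-seal past entries, which is exactly the forward-security property being claimed.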
Posted Aug 25, 2012 0:55 UTC (Sat) by dlang (✭ supporter ✭, #313)
If an attacker can delete all log entries newer than X and put whatever they want after that point, the logs are again worthless.
These are the sorts of things that the initial hashing implementation from systemd was broken on, and why I am skeptical of further security claims from the same product.
Posted Aug 25, 2012 1:10 UTC (Sat) by nybble41 (subscriber, #55106)
There is some value in ensuring that an attacker can't fabricate log entries from before the break-in. If you want to prevent any individual log entries from being deleted, or (more likely) detect any such deletions, you have no choice but to send _some_ data (the log entries or an updated hash) to an external, uncompromised system.
> If an attacker can delete all log entires newer than X and put whatever they want after that point, the logs are again worthless.
That the attacker will be able to fabricate new log entries after the break-in is inevitable, even if you log to an external system. This scheme does prevent the scenario you describe for new log entries between time X (assuming X is before the attack starts) and the generation of the signing key in effect at the time of the actual compromise.
This could be combined with hash chaining to make it harder to get away with erasing individual log entries while keeping other, later, entries. For example, each message M[m] could include the signature of the previous message (S[m-1]). That would make any gaps rather obvious, without requiring you to have all previous log entries on hand to verify the hashes.
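The chaining idea described above (each message carrying the previous message's signature) can be sketched as follows; the key and messages are illustrative, not from any real implementation:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

key = h(b"K_n")                     # current signing key (illustrative)
msgs = [b"msg-1", b"msg-2", b"msg-3"]

# Sealing: each seal also covers the previous seal, S[m] = HASH(K + S[m-1] + M[m]).
seals = []
prev = b"\x00" * 32                 # sentinel for S[0]
for m in msgs:
    s = h(key + prev + m)
    seals.append(s)
    prev = s

# Verification replays the chain. If an attacker deletes msg-2 (and its
# seal), msg-3's stored seal no longer matches the recomputed one.
prev = b"\x00" * 32
ok = True
for m, s in zip([b"msg-1", b"msg-3"], [seals[0], seals[2]]):
    if h(key + prev + m) != s:
        ok = False
    prev = s
assert not ok                       # the gap is detected
```

The point is that gap detection comes from the chaining alone; the verifier never needs the deleted entries, only the surviving ones and their seals.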
Posted Aug 25, 2012 1:24 UTC (Sat) by dlang (✭ supporter ✭, #313)
This is part of my point
If you are required to send _some_ data, then simple hashing is enough (as implemented by logtools for example)
The claim here is that by sending the verification key off of the system at key creation time, there is no need to ever send any other data off of the system for you to know that your logs haven't been tampered with.
If you can truncate the file (either partially, or by recreating the entire logfile) and write bogus entries after the break-in, you can truncate the part of the file that shows the break-in and re-create everything after that point, making the logs look like they were created before the break-in.
Posted Aug 25, 2012 1:49 UTC (Sat) by nybble41 (subscriber, #55106)
Assuming the messages are chained as I described, that's only true if you're willing to accept a gap in the logs from the first deleted entry to the beginning of the valid interval for the K[n] in effect at the time you compromised the system.
> If you are required to send _some_ data, then simple hashing is enough
The "simple hashing" requires you to send data _continuously_ as the logs are updated. That's a more difficult problem than making a record of a single verification key once at the beginning of the log.
Posted Aug 25, 2012 1:59 UTC (Sat) by dlang (✭ supporter ✭, #313)
So you delete everything, and there's no 'valid' entry to compare against that would let you detect the gap.
I understand that they are claiming that this verification key eliminates the need to send any data off the box ever again, I'm just not believing it. If someone can point me to the peer reviewed papers that describe how the technology can work, I'll believe it.
Posted Aug 25, 2012 2:12 UTC (Sat) by nybble41 (subscriber, #55106)
If you deleted everything then I wouldn't need a "valid" entry to compare against; the simple lack of previous logs would be plenty suspicious by itself.
Posted Aug 25, 2012 0:59 UTC (Sat) by dlang (✭ supporter ✭, #313)
Only if the person doing the verification can know that those log entries needed to be signed by an old key.
If the signing doesn't create gaps, what stops someone from creating new log entries that look like old ones, but are signed by the newer key?
Remember that everything systemd relies on as part of its validation can be forged by a root user (including SCM_CREDENTIALS, which even systemd forges when forwarding the logs to syslog).
Posted Aug 25, 2012 1:12 UTC (Sat) by nybble41 (subscriber, #55106)
Posted Aug 25, 2012 1:16 UTC (Sat) by dlang (✭ supporter ✭, #313)
How can it tell key 20 from key 2000? Or, more precisely, how can it tell that something was signed by key 20 rather than by key 2000?
Posted Aug 25, 2012 1:33 UTC (Sat) by nybble41 (subscriber, #55106)
The keys follow a very specific pseudo-random sequence. K[2000] = HASH^2000(K[0]) is an entirely different value from K[20] = HASH^20(K[0]). Using a different key value will result in a different message signature S[m] = HASH(K[n] + M[m]). Assuming the message includes a timestamp t[m], keys are rotated every ten seconds, and you have the verification key V and the initial time t0, then n = floor((t[m] - t0) / (10 seconds)) and you would expect message M[m] to have S[m] = HASH(K[n] + M[m]).
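The timestamp-to-key-index rule above is simple to demonstrate (a sketch with invented parameters, not systemd's actual format):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

t0 = 1_345_852_800            # assumed epoch start of the log
interval = 10                 # seconds per key, as in the example
K0 = h(b"seed")               # K[0]; the verifier derives this from V

def key_for_time(t: int):
    """Return (n, K[n]) where n = floor((t - t0) / interval)."""
    n = (t - t0) // interval
    k = K0
    for _ in range(n):        # walk the one-way chain forward n steps
        k = h(k)
    return n, k

# A message stamped 205 seconds after t0 must verify under K[20];
# a key from a different index produces a different seal.
n, k = key_for_time(t0 + 205)
assert n == 20
msg = b"event"
seal = h(k + msg)
_, wrong_k = key_for_time(t0 + 2000 * interval)   # K[2000]
assert h(wrong_k + msg) != seal
```

So the verifier never has to guess which key a message claims: the message's own timestamp pins it to exactly one index in the chain.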
Posted Aug 25, 2012 1:37 UTC (Sat) by dlang (✭ supporter ✭, #313)
Even if you can do this, you still have the question of how you know that a message should have been signed by key 20 instead of key 2000, or better still, by key 934503459 instead of 934505459.
Posted Aug 25, 2012 1:43 UTC (Sat) by dlang (✭ supporter ✭, #313)
This is an honest question. This seems like it would be a breakthrough in non-repudiation if you could sign one message with a key and send it to person A, a second with the next key to person B, and a third with the next key to person C, and thereby prove that message B was sent between message A and message C.
As far as I know, all signing services currently boil down to "we include the timestamp in what we sign, and since you trust our signature, you trust the timestamp", with the trust in the signature resting on trusting that nobody else is able to sign anything that can be verified with the verification key you have.
It's very possible that I am ignorant of some digital signing technology here, but this seems like such a useful combination of features that I would have expected to have heard that such things are at least possible, even without knowing the details.
Posted Aug 25, 2012 2:05 UTC (Sat) by nybble41 (subscriber, #55106)
In this scheme you already know which key the message should have been signed with, so you only need to check that one key. However, if all else fails you could generate every signing key from the verification key up to the present and calculate the message hash for each key, checking it against the provided signature.
> Even if you can do this, you still have the question of how do you know that message should have been signed by key 20 instead of key 2000, or better still, how it should have been signed by key 934503459 instead of 934505459.
I already answered this at least twice. The initial time t0 and key change interval are known. The message includes a timestamp t[m]. If it was really logged at that time, the signing key will be K[floor((t[m]-t0)/interval)].
Since the question has come up, the logging system knows t0 and n in addition to K[n], and rotates keys however many times are necessary after downtime until n is correct for the timestamp of the next log entry.
Posted Aug 25, 2012 2:11 UTC (Sat) by dlang (✭ supporter ✭, #313)
You seem to know about this technology, can you point me at a paper on it?
Posted Aug 25, 2012 2:14 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
Posted Aug 25, 2012 2:19 UTC (Sat) by nybble41 (subscriber, #55106)
As for "knowing this technology", I have no idea how systemd actually implements this. I only presented one possible implementation based on the design requirements. Obviously, since I made the implementation up on the spot, I can't point you to any papers; everything there is to know is in this thread.
Posted Aug 25, 2012 3:27 UTC (Sat) by dlang (✭ supporter ✭, #313)
However, I was assuming that this was some form of asymmetric key, since that's the norm for signing something. The problem with using a symmetric key is that the person trying to validate the signature is also in a position to forge the signature.
Posted Aug 25, 2012 3:41 UTC (Sat) by dlang (✭ supporter ✭, #313)
if you are on key 9834750927 and need to iterate through the key-generation routine that many times to get from the starting validation key to the key needed to validate the file, it's going to take a long time.
Posted Aug 25, 2012 4:54 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
And you can easily walk through the keys. If you do log sealing every minute then key 9834750927 would be reached some time after the year 20711.
Given that AES on a modern CPU can produce about 1 GB of data per second, it'll take only a few minutes to walk to that point.
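The arithmetic behind that year can be checked directly: with one sealing key per minute starting from 2012, key number 9,834,750,927 corresponds to roughly 18,700 years of uptime.

```python
# One key per minute: convert the key index to years of elapsed time.
minutes = 9_834_750_927
years = minutes / (60 * 24 * 365.25)   # minutes per (average) year

assert 18_600 < years < 18_800          # about 18,700 years
assert abs((2012 + years) - 20711) < 10  # lands near the year 20711
```

This is why the O(n) walk is harmless in practice: even an absurdly large key index is only a few billion hash iterations, which commodity hardware chews through quickly.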
Posted Aug 25, 2012 19:00 UTC (Sat) by nybble41 (subscriber, #55106)
Yes, but that doesn't matter here, since the person doing the validation is also the person who administers the server; they're _already_ in a position to forge log messages, if they cared to do so.
You are correct that the signing key is basically just the output from a PRNG, but the PRNG does need to have a special property that some PRNGs lack: the computation must only work in the forward direction. Given the internal state of the PRNG, it must not be possible to go back to a previous state and generate a past signing key.
For example, both the following functions will produce a stream of pseudo-random numbers:
F[0] = HASH(seed)
F[n] = HASH(F[n-1])
G[n] = HASH(seed + n)
However, only the former PRNG would be suitable, because computing G[n] requires the original seed value, and given the seed you can compute any G[n], past or future.
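The distinction between the two PRNGs above can be made concrete (an illustrative sketch; seeds and indices are made up):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

seed = b"secret seed"

# F[0] = HASH(seed); F[n] = HASH(F[n-1])  -- suitable: forward-only.
f = h(seed)
f_states = [f]
for _ in range(5):
    f = h(f)
    f_states.append(f)          # f_states[n] is F[n]

# G[n] = HASH(seed + n)  -- unsuitable: the seed grants random access.
def G(n: int) -> bytes:
    return h(seed + n.to_bytes(4, "big"))

g_old = G(2)                    # a "past" key...
assert G(2) == g_old            # ...trivially regenerated from the seed

# With F, state n is derived only from state n-1; holding f_states[5]
# gives no way (short of inverting SHA-256) to recover f_states[4].
assert f_states[5] == h(f_states[4])
```

For forward security the logger must use the F-style chain and wipe each state after deriving the next; keeping the seed around anywhere on the box would collapse it into the G case.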
Posted Aug 25, 2012 20:22 UTC (Sat) by ikm (subscriber, #493)
May I suggest S[m] = HMAC(K[n], M[m])?
Also, calculating K[n] from V is O(n). If we use the systemd default of 15 minutes per key, we get 35,040 iterations per year, which doesn't seem bad. If we instead narrow it down to 10 seconds, as the article suggested we could, we get the much worse figure of 3,153,600 iterations per year, which might get a little expensive, especially if the verification is done on an Android device. Other than that, the scheme you've proposed seems sound and may even be the actual scheme systemd uses.
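The HMAC suggestion and the iteration arithmetic above can both be sketched briefly (key and message are illustrative):

```python
import hashlib
import hmac

# S[m] = HMAC(K[n], M[m]) instead of a bare hash of the concatenation:
# HMAC's keyed construction avoids length-extension-style pitfalls.
K = hashlib.sha256(b"current signing key").digest()
msg = b"log entry"
seal = hmac.new(K, msg, hashlib.sha256).digest()

# Verification: recompute and compare in constant time.
assert hmac.compare_digest(seal, hmac.new(K, msg, hashlib.sha256).digest())

# The iteration counts quoted above, for walking K[0] -> K[n] in one year:
per_year_15min = 4 * 24 * 365          # one key per 15 minutes
per_year_10sec = 6 * 60 * 24 * 365     # one key per 10 seconds
assert per_year_15min == 35_040
assert per_year_10sec == 3_153_600
```

Either interval is well within what even a phone can recompute; the 10-second case just makes the walk noticeably slower, as the comment says.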
Posted Aug 25, 2012 20:38 UTC (Sat) by ikm (subscriber, #493)
Posted Aug 25, 2012 21:13 UTC (Sat) by ikm (subscriber, #493)
The only missing piece is what to do in case the system goes down. All log data before the system came up can then be erased with a plausible explanation that the system was down at that time. If an attacker gains entry, he can erase all traces of his activity and hard-reboot the machine once he's done, making everything look like it was a hardware failure. I wonder if journald accounts for that.
Posted Aug 28, 2012 14:48 UTC (Tue) by mathstuf (subscriber, #69389)
Well, systemd is the first thing running in these situations. Conceptually, it could do the sealing before starting anything else. The only leak I can think of there is if systemd itself is compromised in which case you're SOL anyways. In the general case, it might be an issue.
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds