The compression that they talk about here sounds very impressive and sophisticated, but looking for identical strings of text and replacing them with shorter placeholders (i.e. pointers) is exactly what gzip, bzip2, etc. do today. I would be surprised if the journal software was really able to do much better.
With many TB of real-world logs, I'm getting the following results:
gzip -9 gives me 10:1 compression
bzip2 -9 gives me 20:1 compression but is significantly slower
doing a zgrep or zcat from a compressed file is actually faster than grep or cat from the uncompressed file (on a fairly sophisticated disk array).
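If you want to try the same comparison on your own logs, something along these lines should do it (app.log and the ERROR pattern are just placeholders, not the actual files or searches I used):

    gzip -9 -c app.log > app.log.gz            # compress a copy, leaving the original in place
    bzip2 -9 -c app.log > app.log.bz2          # same with bzip2
    ls -l app.log app.log.gz app.log.bz2       # compare the sizes to get the compression ratios
    time grep ERROR app.log > /dev/null        # baseline: search the uncompressed file
    time zgrep ERROR app.log.gz > /dev/null    # same search straight through the compressed copy

For the timing comparison to mean anything, run each search a couple of times (or drop the page cache between runs) so one side isn't just benefiting from the file already being in memory.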