
zcache: a compressed page cache

Posted Jul 29, 2010 6:45 UTC (Thu) by Tara_Li (subscriber, #26706)
Parent article: zcache: a compressed page cache

Hard drives have gotten *SO* slow, relative to CPU/memory speeds, that I have to wonder whether there's any question about the value of compressing storage. And it's not like we need to use the absolute best compressor - anything reasonable should be more than good enough to get us a big boost. I know right now, my Ubuntu 10.04 system is using /dev/ramzswap0 (which seems to be some kind of compressing RAM swap system) to great effect - I can almost instantly tell when swap starts having to hit the hard drive.
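
For anyone curious what their own machine is doing, the active swap devices show up in /proc/swaps; the sketch below just reads that file and flags names that look like RAM-backed compressed swap (ramzswap/zram are examples of such names, not something your system necessarily has).

    # Minimal sketch: list active swap devices from /proc/swaps and flag the
    # ones whose names look like compressed RAM swap (ramzswap/zram).
    with open("/proc/swaps") as f:
        lines = f.read().splitlines()

    print(lines[0])  # header: Filename Type Size Used Priority
    for line in lines[1:]:
        device = line.split()[0]
        marker = "  <- compressed RAM swap?" if ("ramzswap" in device or "zram" in device) else ""
        print(line + marker)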



zcache: a compressed page cache

Posted Jul 29, 2010 10:01 UTC (Thu) by saffroy (subscriber, #43999) [Link]

Of course, there is a question about the value of compressing stored data.

Without compression, I/O can occur with little participation from the CPU, which "only" has to set up data structures, do a bit of device I/O, and let DMA move all the data while the CPU does other useful work for you. Of course, uncompressed data is larger and takes more time to read or write, but that's not the whole story.

With compression, the CPU has to process each and every byte of data that's read or written. Your compression algorithm had better be fast and efficient on your data. You consume a LOT more CPU time than before, and some more memory bandwidth, and you probably thrash your memory caches heavily. You may also drain your laptop battery faster (depending on whether large disk I/O or compression plus smaller I/O needs more power).

Now, I definitely support the idea of compressing data before I/O; it's just that it has to be carefully thought out to be a definite improvement, because it does not work equally well on every workload. For instance, I bet the bitmaps of the photos I'm editing in the Gimp are hard to compress quickly; how will the kernel know that it should probably not try to compress them?
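
As a rough userspace illustration of that worry (zlib standing in for the kernel's LZO-class compressor here, which is an assumption rather than what zcache actually uses), an already-dense page such as a decoded photo barely shrinks but still costs CPU time to try:

    # Compare a highly compressible page with a random, photo-like one.
    import os
    import time
    import zlib

    PAGE = 4096  # one 4 KiB page

    def measure(label, data):
        start = time.perf_counter()
        compressed = zlib.compress(data, 1)  # fastest zlib setting
        elapsed = time.perf_counter() - start
        print(f"{label}: {len(data)} -> {len(compressed)} bytes "
              f"in {elapsed * 1e6:.0f} us")

    measure("text-like page", b"the quick brown fox " * (PAGE // 20))
    measure("random (photo-like) page", os.urandom(PAGE))

The random page typically comes out no smaller (it can even grow slightly), yet the CPU still had to walk every byte, which is exactly the case where a kernel would want to give up and store the page uncompressed.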

zcache: a compressed page cache

Posted Jul 29, 2010 23:56 UTC (Thu) by jzbiciak (subscriber, #5246) [Link]

The tradeoff skews very heavily in favor of compression these days, at least if you're using a traditional HD.

Hard drive seek times are measured in multiple milliseconds. 1ms is 1 million cycles on a 1GHz CPU. You can do a lot in that time. For every seek you eliminate, you buy back millions and millions of CPU cycles.
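
Putting rough numbers on that (every figure below is an assumption, using the 1GHz CPU from above and a generic LZO-class cost per page):

    # Back-of-envelope: how many 4 KiB pages could be compressed in the time
    # one disk seek takes? All constants are assumptions, not measurements.
    SEEK_TIME_S = 8e-3        # ~8 ms average seek on a traditional hard drive
    CPU_HZ = 1e9              # the 1 GHz CPU from the comment above
    CYCLES_PER_PAGE = 20_000  # rough guess for LZO-class compression of a page

    cycles_per_seek = SEEK_TIME_S * CPU_HZ
    pages_per_seek = cycles_per_seek / CYCLES_PER_PAGE
    print(f"cycles lost to one seek:          {cycles_per_seek:,.0f}")
    print(f"pages compressible in that time:  {pages_per_seek:,.0f}")

Even if the per-page estimate is off by an order of magnitude, one avoided seek still pays for compressing dozens to hundreds of pages.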

As for your GIMP working set? The main gain you'll get there is from the other data in your system compressing, making room for GIMP. And besides, GIMP does its own paging via its tile cache, doesn't it?

zcache: a compressed page cache

Posted Jul 30, 2010 20:09 UTC (Fri) by joern (subscriber, #22392) [Link]

Actually, hard drives penalize compression as well. You have plenty of bandwidth to spare anyway, so compressing the data to roughly half its size does not help much. But with compression, it becomes nearly impossible to avoid fragmentation: whenever the content changes, the compressed size changes.
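
A small userspace demonstration of that point, with zlib standing in for whatever compressor a filesystem might use:

    # Edit a few bytes in a block and its compressed size (usually) changes,
    # so the block can no longer simply be rewritten in place.
    import zlib

    block = bytearray(b"some moderately repetitive log line\n" * 100)
    before = len(zlib.compress(bytes(block)))

    block[50:60] = b"0123456789"  # change a handful of bytes
    after = len(zlib.compress(bytes(block)))

    print(f"compressed size before edit: {before} bytes")
    print(f"compressed size after edit:  {after} bytes")

Once the new compressed block no longer fits where the old one sat, the filesystem has to relocate it or leave a hole, and that is where the fragmentation comes from.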

With regard to swap, compression may well end up reducing seeks. But for file-backed data, compression costs a lot more performance than just the CPU cycles spent in zlib or wherever.

zcache: a compressed page cache

Posted Jul 31, 2010 3:00 UTC (Sat) by jzbiciak (subscriber, #5246) [Link]

The tradeoff with swap is that compression can delay (and maybe completely avoid) hitting the disk at all. There are plenty of workloads that compression can make fit in RAM, whereas without compression they would hit the disk a fair bit.

If you take hitting the disk as a given, as is the case with a compressed filesystem holding compressed files, then the tradeoffs get much more complex. This is especially true when you consider that most files are small relative to disk block sizes.

But with VM, it's about dirty anonymous pages (heap and such). The only reason to push these to disk is that you're trying to use more RAM than is available. Disk acts as RAM overflow. If compression can push the "swap to disk" threshold out, then it is a clearer win.
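
A toy model of that threshold argument, with every number below an assumption rather than a measurement:

    # How much anonymous data fits before touching the disk if a slice of RAM
    # is given to a compressed page pool? Purely illustrative numbers.
    RAM_MIB = 2048        # physical RAM
    POOL_FRACTION = 0.25  # share of RAM handed to the compressed pool
    RATIO = 2.0           # assumed average compression ratio for swapped pages

    pool = RAM_MIB * POOL_FRACTION
    effective = (RAM_MIB - pool) + pool * RATIO

    print(f"plain RAM only:        {RAM_MIB:.0f} MiB of pages")
    print(f"with compressed pool:  {effective:.0f} MiB of pages "
          f"({pool:.0f} MiB pool holding {pool * RATIO:.0f} MiB of data)")

With a 2:1 ratio and a quarter of RAM in the pool, the machine holds about 25% more pages before the first write to disk; once the working set exceeds even that, the disk is back in the picture and the filesystem-style tradeoffs apply again.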

zcache: a compressed page cache

Posted Jul 31, 2010 6:42 UTC (Sat) by saffroy (subscriber, #43999) [Link]

"There are plenty of workloads that compression can make fit in RAM"

Interesting. Got more details or pointers about this?

zcache: a compressed page cache

Posted Jul 31, 2010 14:34 UTC (Sat) by jzbiciak (subscriber, #5246) [Link]

Here's an older one about compcache: https://code.google.com/p/compcache/wiki/Performance/LTSP...

That at least illustrates the principle that compression can keep you from hitting the disk and also increase the size of the workload a machine can handle effectively.

zcache: a compressed page cache

Posted Jul 31, 2010 3:04 UTC (Sat) by jzbiciak (subscriber, #5246) [Link]

And I should add that, with file-backed pages, it sounds like it just compresses the in-memory copies. This defers writeback, which can lead to more contiguous allocations (in the case of new files or files that are being extended) or, for transient files, fewer temporary files pushing blocks to disk.

