
Compcache: in-memory compressed swapping

Compcache: in-memory compressed swapping

Posted May 26, 2009 20:15 UTC (Tue) by JoeF (guest, #4486)
Parent article: Compcache: in-memory compressed swapping

I have noticed compcache on my Ubuntu-based EasyPeasy installation on my EeePC, which loads it by default.
I had some issues with it wrt hibernation, though. When I tried to hibernate, the system would complain about not enough swap space being available, even though the disk-based swap is big enough. I "fixed" that for now with swapoff /dev/ramzswap.



Compcache: in-memory compressed swapping

Posted May 26, 2009 20:17 UTC (Tue) by BrucePerens (guest, #2510) [Link] (15 responses)

Oops. You need disk-based backing store to hibernate. Not yet a feature?

Compcache: in-memory compressed swapping

Posted May 26, 2009 20:21 UTC (Tue) by BrucePerens (guest, #2510) [Link] (1 response)

I see it is a feature. Maybe it's not configured correctly? Also, the code would have to be hibernation-aware, because it has to push its entire RAM out to backing store before hibernating.

Compcache: in-memory compressed swapping

Posted May 26, 2009 20:29 UTC (Tue) by JoeF (guest, #4486) [Link]

"Maybe it's not configured correctly?"

Could be. I haven't checked with the EasyPeasy people yet. I am using all defaults, though.
I did notice that the priority of /dev/ramzswap is set to 100. I tried setting the priority of the on-disk swap space higher than that, but it didn't help with hibernation.
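For anyone who wants to check which device the kernel prefers, the priorities show up in /proc/swaps. A small Python sketch of how the highest-priority entry wins (the sample output below is made up for illustration, not taken from a real machine):

```python
# Sketch: which swap device the kernel prefers, as reported in
# /proc/swaps. The sample text below is made up for illustration.

def parse_swaps(text):
    """Parse /proc/swaps-style output into (name, type, priority) tuples."""
    entries = []
    for line in text.strip().splitlines()[1:]:   # skip the header row
        fields = line.split()
        entries.append((fields[0], fields[1], int(fields[4])))
    return entries

sample = """\
Filename        Type        Size     Used  Priority
/dev/ramzswap0  partition   262140   0     100
/dev/sda5       partition   4194300  0     -1
"""

swaps = parse_swaps(sample)
# New swap-outs go to the highest-priority device first:
preferred = max(swaps, key=lambda e: e[2])
print(preferred[0])   # /dev/ramzswap0
```

Priorities can be set with swapon -p or the pri= option in /etc/fstab; hibernation apparently ignores them, though.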

Compcache: in-memory compressed swapping

Posted May 26, 2009 20:23 UTC (Tue) by JoeF (guest, #4486) [Link] (12 responses)

For hibernate, the ram-based swap of course should be bypassed.
But I guess the hibernate code just takes the first swap device and goes with that, assuming that all swap space is disk-based.

Compcache: in-memory compressed swapping

Posted May 26, 2009 20:29 UTC (Tue) by BrucePerens (guest, #2510) [Link] (5 responses)

> For hibernate, the ram-based swap of course should be bypassed.
It's worse than that. Memory belonging to the ram-based swap medium would be marked as not itself swappable. Otherwise, you would get in a loop. So, it has to back itself up to its private backing store device before hibernating, and restore itself before the OS is allowed to resume. It can't just stand by and passively allow another swap device to take care of its pages.

Compcache: in-memory compressed swapping

Posted May 27, 2009 1:44 UTC (Wed) by nitingupta (guest, #53817) [Link] (2 responses)

Yes, true. Swapping compressed memory out to a private swap disk is currently under development. Once that is done, it can be made hibernation-aware.

Compcache: in-memory compressed swapping

Posted May 27, 2009 6:17 UTC (Wed) by avik (guest, #704) [Link] (1 response)

swapoff /dev/ramzswap

Should allow hibernation.

Compcache: in-memory compressed swapping

Posted May 27, 2009 13:41 UTC (Wed) by JoeF (guest, #4486) [Link]

Yup. That's what I do right now.

Could compcache improve restore after STD responsiveness?

Posted May 27, 2009 6:50 UTC (Wed) by rvfh (guest, #31018) [Link] (1 response)

Interestingly, storing compressed memory to swap in order to hibernate reminds me a lot of TuxOnIce (and maybe now uswsusp, but that needs user-space magic), which could save much more than the default swsusp thanks to compression...

Could ramzswap help have a more responsive system after restore too? Maybe with a little tweaking?

Could compcache improve restore after STD responsiveness?

Posted Apr 15, 2010 10:13 UTC (Thu) by dgm (subscriber, #49227) [Link]

Also, apparently compression would add very little overhead to the swap-to-disk case, allowing faster I/O and better use of swap space.

Maybe this feature should be considered for the generic swap code?

Compcache: in-memory compressed swapping

Posted May 27, 2009 7:23 UTC (Wed) by macc (guest, #510) [Link] (5 responses)

Shouldn't that be layered?

mem -> compressed swap|blockdev --> to disk

Switching to disk I/O should work for hibernation too,
as long as compression is faster than disk access (true).

MACC

Compcache: in-memory compressed swapping

Posted May 28, 2009 10:24 UTC (Thu) by rvfh (guest, #31018) [Link] (4 responses)

Maybe the swap partition should be compressed too, so we don't end up with

mem -> compressed swap -> uncompressed swap on disk

Compcache: in-memory compressed swapping

Posted May 29, 2009 4:15 UTC (Fri) by nitingupta (guest, #53817) [Link] (2 responses)

> Maybe the swap partition should be compressed too, so we don't
> mem -> compressed swap -> uncompressed swap on disk

This is the idea I'm working on: swap out entire xvmalloc pages -- each containing multiple compressed pages -- to the swap disk. The aim here is not to save disk space but to improve performance.
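A toy Python sketch of the packing idea (the names and the greedy strategy are illustrative, not the real xvmalloc layout; it also assumes each compressed page fits in a slab):

```python
import zlib

PAGE_SIZE = 4096

def pack_pages(pages):
    """Greedy sketch: compress 4 KiB pages and pack the results into
    PAGE_SIZE slabs, so each slab written to disk carries several
    (otherwise unrelated) compressed pages."""
    slabs, current = [], b""
    for page in pages:
        comp = zlib.compress(page)
        if len(current) + len(comp) > PAGE_SIZE:
            slabs.append(current)
            current = b""
        current += comp
    if current:
        slabs.append(current)
    return slabs

# Highly compressible pages pack many-to-one:
pages = [bytes([i]) * PAGE_SIZE for i in range(16)]
slabs = pack_pages(pages)
print(len(pages), "pages ->", len(slabs), "slab(s)")
```

Each disk write then frees several memory pages at once, which is where the performance win comes from.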

Would xvmalloc and swap readahead play nice?

Posted Jun 6, 2009 7:37 UTC (Sat) by gmatht (guest, #58961) [Link] (1 response)

Wouldn't swapping out xvmalloc pages prevent swap readahead from being of any use, given that adjacent pages are unlikely to be allocated in adjacent positions by xvmalloc? On a conventional HDD, reading an uncompressed page should take only ~0.1ms while seeking to the page should take ~10ms. My concern is that optimizing the 0.1ms while forcing a 10ms seek for every page would be a big performance loss.

There seems to be a big difference between the optimal layout for a memory allocator where seek is not a problem and the optimal layout on a conventional hard disk where seek times dwarf virtually everything else.

If, OTOH, adjacent pages were written out in adjacent positions on disk this could *halve* the cost of swap readahead; both halving the time required to read in the extra pages and also halving the memory used by pages that were read from disk but not used.

(I can see just swapping out xvmalloc pages being a win for SSD, where seek is not a problem for random reads. Also clearly if you are writing out an xvmalloc page there should be very little overhead, and you know you will get 4k of real memory back for each page swapped out. Even so, wouldn't you still have to read in the entire 4K xvmalloc page just to access one of the compressed pages stored on that page?)
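The concern is easy to put in numbers. A back-of-envelope sketch using the figures quoted above (~10ms per seek, ~0.1ms per page read), in microseconds so the arithmetic stays exact:

```python
# Back-of-envelope figures: ~10 ms per seek, ~0.1 ms to read one page
# once the head is in place (expressed in microseconds).
SEEK_US, READ_US = 10_000, 100

def readahead_cost(pages, seeks):
    """Total time to read `pages` pages using `seeks` head movements."""
    return seeks * SEEK_US + pages * READ_US

# 8-page readahead with the pages adjacent on disk: one seek.
sequential = readahead_cost(8, 1)    # 10_800 us
# The same 8 pages scattered by the allocator: one seek per page.
scattered = readahead_cost(8, 8)     # 80_800 us
print(sequential, scattered)
```

Under these assumed figures, the scattered layout is roughly 7.5x slower for the same eight pages, which is why the seek term dominates.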

Would xvmalloc and swap readahead play nice?

Posted Jun 7, 2009 5:22 UTC (Sun) by nitingupta (guest, #53817) [Link]

> Wouldn't swapping out xvmalloc pages prevent swap readahead from being of any use, given that adjacent pages are unlikely to be allocated in adjacent positions by xvmalloc? On a conventional HDD, reading an uncompressed page should take only ~0.1ms while seeking to the page should take ~10ms. My concern is that optimizing the 0.1ms while forcing a 10ms seek for every page would be a big performance loss.

With compressed swapping to disk, seek times will also drop, since the pages are spread over a smaller area of the disk. Still, swapping out xvmalloc pages is in general expected to incur higher swap-read overhead due to the greater number of seeks involved: an xvmalloc page contains mostly unrelated pages.

> There seems to be a big difference between the optimal layout for a memory allocator where seek is not a problem and the optimal layout on a conventional hard disk where seek times dwarf virtually everything else.

Yes, this is the whole problem. Theoretically, it could be solved by first collecting physically contiguous pages (w.r.t. disk sectors) into a single memory page and then swapping that page to disk. However, when pages are swapped out this way, we are not guaranteed to be able to free even a single page. It also increases in-memory fragmentation, since these pages are taken out of random xvmalloc pages. So, after lots of such pages have been swapped out, we have to do some in-memory defragmentation (not yet implemented) to bring fragmentation down and free pages.

> If, OTOH, adjacent pages were written out in adjacent positions on disk this could *halve* the cost of swap readahead; both halving the time required to read in the extra pages and also halving the memory used by pages that were read from disk but not used.

In general, swap readahead in its present state is almost meaningless when most of the pages are in (compressed) memory, since decompressing a page is almost instant. It would be more useful to implement some sort of prefetch ioctl for ramzswap, so that it prefetches pages from the backing swap and keeps them compressed in memory. But which pages to prefetch? That will need more study and experimentation.

> (I can see just swapping out xvmalloc pages being a win for SSD, where seek is not a problem for random reads. Also clearly if you are writing out an xvmalloc page there should be very little overhead, and you know you will get 4k of real memory back for each page swapped out. Even so, wouldn't you still have to read in the entire 4K xvmalloc page just to access one of the compressed pages stored on that page?)

Yes, reading in a single xvmalloc page will bring a bunch of unrelated pages into memory. These additional pages may be kept or discarded based on a configurable/hardcoded policy in ramzswap.

Compcache: in-memory compressed swapping

Posted May 29, 2009 5:57 UTC (Fri) by zmi (guest, #4829) [Link]

> mem -> compressed swap -> uncompressed swap on disk
> [and from the article:]
> allow swapping of xvmalloc memory to physical swap disk

That was my immediate idea when reading the article. I'd love it to be a
layer inserted just before the normal swap disks, absolutely transparent.
That way, (compressed) pages not used for a long time can be pushed to
disk swap at low I/O rates (or during low-I/O periods, if that's easily
measurable). And when too much real memory is in use, ramzswap can move
pages to disk swap (maybe just as a last resort to recover before OOM
conditions).

The disk swap should support compressed pages directly; then you could
also drop (or at least relax) the "if not enough compression gain, store
uncompressed to disk" rule and write pages that don't compress well to
disk swap anyway, in their compressed state. That should lower I/O,
which never hurts :-)

If this feature arrives, vm.swappiness can be increased to swap more
eagerly. Currently I lower it to 10 on my desktop (8GB RAM) because,
with the default value, the disk swapping in the morning after the
nightly backup used to be a nightmare: the system was very unresponsive
for quite a long time (at least it feels that way before the first
coffee *g*). And that's already on a 10krpm VelociRaptor drive.

I wonder if ramzswap will help on my 8GB desktop, and want to test it.
(already running now)

BTW: shouldn't there be a compressed name also? Like zap or just zp ;-)


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds