Compcache: in-memory compressed swapping
Posted May 26, 2009 20:15 UTC (Tue) by JoeF (guest, #4486)
Parent article: Compcache: in-memory compressed swapping
I had some issues with it wrt hibernation, though. When I tried to hibernate, the system would complain about not enough swap space being available, even though the disk-based swap is big enough. I "fixed" that for now with swapoff /dev/ramzswap.
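A minimal sketch of that workaround, assuming the device really is /dev/ramzswap and that hibernation is triggered by writing to /sys/power/state (substitute the distribution's own hibernate command if preferred); the priority of 100 matches the value ramzswap is reported to use later in this thread:

    cat /proc/swaps                # list active swap devices and their priorities
    swapoff /dev/ramzswap          # take the compressed swap offline before hibernating
    echo disk > /sys/power/state   # hibernate; only disk-backed swap is left for the image
    swapon -p 100 /dev/ramzswap    # after resume, bring the compressed swap back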
Posted May 26, 2009 20:17 UTC (Tue) by BrucePerens (guest, #2510)

Oops. You need disk-based backing store to hibernate. Not yet a feature?
Posted May 26, 2009 20:21 UTC (Tue) by BrucePerens (guest, #2510)

I see it is a feature. Maybe it's not configured correctly? Also, the code would have to be hibernation-aware, because it has to push its entire RAM out to backing store before hibernating.
Posted May 26, 2009 20:29 UTC (Tue) by JoeF (guest, #4486)
Could be. I haven't checked with the EasyPeasy people yet. I am using all defaults, though.
Posted May 26, 2009 20:23 UTC (Tue) by JoeF (guest, #4486)

I did notice that the priority of /dev/ramzswap is being set to 100. I tried setting the priority of the on-disk swap space higher than that, but it didn't help with hibernation.

But I guess the hibernate code just takes the first swap device and goes with that, assuming that all swap space is disk-based. For hibernate, the ram-based swap of course should be bypassed.
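For reference, swap priorities can be inspected and changed like this (the /dev/sda2 device name is only an example); priority affects where newly swapped pages go, which may be why changing it made no difference to which device the hibernation code picked:

    cat /proc/swaps       # the Priority column shows the order the kernel uses
    swapoff /dev/sda2     # a device has to be re-added to change its priority
    swapon -p 200 /dev/sda2   # higher than ramzswap's 100, so the disk is preferred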
Posted May 26, 2009 20:29 UTC (Tue) by BrucePerens (guest, #2510)

It's worse than that. Memory belonging to the ram-based swap medium would be marked as not itself swappable. Otherwise, you would get in a loop. So, it has to back itself up to its private backing store device before hibernating, and restore itself before the OS is allowed to resume. It can't just stand by and passively allow another swap device to take care of its pages.
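Until ramzswap learns to flush itself around hibernation, the swapoff workaround above can be automated with a suspend hook. This assumes a pm-utils style setup (executable hooks under /etc/pm/sleep.d) and the /dev/ramzswap name and priority used earlier; the file name is made up:

    #!/bin/sh
    # /etc/pm/sleep.d/05-ramzswap (hypothetical name): drop the RAM-backed swap
    # before the hibernation image is written, bring it back on thaw.
    case "$1" in
        hibernate)
            swapoff /dev/ramzswap
            ;;
        thaw)
            swapon -p 100 /dev/ramzswap
            ;;
    esac
    exit 0

Note that swapoff just pushes the compressed pages back into ordinary memory (or onto other active swap) rather than saving ramzswap's state, so this only approximates the flush-to-backing-store behaviour described above.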
Posted May 27, 2009 1:44 UTC (Wed) by nitingupta (guest, #53817)
Posted May 27, 2009 6:17 UTC (Wed) by avik (guest, #704)
Should allow hibernation.
Posted May 27, 2009 13:41 UTC (Wed) by JoeF (guest, #4486)
Could compcache improve restore after STD responsiveness?
Posted May 27, 2009 6:50 UTC (Wed) by rvfh (guest, #31018)
Could ramzswap help make the system more responsive after restore too? Maybe with a little tweaking?
Posted Apr 15, 2010 10:13 UTC (Thu) by dgm (subscriber, #49227)
Maybe adding this feature to the generic swap code should be considered?
Posted May 27, 2009 7:23 UTC (Wed) by macc (guest, #510)
mem -> compressed swap|blockdev --> to disk
as long as compression is faster than disk access (true)

Switching on disk I/O should work for hibernation too.

MACC
Posted May 28, 2009 10:24 UTC (Thu) by rvfh (guest, #31018)
mem -> compressed swap -> uncompressed swap on disk
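Part of that hierarchy can already be approximated with plain swap priorities: the kernel fills the higher-priority device first and only spills over to the lower one once it is full. What priorities alone cannot do is migrate cold pages from the compressed tier down to disk later; that is the part that needs support in ramzswap itself. Device names below are examples only:

    swapon -p 100 /dev/ramzswap   # compressed RAM swap, filled first
    swapon -p 10  /dev/sda2       # conventional disk swap, used as overflow
    cat /proc/swaps               # verify the resulting order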
Posted May 29, 2009 4:15 UTC (Fri) by nitingupta (guest, #53817)
This is the idea I'm working on. Swap-out entire xvmalloc pages -- each containing multiple compressed pages -- to swap disk. The aim here is not to save disk space but to improve performance.
Would xvmalloc and swap readahead play nice?
Posted Jun 6, 2009 7:37 UTC (Sat) by gmatht (guest, #58961)
There seems to be a big difference between the optimal layout for a memory allocator, where seek is not a problem, and the optimal layout on a conventional hard disk, where seek times dwarf virtually everything else.

If, OTOH, adjacent pages were written out to adjacent positions on disk, this could *halve* the cost of swap readahead: both halving the time required to read in the extra pages and halving the memory used by pages that were read from disk but not used.

(I can see just swapping out xvmalloc pages being a win for SSDs, where seek is not a problem for random reads. Also, clearly, if you are writing out an xvmalloc page there should be very little overhead, and you know you will get 4k of real memory back for each page swapped out. Even so, wouldn't you still have to read in the entire 4k xvmalloc page just to access one of the compressed pages stored on that page?)
Posted Jun 7, 2009 5:22 UTC (Sun) by nitingupta (guest, #53817)
With compressed swapping to disk, seek times will also be reduced, as pages will be spread over a smaller area on disk. Still, in general, swapping out xvmalloc pages is expected to incur higher swap read overhead due to the larger number of seeks involved -- an xvmalloc page contains mostly unrelated pages.

> There seems to be a big difference between the optimal layout for a memory allocator, where seek is not a problem, and the optimal layout on a conventional hard disk, where seek times dwarf virtually everything else.

Yes, this is the whole problem. Theoretically, this problem could be solved by first collecting together physically contiguous pages (w.r.t. disk sectors) in a single memory page and then swapping this page to disk. However, when pages are swapped out this way, we are not guaranteed to be able to free even a single page. Also, this will increase in-memory fragmentation, as these pages will be taken from random xvmalloc pages. So, after lots of such pages are swapped out, we have to do some in-memory defragmentation (not yet implemented) to bring down fragmentation and free pages.

> If, OTOH, adjacent pages were written out to adjacent positions on disk, this could *halve* the cost of swap readahead: both halving the time required to read in the extra pages and halving the memory used by pages that were read from disk but not used.

In general, swap readahead in its present state is almost meaningless when most of the pages are in (compressed) memory. Decompressing pages is almost instant. It would be more useful to implement some sort of prefetch ioctl for ramzswap so that it prefetches pages from the backing swap and keeps them compressed in memory. But which pages to prefetch? This will need more study and experimentation.

> (I can see just swapping out xvmalloc pages being a win for SSDs, where seek is not a problem for random reads. Also, clearly, if you are writing out an xvmalloc page there should be very little overhead, and you know you will get 4k of real memory back for each page swapped out. Even so, wouldn't you still have to read in the entire 4k xvmalloc page just to access one of the compressed pages stored on that page?)

Yes, reading in a single xvmalloc page will bring a bunch of unrelated pages into memory. These additional pages may be kept or discarded based on a configurable or hardcoded policy in ramzswap.
Posted May 29, 2009 5:57 UTC (Fri) by zmi (guest, #4829)
> mem -> compressed swap -> uncompressed swap on disk
> [and from the article:]
> allow swapping of xvmalloc memory to physical swap disk

That was my immediate idea when reading the article. I'd love it to be a layer inserted just before the normal swap disks, absolutely transparent. Like this, (compressed) pages not used for a long time can be put to disk swap at low I/O rates (or low I/O times, if that's easily measurable). And when too much real memory is used, ramzswap can move pages to disk swap (maybe just as a last resort to recover before OOM conditions).

The disk swap should support compressed pages directly, and you can also drop (or at least loosen) the "if not enough compression gain, store uncompressed to disk" rule, and just store pages that don't compress well to disk swap, but in their compressed state. That should help lower I/O, which is never a failure :-)

If this feature arrives, vm.swappiness can be increased to swap more quickly. Currently I lower it to 10 on my desktop (8GB RAM) because disk swapping in the morning after the nightly backup used to be a nightmare with the default value, leaving the system very unresponsive for quite a long time (at least it feels that way before the first coffee *g*). And that's already on a 10krpm VelociRaptor drive.

I wonder if ramzswap will help on my 8GB desktop, and want to test it (already running now).

BTW: shouldn't there be a compressed name also? Like zap or just zp ;-)
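For the record, the tunable mentioned above is adjusted like this; 10 is simply the value described in the comment, not a recommendation:

    sysctl vm.swappiness                              # show the current value
    sysctl -w vm.swappiness=10                        # change it for the running system
    echo 'vm.swappiness = 10' >> /etc/sysctl.conf     # make it persistent across reboots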