
Huge pages, slow drives, and long delays


Posted Nov 17, 2011 2:21 UTC (Thu) by naptastic (guest, #60139)
Parent article: Huge pages, slow drives, and long delays

It sounds like another case of desktop users needing different things than server administrators.

Why not tie this behavior to the kernel preemption setting? If it's set to anything higher than voluntary, then Mel's change (and perhaps some others?) should be in place; if it's set to no preemption, then go ahead and stall processes while you're making room for some hugepages.



It's not server vs desktop

Posted Nov 17, 2011 8:10 UTC (Thu) by khim (subscriber, #9252)

Actually, it does not always make sense on a server either. If you have some batch-processing operation (the slocate indexer on a desktop, map-reduce on a server), then it's OK to wait for compaction, even if it takes a few minutes.

But if you need a response right away (most desktop operations, a live request in the server's case), then latency is paramount.

It's not server vs desktop

Posted Nov 17, 2011 13:21 UTC (Thu) by mennucc1 (subscriber, #14730)

So the decision should depend on niceness. A usual process should not wait for compaction; a nice process should. Just my 2€.

It's not server vs desktop

Posted Nov 17, 2011 21:26 UTC (Thu) by lordsutch (guest, #53)

But isn't part of the problem that, even if you only compact when nice processes try to get a huge page, non-nice processes may queue up waiting for the compaction to complete before they can do anything, due to resource contention? For example, a non-nice process may want to read from the USB drive to play back video while you're doing a nice'd backup of $HOME to it.

It's not server vs desktop

Posted Nov 18, 2011 2:16 UTC (Fri) by naptastic (guest, #60139)

Hmm.

I see two questions. First: can we infer from a process's niceness or scheduler class whether it would prefer waiting for a hugepage or taking what's available now? Second: are memory compaction passes preemptible? Is this the behavior you're looking for?

1. A low-priority, sched_idle process (1) tries to allocate memory. The kernel starts compacting memory to provide it with a hugepage.
2. A higher-priority, sched_fifo process (2) becomes runnable and tries to allocate. Because it's higher priority, the kernel puts the request from (1) on the back burner. Because (2) is sched_fifo, the kernel doesn't wait for compaction but just gives it what's available now.
3. With that request satisfied, the kernel goes back to compacting in order to satisfy (1)'s needs.

As someone who only uses -rt kernels, this is the behavior I think I would want. The network can get hugepages, and it can wait for them; but jackd and friends had better get absolute preferential treatment for memory Right Now.


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds