Con Kolivas returns with a new scheduler
Posted Sep 7, 2009 8:56 UTC (Mon) by jbh (guest, #494)
In reply to: Con Kolivas returns with a new scheduler by kragil
Parent article: Con Kolivas returns with a new scheduler
SSD for the system, of course, and save the 8GB for data. Three more things:
- barrier=0 mount option
- elevator=deadline boot option
- tune2fs -o journal_data_writeback
The first one (mount -o barrier=0) is subjectively the one that makes the
most real difference. Ext4 has barriers enabled by default, as opposed to
ext3.
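For concreteness, here's a rough sketch of applying all three on an ext4 root
filesystem. The device /dev/sda1 and the menu.lst path are just assumptions
for illustration, not something from your setup:
/dev/sda1   /   ext4   defaults,barrier=0   0   1    # /etc/fstab: mount the root fs without barriers
# bootloader: append elevator=deadline to the kernel command line, e.g. in /boot/grub/menu.lst:
#   kernel /boot/vmlinuz-... root=/dev/sda1 elevator=deadline
tune2fs -o journal_data_writeback /dev/sda1          # run on the unmounted fs; makes data=writeback the default mount option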
I have a 900 with only the slow 16GB SSD. With your setup, it might be
effective to put all journals on the fast disk (an external journal), but I
guess that's only if you actually enjoy tweaking :)
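For anyone who does enjoy tweaking, a minimal external-journal sketch. It
assumes /dev/sdb1 is a spare partition on the fast 4GB SSD and /dev/sda1 is
the slow filesystem (both device names are invented for illustration), and
the filesystem has to be unmounted:
mke2fs -O journal_dev /dev/sdb1              # format the fast partition as a dedicated journal device
tune2fs -O ^has_journal /dev/sda1            # remove the internal journal from the slow filesystem
tune2fs -j -J device=/dev/sdb1 /dev/sda1     # re-create the journal on the external device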
Finally, it's possible to replace that slow SSD with a faster one, from
RunCore for example. But that doesn't explain why Windows does better
(which surprises me a bit, as I've heard others complain that Windows is
very bad, at least on the 16GB --- but maybe it's OK if it's installed on
the fast 4GB disk).
Posted Sep 8, 2009 20:11 UTC (Tue) by realnc (guest, #60393) [Link] (3 responses)
Posted Sep 9, 2009 7:43 UTC (Wed) by jbh (guest, #494) [Link] (2 responses)
What CAN fix it is a combination of: (i) write less, (ii) don't wait for writes. That's the point of the tuning. Nothing to do with process scheduling.
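For illustration, a few more knobs in the same direction (these particular
options and values are just examples, not something I've benchmarked):
mount -o remount,noatime /                   # "write less": stop updating atimes on every read
mount -o remount,commit=60 /                 # "don't wait": commit the ext3/ext4 journal every 60s instead of 5s
sysctl -w vm.dirty_writeback_centisecs=1500  # wake the flusher less often than the 5s default
sysctl -w vm.dirty_expire_centisecs=6000     # let dirty pages age 60s before forced writeback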
Posted Sep 9, 2009 8:41 UTC (Wed) by realnc (guest, #60393) [Link] (1 responses)
Posted Sep 9, 2009 8:47 UTC (Wed) by jbh (guest, #494) [Link]
;-)
Posted Sep 11, 2009 14:54 UTC (Fri) by SEMW (guest, #52697) [Link] (1 responses)
Regarding the Deadline I/O scheduler: according to Wikipedia, "The kernel docs suggest this is the preferred scheduler for database systems, especially if you have TCQ aware disks, or any system with high disk performance." Since an SSD in an Eee PC is neither a database system, nor TCQ aware (TCQ being meaningless for SSDs), nor high disk performance, what property makes it good for that workload?
Posted Sep 11, 2009 15:45 UTC (Fri) by jbh (guest, #494) [Link]
The default CFQ scheduler seems to behave badly (wrt IO latency) with cheap
SSDs. See for example http://forum.eeeuser.com/viewtopic.php?id=23580 .
The noop scheduler is also sometimes recommended, but the linked thread
suggests it may suffer from IO starvation since it doesn't do any balancing
between processes.
[ it also recommends the following, which I haven't tried:
echo 1 > /sys/block/sda/queue/iosched/fifo_batch
]
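For reference, the elevator can also be inspected and switched per device at
runtime, no reboot needed (sda here is just the usual Eee PC device name, an
assumption on my part):
cat /sys/block/sda/queue/scheduler                 # prints something like: noop anticipatory deadline [cfq]
echo deadline > /sys/block/sda/queue/scheduler     # switch this device to deadline on the fly
echo 1 > /sys/block/sda/queue/iosched/fifo_batch   # then the fifo_batch tweak mentioned above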