From: Ingo Molnar <email@example.com>
To: Andrew Morton <firstname.lastname@example.org>
Subject: [patch] inode-lock-break.patch, 2.6.8-rc3-mm2
Date: Mon, 9 Aug 2004 12:21:25 +0200
Cc: Alexander Viro <email@example.com>,
The attached patch does a scheduling-latency lock-break of two functions
within the VFS: prune_icache() [typically triggered by VM load] and
invalidate_inodes() [triggered by e.g. CDROM auto-umounts - reported by ...].
prune_icache() was easy - it works off a global list head, so adding a
cond_resched_lock() call solves the latency.
invalidate_inodes() was trickier - we scan a list, filtering for specific
inodes - a simple lock-break is incorrect because the list might change at
the cursor, and retrying from scratch opens up the potential for livelocks.
The solution I found was to insert a private marker into the list and to
restart the scan from that point - the inodes of the superblock in
question won't get reordered within the list because the filesystem is
already quiet at this point. (Other inodes of other filesystems might get
reordered, but that doesn't matter.)
Tested on x86; the patch solves these particular latencies.
[2. text/plain; inode-lock-break.patch]...