Re: [PATCH v7 1/4] spinlock: A new lockref structure for lockless update of refcount
[Posted September 4, 2013 by corbet]
From: Al Viro <viro-AT-ZenIV.linux.org.uk>
To: Waiman Long <waiman.long-AT-hp.com>
Subject: Re: [PATCH v7 1/4] spinlock: A new lockref structure for lockless update of refcount
Date: Fri, 30 Aug 2013 20:40:59 +0100
Message-ID: <20130830194059.GC13318@ZenIV.linux.org.uk>
Cc: Linus Torvalds <torvalds-AT-linux-foundation.org>, Ingo Molnar <mingo-AT-kernel.org>, Benjamin Herrenschmidt <benh-AT-kernel.crashing.org>, Jeff Layton <jlayton-AT-redhat.com>, Miklos Szeredi <mszeredi-AT-suse.cz>, Ingo Molnar <mingo-AT-redhat.com>, Thomas Gleixner <tglx-AT-linutronix.de>, linux-fsdevel <linux-fsdevel-AT-vger.kernel.org>, Linux Kernel Mailing List <linux-kernel-AT-vger.kernel.org>, Peter Zijlstra <peterz-AT-infradead.org>, Steven Rostedt <rostedt-AT-goodmis.org>, Andi Kleen <andi-AT-firstfloor.org>, "Chandramouleeswaran, Aswin" <aswin-AT-hp.com>, "Norton, Scott J" <scott.norton-AT-hp.com>
On Fri, Aug 30, 2013 at 03:20:48PM -0400, Waiman Long wrote:
> There is more contention in the lglock than I remember from the run
> in 3.10. This is an area that I need to look at. In fact, lglock is
> becoming a problem for really large machines with a lot of cores. We
> have a prototype 16-socket machine with 240 cores under development.
> The cost of doing a lg_global_lock will be very high in that type of
> machine, given that it is already high on this 80-core machine. I
> have been thinking that, instead of per-cpu spinlocks, we could
> change the locking to the per-node level. While there will be more
> contention for lg_local_lock, the cost of doing a lg_global_lock
> will be much lower, and contention within the local die should not be
> too bad. That will require either a per-node variable infrastructure
> or simulating it with the existing per-cpu subsystem.
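
[As a rough sketch of the per-node idea described above: a minimal userspace model using pthread mutexes, in which the local path takes only the caller's node lock and the global path takes one lock per node rather than one per CPU. NR_NODES, node_of_cpu(), and the nodelock names are made up for illustration; this is not the kernel's lglock API.]

/*
 * Hypothetical userspace model of a per-node "lglock": the fast path
 * takes only the caller's node lock, the rare global path takes one
 * lock per node instead of one per CPU.  Not kernel code.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

#define NR_NODES 16			/* e.g. the 16-socket prototype */

struct nodelock {
	pthread_mutex_t lock[NR_NODES];
};

static void nodelock_init(struct nodelock *nl)
{
	for (int i = 0; i < NR_NODES; i++)
		pthread_mutex_init(&nl->lock[i], NULL);
}

/* Crude CPU-to-node mapping; a real one would use topology info. */
static int node_of_cpu(void)
{
	int cpu = sched_getcpu();

	return (cpu < 0 ? 0 : cpu) % NR_NODES;
}

/* Fast path: contended only by CPUs on the same node. */
static int nodelock_local_lock(struct nodelock *nl)
{
	int node = node_of_cpu();

	pthread_mutex_lock(&nl->lock[node]);
	return node;		/* caller hands this back to unlock */
}

static void nodelock_local_unlock(struct nodelock *nl, int node)
{
	pthread_mutex_unlock(&nl->lock[node]);
}

/* Slow path: NR_NODES acquisitions instead of one per possible CPU. */
static void nodelock_global_lock(struct nodelock *nl)
{
	for (int i = 0; i < NR_NODES; i++)
		pthread_mutex_lock(&nl->lock[i]);
}

static void nodelock_global_unlock(struct nodelock *nl)
{
	for (int i = NR_NODES - 1; i >= 0; i--)
		pthread_mutex_unlock(&nl->lock[i]);
}

int main(void)
{
	struct nodelock nl;
	int node;

	nodelock_init(&nl);

	node = nodelock_local_lock(&nl);	/* cheap, per-node */
	nodelock_local_unlock(&nl, node);

	nodelock_global_lock(&nl);		/* rare, takes all nodes */
	nodelock_global_unlock(&nl);
	return 0;
}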
Speaking of lglock, there's some low-hanging fruit in that area: we have
no reason whatsoever to put anything but regular files opened with FMODE_WRITE
on the damn per-superblock list - the *only* thing it's used for is
mark_files_ro(), which skips everything else. And since
read opens normally outnumber write opens quite a bit... Could you
try the diff below and see if it changes the picture? The files_lglock
situation ought to get better...
diff --git a/fs/file_table.c b/fs/file_table.c
index b44e4c5..322cd37 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -385,6 +385,10 @@ static inline void __file_sb_list_add(struct file *file, struct super_block *sb)
  */
 void file_sb_list_add(struct file *file, struct super_block *sb)
 {
+	if (likely(!(file->f_mode & FMODE_WRITE)))
+		return;
+	if (!S_ISREG(file_inode(file)->i_mode))
+		return;
 	lg_local_lock(&files_lglock);
 	__file_sb_list_add(file, sb);
 	lg_local_unlock(&files_lglock);
@@ -450,8 +454,6 @@ void mark_files_ro(struct super_block *sb)
 
 	lg_global_lock(&files_lglock);
 	do_file_list_for_each_entry(sb, f) {
-		if (!S_ISREG(file_inode(f)->i_mode))
-			continue;
 		if (!file_count(f))
 			continue;
 		if (!(f->f_mode & FMODE_WRITE))
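
[For illustration only: a tiny self-contained userspace model of what the diff changes, with made-up toy_* names. Files are filtered when they are added, so the walk done under the global lock only ever sees writable regular files instead of every open file.]

/*
 * Toy model of the change above (userspace C, all names hypothetical):
 * filter at add time, so the list walked by the remount-read-only pass
 * contains only entries it could actually act on.
 */
#include <stdio.h>
#include <stdbool.h>

#define TOY_FMODE_WRITE	0x2
#define TOY_MAX_FILES	64

struct toy_file {
	int mode;		/* TOY_FMODE_WRITE or 0    */
	bool is_regular;	/* stands in for S_ISREG() */
};

static struct toy_file *sb_list[TOY_MAX_FILES];
static int sb_list_len;

/* Patched file_sb_list_add() equivalent: read-only and non-regular
 * opens never touch the shared list (or its lock). */
static void toy_sb_list_add(struct toy_file *f)
{
	if (!(f->mode & TOY_FMODE_WRITE))
		return;
	if (!f->is_regular)
		return;
	sb_list[sb_list_len++] = f;
}

/* mark_files_ro() equivalent: every entry walked is a real candidate. */
static void toy_mark_files_ro(void)
{
	for (int i = 0; i < sb_list_len; i++)
		sb_list[i]->mode &= ~TOY_FMODE_WRITE;
}

int main(void)
{
	struct toy_file ro  = { .mode = 0,               .is_regular = true  };
	struct toy_file dir = { .mode = TOY_FMODE_WRITE, .is_regular = false };
	struct toy_file rw  = { .mode = TOY_FMODE_WRITE, .is_regular = true  };

	toy_sb_list_add(&ro);
	toy_sb_list_add(&dir);
	toy_sb_list_add(&rw);
	toy_mark_files_ro();

	printf("list length: %d, rw still writable: %d\n",
	       sb_list_len, !!(rw.mode & TOY_FMODE_WRITE));
	return 0;
}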