author    Eric Dumazet <edumazet@google.com>  2018-02-23 08:12:42 -0800
committer Tejun Heo <tj@kernel.org>           2018-02-23 08:52:34 -0800
commit    accd4f36a7d11c2d54544007eb65e10604dcf2f5 (patch)
tree      49498cc42444c9e2295a895d518cee0597513806 /mm/percpu.c
parent    554fef1c39ee148623a496e04569dabb11463406 (diff)
percpu: add a schedule point in pcpu_balance_workfn()
When a large BPF percpu map is destroyed, I have seen
pcpu_balance_workfn() hold the CPU for hundreds of milliseconds.

With a KASAN config and 112 hyperthreads, the average time to destroy
a chunk is ~4 ms.

[ 2489.841376] destroy chunk 1 in 4148689 ns
...
[ 2490.093428] destroy chunk 32 in 4072718 ns

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
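
The fix follows a standard kernel pattern: a work function that loops over
many items should call cond_resched() between iterations, so that under
voluntary preemption it yields the CPU to other runnable tasks instead of
monopolizing it. A minimal sketch of the pattern below is simplified from
the loop in pcpu_balance_workfn(); the actual list handling and pcpu_lock
manipulation in mm/percpu.c are elided here for illustration:

```c
/*
 * Sketch of the cond_resched() pattern in a long-running work function.
 * Simplified: the real pcpu_balance_workfn() also takes pcpu_lock and
 * unlinks each chunk from the free list before destroying it.
 */
static void pcpu_balance_workfn(struct work_struct *work)
{
	struct pcpu_chunk *chunk, *next;

	list_for_each_entry_safe(chunk, next, &to_free, list) {
		/* ... unlink chunk under pcpu_lock ... */
		pcpu_destroy_chunk(chunk);	/* ~4 ms each under KASAN */
		cond_resched();			/* yield between chunks */
	}
}
```

With 32 chunks at ~4 ms each, the loop would otherwise hold the CPU for
well over 100 ms before the scheduler gets a chance to run anything else.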
Diffstat (limited to 'mm/percpu.c')
 mm/percpu.c | 1 +
 1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/mm/percpu.c b/mm/percpu.c
index fa3f854634a1..36e7b65ba6cf 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1610,6 +1610,7 @@ static void pcpu_balance_workfn(struct work_struct *work)
 			spin_unlock_irq(&pcpu_lock);
 		}
 		pcpu_destroy_chunk(chunk);
+		cond_resched();
 	}
 
 	/*