author		Steven Price <steven.price@arm.com>	2022-09-02 12:26:12 +0100
committer	Linus Torvalds <torvalds@linux-foundation.org>	2022-09-03 10:13:13 -0700
commit		8782fb61cc848364e1e1599d76d3c9dd58a1cc06 (patch)
tree		6177e2fedcece02fbb40952e04946fbe6cabdd30 /mm/ptdump.c
parent		d895ec7938c431fe61a731939da76a6461bc6133 (diff)
download	linux-8782fb61cc848364e1e1599d76d3c9dd58a1cc06.tar.gz
mm: pagewalk: Fix race between unmap and page walker
The mmap lock protects the page walker from changes to the page tables
during the walk.  However, a read lock is insufficient to protect those
areas which don't have a VMA, as munmap() detaches the VMAs before
downgrading to a read lock and actually tearing down PTEs/page tables.

For users of walk_page_range() the solution is simply to call pte_hole()
immediately, without checking the actual page tables, when a VMA is not
present.  We now never call __walk_page_range() without a valid vma.
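
For illustration, here is a condensed sketch of the walk_page_range()
loop after this change (based on mm/pagewalk.c; the walk_page_test()
handling and some error paths are trimmed, so this is not the verbatim
hunk):

	vma = find_vma(walk.mm, start);
	do {
		if (!vma) {			/* past the last VMA */
			walk.vma = NULL;
			next = end;
			if (ops->pte_hole)
				err = ops->pte_hole(start, next, -1, &walk);
		} else if (start < vma->vm_start) {	/* gap with no VMA */
			walk.vma = NULL;
			next = min(end, vma->vm_start);
			if (ops->pte_hole)
				err = ops->pte_hole(start, next, -1, &walk);
		} else {			/* inside a VMA: safe to walk */
			walk.vma = vma;
			next = min(end, vma->vm_end);
			vma = find_vma(walk.mm, vma->vm_end);
			err = __walk_page_range(start, next, &walk);
		}
		if (err)
			break;
	} while (start = next, start < end);

The key point is that both no-VMA branches report the hole via the
callback and never descend into page tables that munmap() may be
freeing concurrently.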

For walk_page_range_novma() the locking requirements are tightened: the
mmap write lock must now be taken, and the pgd is then walked directly
with 'no_vma' set.
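
The post-patch walk_page_range_novma() then looks roughly like this
(condensed from mm/pagewalk.c):

	int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
				  unsigned long end,
				  const struct mm_walk_ops *ops,
				  pgd_t *pgd, void *private)
	{
		struct mm_walk walk = {
			.ops		= ops,
			.mm		= mm,
			.pgd		= pgd,
			.private	= private,
			.no_vma		= true
		};

		if (start >= end || !walk.mm)
			return -EINVAL;

		/* Tightened from mmap_assert_locked(): a read lock no
		 * longer suffices, since munmap() can tear down page
		 * tables while holding only the read lock. */
		mmap_assert_write_locked(walk.mm);

		return walk_pgd_range(start, end, &walk);
	}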

This in turn means that all page walkers either have a valid vma, or
it's that special 'novma' case for page table debugging.  As a result,
all the odd '(!walk->vma && !walk->no_vma)' tests can be removed.
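
A representative example of the kind of test that disappears (a sketch,
not a verbatim hunk; the pattern occurs in the level walkers such as
walk_pmd_range()):

	/* Before: a walk with no VMA and no_vma unset was a third
	 * state that also had to be reported as a hole. */
	if (pmd_none(*pmd) || (!walk->vma && !walk->no_vma)) {
		if (ops->pte_hole)
			err = ops->pte_hole(addr, next, depth, walk);
		continue;
	}

	/* After: every walk either has a VMA or runs with no_vma set,
	 * so only a genuinely empty entry is a hole. */
	if (pmd_none(*pmd)) {
		if (ops->pte_hole)
			err = ops->pte_hole(addr, next, depth, walk);
		continue;
	}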

Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/ptdump.c')
-rw-r--r--	mm/ptdump.c	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/ptdump.c b/mm/ptdump.c
index eea3d28d173c..8adab455a68b 100644
--- a/mm/ptdump.c
+++ b/mm/ptdump.c
@@ -152,13 +152,13 @@ void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
 {
 	const struct ptdump_range *range = st->range;
 
-	mmap_read_lock(mm);
+	mmap_write_lock(mm);
 	while (range->start != range->end) {
 		walk_page_range_novma(mm, range->start, range->end,
 				      &ptdump_ops, pgd, st);
 		range++;
 	}
-	mmap_read_unlock(mm);
+	mmap_write_unlock(mm);
 
 	/* Flush out the last page */
 	st->note_page(st, 0, -1, 0);