author    Alistair Popple <apopple@nvidia.com>            2022-06-20 19:05:36 +1000
committer Matthew Wilcox (Oracle) <willy@infradead.org>   2022-06-23 12:22:00 -0400
commit    00fa15e0d56482e32d8ca1f51d76b0ee00afb16b
tree      a865c40d2856c6dea026bc6b3f23a92e366a81be
parent    b653db77350c7307a513b81856fe53e94cf42446
filemap: Fix serialization adding transparent huge pages to page cache
Commit 793917d997df ("mm/readahead: Add large folio readahead")
introduced support for using large folios for file-backed pages when the
filesystem supports them.

page_cache_ra_order() was introduced to allocate and add these large
folios to the page cache. However, adding pages to the page cache must
be serialized against truncation and hole punching by taking
invalidate_lock. Not doing so can lead to data races that result in
stale data being added to the page cache and marked up-to-date. See
commit 730633f0b7f9 ("mm: Protect operations adding pages to page cache
with invalidate_lock") for more details, and the sketch below.

This issue was found by inspection, but a test case showed it can be
observed in practice on XFS. Fix it by taking invalidate_lock in
page_cache_ra_order(), mirroring what is done for the non-THP case in
page_cache_ra_unbounded() (sketched below).
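
For comparison, a condensed sketch of page_cache_ra_unbounded() as it
stands at this commit; the per-index allocation loop is elided and the
structure is reconstructed from mm/readahead.c of this era, so treat it
as illustrative rather than exact:

	void page_cache_ra_unbounded(struct readahead_control *ractl,
			unsigned long nr_to_read, unsigned long lookahead_size)
	{
		struct address_space *mapping = ractl->mapping;

		/* Held from before the first folio is added until after
		 * the reads have been submitted: */
		filemap_invalidate_lock_shared(mapping);

		/* ... allocate a folio for each index and add it to the
		 * page cache, submitting accumulated reads as needed ... */

		read_pages(ractl);
		filemap_invalidate_unlock_shared(mapping);
	}

The fix below gives page_cache_ra_order() the same shape: take the
shared lock before the folio-allocation loop and drop it after
read_pages().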

Signed-off-by: Alistair Popple <apopple@nvidia.com>
Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
-rw-r--r--  mm/readahead.c | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index 57a015108254..fdcd28cbd92d 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -510,6 +510,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 			new_order--;
 	}
 
+	filemap_invalidate_lock_shared(mapping);
 	while (index <= limit) {
 		unsigned int order = new_order;
 
@@ -536,6 +537,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	}
 
 	read_pages(ractl);
+	filemap_invalidate_unlock_shared(mapping);
 
 	/*
 	 * If there were already pages in the page cache, then we may have