
Commit fae05b2

kerneltoast authored and gregkh committed
zsmalloc: fix races between asynchronous zspage free and page migration
commit 2505a98 upstream.

The asynchronous zspage free worker tries to lock a zspage's entire page list without defending against page migration. Since pages which haven't yet been locked can concurrently migrate off the zspage page list while lock_zspage() churns away, lock_zspage() can suffer from a few different lethal races. It can lock a page which no longer belongs to the zspage and unsafely dereference page_private(), it can unsafely dereference a torn pointer to the next page (since there's a data race), and it can observe a spurious NULL pointer to the next page and thus not lock all of the zspage's pages (since a single page migration will reconstruct the entire page list, and create_page_chain() unconditionally zeroes out each list pointer in the process).

Fix the races by using migrate_read_lock() in lock_zspage() to synchronize with page migration.

Link: https://lkml.kernel.org/r/20220509024703.243847-1-sultan@kerneltoast.com
Fixes: 77ff465 ("zsmalloc: zs_page_migrate: skip unnecessary loops but not return -EBUSY if zspage is not inuse")
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
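For reference, the traversal the message describes is the pre-fix lock_zspage() loop that the diff below removes. Here is a minimal annotated sketch of that old loop (the name lock_zspage_racy is only a label for this illustration), with comments marking where each of the three races can bite:

/* Pre-fix page-chain walk: no protection against concurrent migration. */
static void lock_zspage_racy(struct zspage *zspage)
{
	struct page *page = get_first_page(zspage);

	do {
		/*
		 * Race 1: "page" may already have migrated off this zspage,
		 * so this can lock an unrelated page, and any later
		 * page_private() dereference on it is unsafe.
		 */
		lock_page(page);
		/*
		 * Race 2: the next-page pointer is read while migration may
		 * be rewriting it, so the load can be torn.
		 * Race 3: migration rebuilds the whole chain through
		 * create_page_chain(), which zeroes each link first, so a
		 * spurious NULL here ends the walk with pages left unlocked.
		 */
	} while ((page = get_next_page(page)) != NULL);
}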
1 parent 6a1cc25 commit fae05b2

File tree

1 file changed (+33, -4 lines)

mm/zsmalloc.c

Lines changed: 33 additions & 4 deletions

@@ -1748,11 +1748,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
  */
 static void lock_zspage(struct zspage *zspage)
 {
-	struct page *page = get_first_page(zspage);
+	struct page *curr_page, *page;
 
-	do {
-		lock_page(page);
-	} while ((page = get_next_page(page)) != NULL);
+	/*
+	 * Pages we haven't locked yet can be migrated off the list while we're
+	 * trying to lock them, so we need to be careful and only attempt to
+	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
+	 * may no longer belong to the zspage. This means that we may wait for
+	 * the wrong page to unlock, so we must take a reference to the page
+	 * prior to waiting for it to unlock outside migrate_read_lock().
+	 */
+	while (1) {
+		migrate_read_lock(zspage);
+		page = get_first_page(zspage);
+		if (trylock_page(page))
+			break;
+		get_page(page);
+		migrate_read_unlock(zspage);
+		wait_on_page_locked(page);
+		put_page(page);
+	}
+
+	curr_page = page;
+	while ((page = get_next_page(curr_page))) {
+		if (trylock_page(page)) {
+			curr_page = page;
+		} else {
+			get_page(page);
+			migrate_read_unlock(zspage);
+			wait_on_page_locked(page);
+			put_page(page);
+			migrate_read_lock(zspage);
+		}
+	}
+	migrate_read_unlock(zspage);
 }
 
 static int zs_init_fs_context(struct fs_context *fc)
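A note on the primitive the fix leans on: migrate_read_lock()/migrate_read_unlock() are not part of this diff. In the zsmalloc of this era they are, as best as can be read from mm/zsmalloc.c, thin wrappers around a per-zspage rwlock whose write side the page-migration path holds while it rewrites the page chain; treat the sketch below as a paraphrase, not part of the patch:

/*
 * Sketch (paraphrased from mm/zsmalloc.c, not from this diff): zsmalloc's
 * migration lock helpers. Migration takes the write side while it rebuilds
 * the page chain, so a holder of the read side sees a stable chain via
 * get_first_page()/get_next_page().
 */
static void migrate_read_lock(struct zspage *zspage) __acquires(&zspage->lock)
{
	read_lock(&zspage->lock);
}

static void migrate_read_unlock(struct zspage *zspage) __releases(&zspage->lock)
{
	read_unlock(&zspage->lock);
}

This also explains the trylock_page() plus get_page()/wait_on_page_locked() dance in the new code: sleeping on a page lock while holding the read side could stall the migration writer, so the patch takes a reference, drops migrate_read_lock(), and only then waits for the page to unlock.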
