
mm: page_alloc: fix missed updates of PGFREE in free_unref_{page/folios}

PGFREE is currently updated in two code paths:

- __free_pages_ok(): for pages freed to the buddy allocator.
- free_unref_page_commit(): for pages freed to the pcplists.

Before commit df1acc8569 ("mm/page_alloc: avoid conflating IRQs disabled
with zone->lock"), free_unref_page_commit() used to fall back to freeing
isolated pages directly to the buddy allocator through free_one_page().
This was done _after_ updating PGFREE, so the counter was correctly
updated.

However, that commit moved the fallback logic to its callers (now called
free_unref_page() and free_unref_folios()), so PGFREE was no longer
updated in this fallback case.

As the code has since evolved, there are more cases in free_unref_page()
and free_unref_folios() where we fall back to calling free_one_page()
(e.g.  !pcp_allowed_order(), pcp_spin_trylock() failure).  These cases
also miss updating PGFREE.

To make sure PGFREE is updated in all cases where pages are freed to the
buddy allocator, move the update down the stack to free_one_page().

This was found by code inspection, although the missed updates should be
observable at runtime (at least with some workloads).

Link: https://lkml.kernel.org/r/20240904205419.821776-1-yosryahmed@google.com
Fixes: df1acc8569 ("mm/page_alloc: avoid conflating IRQs disabled with zone->lock")
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Cc: Brendan Jackman <jackmanb@google.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit ec867977fe
parent df7e1286b1
Author: Yosry Ahmed, 2024-09-04 20:54:19 +00:00
Committed by: Andrew Morton

@@ -1231,6 +1231,8 @@ static void free_one_page(struct zone *zone, struct page *page,
 	spin_lock_irqsave(&zone->lock, flags);
 	split_large_buddy(zone, page, pfn, order, fpi_flags);
 	spin_unlock_irqrestore(&zone->lock, flags);
+
+	__count_vm_events(PGFREE, 1 << order);
 }
 
 static void __free_pages_ok(struct page *page, unsigned int order,
@@ -1239,12 +1241,8 @@ static void __free_pages_ok(struct page *page, unsigned int order,
 	unsigned long pfn = page_to_pfn(page);
 	struct zone *zone = page_zone(page);
 
-	if (!free_pages_prepare(page, order))
-		return;
-
-	free_one_page(zone, page, pfn, order, fpi_flags);
-
-	__count_vm_events(PGFREE, 1 << order);
+	if (free_pages_prepare(page, order))
+		free_one_page(zone, page, pfn, order, fpi_flags);
 }
 
 void __meminit __free_pages_core(struct page *page, unsigned int order,