mm: huge_memory: fix misused mapping_large_folio_support() for anon folios

When I ran a large folio split test, the WARNING "[ 5059.122759][ T166]
Cannot split file folio to non-0 order" was triggered.  But the test cases
only use anonymous folios, while mapping_large_folio_support() is only
reasonable for page cache folios.

In split_huge_page_to_list_to_order(), the folio passed to
mapping_large_folio_support() may be an anonymous folio.  The
folio_test_anon() check is missing, so splitting an anonymous THP fails.
The same problem exists for shmem_mapping(), so add a folio_test_anon()
check covering both.  The shmem_mapping() call in __split_huge_page() is
not affected: for anonymous folios the end parameter is set to -1, so
(head[i].index >= end) is always false and shmem_mapping() is never
called there.

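For illustration only, a minimal sketch (not the kernel code itself; the
helper name can_split_to_order() is made up for this example) of why the
anonymous case has to be filtered out before folio->mapping is treated as
an address_space:

	/*
	 * For anonymous folios, folio->mapping carries the PAGE_MAPPING_ANON
	 * bit and does not point to a struct address_space, so the
	 * mapping-based checks are only meaningful for page cache folios.
	 */
	static bool can_split_to_order(struct folio *folio, unsigned int new_order)
	{
		if (folio_test_anon(folio))
			return new_order != 1;	/* order-1 anon split unsupported */

		/* Page cache folio: folio->mapping is a real address_space. */
		if (new_order && !mapping_large_folio_support(folio->mapping))
			return false;

		return true;
	}
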
Also add a VM_WARN_ONCE() in mapping_large_folio_support() for anonymous
mappings, so that this kind of misuse can be detected more easily.

THP folios may exist in the page cache even when the file system does not
support large folios: when CONFIG_TRANSPARENT_HUGEPAGE and
CONFIG_READ_ONLY_THP_FOR_FS are enabled, khugepaged will try to collapse
read-only file-backed pages into THPs.  But the mapping does not actually
support multi-order large folios properly.

This was verified with /sys/kernel/debug/split_huge_pages: with this patch
applied, large anonymous THPs are split successfully and the warning no
longer appears.

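As a reference, a hedged userspace sketch of the verification step (the
split_huge_pages debugfs file accepts "1" to request splitting of every
THP the kernel can find; requires root and a mounted debugfs):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* Equivalent to: echo 1 > /sys/kernel/debug/split_huge_pages */
		int fd = open("/sys/kernel/debug/split_huge_pages", O_WRONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (write(fd, "1", 1) != 1) {
			perror("write");
			close(fd);
			return 1;
		}
		close(fd);
		return 0;
	}
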
Link: https://lkml.kernel.org/r/202406071740485174hcFl7jRxncsHDtI-Pz-o@zte.com.cn
Fixes: c010d47f10 ("mm: thp: split huge page to any lower order pages")
Reviewed-by: Barry Song <baohua@kernel.org>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: xu xin <xu.xin16@zte.com.cn>
Cc: Yang Yang <yang.yang29@zte.com.cn>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit 6a50c9b512 (parent a273559e9e)
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -381,6 +381,10 @@ static inline void mapping_set_large_folios(struct address_space *mapping)
  */
 static inline bool mapping_large_folio_support(struct address_space *mapping)
 {
+	/* AS_LARGE_FOLIO_SUPPORT is only reasonable for pagecache folios */
+	VM_WARN_ONCE((unsigned long)mapping & PAGE_MAPPING_ANON,
+			"Anonymous mapping always supports large folio");
+
 	return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 		test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
 }
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3009,30 +3009,36 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	if (new_order >= folio_order(folio))
 		return -EINVAL;
 
-	/* Cannot split anonymous THP to order-1 */
-	if (new_order == 1 && folio_test_anon(folio)) {
-		VM_WARN_ONCE(1, "Cannot split to order-1 folio");
-		return -EINVAL;
-	}
-
-	if (new_order) {
-		/* Only swapping a whole PMD-mapped folio is supported */
-		if (folio_test_swapcache(folio))
-			return -EINVAL;
-		/* Split shmem folio to non-zero order not supported */
-		if (shmem_mapping(folio->mapping)) {
-			VM_WARN_ONCE(1,
-				"Cannot split shmem folio to non-0 order");
-			return -EINVAL;
-		}
-		/* No split if the file system does not support large folio */
-		if (!mapping_large_folio_support(folio->mapping)) {
-			VM_WARN_ONCE(1,
-				"Cannot split file folio to non-0 order");
-			return -EINVAL;
-		}
-	}
+	if (folio_test_anon(folio)) {
+		/* order-1 is not supported for anonymous THP. */
+		if (new_order == 1) {
+			VM_WARN_ONCE(1, "Cannot split to order-1 folio");
+			return -EINVAL;
+		}
+	} else if (new_order) {
+		/* Split shmem folio to non-zero order not supported */
+		if (shmem_mapping(folio->mapping)) {
+			VM_WARN_ONCE(1,
+				"Cannot split shmem folio to non-0 order");
+			return -EINVAL;
+		}
+		/*
+		 * No split if the file system does not support large folio.
+		 * Note that we might still have THPs in such mappings due to
+		 * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
+		 * does not actually support large folios properly.
+		 */
+		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+		    !mapping_large_folio_support(folio->mapping)) {
+			VM_WARN_ONCE(1,
+				"Cannot split file folio to non-0 order");
+			return -EINVAL;
+		}
+	}
 
+	/* Only swapping a whole PMD-mapped folio is supported */
+	if (folio_test_swapcache(folio) && new_order)
+		return -EINVAL;
+
 	is_hzp = is_huge_zero_folio(folio);
 	if (is_hzp) {