
mm: mark mas allocation in vms_abort_munmap_vmas as __GFP_NOFAIL

vms_abort_munmap_vmas() is a recovery path where, on entry, some VMAs have
already been torn down halfway (in a way we can't undo) but are still
present in the maple tree.

At this point, we *must* remove the VMAs from the VMA tree, otherwise we
get a use-after-free (UAF).

Because removing VMA tree nodes can require memory allocation, the
existing code has an error path which tries to handle this by reattaching
the VMAs; but that can't be done safely.

A nicer way to fix it would probably be to preallocate enough maple tree
nodes for the removal before the point of no return, or something like
that; but for now, fix it the easy and kinda ugly way, by marking this
allocation __GFP_NOFAIL.
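
The preallocation alternative mentioned above could look roughly like the
following sketch, using the maple tree's existing mas_preallocate() /
mas_store_prealloc() API. This is only an illustration of the idea, not
the committed fix; the helper names and the exact call site are
hypothetical, and locking/error handling are elided:

	/*
	 * Hypothetical sketch: reserve maple tree nodes for the NULL store
	 * *before* starting the destructive part of munmap(), so the abort
	 * path can never fail due to allocation.
	 */
	static int vms_prepare_abort(struct ma_state *mas,
				     struct vma_munmap_struct *vms)
	{
		mas_set_range(mas, vms->start, vms->end - 1);
		/* Preallocate the nodes needed to store NULL over the range. */
		return mas_preallocate(mas, NULL, GFP_KERNEL);
	}

	static void vms_abort_munmap_vmas_prealloc(struct ma_state *mas,
						   struct vma_munmap_struct *vms)
	{
		/* Consumes the reserved nodes; cannot fail, cannot allocate. */
		mas_store_prealloc(mas, NULL);
	}

With that in place, the abort path would no longer need __GFP_NOFAIL,
at the cost of paying for the preallocation on every munmap(), including
the common case where nothing goes wrong.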

Link: https://lkml.kernel.org/r/20241016-fix-munmap-abort-v1-1-601c94b2240d@google.com
Fixes: 4f87153e82 ("mm: change failure of MAP_FIXED to restoring the gap on failure")
Signed-off-by: Jann Horn <jannh@google.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
This commit is contained in:
Jann Horn 2024-10-16 17:07:53 +02:00 committed by Andrew Morton
parent 1db272864f
commit 14611508cb


@@ -241,16 +241,10 @@ static inline void vms_abort_munmap_vmas(struct vma_munmap_struct *vms,
 	 * failure method of leaving a gap where the MAP_FIXED mapping failed.
 	 */
 	mas_set_range(mas, vms->start, vms->end - 1);
-	if (unlikely(mas_store_gfp(mas, NULL, GFP_KERNEL))) {
-		pr_warn_once("%s: (%d) Unable to abort munmap() operation\n",
-			     current->comm, current->pid);
-		/* Leaving vmas detached and in-tree may hamper recovery */
-		reattach_vmas(mas_detach);
-	} else {
-		/* Clean up the insertion of the unfortunate gap */
-		vms_complete_munmap_vmas(vms, mas_detach);
-	}
+	mas_store_gfp(mas, NULL, GFP_KERNEL|__GFP_NOFAIL);
+	/* Clean up the insertion of the unfortunate gap */
+	vms_complete_munmap_vmas(vms, mas_detach);
 }
 
 int
 do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,