
fork: do not invoke uffd on fork if error occurs

Patch series "fork: do not expose incomplete mm on fork".

During fork we may place the virtual memory address space into an
inconsistent state before the fork operation is complete.

In addition, we may encounter an error during the fork operation that
indicates that the virtual memory address space is invalidated.

As a result, we should not expose it in any way to external machinery that
might interact with the mm or VMAs, as such machinery is not designed to
deal with incomplete state.

We specifically update the fork logic to defer khugepaged and ksm to the
end of the operation, invoking them only if no error arose, and to
disallow uffd from observing fork events should an error have occurred.


This patch (of 2):

Currently on fork we expose the virtual address space of a process to
userland unconditionally if uffd is registered in VMAs, regardless of
whether an error arose in the fork.

This is performed in dup_userfaultfd_complete() which is invoked
unconditionally, and performs two duties - invoking registered handlers
for the UFFD_EVENT_FORK event via dup_fctx(), and clearing down
userfaultfd_fork_ctx objects established in dup_userfaultfd().
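
For reference, the completion path is essentially the following (a
simplified sketch of dup_userfaultfd_complete() in fs/userfaultfd.c;
dup_fctx() is the helper that actually delivers UFFD_EVENT_FORK to the
parent context):

	void dup_userfaultfd_complete(struct list_head *fcs)
	{
		struct userfaultfd_fork_ctx *fctx, *n;

		list_for_each_entry_safe(fctx, n, fcs, list) {
			/* Raise UFFD_EVENT_FORK to the registered handler. */
			dup_fctx(fctx);
			list_del(&fctx->list);
			kfree(fctx);
		}
	}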

This is problematic, because the virtual address space may not yet be
correctly initialised if an error arose.

The change in commit d240629148 ("fork: use __mt_dup() to duplicate
maple tree in dup_mmap()") makes this more pertinent as we may be in a
state where entries in the maple tree are not yet consistent.

We address this by, on fork error, rolling back the state that we would
otherwise expect userland to clean up when handling the event, and
performing the memory-freeing duty otherwise performed by
dup_userfaultfd_complete().

We do this by implementing a new function, dup_userfaultfd_fail(), which
walks the same list but, rather than raising events, undoes the reference
counts and mmap_changing increments taken in dup_userfaultfd() (see the
fs/userfaultfd.c hunk below).

Note that we perform mmgrab() on the parent and child mms; however,
userfaultfd_ctx_put() will mmdrop() these once the reference count drops
to zero, so we correctly avoid memory leaks here.
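
Roughly, the put path looks like this (a simplified sketch of
userfaultfd_ctx_put() in fs/userfaultfd.c; the real function performs
additional sanity checks on the context's waitqueues before freeing):

	static void userfaultfd_ctx_put(struct userfaultfd_ctx *ctx)
	{
		if (refcount_dec_and_test(&ctx->refcount)) {
			/* Pairs with the mmgrab() taken when ctx was set up. */
			mmdrop(ctx->mm);
			kmem_cache_free(userfaultfd_ctx_cachep, ctx);
		}
	}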

Link: https://lkml.kernel.org/r/cover.1729014377.git.lorenzo.stoakes@oracle.com
Link: https://lkml.kernel.org/r/d3691d58bb58712b6fb3df2be441d175bd3cdf07.1729014377.git.lorenzo.stoakes@oracle.com
Fixes: d240629148 ("fork: use __mt_dup() to duplicate maple tree in dup_mmap()")
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reported-by: Jann Horn <jannh@google.com>
Reviewed-by: Jann Horn <jannh@google.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -692,6 +692,34 @@ void dup_userfaultfd_complete(struct list_head *fcs)
 	}
 }
 
+void dup_userfaultfd_fail(struct list_head *fcs)
+{
+	struct userfaultfd_fork_ctx *fctx, *n;
+
+	/*
+	 * An error has occurred on fork, we will tear memory down, but have
+	 * allocated memory for fctx's and raised reference counts for both the
+	 * original and child contexts (and on the mm for each as a result).
+	 *
+	 * These would ordinarily be taken care of by a user handling the event,
+	 * but we are no longer doing so, so manually clean up here.
+	 *
+	 * mm tear down will take care of cleaning up VMA contexts.
+	 */
+	list_for_each_entry_safe(fctx, n, fcs, list) {
+		struct userfaultfd_ctx *octx = fctx->orig;
+		struct userfaultfd_ctx *ctx = fctx->new;
+
+		atomic_dec(&octx->mmap_changing);
+		VM_BUG_ON(atomic_read(&octx->mmap_changing) < 0);
+		userfaultfd_ctx_put(octx);
+		userfaultfd_ctx_put(ctx);
+
+		list_del(&fctx->list);
+		kfree(fctx);
+	}
+}
+
 void mremap_userfaultfd_prep(struct vm_area_struct *vma,
 			     struct vm_userfaultfd_ctx *vm_ctx)
 {

diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -249,6 +249,7 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
 
 extern int dup_userfaultfd(struct vm_area_struct *, struct list_head *);
 extern void dup_userfaultfd_complete(struct list_head *);
+void dup_userfaultfd_fail(struct list_head *);
 extern void mremap_userfaultfd_prep(struct vm_area_struct *,
 				    struct vm_userfaultfd_ctx *);
 
@@ -351,6 +352,10 @@ static inline void dup_userfaultfd_complete(struct list_head *l)
 {
 }
 
+static inline void dup_userfaultfd_fail(struct list_head *l)
+{
+}
+
 static inline void mremap_userfaultfd_prep(struct vm_area_struct *vma,
 					  struct vm_userfaultfd_ctx *ctx)
 {

diff --git a/kernel/fork.c b/kernel/fork.c
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -775,7 +775,10 @@ out:
 	mmap_write_unlock(mm);
 	flush_tlb_mm(oldmm);
 	mmap_write_unlock(oldmm);
-	dup_userfaultfd_complete(&uf);
+	if (!retval)
+		dup_userfaultfd_complete(&uf);
+	else
+		dup_userfaultfd_fail(&uf);
 fail_uprobe_end:
 	uprobe_end_dup_mmap();
 	return retval;