// SPDX-License-Identifier: GPL-2.0
/*
 * Virtual Memory Map support
 *
 * (C) 2007 sgi. Christoph Lameter.
 *
 * Virtual memory maps allow VM primitives pfn_to_page, page_to_pfn,
 * virt_to_page, page_address() to be implemented as a base offset
 * calculation without memory access.
 *
 * However, virtual mappings need a page table and TLBs. Many Linux
 * architectures already map their physical space using 1-1 mappings
 * via TLBs. For those arches the virtual memory map is essentially
 * for free if we use the same page size as the 1-1 mappings. In that
 * case the overhead consists of a few additional pages that are
 * allocated to create a view of memory for vmemmap.
 *
 * The architecture is expected to provide a vmemmap_populate() function
 * to instantiate the mapping.
 */
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/memblock.h>
#include <linux/memremap.h>
#include <linux/highmem.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/vmalloc.h>
#include <linux/sched.h>

#include <asm/dma.h>
#include <asm/pgalloc.h>

/*
 * Allocate a block of memory to be used to back the virtual memory map
 * or to back the page tables that are used to create the mapping.
 * Uses the main allocators if they are available, else bootmem.
 */
static void * __ref __earlyonly_bootmem_alloc(int node,
				unsigned long size,
				unsigned long align,
				unsigned long goal)
{
	return memblock_alloc_try_nid_raw(size, align, goal,
					  MEMBLOCK_ALLOC_ACCESSIBLE, node);
}

void * __meminit vmemmap_alloc_block(unsigned long size, int node)
{
	/* If the main allocator is up use that, fallback to bootmem. */
	if (slab_is_available()) {
		gfp_t gfp_mask = GFP_KERNEL|__GFP_RETRY_MAYFAIL|__GFP_NOWARN;
		int order = get_order(size);
		static bool warned;
		struct page *page;

		page = alloc_pages_node(node, gfp_mask, order);
		if (page)
			return page_address(page);

		if (!warned) {
			warn_alloc(gfp_mask & ~__GFP_NOWARN, NULL,
				   "vmemmap alloc failure: order:%u", order);
			warned = true;
		}
		return NULL;
	} else
		return __earlyonly_bootmem_alloc(node, size, size,
				__pa(MAX_DMA_ADDRESS));
}

static void * __meminit altmap_alloc_block_buf(unsigned long size,
					       struct vmem_altmap *altmap);

/* need to make sure size is all the same during early stage */
void * __meminit vmemmap_alloc_block_buf(unsigned long size, int node,
					 struct vmem_altmap *altmap)
{
	void *ptr;

	if (altmap)
		return altmap_alloc_block_buf(size, altmap);

	ptr = sparse_buffer_alloc(size);
	if (!ptr)
		ptr = vmemmap_alloc_block(size, node);
	return ptr;
}

static unsigned long __meminit vmem_altmap_next_pfn(struct vmem_altmap *altmap)
{
	return altmap->base_pfn + altmap->reserve + altmap->alloc
		+ altmap->align;
}

static unsigned long __meminit vmem_altmap_nr_free(struct vmem_altmap *altmap)
{
	unsigned long allocated = altmap->alloc + altmap->align;

	if (altmap->free > allocated)
		return altmap->free - allocated;
	return 0;
}

static void * __meminit altmap_alloc_block_buf(unsigned long size,
					       struct vmem_altmap *altmap)
{
	unsigned long pfn, nr_pfns, nr_align;

	if (size & ~PAGE_MASK) {
		pr_warn_once("%s: allocations must be multiple of PAGE_SIZE (%ld)\n",
				__func__, size);
		return NULL;
	}

	pfn = vmem_altmap_next_pfn(altmap);
	nr_pfns = size >> PAGE_SHIFT;
	nr_align = 1UL << find_first_bit(&nr_pfns, BITS_PER_LONG);
	nr_align = ALIGN(pfn, nr_align) - pfn;
	if (nr_pfns + nr_align > vmem_altmap_nr_free(altmap))
		return NULL;

	altmap->alloc += nr_pfns;
	altmap->align += nr_align;
	pfn += nr_align;

	pr_debug("%s: pfn: %#lx alloc: %ld align: %ld nr: %#lx\n",
			__func__, pfn, altmap->alloc, altmap->align, nr_pfns);
	return __va(__pfn_to_phys(pfn));
}
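
/* Warn once if the page backing this part of the memmap sits on a distant node. */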
void __meminit vmemmap_verify(pte_t *pte, int node,
				unsigned long start, unsigned long end)
{
	unsigned long pfn = pte_pfn(ptep_get(pte));
	int actual_node = early_pfn_to_nid(pfn);

	if (node_distance(actual_node, node) > LOCAL_DISTANCE)
		pr_warn_once("[%lx-%lx] potential offnode page_structs\n",
			start, end - 1);
}
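
/*
 * Populate a single vmemmap PTE: when the entry is still empty, either
 * allocate a backing page (possibly from the altmap) or take a reference
 * on @reuse and map that page instead.
 */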
pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
				       struct vmem_altmap *altmap,
				       struct page *reuse)
{
	pte_t *pte = pte_offset_kernel(pmd, addr);
	if (pte_none(ptep_get(pte))) {
		pte_t entry;
		void *p;

		if (!reuse) {
			p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
			if (!p)
				return NULL;
		} else {
			/*
			 * When a PTE/PMD entry is freed from the init_mm
			 * there's a free_pages() call to this page allocated
			 * above. Thus this get_page() is paired with the
			 * put_page_testzero() on the freeing path.
			 * This can only be called by certain ZONE_DEVICE paths,
			 * and through vmemmap_populate_compound_pages() when
			 * slab is available.
			 */
			get_page(reuse);
			p = page_to_virt(reuse);
		}
		entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
		set_pte_at(&init_mm, addr, pte, entry);
	}
	return pte;
}

static void * __meminit vmemmap_alloc_block_zero(unsigned long size, int node)
{
	void *p = vmemmap_alloc_block(size, node);

	if (!p)
		return NULL;
	memset(p, 0, size);

	return p;
}

void __weak __meminit kernel_pte_init(void *addr)
{
}

pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node)
{
	pmd_t *pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd)) {
		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
		if (!p)
			return NULL;
		kernel_pte_init(p);
		pmd_populate_kernel(&init_mm, pmd, p);
	}
	return pmd;
}

void __weak __meminit pmd_init(void *addr)
{
}

pud_t * __meminit vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node)
{
	pud_t *pud = pud_offset(p4d, addr);
	if (pud_none(*pud)) {
		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
		if (!p)
			return NULL;
		pmd_init(p);
		pud_populate(&init_mm, pud, p);
	}
	return pud;
}

void __weak __meminit pud_init(void *addr)
{
}

p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
{
	p4d_t *p4d = p4d_offset(pgd, addr);
	if (p4d_none(*p4d)) {
		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
		if (!p)
			return NULL;
		pud_init(p);
		p4d_populate(&init_mm, p4d, p);
	}
	return p4d;
}

pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
{
	pgd_t *pgd = pgd_offset_k(addr);
	if (pgd_none(*pgd)) {
		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
		if (!p)
			return NULL;
		pgd_populate(&init_mm, pgd, p);
	}
	return pgd;
}
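
/*
 * Create (or find) all page table levels needed to map a single vmemmap
 * page at @addr, then verify the resulting mapping.
 */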
static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
					      struct vmem_altmap *altmap,
					      struct page *reuse)
{
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;
	pte_t *pte;

	pgd = vmemmap_pgd_populate(addr, node);
	if (!pgd)
		return NULL;
	p4d = vmemmap_p4d_populate(pgd, addr, node);
	if (!p4d)
		return NULL;
	pud = vmemmap_pud_populate(p4d, addr, node);
	if (!pud)
		return NULL;
	pmd = vmemmap_pmd_populate(pud, addr, node);
	if (!pmd)
		return NULL;
	pte = vmemmap_pte_populate(pmd, addr, node, altmap, reuse);
	if (!pte)
		return NULL;
	vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);

	return pte;
}

static int __meminit vmemmap_populate_range(unsigned long start,
					    unsigned long end, int node,
					    struct vmem_altmap *altmap,
					    struct page *reuse)
{
	unsigned long addr = start;
	pte_t *pte;

	for (; addr < end; addr += PAGE_SIZE) {
		pte = vmemmap_populate_address(addr, node, altmap, reuse);
		if (!pte)
			return -ENOMEM;
	}

	return 0;
}

int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
					 int node, struct vmem_altmap *altmap)
{
	return vmemmap_populate_range(start, end, node, altmap, NULL);
}

void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
				      unsigned long addr, unsigned long next)
{
}

int __weak __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
				       unsigned long addr, unsigned long next)
{
	return 0;
}
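
/*
 * Populate [start, end) of the vmemmap with PMD-sized blocks where possible,
 * falling back to base pages when no huge block can be allocated.
 */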
int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
					 int node, struct vmem_altmap *altmap)
{
	unsigned long addr;
	unsigned long next;
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;

	for (addr = start; addr < end; addr = next) {
		next = pmd_addr_end(addr, end);

		pgd = vmemmap_pgd_populate(addr, node);
		if (!pgd)
			return -ENOMEM;

		p4d = vmemmap_p4d_populate(pgd, addr, node);
		if (!p4d)
			return -ENOMEM;

		pud = vmemmap_pud_populate(p4d, addr, node);
		if (!pud)
			return -ENOMEM;

		pmd = pmd_offset(pud, addr);
		if (pmd_none(READ_ONCE(*pmd))) {
			void *p;

			p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
			if (p) {
				vmemmap_set_pmd(pmd, p, node, addr, next);
				continue;
			} else if (altmap) {
				/*
				 * No fallback: In any case we care about, the
				 * altmap should be reasonably sized and aligned
				 * such that vmemmap_alloc_block_buf() will always
				 * succeed. For consistency with the PTE case,
				 * return an error here as failure could indicate
				 * a configuration issue with the size of the altmap.
				 */
				return -ENOMEM;
			}
		} else if (vmemmap_check_pmd(pmd, node, addr, next))
			continue;
		if (vmemmap_populate_basepages(addr, next, node, altmap))
			return -ENOMEM;
	}
	return 0;
}

#ifndef vmemmap_populate_compound_pages
/*
 * For compound pages bigger than section size (e.g. x86 1G compound
 * pages with 2M subsection size) fill the rest of sections as tail
 * pages.
 *
 * Note that memremap_pages() resets @nr_range value and will increment
 * it after each successful range onlining. Thus the value of @nr_range
 * at section memmap populate corresponds to the in-progress range
 * being onlined here.
 */
static bool __meminit reuse_compound_section(unsigned long start_pfn,
					     struct dev_pagemap *pgmap)
{
	unsigned long nr_pages = pgmap_vmemmap_nr(pgmap);
	unsigned long offset = start_pfn -
		PHYS_PFN(pgmap->ranges[pgmap->nr_range].start);

	return !IS_ALIGNED(offset, nr_pages) && nr_pages > PAGES_PER_SUBSECTION;
}

static pte_t * __meminit compound_section_tail_page(unsigned long addr)
{
	pte_t *pte;

	addr -= PAGE_SIZE;

	/*
	 * Assuming sections are populated sequentially, the previous section's
	 * page data can be reused.
	 */
	pte = pte_offset_kernel(pmd_off_k(addr), addr);
	if (!pte)
		return NULL;

	return pte;
}

static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
						     unsigned long start,
						     unsigned long end, int node,
						     struct dev_pagemap *pgmap)
{
	unsigned long size, addr;
	pte_t *pte;
	int rc;

	if (reuse_compound_section(start_pfn, pgmap)) {
		pte = compound_section_tail_page(start);
		if (!pte)
			return -ENOMEM;

		/*
		 * Reuse the page that was populated in the prior iteration
		 * with just tail struct pages.
		 */
		return vmemmap_populate_range(start, end, node, NULL,
					      pte_page(ptep_get(pte)));
	}

	size = min(end - start, pgmap_vmemmap_nr(pgmap) * sizeof(struct page));
	for (addr = start; addr < end; addr += size) {
		unsigned long next, last = addr + size;

		/* Populate the head page vmemmap page */
		pte = vmemmap_populate_address(addr, node, NULL, NULL);
		if (!pte)
			return -ENOMEM;

		/* Populate the tail pages vmemmap page */
		next = addr + PAGE_SIZE;
		pte = vmemmap_populate_address(next, node, NULL, NULL);
		if (!pte)
			return -ENOMEM;

		/*
		 * Reuse the previous page for the rest of tail pages
		 * See layout diagram in Documentation/mm/vmemmap_dedup.rst
		 */
		next += PAGE_SIZE;
		rc = vmemmap_populate_range(next, last, node, NULL,
					    pte_page(ptep_get(pte)));
		if (rc)
			return -ENOMEM;
	}

	return 0;
}

#endif
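
/*
 * Populate the memmap for a subsection-aligned pfn range, using the compound
 * page optimization when possible, and account the pages backing it.
 */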
struct page * __meminit __populate_section_memmap(unsigned long pfn,
		unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
		struct dev_pagemap *pgmap)
{
	unsigned long start = (unsigned long) pfn_to_page(pfn);
	unsigned long end = start + nr_pages * sizeof(struct page);
	int r;

	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
		!IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
		return NULL;

	if (vmemmap_can_optimize(altmap, pgmap))
		r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);
	else
		r = vmemmap_populate(start, end, nid, altmap);

	if (r < 0)
		return NULL;

	if (system_state == SYSTEM_BOOTING)
		memmap_boot_pages_add(DIV_ROUND_UP(end - start, PAGE_SIZE));
	else
		memmap_pages_add(DIV_ROUND_UP(end - start, PAGE_SIZE));

	return pfn_to_page(pfn);
}