
mm: memcg: add cache line padding to mem_cgroup_per_node

Memcg v1-specific fields act as a buffer between the read-mostly and
update-often parts of the mem_cgroup_per_node structure.  If
CONFIG_MEMCG_V1 is not set, these fields are not present and an explicit
cacheline padding is needed instead.

Link: https://lkml.kernel.org/r/20240701185932.704807-2-roman.gushchin@linux.dev
Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Suggested-by: Shakeel Butt <shakeel.butt@linux.dev>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit 6df13230b6 (parent 9fa001cf3b)
Author: Roman Gushchin, 2024-07-01 18:59:32 +00:00; committed by Andrew Morton
@@ -95,14 +95,16 @@ struct mem_cgroup_per_node {
 #ifdef CONFIG_MEMCG_V1
 	/*
 	 * Memcg-v1 only stuff in middle as buffer between read mostly fields
-	 * and update often fields to avoid false sharing. Once v1 stuff is
-	 * moved in a separate struct, an explicit padding is needed.
+	 * and update often fields to avoid false sharing. If v1 stuff is
+	 * not present, an explicit padding is needed.
 	 */
 	struct rb_node		tree_node;	/* RB tree node */
 	unsigned long		usage_in_excess;/* Set to the value by which */
 						/* the soft limit is exceeded*/
 	bool			on_tree;
+#else
+	CACHELINE_PADDING(_pad1_);
 #endif
 	/* Fields which get updated often at the end. */