d6e0b7fa11
To speed up further allocations, SLUB may store empty slabs on its per-cpu/per-node partial lists instead of freeing them immediately. This prevents per-memcg cache destruction, because kmem caches created for a memory cgroup are only destroyed after the last page charged to the cgroup is freed.

To fix this issue, this patch resurrects the approach first proposed in [1]. It forbids SLUB from caching empty slabs after the memory cgroup that the cache belongs to has been destroyed. This is achieved by setting the kmem_cache's cpu_partial and min_partial constants to 0 and tuning put_cpu_partial() so that it drops frozen empty slabs immediately if cpu_partial == 0.

The runtime overhead is minimal. Of all the hot functions, we only touch the relatively cold put_cpu_partial(): we make it call unfreeze_partials() after freezing a slab that belongs to an offline memory cgroup. Since slab freezing exists to avoid moving slabs from/to a partial list on free/alloc, and there can't be allocations from dead caches, it shouldn't cause any overhead. We do, however, have to disable preemption in put_cpu_partial() to achieve this.

The original patch was well received and even merged into the mm tree. However, I decided to withdraw it due to changes happening to the memcg core at that time. I had been considering per-memcg shrinkers for kmem caches instead, but now that memcg has finally settled down I no longer see that as an option: a SLUB shrinker would be too costly to invoke, since SLUB does not keep free slabs on a separate list, and we currently do not even call per-memcg shrinkers for offline memcgs. Overall, it would introduce much more complexity to both SLUB and memcg than this small patch.

As for SLAB, there is no such problem, because it shrinks its per-cpu/per-node caches periodically. Thanks to list_lru reparenting, we no longer keep entries for offline cgroups in per-memcg arrays (such as memcg_cache_params->memcg_caches), so we do not have to worry if a per-memcg cache is shrunk a bit later than it could have been.

[1] http://thread.gmane.org/gmane.linux.kernel.mm/118649/focus=118650

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
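For illustration, below is a minimal sketch of the mechanism the message describes, assuming the mm/slub.c internals of that era (put_cpu_partial(), unfreeze_partials(), and the cpu_partial/min_partial fields of struct kmem_cache). The helper name deactivate_dead_cache() is hypothetical; in the merged patch this logic sits in the cache deactivation/shrink path, so the exact hunks differ.

/*
 * Sketch only, not the verbatim patch.  When a cache belonging to a
 * destroyed memcg is deactivated, empty-slab caching is switched off
 * by zeroing the partial-list limits:
 */
static void deactivate_dead_cache(struct kmem_cache *s)  /* hypothetical name */
{
	s->cpu_partial = 0;	/* keep no frozen slabs on per-cpu partial lists */
	s->min_partial = 0;	/* keep no empty slabs on per-node partial lists */

	/*
	 * s->cpu_partial is read locklessly in put_cpu_partial(), so the
	 * update must be made visible on all CPUs before relying on it.
	 */
	kick_all_cpus_sync();
}

/*
 * put_cpu_partial() then drops a frozen empty slab at once instead of
 * caching it.  Preemption is disabled around the whole function so that
 * this_cpu_ptr() below refers to the same CPU that just froze the slab:
 */
static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
{
	preempt_disable();

	/* ... existing code that freezes the slab onto s->cpu_slab->partial ... */

	if (unlikely(!s->cpu_partial)) {
		unsigned long flags;

		local_irq_save(flags);
		unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
		local_irq_restore(flags);
	}
	preempt_enable();
}

Since there can be no allocations from a dead cache, the unfreeze_partials() call in this path only ever fires for slabs freed back to an offline memcg's cache, which is why the hot paths are unaffected.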