linux/mm
Minchan Kim 0e50ce3b50 mm: use up free swap space before reaching OOM kill
Recently, Luigi reported that there is lots of free swap space left when OOM
happens.  It's easily reproduced on zram-over-swap, where many instances
of memory hogs are running and laptop_mode is enabled.  He said there
was no problem when he disabled laptop_mode.  What I found when I
investigated the problem is as follows.

Assumption for easy explanation: There are no page cache page in system
because they all are already reclaimed.

1. try_to_free_pages disables may_writepage when laptop_mode is enabled.
2. shrink_inactive_list isolates victim pages from the inactive anon lru list.
3. shrink_page_list adds them to the swapcache via add_to_swap but doesn't
   page them out because sc->may_writepage is 0, so the pages are rotated back
   into the inactive anon lru list.  add_to_swap marked them Dirty via
   SetPageDirty.
4. Step 3 couldn't reclaim any pages, so do_try_to_free_pages increases the
   priority and retries reclaim at the higher priority.
5. shrink_inactive_list tries to isolate victim pages from the inactive anon
   lru list but fails because it isolates in ISOLATE_CLEAN mode while the
   inactive anon lru list is full of dirty pages from step 3, so it returns
   without any reclaim progress.
6. do_try_to_free_pages doesn't set may_writepage because total_scanned is
   zero: sc->nr_scanned is only increased by shrink_page_list, which is never
   called in step 5 for lack of isolated pages.

This loop continues until OOM happens.

The problem didn't happen before [1] was merged because the old logic's
isolation in shrink_inactive_list succeeded and shrink_page_list was
called to page the victims out, which still failed because of
may_writepage.  But the important point is that sc->nr_scanned was
increased even though we couldn't swap them out, so do_try_to_free_pages
could set may_writepage.

Since commit f80c067361 ("mm: zone_reclaim: make isolate_lru_page()
filter-aware") was introduced, it is no longer a good idea to depend
only on the number of scanned pages for setting may_writepage.  So this
patch adds a new trigger point for setting may_writepage: priority below
DEF_PRIORITY - 2, which indicates significant memory pressure in the VM.
That is a good fit for our purpose, since it's better to lose some power
saving (or disk quietness) than to OOM-kill.
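Continuing the toy model from above, the new trigger can be sketched like this.  The DEF_PRIORITY - 2 check matches the description in this log; everything else (function names, the early return) is illustrative only, not the actual vmscan.c change.

```c
#include <stdbool.h>

#define DEF_PRIORITY 12

/* With the patch, the priority loop itself enables may_writepage once
 * priority drops below DEF_PRIORITY - 2, independent of nr_scanned. */
static bool reclaim_with_fix(void)
{
	bool may_writepage = false;	/* cleared by laptop_mode */
	int priority;

	for (priority = DEF_PRIORITY; priority >= 0; priority--) {
		if (priority < DEF_PRIORITY - 2)
			may_writepage = true;	/* significant pressure */
		if (may_writepage)
			return true;	/* dirty anon pages can now be
					 * paged out to the free swap */
	}
	return false;			/* would mean OOM, as before */
}
```

In the model the trigger fires at priority DEF_PRIORITY - 3, so reclaim_with_fix() returns true: the dirty anon pages become eligible for pageout well before the priority loop is exhausted, using up the free swap space instead of invoking the OOM killer.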

Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: Luigi Semenzato <semenzato@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-02-23 17:50:21 -08:00
backing-dev.c bdi: allow block devices to say that they require stable page writes 2013-02-21 17:22:19 -08:00
balloon_compaction.c mm: introduce a common interface for balloon pages mobility 2012-12-11 17:22:26 -08:00
bootmem.c mm: Add alloc_bootmem_low_pages_nopanic() 2013-01-29 19:32:59 -08:00
bounce.c block: optionally snapshot page contents to provide stable pages during write 2013-02-21 17:22:20 -08:00
cleancache.c
compaction.c mm: add & use zone_end_pfn() and zone_spans_pfn() 2013-02-23 17:50:20 -08:00
debug-pagealloc.c
dmapool.c dmapool: make DMAPOOL_DEBUG detect corruption of free marker 2012-12-11 17:22:24 -08:00
fadvise.c
failslab.c
filemap_xip.c
filemap.c mm: only enforce stable page writes if the backing device requires it 2013-02-21 17:22:19 -08:00
fremap.c mm: introduce VM_POPULATE flag to better deal with racy userspace programs 2013-02-23 17:50:11 -08:00
frontswap.c
highmem.c Some nice cleanups, and even a patch my wife did as a "live" demo for 2012-12-20 08:37:05 -08:00
huge_memory.c mm: use NUMA_NO_NODE 2013-02-23 17:50:21 -08:00
hugetlb_cgroup.c mm/hugetlb: create hugetlb cgroup file in hugetlb_init 2012-12-18 15:02:15 -08:00
hugetlb.c mm/hugetlb.c: convert to pr_foo() 2013-02-23 17:50:09 -08:00
hwpoison-inject.c
init-mm.c
internal.h mm: directly use __mlock_vma_pages_range() in find_extend_vma() 2013-02-23 17:50:11 -08:00
interval_tree.c
Kconfig memory-hotplug: implement register_page_bootmem_info_section of sparse-vmemmap 2013-02-23 17:50:12 -08:00
Kconfig.debug
kmemcheck.c
kmemleak-test.c
kmemleak.c mm: add & use zone_end_pfn() and zone_spans_pfn() 2013-02-23 17:50:20 -08:00
ksm.c ksm: stop hotremove lockdep warning 2013-02-23 17:50:20 -08:00
maccess.c
madvise.c mm: make madvise(MADV_WILLNEED) support swap file prefetch 2013-02-23 17:50:10 -08:00
Makefile mm: introduce a common interface for balloon pages mobility 2012-12-11 17:22:26 -08:00
memblock.c mm/memblock.c: use CONFIG_HAVE_MEMBLOCK_NODE_MAP to protect movablecore_map in memblock_overlaps_region(). 2013-02-23 17:50:14 -08:00
memcontrol.c mm: refactor inactive_file_is_low() to use get_lru_size() 2013-02-23 17:50:20 -08:00
memory_hotplug.c mm/memory_hotplug: use pgdat_end_pfn() instead of open coding the same. 2013-02-23 17:50:21 -08:00
memory-failure.c mm: remove offlining arg to migrate_pages 2013-02-23 17:50:19 -08:00
memory.c ksm: remove old stable nodes more thoroughly 2013-02-23 17:50:19 -08:00
mempolicy.c mm: use NUMA_NO_NODE 2013-02-23 17:50:21 -08:00
mempool.c
migrate.c mm: remove offlining arg to migrate_pages 2013-02-23 17:50:19 -08:00
mincore.c swap: make each swap partition have one address_space 2013-02-23 17:50:17 -08:00
mlock.c mm/mlock.c: document scary-looking stack expansion mlock chain 2013-02-23 17:50:20 -08:00
mm_init.c mm: init: report on last-nid information stored in page->flags 2013-02-23 17:50:18 -08:00
mmap.c mm/rmap: rename anon_vma_unlock() => anon_vma_unlock_write() 2013-02-23 17:50:17 -08:00
mmu_context.c
mmu_notifier.c mmu_notifier_unregister NULL Pointer deref and multiple ->release() callouts 2013-02-23 17:50:21 -08:00
mmzone.c mm: rename page struct field helpers 2013-02-23 17:50:18 -08:00
mprotect.c mm/mprotect.c: coding-style cleanups 2012-12-18 15:02:15 -08:00
mremap.c mm/rmap: rename anon_vma_unlock() => anon_vma_unlock_write() 2013-02-23 17:50:17 -08:00
msync.c
nobootmem.c mm: Add alloc_bootmem_low_pages_nopanic() 2013-01-29 19:32:59 -08:00
nommu.c swap: add per-partition lock for swapfile 2013-02-23 17:50:17 -08:00
oom_kill.c memcg, oom: provide more precise dump info while memcg oom happening 2013-02-23 17:50:08 -08:00
page_alloc.c mm: use NUMA_NO_NODE 2013-02-23 17:50:21 -08:00
page_cgroup.c memcontrol: use N_MEMORY instead N_HIGH_MEMORY 2012-12-12 17:38:32 -08:00
page_io.c
page_isolation.c mm: fix zone_watermark_ok_safe() accounting of isolated pages 2013-01-04 16:11:46 -08:00
page-writeback.c page-writeback.c: subtract min_free_kbytes from dirtyable memory 2013-02-23 17:50:17 -08:00
pagewalk.c thp: change split_huge_page_pmd() interface 2012-12-12 17:38:31 -08:00
percpu-km.c
percpu-vm.c
percpu.c
pgtable-generic.c
process_vm_access.c
quicklist.c
readahead.c
rmap.c mm/rmap: rename anon_vma_unlock() => anon_vma_unlock_write() 2013-02-23 17:50:17 -08:00
shmem.c mm: shmem: use new radix tree iterator 2013-02-23 17:50:20 -08:00
slab_common.c slab: propagate tunable values 2012-12-18 15:02:14 -08:00
slab.c memcg: add comments clarifying aspects of cache attribute propagation 2012-12-18 15:02:15 -08:00
slab.h slab: propagate tunable values 2012-12-18 15:02:14 -08:00
slob.c mm: rename page struct field helpers 2013-02-23 17:50:18 -08:00
slub.c mm: rename page struct field helpers 2013-02-23 17:50:18 -08:00
sparse-vmemmap.c
sparse.c memory-failure: use num_poisoned_pages instead of mce_bad_pages 2013-02-23 17:50:15 -08:00
swap_state.c swap: add per-partition lock for swapfile 2013-02-23 17:50:17 -08:00
swap.c swap: make each swap partition have one address_space 2013-02-23 17:50:17 -08:00
swapfile.c swap: add per-partition lock for swapfile 2013-02-23 17:50:17 -08:00
truncate.c mm: drop vmtruncate 2012-12-20 18:46:29 -05:00
util.c swap: make each swap partition have one address_space 2013-02-23 17:50:17 -08:00
vmalloc.c mm: use NUMA_NO_NODE 2013-02-23 17:50:21 -08:00
vmscan.c mm: use up free swap space before reaching OOM kill 2013-02-23 17:50:21 -08:00
vmstat.c mm: add & use zone_end_pfn() and zone_spans_pfn() 2013-02-23 17:50:20 -08:00