author     Roman Gushchin <klamm@yandex-team.ru>  2015-02-11 15:28:39 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>  2015-02-11 17:06:07 -0800
commit     5703b087dc8eaf47bfb399d6cf512d471beff405 (patch)
tree       7891389716c896942b77627c6ea3b8033db8d24e /fs/nfs/delegation.c
parent     57c2e36b6f4dd52e7e90f4c748a665b13fa228d2 (diff)
download   linux-5703b087dc8eaf47bfb399d6cf512d471beff405.tar.gz
mm/mmap.c: fix arithmetic overflow in __vm_enough_memory()
I noticed that "allowed" can easily overflow by falling below 0, because
(total_vm / 32) can be larger than "allowed".  The problem occurs in
OVERCOMMIT_NEVER mode (overcommit_memory = 2).

In this case, a huge allocation can succeed and overcommit the system
(despite OVERCOMMIT_NEVER mode).  All subsequent allocations will fail
(system-wide), so the system becomes unusable.

The problem was masked by commit c9b1d0981fcc
("mm: limit growth of 3% hardcoded other user reserve"),
but it is easy to reproduce on older kernels (sketched below):
1) set overcommit_memory sysctl to 2
2) mmap() large file multiple times (with VM_SHARED flag)
3) try to malloc() large amount of memory
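
As a rough illustration of those steps, a minimal user-space reproducer
might look like the sketch below.  The file path, mapping count and
allocation size are illustrative assumptions, not part of the original
report.

/* Hedged reproducer sketch for the steps above (older kernels,
 * vm.overcommit_memory = 2).  /tmp/bigfile, the mapping count and the
 * malloc() size are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	int fd = open("/tmp/bigfile", O_RDONLY);	/* a pre-created large file */
	if (fd < 0)
		return 1;
	off_t len = lseek(fd, 0, SEEK_END);

	/* Step 2: map the same file many times with MAP_SHARED.  Shared
	 * file mappings grow total_vm without being charged as committed
	 * memory, which is what pushes (total_vm / 32) above "allowed". */
	for (int i = 0; i < 64; i++)
		mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);

	/* Step 3: a large malloc().  With overcommit_memory = 2 this should
	 * fail; on affected kernels the wrapped "allowed" value lets it
	 * succeed instead. */
	void *p = malloc(16UL << 30);			/* 16 GiB, illustrative */
	printf("malloc %s\n", p ? "succeeded (bug)" : "failed (expected)");
	return 0;
}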

It can also be reproduced on newer kernels, but a misconfigured
sysctl_user_reserve_kbytes is required.

Fix this issue by switching to signed arithmetic here.

[akpm@linux-foundation.org: use min_t]
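
A minimal stand-alone sketch of the arithmetic involved is below.  This
is not the kernel source; the numbers and the local min_t() macro are
assumptions chosen only to show how the unsigned subtraction wraps and
how doing it in signed arithmetic with a min_t(long, ...)-style clamp
behaves instead.

/* Sketch of the overflow and the signed-arithmetic fix; the values and
 * the local min_t() macro are illustrative, not the kernel code. */
#include <stdio.h>

#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
	unsigned long allowed  = 1000;	/* pages the commit limit would allow */
	unsigned long total_vm = 64000;	/* pages mapped by the process */
	unsigned long reserve  = 4000;	/* user reserve, in pages */

	/* Buggy pattern: everything unsigned, so when total_vm / 32 (2000)
	 * exceeds "allowed" (1000), the subtraction wraps to a huge value
	 * and the allocation is wrongly allowed. */
	unsigned long wrapped = allowed -
		(total_vm / 32 < reserve ? total_vm / 32 : reserve);

	/* Fixed pattern: do the subtraction in signed arithmetic, clamped
	 * with min_t(long, ...), so the result can legitimately go
	 * negative and the allocation is refused. */
	long fixed = (long)allowed - min_t(long, total_vm / 32, reserve);

	printf("unsigned: %lu (wrapped to a huge value)\n", wrapped);
	printf("signed:   %ld (negative => allocation denied)\n", fixed);
	return 0;
}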
Signed-off-by: Roman Gushchin <klamm@yandex-team.ru>
Cc: Andrew Shewmaker <agshew@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>