<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/mm, branch v4.14.73</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v4.14.73</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v4.14.73'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2018-09-29T10:06:04Z</updated>
<entry>
<title>mm: shmem.c: Correctly annotate new inodes for lockdep</title>
<updated>2018-09-29T10:06:04Z</updated>
<author>
<name>Joel Fernandes (Google)</name>
<email>joel@joelfernandes.org</email>
</author>
<published>2018-09-20T19:22:39Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=6447b34fc2708725e97707ab8256a9a88b31682d'/>
<id>urn:sha1:6447b34fc2708725e97707ab8256a9a88b31682d</id>
<content type='text'>
commit b45d71fb89ab8adfe727b9d0ee188ed58582a647 upstream.

Directories and inodes don't necessarily need to be in the same lockdep
class.  For example, hugetlbfs also splits them out to prevent false
positives in lockdep.  Annotate correctly after new inode creation.  If
it's a directory inode, it will be put into a different class.
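
The per-kind split can be sketched in plain C. The struct, the enum and
annotate_new_inode() below are illustrative stand-ins for the kernel's
lockdep key machinery (lockdep_annotate_inode_mutex_key() and friends),
not actual shmem code:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for per-class lockdep keys. */
enum lock_class { CLASS_FILE_INODE, CLASS_DIR_INODE };

struct inode_sketch {
    bool is_dir;
    enum lock_class lock_class;
};

/* After creating a new inode, pick the lockdep class by inode kind,
 * the way hugetlbfs (and now shmem) splits directories out, so that
 * directory-lock and file-lock orderings are tracked separately and
 * don't produce false circular-dependency reports. */
static void annotate_new_inode(struct inode_sketch *inode)
{
    inode->lock_class = inode->is_dir ? CLASS_DIR_INODE : CLASS_FILE_INODE;
}
```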

This should fix a lockdep splat reported by syzbot:

&gt; ======================================================
&gt; WARNING: possible circular locking dependency detected
&gt; 4.18.0-rc8-next-20180810+ #36 Not tainted
&gt; ------------------------------------------------------
&gt; syz-executor900/4483 is trying to acquire lock:
&gt; 00000000d2bfc8fe (&amp;sb-&gt;s_type-&gt;i_mutex_key#9){++++}, at: inode_lock
&gt; include/linux/fs.h:765 [inline]
&gt; 00000000d2bfc8fe (&amp;sb-&gt;s_type-&gt;i_mutex_key#9){++++}, at:
&gt; shmem_fallocate+0x18b/0x12e0 mm/shmem.c:2602
&gt;
&gt; but task is already holding lock:
&gt; 0000000025208078 (ashmem_mutex){+.+.}, at: ashmem_shrink_scan+0xb4/0x630
&gt; drivers/staging/android/ashmem.c:448
&gt;
&gt; which lock already depends on the new lock.
&gt;
&gt; -&gt; #2 (ashmem_mutex){+.+.}:
&gt;        __mutex_lock_common kernel/locking/mutex.c:925 [inline]
&gt;        __mutex_lock+0x171/0x1700 kernel/locking/mutex.c:1073
&gt;        mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1088
&gt;        ashmem_mmap+0x55/0x520 drivers/staging/android/ashmem.c:361
&gt;        call_mmap include/linux/fs.h:1844 [inline]
&gt;        mmap_region+0xf27/0x1c50 mm/mmap.c:1762
&gt;        do_mmap+0xa10/0x1220 mm/mmap.c:1535
&gt;        do_mmap_pgoff include/linux/mm.h:2298 [inline]
&gt;        vm_mmap_pgoff+0x213/0x2c0 mm/util.c:357
&gt;        ksys_mmap_pgoff+0x4da/0x660 mm/mmap.c:1585
&gt;        __do_sys_mmap arch/x86/kernel/sys_x86_64.c:100 [inline]
&gt;        __se_sys_mmap arch/x86/kernel/sys_x86_64.c:91 [inline]
&gt;        __x64_sys_mmap+0xe9/0x1b0 arch/x86/kernel/sys_x86_64.c:91
&gt;        do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
&gt;        entry_SYSCALL_64_after_hwframe+0x49/0xbe
&gt;
&gt; -&gt; #1 (&amp;mm-&gt;mmap_sem){++++}:
&gt;        __might_fault+0x155/0x1e0 mm/memory.c:4568
&gt;        _copy_to_user+0x30/0x110 lib/usercopy.c:25
&gt;        copy_to_user include/linux/uaccess.h:155 [inline]
&gt;        filldir+0x1ea/0x3a0 fs/readdir.c:196
&gt;        dir_emit_dot include/linux/fs.h:3464 [inline]
&gt;        dir_emit_dots include/linux/fs.h:3475 [inline]
&gt;        dcache_readdir+0x13a/0x620 fs/libfs.c:193
&gt;        iterate_dir+0x48b/0x5d0 fs/readdir.c:51
&gt;        __do_sys_getdents fs/readdir.c:231 [inline]
&gt;        __se_sys_getdents fs/readdir.c:212 [inline]
&gt;        __x64_sys_getdents+0x29f/0x510 fs/readdir.c:212
&gt;        do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
&gt;        entry_SYSCALL_64_after_hwframe+0x49/0xbe
&gt;
&gt; -&gt; #0 (&amp;sb-&gt;s_type-&gt;i_mutex_key#9){++++}:
&gt;        lock_acquire+0x1e4/0x540 kernel/locking/lockdep.c:3924
&gt;        down_write+0x8f/0x130 kernel/locking/rwsem.c:70
&gt;        inode_lock include/linux/fs.h:765 [inline]
&gt;        shmem_fallocate+0x18b/0x12e0 mm/shmem.c:2602
&gt;        ashmem_shrink_scan+0x236/0x630 drivers/staging/android/ashmem.c:455
&gt;        ashmem_ioctl+0x3ae/0x13a0 drivers/staging/android/ashmem.c:797
&gt;        vfs_ioctl fs/ioctl.c:46 [inline]
&gt;        file_ioctl fs/ioctl.c:501 [inline]
&gt;        do_vfs_ioctl+0x1de/0x1720 fs/ioctl.c:685
&gt;        ksys_ioctl+0xa9/0xd0 fs/ioctl.c:702
&gt;        __do_sys_ioctl fs/ioctl.c:709 [inline]
&gt;        __se_sys_ioctl fs/ioctl.c:707 [inline]
&gt;        __x64_sys_ioctl+0x73/0xb0 fs/ioctl.c:707
&gt;        do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
&gt;        entry_SYSCALL_64_after_hwframe+0x49/0xbe
&gt;
&gt; other info that might help us debug this:
&gt;
&gt; Chain exists of:
&gt;   &amp;sb-&gt;s_type-&gt;i_mutex_key#9 --&gt; &amp;mm-&gt;mmap_sem --&gt; ashmem_mutex
&gt;
&gt;  Possible unsafe locking scenario:
&gt;
&gt;        CPU0                    CPU1
&gt;        ----                    ----
&gt;   lock(ashmem_mutex);
&gt;                                lock(&amp;mm-&gt;mmap_sem);
&gt;                                lock(ashmem_mutex);
&gt;   lock(&amp;sb-&gt;s_type-&gt;i_mutex_key#9);
&gt;
&gt;  *** DEADLOCK ***
&gt;
&gt; 1 lock held by syz-executor900/4483:
&gt;  #0: 0000000025208078 (ashmem_mutex){+.+.}, at:
&gt; ashmem_shrink_scan+0xb4/0x630 drivers/staging/android/ashmem.c:448

Link: http://lkml.kernel.org/r/20180821231835.166639-1-joel@joelfernandes.org
Signed-off-by: Joel Fernandes (Google) &lt;joel@joelfernandes.org&gt;
Reported-by: syzbot &lt;syzkaller@googlegroups.com&gt;
Reviewed-by: NeilBrown &lt;neilb@suse.com&gt;
Suggested-by: NeilBrown &lt;neilb@suse.com&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Hugh Dickins &lt;hughd@google.com&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>mm: get rid of vmacache_flush_all() entirely</title>
<updated>2018-09-19T20:43:48Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2018-09-13T09:57:48Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=06274364edb4407b386a996a7ff46c3ca3459b70'/>
<id>urn:sha1:06274364edb4407b386a996a7ff46c3ca3459b70</id>
<content type='text'>
commit 7a9cdebdcc17e426fb5287e4a82db1dfe86339b2 upstream.

Jann Horn points out that the vmacache_flush_all() function is not only
potentially expensive, it's buggy too.  It also happens to be entirely
unnecessary, because the sequence number overflow case can be avoided by
simply making the sequence number be 64-bit.  That doesn't even grow the
data structures in question, because the other adjacent fields are
already 64-bit.

So simplify the whole thing by just making the sequence number overflow
case go away entirely, which gets rid of all the complications and makes
the code faster too.  Win-win.

[ Oleg Nesterov points out that the VMACACHE_FULL_FLUSHES statistics
  also just goes away entirely with this ]
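
The widening trick is easy to demonstrate in isolation; this is a
hypothetical userspace illustration, not the vmacache code itself:

```c
#include <stdint.h>

/* A 32-bit sequence number wraps after 2^32 increments, so a stale
 * cached value can compare equal again -- that is the overflow case
 * vmacache_flush_all() existed to handle. */
static uint32_t bump32(uint32_t seq) { return seq + 1; }

/* A 64-bit counter bumped once per nanosecond would still take
 * centuries to wrap, so the overflow path can simply be deleted. */
static uint64_t bump64(uint64_t seq) { return seq + 1; }
```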

Reported-by: Jann Horn &lt;jannh@google.com&gt;
Suggested-by: Will Deacon &lt;will.deacon@arm.com&gt;
Acked-by: Davidlohr Bueso &lt;dave@stgolabs.net&gt;
Cc: Oleg Nesterov &lt;oleg@redhat.com&gt;
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>mm/fadvise.c: fix signed overflow UBSAN complaint</title>
<updated>2018-09-15T07:45:28Z</updated>
<author>
<name>Andrey Ryabinin</name>
<email>aryabinin@virtuozzo.com</email>
</author>
<published>2018-08-17T22:46:57Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=4570403f6e116701503052011d129e7f86e44abd'/>
<id>urn:sha1:4570403f6e116701503052011d129e7f86e44abd</id>
<content type='text'>
[ Upstream commit a718e28f538441a3b6612da9ff226973376cdf0f ]

Signed integer overflow is undefined according to the C standard.  The
overflow in ksys_fadvise64_64() is deliberate, but since it is signed
overflow, UBSAN complains:

	UBSAN: Undefined behaviour in mm/fadvise.c:76:10
	signed integer overflow:
	4 + 9223372036854775805 cannot be represented in type 'long long int'

Use unsigned types to do math.  Unsigned overflow is defined so UBSAN
will not complain about it.  This patch doesn't change generated code.
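
The principle generalizes: do the addition in an unsigned type, where
wraparound is well defined, then clamp. A userspace sketch of that idea
follows; end_offset() is a made-up name, not the kernel's code:

```c
#include <assert.h>
#include <limits.h>

/* Compute offset + len without signed-overflow UB: the sum is done in
 * unsigned long long, where overflow is defined, then clamped.  With
 * both inputs non-negative the unsigned sum cannot wrap, because
 * 2 * LLONG_MAX is still below ULLONG_MAX. */
static long long end_offset(long long offset, long long len)
{
    unsigned long long end = (unsigned long long)offset +
                             (unsigned long long)len;
    if (end > (unsigned long long)LLONG_MAX)
        return LLONG_MAX;   /* clamp instead of overflowing */
    return (long long)end;
}
```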

[akpm@linux-foundation.org: add comment explaining the casts]
Link: http://lkml.kernel.org/r/20180629184453.7614-1-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin &lt;aryabinin@virtuozzo.com&gt;
Reported-by: &lt;icytxw@gmail.com&gt;
Reviewed-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Alexander Potapenko &lt;glider@google.com&gt;
Cc: Dmitry Vyukov &lt;dvyukov@google.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@microsoft.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>mm/tlb: Remove tlb_remove_table() non-concurrent condition</title>
<updated>2018-09-09T17:55:59Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2018-08-22T15:30:14Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=7cf82f3b7a7710dc29fc90ce83bee8a1ea7ff6fb'/>
<id>urn:sha1:7cf82f3b7a7710dc29fc90ce83bee8a1ea7ff6fb</id>
<content type='text'>
commit a6f572084fbee8b30f91465f4a085d7a90901c57 upstream.

Will noted that only checking mm_users is incorrect; we should also
check mm_count in order to cover CPUs that have a lazy reference to
this mm (and could do speculative TLB operations).

If removing this turns out to be a performance issue, we can
reinstate a more complete check, but elide the call_rcu_sched() in
tlb_table_flush().
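
The point Will raised can be modelled in a few lines. The field names
mirror the kernel's, but the structure and the check are a hypothetical
stand-in, not the removed tlb_remove_table() code:

```c
#include <assert.h>

/* A page table may only be freed without waiting for a grace period if
 * no other CPU can reference the mm: neither a user reference
 * (mm_users) nor a lazy kernel reference (mm_count) that could still
 * do speculative TLB operations. */
struct mm_sketch {
    int mm_users;   /* tasks using the address space */
    int mm_count;   /* references, incl. lazy-TLB kernel threads */
};

static int table_free_fast_path_ok(const struct mm_sketch *mm)
{
    /* Checking mm_users == 1 alone is the insufficient condition;
     * mm_count must also show no lazy references. */
    return mm->mm_users == 1 && mm->mm_count == 1;
}
```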

Fixes: 267239116987 ("mm, powerpc: move the RCU page-table freeing into generic code")
Reported-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Acked-by: Rik van Riel &lt;riel@surriel.com&gt;
Acked-by: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Cc: David Miller &lt;davem@davemloft.net&gt;
Cc: Martin Schwidefsky &lt;schwidefsky@de.ibm.com&gt;
Cc: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>readahead: stricter check for bdi io_pages</title>
<updated>2018-09-09T17:55:53Z</updated>
<author>
<name>Markus Stockhausen</name>
<email>stockhausen@collogia.de</email>
</author>
<published>2018-07-27T15:09:53Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=cf12d0f9c0dc9129f08490390d14ee4d9dbf6ebb'/>
<id>urn:sha1:cf12d0f9c0dc9129f08490390d14ee4d9dbf6ebb</id>
<content type='text'>
commit dc30b96ab6d569060741572cf30517d3179429a8 upstream.

ondemand_readahead() checks bdi-&gt;io_pages to cap the maximum pages
that need to be processed. This works until the readit section. If
we do an async-only readahead (async size = sync size) and the
target is at the beginning of the window, we expand the pages by
another get_next_ra_size() pages. blktrace for large reads shows
that the kernel always issues a doubled-size read at the beginning
of processing. Add an additional check for io_pages in the lower
part of the function.
The fix helps devices that hard limit bio pages and rely on proper
handling of max_hw_read_sectors (e.g. older FusionIO cards). For
that reason it could qualify for stable.
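
The shape of the fix can be sketched with a simplified doubling policy
and an explicit io_pages clamp in the expansion path. Names and logic
below are illustrative, not the kernel's ondemand_readahead():

```c
#include <assert.h>

/* Simplified doubling policy, as the message describes. */
static unsigned long next_ra_size(unsigned long cur, unsigned long ra_max)
{
    unsigned long next = cur * 2;
    return next < ra_max ? next : ra_max;
}

/* The added check: after the window is expanded for an async-only
 * readahead, cap the request at the bdi's io_pages limit too, so the
 * doubled-size read never exceeds what the device can accept. */
static unsigned long expand_window(unsigned long cur, unsigned long ra_max,
                                   unsigned long io_pages)
{
    unsigned long nr = cur + next_ra_size(cur, ra_max);
    return nr < io_pages ? nr : io_pages;
}
```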

Fixes: 9491ae4a ("mm: don't cap request size based on read-ahead setting")
Cc: stable@vger.kernel.org
Signed-off-by: Markus Stockhausen &lt;stockhausen@collogia.de&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>mm/tlb, x86/mm: Support invalidating TLB caches for RCU_TABLE_FREE</title>
<updated>2018-09-05T07:26:37Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2018-08-22T15:30:15Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=e9afa7c1ef17708e9dd6714c151aab81be4a5f68'/>
<id>urn:sha1:e9afa7c1ef17708e9dd6714c151aab81be4a5f68</id>
<content type='text'>
commit d86564a2f085b79ec046a5cba90188e612352806 upstream.

Jann reported that x86 was missing required TLB invalidates when he
hit the !*batch slow path in tlb_remove_table().

This is indeed the case; RCU_TABLE_FREE does not provide TLB (cache)
invalidates, the PowerPC-hash where this code originated and the
Sparc-hash where this was subsequently used did not need that. ARM
which later used this put an explicit TLB invalidate in their
__p*_free_tlb() functions, and PowerPC-radix followed that example.

But when we hooked up x86 we failed to consider this. Fix this by
(optionally) hooking tlb_remove_table() into the TLB invalidate code.
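
The ordering the fix enforces can be modelled with two flags. This is a
hypothetical stand-in for the !*batch slow path, not the kernel's
batching code:

```c
#include <assert.h>
#include <stdbool.h>

static bool tlb_flushed;
static bool freed_after_flush;

static void tlb_flush_sketch(void)  { tlb_flushed = true; }
static void free_table_sketch(void) { freed_after_flush = tlb_flushed; }

/* The slow path after the fix: invalidate the TLB before the
 * page-table page is freed, so no CPU can still walk the freed table
 * through a stale paging-structure cache entry. */
static void tlb_remove_table_slowpath(void)
{
    tlb_flush_sketch();
    free_table_sketch();
}
```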

NOTE: s390 also needed something like this and might now
      be able to use the generic code again.

[ Modified to be on top of Nick's cleanups, which simplified this patch
  now that tlb_flush_mmu_tlbonly() really only flushes the TLB - Linus ]

Fixes: 9e52fc2b50de ("x86/mm: Enable RCU based page table freeing (CONFIG_HAVE_RCU_TABLE_FREE=y)")
Reported-by: Jann Horn &lt;jannh@google.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Acked-by: Rik van Riel &lt;riel@surriel.com&gt;
Cc: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Cc: David Miller &lt;davem@davemloft.net&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: Martin Schwidefsky &lt;schwidefsky@de.ibm.com&gt;
Cc: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>mm: move tlb_table_flush to tlb_flush_mmu_free</title>
<updated>2018-09-05T07:26:36Z</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2018-08-23T08:47:08Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=3e0994616d4ab790efd574c7f969cdc27491da88'/>
<id>urn:sha1:3e0994616d4ab790efd574c7f969cdc27491da88</id>
<content type='text'>
commit db7ddef301128dad394f1c0f77027f86ee9a4edb upstream.

There is no need to call this from tlb_flush_mmu_tlbonly, it logically
belongs with tlb_flush_mmu_free.  This makes future fixes simpler.

[ This was originally done to allow code consolidation for the
  mmu_notifier fix, but it also ends up helping simplify the
  HAVE_RCU_TABLE_INVALIDATE fix.    - Linus ]

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Acked-by: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>mm/memory.c: check return value of ioremap_prot</title>
<updated>2018-09-05T07:26:33Z</updated>
<author>
<name>chen jie</name>
<email>chenjie6@huawei.com</email>
</author>
<published>2018-08-11T00:23:06Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=cf7ab2abc524ec8499fabfa064d9c1f754092908'/>
<id>urn:sha1:cf7ab2abc524ec8499fabfa064d9c1f754092908</id>
<content type='text'>
[ Upstream commit 24eee1e4c47977bdfb71d6f15f6011e7b6188d04 ]

ioremap_prot() can return NULL, which could lead to an oops.
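
The shape of the fix: check the mapping before touching it and fail
gracefully instead of oopsing on a NULL dereference. The function and
its map callback below are illustrative stand-ins, not kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Copy one byte through a mapping callback, propagating mapping
 * failure instead of dereferencing NULL. */
static int copy_from_mapped(void *(*map)(unsigned long), unsigned long addr,
                            char *out)
{
    char *p = map(addr);
    if (p == NULL)
        return -1;          /* propagate the failure */
    *out = *p;              /* safe: p is known non-NULL here */
    return 0;
}

/* Stub mappers for demonstration. */
static char backing = 'x';
static void *map_ok(unsigned long addr)   { (void)addr; return &backing; }
static void *map_fail(unsigned long addr) { (void)addr; return NULL; }
```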

Link: http://lkml.kernel.org/r/1533195441-58594-1-git-send-email-chenjie6@huawei.com
Signed-off-by: chen jie &lt;chenjie6@huawei.com&gt;
Reviewed-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Li Zefan &lt;lizefan@huawei.com&gt;
Cc: chenjie &lt;chenjie6@huawei.com&gt;
Cc: Yang Shi &lt;shy828301@gmail.com&gt;
Cc: Alexey Dobriyan &lt;adobriyan@gmail.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@microsoft.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>memcg: remove memcg_cgroup::id from IDR on mem_cgroup_css_alloc() failure</title>
<updated>2018-09-05T07:26:32Z</updated>
<author>
<name>Kirill Tkhai</name>
<email>ktkhai@virtuozzo.com</email>
</author>
<published>2018-08-02T22:36:01Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=1d7bf02d716d353924b963e3f24508416701a382'/>
<id>urn:sha1:1d7bf02d716d353924b963e3f24508416701a382</id>
<content type='text'>
[ Upstream commit 7e97de0b033bcac4fa9a35cef72e0c06e6a22c67 ]

In case of memcg_online_kmem() failure, memcg_cgroup::id remains hashed
in mem_cgroup_idr even after memcg memory is freed.  This leads to leak
of ID in mem_cgroup_idr.

This patch adds removal into mem_cgroup_css_alloc(), which fixes the
problem.  For better readability, it adds a generic helper which is used
in mem_cgroup_alloc() and mem_cgroup_id_put_many() as well.
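
The leak and the helper can be illustrated with a toy IDR (a plain
index-to-pointer table). Structure and names are simplified stand-ins
for the kernel's mem_cgroup_idr machinery:

```c
#include <assert.h>
#include <stddef.h>

/* Toy IDR: index maps to pointer. */
#define IDR_SLOTS 8
static void *idr[IDR_SLOTS];

struct memcg_sketch { int id; };

/* A generic helper in the spirit of the patch: unhash the ID exactly
 * once, whether called from the alloc failure path or the normal
 * teardown, so the ID is never left hashed after the memcg is freed. */
static void memcg_id_remove(struct memcg_sketch *memcg)
{
    if (memcg->id > 0) {
        idr[memcg->id] = NULL;
        memcg->id = 0;      /* make repeated calls harmless */
    }
}
```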

Link: http://lkml.kernel.org/r/152354470916.22460.14397070748001974638.stgit@localhost.localdomain
Fixes: 73f576c04b94 ("mm: memcontrol: fix cgroup creation failure after many small jobs")
Signed-off-by: Kirill Tkhai &lt;ktkhai@virtuozzo.com&gt;
Acked-by: Johannes Weiner &lt;hannes@cmpxchg.org&gt;
Acked-by: Vladimir Davydov &lt;vdavydov.dev@gmail.com&gt;
Cc: Michal Hocko &lt;mhocko@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@microsoft.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>mm: delete historical BUG from zap_pmd_range()</title>
<updated>2018-09-05T07:26:32Z</updated>
<author>
<name>Hugh Dickins</name>
<email>hughd@google.com</email>
</author>
<published>2018-08-01T18:31:52Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=249778d9459a4ed9f8dada4ca8ccc2ff09407482'/>
<id>urn:sha1:249778d9459a4ed9f8dada4ca8ccc2ff09407482</id>
<content type='text'>
[ Upstream commit 53406ed1bcfdabe4b5bc35e6d17946c6f9f563e2 ]

Delete the old VM_BUG_ON_VMA() from zap_pmd_range(), which asserted
that mmap_sem must be held when splitting an "anonymous" vma there.
Whether that's still strictly true nowadays is not entirely clear,
but the danger of sometimes crashing on the BUG is now fairly clear.

Even with the new stricter rules for anonymous vma marking, the
condition it checks for can possibly trigger. Commit 44960f2a7b63
("staging: ashmem: Fix SIGBUS crash when traversing mmaped ashmem
pages") is good, and originally I thought it was safe from that
VM_BUG_ON_VMA(), because the /dev/ashmem fd exposed to the user is
disconnected from the vm_file in the vma, and madvise(,,MADV_REMOVE)
insists on VM_SHARED.

But after I read John's earlier mail, drawing attention to the
vfs_fallocate() in there: I may be wrong, and I don't know if Android
has THP in the config anyway, but it looks to me like an
unmap_mapping_range() from ashmem's vfs_fallocate() could hit precisely
the VM_BUG_ON_VMA(), once it's vma_is_anonymous().

Signed-off-by: Hugh Dickins &lt;hughd@google.com&gt;
Cc: John Stultz &lt;john.stultz@linaro.org&gt;
Cc: Kirill Shutemov &lt;kirill.shutemov@linux.intel.com&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@microsoft.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
</feed>
