<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/mm, branch v3.0.17</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.0.17</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.0.17'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2012-01-06T22:14:00Z</updated>
<entry>
<title>mm: hugetlb: fix non-atomic enqueue of huge page</title>
<updated>2012-01-06T22:14:00Z</updated>
<author>
<name>Hillf Danton</name>
<email>dhillf@gmail.com</email>
</author>
<published>2011-12-28T23:57:16Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=7204bf5ef7295b6041047d581153990164b58988'/>
<id>urn:sha1:7204bf5ef7295b6041047d581153990164b58988</id>
<content type='text'>
commit b0365c8d0cb6e79eb5f21418ae61ab511f31b575 upstream.

If a huge page is enqueued under the protection of hugetlb_lock, then the
operation is atomic and safe.
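
The shape of the fix, roughly (a hedged sketch; names from mm/hugetlb.c,
the exact call site is reconstructed, not quoted):

  spin_lock(&amp;hugetlb_lock);
  ...
  /* enqueue_huge_page() updates hstate free lists and counters,
   * so it must run with hugetlb_lock held */
  enqueue_huge_page(h, page);
  ...
  spin_unlock(&amp;hugetlb_lock);  /* previously dropped before the enqueue */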

Signed-off-by: Hillf Danton &lt;dhillf@gmail.com&gt;
Reviewed-by: Michal Hocko &lt;mhocko@suse.cz&gt;
Acked-by: KAMEZAWA Hiroyuki &lt;kamezawa.hiroyu@jp.fujitsu.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>memcg: keep root group unchanged if creation fails</title>
<updated>2012-01-06T22:13:55Z</updated>
<author>
<name>Hillf Danton</name>
<email>dhillf@gmail.com</email>
</author>
<published>2011-12-20T01:11:57Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=4875f39a0548a1fa5f55b4a90dca0f4a7ad8f237'/>
<id>urn:sha1:4875f39a0548a1fa5f55b4a90dca0f4a7ad8f237</id>
<content type='text'>
commit a41c58a6665cc995e237303b05db42100b71b65e upstream.

If the request is to create a non-root group and we fail to meet it, we
should leave the root group unchanged.
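
The shape of the fix in mem_cgroup_create() (a hedged sketch reconstructed
from the upstream commit, not a verbatim diff):

  if (cont-&gt;parent == NULL) {
          ...
          if (mem_cgroup_soft_limit_tree_init())
                  goto free_out;
          root_mem_cgroup = memcg;  /* publish root only once init succeeded */
          ...
  }
  ...
  free_out:
          __mem_cgroup_free(memcg);
          /* no longer clears root_mem_cgroup here: a failed non-root
           * create must leave the root group untouched */
          return ERR_PTR(error);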

Signed-off-by: Hillf Danton &lt;dhillf@gmail.com&gt;
Signed-off-by: Hugh Dickins &lt;hughd@google.com&gt;
Acked-by: KAMEZAWA Hiroyuki &lt;kamezawa.hiroyu@jp.fujitsu.com&gt;
Acked-by: Michal Hocko &lt;mhocko@suse.cz&gt;
Cc: Balbir Singh &lt;bsingharora@gmail.com&gt;
Cc: David Rientjes &lt;rientjes@google.com&gt;
Cc: Andrea Arcangeli &lt;aarcange@redhat.com&gt;
Cc: Johannes Weiner &lt;hannes@cmpxchg.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>vfs: __read_cache_page should use gfp argument rather than GFP_KERNEL</title>
<updated>2012-01-06T22:13:54Z</updated>
<author>
<name>Dave Kleikamp</name>
<email>dave.kleikamp@oracle.com</email>
</author>
<published>2011-12-21T17:05:48Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=2f79eddd49d391aded0b0c20f7eef3bec6d5e096'/>
<id>urn:sha1:2f79eddd49d391aded0b0c20f7eef3bec6d5e096</id>
<content type='text'>
commit e6f67b8c05f5e129e126f4409ddac6f25f58ffcb upstream.

lockdep reports a deadlock in jfs because a special inode's rw semaphore
is taken recursively.  The mapping's gfp mask is GFP_NOFS, but it is not
used when __read_cache_page() calls add_to_page_cache_lru().
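
The fix is a one-argument change in __read_cache_page() (sketch; the
surrounding code is abridged):

  /* before: ignores the caller's gfp, so jfs's GFP_NOFS mapping could
   * recurse into fs reclaim and retake the inode's rw semaphore */
  err = add_to_page_cache_lru(page, mapping, index, GFP_KERNEL);

  /* after: honour the gfp passed down from the mapping */
  err = add_to_page_cache_lru(page, mapping, index, gfp);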

Signed-off-by: Dave Kleikamp &lt;dave.kleikamp@oracle.com&gt;
Acked-by: Hugh Dickins &lt;hughd@google.com&gt;
Acked-by: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>oom: fix integer overflow of points in oom_badness</title>
<updated>2012-01-06T22:13:51Z</updated>
<author>
<name>Frantisek Hrbata</name>
<email>fhrbata@redhat.com</email>
</author>
<published>2011-12-20T01:11:59Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=724c6ae2912dbfc5c6a9e309f92b786fbc141462'/>
<id>urn:sha1:724c6ae2912dbfc5c6a9e309f92b786fbc141462</id>
<content type='text'>
commit ff05b6f7ae762b6eb464183eec994b28ea09f6dd upstream.

An integer overflow will happen on 64-bit archs if a task's sum of rss,
swapents and nr_ptes exceeds the value (2^31)/1000.  This was introduced by
commit

f755a04 oom: use pte pages in OOM score

where the oom score computation was divided into several steps and is no
longer computed as one expression in unsigned long (rss, swapents and
nr_ptes are unsigned long), where the result value assigned to points
(an int) is in the range (1..1000).  So there can be an int overflow while
computing

176          points *= 1000;

and points may end up negative, meaning the oom score for a memory-hog
task will be one:

196          if (points &lt;= 0)
197                  return 1;

For example:
[ 3366]     0  3366 35390480 24303939   5       0             0 oom01
Out of memory: Kill process 3366 (oom01) score 1 or sacrifice child

Here the oom01 process consumes more than 24303939 (rss) * 4096 ~= 92GB of
physical memory, but its oom score is one.

In this situation the memory-hog task is skipped and the oom killer kills
another, most probably innocent, task with an oom score greater than one.

The points variable should be of type long instead of int to prevent the
int overflow.
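
A minimal userspace demonstration of the overflow (ordinary C, not kernel
code; the constants are taken from the report above):

  #include &lt;stdio.h&gt;

  int main(void)
  {
          unsigned long pages = 24303939UL + 5;  /* rss + nr_ptes */
          int points = pages;        /* 24303944 still fits in an int */
          points *= 1000;            /* exceeds INT_MAX: signed overflow,
                                        wraps negative on common compilers */
          printf("int points:  %d\n", points);

          long lpoints = (long)pages * 1000;     /* the fix: compute in long */
          printf("long points: %ld\n", lpoints);
          return 0;
  }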

Signed-off-by: Frantisek Hrbata &lt;fhrbata@redhat.com&gt;
Acked-by: KOSAKI Motohiro &lt;kosaki.motohiro@jp.fujitsu.com&gt;
Acked-by: Oleg Nesterov &lt;oleg@redhat.com&gt;
Acked-by: David Rientjes &lt;rientjes@google.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>percpu: fix per_cpu_ptr_to_phys() handling of non-page-aligned addresses</title>
<updated>2012-01-06T22:13:50Z</updated>
<author>
<name>Eugene Surovegin</name>
<email>ebs@ebshome.net</email>
</author>
<published>2011-12-15T19:25:59Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d051424f29eac6b4c173365a3ba5b9bb76870ce6'/>
<id>urn:sha1:d051424f29eac6b4c173365a3ba5b9bb76870ce6</id>
<content type='text'>
commit 9f57bd4d6dc69a4e3bf43044fa00fcd24dd363e3 upstream.

per_cpu_ptr_to_phys() incorrectly rounds its result down to the page
boundary in the non-kmalloc case, which is bogus for any non-page-aligned
address.

This affects the only in-tree user of this function - sysfs handler
for per-cpu 'crash_notes' physical address.  The trouble is that the
crash_notes per-cpu variable is not page-aligned:

crash_notes = 0xc08e8ed4
PER-CPU OFFSET VALUES:
 CPU 0: 3711f000
 CPU 1: 37129000
 CPU 2: 37133000
 CPU 3: 3713d000

So, the per-cpu addresses are:
 crash_notes on CPU 0: f7a07ed4 =&gt; phys 36b57ed4
 crash_notes on CPU 1: f7a11ed4 =&gt; phys 36b4ded4
 crash_notes on CPU 2: f7a1bed4 =&gt; phys 36b43ed4
 crash_notes on CPU 3: f7a25ed4 =&gt; phys 36b39ed4

However, /sys/devices/system/cpu/cpu*/crash_notes says:
 /sys/devices/system/cpu/cpu0/crash_notes: 36b57000
 /sys/devices/system/cpu/cpu1/crash_notes: 36b4d000
 /sys/devices/system/cpu/cpu2/crash_notes: 36b43000
 /sys/devices/system/cpu/cpu3/crash_notes: 36b39000

As you can see, all values are rounded down to a page
boundary. Consequently, this is where kexec sets up the NOTE segments,
and thus where the secondary kernel is looking for them. However, when
the first kernel crashes, it saves the notes to the unaligned
addresses, where they are not found.

Fix it by adding offset_in_page() to the translated page address.
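
The fix, roughly (a sketch of per_cpu_ptr_to_phys()'s return paths;
abridged, not a verbatim diff):

  if (...)  /* address lives in a vmalloc'ed chunk */
          return page_to_phys(vmalloc_to_page(addr)) + offset_in_page(addr);
  else      /* first chunk: page_to_phys() alone gives the page base,
               i.e. the address rounded down to the page boundary */
          return page_to_phys(pcpu_addr_to_page(addr)) + offset_in_page(addr);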

-tj: Combined Eugene's and Petr's commit messages.

Signed-off-by: Eugene Surovegin &lt;ebs@ebshome.net&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Reported-by: Petr Tesarik &lt;ptesarik@suse.cz&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>percpu: fix chunk range calculation</title>
<updated>2011-12-21T20:57:37Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2011-11-18T18:55:35Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=6400c8a382ded237df5dc1605f0d06e7939ff4da'/>
<id>urn:sha1:6400c8a382ded237df5dc1605f0d06e7939ff4da</id>
<content type='text'>
commit a855b84c3d8c73220d4d3cd392a7bee7c83de70e upstream.

The percpu allocator recorded the cpus which map to the first and last
units in pcpu_first/last_unit_cpu respectively and used them to
determine the address range of a chunk - i.e. it assumed that the
first unit has the lowest address in a chunk while the last unit has
the highest address.

This simply isn't true.  Groups in a chunk can have arbitrary positive
or negative offsets from the previous one and there is no guarantee
that the first unit occupies the lowest offset while the last one the
highest.

Fix it by actually comparing unit offsets to determine the cpus
occupying the lowest and highest offsets.  Also, rename
pcpu_first/last_unit_cpu to pcpu_low/high_unit_cpu to avoid confusion.
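
The comparison, roughly (a sketch of the loop in pcpu_setup_first_chunk();
bookkeeping abridged):

  for_each_possible_cpu(cpu) {
          ...
          /* track the units occupying the lowest and highest offsets
           * instead of trusting first/last unit positions */
          if (pcpu_low_unit_cpu == NR_CPUS ||
              unit_off[cpu] &lt; unit_off[pcpu_low_unit_cpu])
                  pcpu_low_unit_cpu = cpu;
          if (pcpu_high_unit_cpu == NR_CPUS ||
              unit_off[cpu] &gt; unit_off[pcpu_high_unit_cpu])
                  pcpu_high_unit_cpu = cpu;
  }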

The chunk address range is used to flush caches on vmalloc area
map/unmap and to decide, in per_cpu_ptr_to_phys(), whether a given
address is in the first chunk; the bug was discovered through an
invalid per_cpu_ptr_to_phys() translation for crash_notes.

Kudos to Dave Young for tracking down the problem.

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Reported-by: WANG Cong &lt;xiyou.wangcong@gmail.com&gt;
Reported-by: Dave Young &lt;dyoung@redhat.com&gt;
Tested-by: Dave Young &lt;dyoung@redhat.com&gt;
LKML-Reference: &lt;4EC21F67.10905@redhat.com&gt;
Signed-off-by: Thomas Renninger &lt;trenn@suse.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>mm: vmalloc: check for page allocation failure before vmlist insertion</title>
<updated>2011-12-21T20:57:36Z</updated>
<author>
<name>Mel Gorman</name>
<email>mgorman@suse.de</email>
</author>
<published>2011-12-08T22:34:30Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=3a15d7377faf8b10d04febc6c265ecf5f52c2558'/>
<id>urn:sha1:3a15d7377faf8b10d04febc6c265ecf5f52c2558</id>
<content type='text'>
commit 1368edf0647ac112d8cfa6ce47257dc950c50f5c upstream.

Commit f5252e00 ("mm: avoid null pointer access in vm_struct via
/proc/vmallocinfo") adds newly allocated vm_structs to the vmlist after
it is fully initialised.  Unfortunately, it did not check that
__vmalloc_area_node() successfully populated the area.  In the event of
allocation failure, the vmalloc area is freed but the pointer to freed
memory is inserted into the vmlist leading to a a crash later in
get_vmalloc_info().

This patch adds a check for __vmalloc_area_node() failure within
__vmalloc_node_range().  It does not use "goto fail" as in the previous
error path, because a warning was already displayed by
__vmalloc_area_node() before it called vfree() in its failure path.
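
The added check, roughly (a sketch of __vmalloc_node_range(); context
abridged):

  addr = __vmalloc_area_node(area, gfp_mask, prot, node, caller);
  if (!addr)
          return NULL;  /* area already freed and warned about; do not
                           let a dangling pointer reach the vmlist */

  insert_vmalloc_vmlist(area);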

Credit goes to Luciano Chavez for doing all the real work of identifying
exactly where the problem was.

Signed-off-by: Mel Gorman &lt;mgorman@suse.de&gt;
Reported-by: Luciano Chavez &lt;lnx1138@linux.vnet.ibm.com&gt;
Tested-by: Luciano Chavez &lt;lnx1138@linux.vnet.ibm.com&gt;
Reviewed-by: Rik van Riel &lt;riel@redhat.com&gt;
Acked-by: David Rientjes &lt;rientjes@google.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>mm: Ensure that pfn_valid() is called once per pageblock when reserving pageblocks</title>
<updated>2011-12-21T20:57:36Z</updated>
<author>
<name>Michal Hocko</name>
<email>mhocko@suse.cz</email>
</author>
<published>2011-12-08T22:34:27Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=21830d75b1b3752a6418157c0abae8a11938d356'/>
<id>urn:sha1:21830d75b1b3752a6418157c0abae8a11938d356</id>
<content type='text'>
commit d021563888312018ca65681096f62e36c20e63cc upstream.

setup_zone_migrate_reserve() expects that zone-&gt;start_pfn starts at a
pageblock_nr_pages-aligned pfn, otherwise we could access beyond an
existing memblock, resulting in the following panic if
CONFIG_HOLES_IN_ZONE is not configured and we do not check pfn_valid:

  IP: [&lt;c02d331d&gt;] setup_zone_migrate_reserve+0xcd/0x180
  *pdpt = 0000000000000000 *pde = f000ff53f000ff53
  Oops: 0000 [#1] SMP
  Pid: 1, comm: swapper Not tainted 3.0.7-0.7-pae #1 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform
  EIP: 0060:[&lt;c02d331d&gt;] EFLAGS: 00010006 CPU: 0
  EIP is at setup_zone_migrate_reserve+0xcd/0x180
  EAX: 000c0000 EBX: f5801fc0 ECX: 000c0000 EDX: 00000000
  ESI: 000c01fe EDI: 000c01fe EBP: 00140000 ESP: f2475f58
  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
  Process swapper (pid: 1, ti=f2474000 task=f2472cd0 task.ti=f2474000)
  Call Trace:
  [&lt;c02d389c&gt;] __setup_per_zone_wmarks+0xec/0x160
  [&lt;c02d3a1f&gt;] setup_per_zone_wmarks+0xf/0x20
  [&lt;c08a771c&gt;] init_per_zone_wmark_min+0x27/0x86
  [&lt;c020111b&gt;] do_one_initcall+0x2b/0x160
  [&lt;c086639d&gt;] kernel_init+0xbe/0x157
  [&lt;c05cae26&gt;] kernel_thread_helper+0x6/0xd
  Code: a5 39 f5 89 f7 0f 46 fd 39 cf 76 40 8b 03 f6 c4 08 74 32 eb 91 90 89 c8 c1 e8 0e 0f be 80 80 2f 86 c0 8b 14 85 60 2f 86 c0 89 c8 &lt;2b&gt; 82 b4 12 00 00 c1 e0 05 03 82 ac 12 00 00 8b 00 f6 c4 08 0f
  EIP: [&lt;c02d331d&gt;] setup_zone_migrate_reserve+0xcd/0x180 SS:ESP 0068:f2475f58
  CR2: 00000000000012b4

We crashed in pageblock_is_reserved() when accessing pfn 0xc0000 because
highstart_pfn = 0x36ffe.

The issue was introduced in 3.0-rc1 by 6d3163ce ("mm: check if any page
in a pageblock is reserved before marking it MIGRATE_RESERVE").

Make sure that start_pfn is always aligned to pageblock_nr_pages to
ensure that pfn_valid is always called at the start of each pageblock.
Architectures with holes in pageblocks will be correctly handled by
pfn_valid_within in pageblock_is_reserved.
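
The alignment fix, roughly (a sketch of setup_zone_migrate_reserve();
context abridged):

  start_pfn = zone-&gt;zone_start_pfn;
  end_pfn = start_pfn + zone-&gt;spanned_pages;
  /* round up so pfn_valid() is checked at the first page of
   * every pageblock we walk */
  start_pfn = roundup(start_pfn, pageblock_nr_pages);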

Signed-off-by: Michal Hocko &lt;mhocko@suse.cz&gt;
Signed-off-by: Mel Gorman &lt;mgorman@suse.de&gt;
Tested-by: Dang Bo &lt;bdang@vmware.com&gt;
Reviewed-by: KAMEZAWA Hiroyuki &lt;kamezawa.hiroyu@jp.fujitsu.com&gt;
Cc: Andrea Arcangeli &lt;aarcange@redhat.com&gt;
Cc: David Rientjes &lt;rientjes@google.com&gt;
Cc: Arve Hjønnevåg &lt;arve@android.com&gt;
Cc: KOSAKI Motohiro &lt;kosaki.motohiro@jp.fujitsu.com&gt;
Cc: John Stultz &lt;john.stultz@linaro.org&gt;
Cc: Dave Hansen &lt;dave@linux.vnet.ibm.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>thp: set compound tail page _count to zero</title>
<updated>2011-12-21T20:57:35Z</updated>
<author>
<name>Youquan Song</name>
<email>youquan.song@intel.com</email>
</author>
<published>2011-12-08T22:34:18Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b492a377ac1507a67091c5232afd5ebd1c7c6e25'/>
<id>urn:sha1:b492a377ac1507a67091c5232afd5ebd1c7c6e25</id>
<content type='text'>
commit 58a84aa92723d1ac3e1cc4e3b0ff49291663f7e1 upstream.

Commit 70b50f94f1644 ("mm: thp: tail page refcounting fix") keeps all
page_tail-&gt;_count zero at all times.  But the current kernel does not
set page_tail-&gt;_count to zero if a 1GB page is utilized.  So when an
IOMMU 1GB page is used by KVM, it will result in a kernel oops because a
tail page's _count does not equal zero.

  kernel BUG at include/linux/mm.h:386!
  invalid opcode: 0000 [#1] SMP
  Call Trace:
    gup_pud_range+0xb8/0x19d
    get_user_pages_fast+0xcb/0x192
    ? trace_hardirqs_off+0xd/0xf
    hva_to_pfn+0x119/0x2f2
    gfn_to_pfn_memslot+0x2c/0x2e
    kvm_iommu_map_pages+0xfd/0x1c1
    kvm_iommu_map_memslots+0x7c/0xbd
    kvm_iommu_map_guest+0xaa/0xbf
    kvm_vm_ioctl_assigned_device+0x2ef/0xa47
    kvm_vm_ioctl+0x36c/0x3a2
    do_vfs_ioctl+0x49e/0x4e4
    sys_ioctl+0x5a/0x7c
    system_call_fastpath+0x16/0x1b
  RIP  gup_huge_pud+0xf2/0x159
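
The fix in prep_compound_gigantic_page(), roughly (a hedged sketch; loop
bookkeeping abridged):

  for (i = 1; i &lt; nr_pages; i++) {
          struct page *p = ...;   /* the i-th tail page */
          __SetPageTail(p);
          set_page_count(p, 0);   /* tail pages must keep _count == 0 */
          p-&gt;first_page = page;
  }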

Signed-off-by: Youquan Song &lt;youquan.song@intel.com&gt;
Reviewed-by: Andrea Arcangeli &lt;aarcange@redhat.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>hugetlb: release pages in the error path of hugetlb_cow()</title>
<updated>2011-12-09T16:52:37Z</updated>
<author>
<name>Hillf Danton</name>
<email>dhillf@gmail.com</email>
</author>
<published>2011-11-15T22:36:12Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d98627dd4a3e2cd4a7602a845fd1c1c78d372e97'/>
<id>urn:sha1:d98627dd4a3e2cd4a7602a845fd1c1c78d372e97</id>
<content type='text'>
commit ea4039a34c4c206d015d34a49d0b00868e37db1d upstream.

If we fail to prepare an anon_vma, the {new, old}_page should be released,
or they will leak.
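
The shape of the fix in hugetlb_cow() (a hedged sketch; surrounding
locking elided):

  if (unlikely(anon_vma_prepare(vma))) {
          page_cache_release(new_page);   /* release both references */
          page_cache_release(old_page);   /* taken earlier in this path */
          ...
          return VM_FAULT_OOM;
  }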

Signed-off-by: Hillf Danton &lt;dhillf@gmail.com&gt;
Reviewed-by: Andrea Arcangeli &lt;aarcange@redhat.com&gt;
Cc: Hugh Dickins &lt;hughd@google.com&gt;
Cc: Johannes Weiner &lt;jweiner@redhat.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Michal Hocko &lt;mhocko@suse.cz&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
</feed>
