<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/Documentation/vm/locking, branch v3.0.76</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.0.76</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.0.76'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2011-05-25T15:39:18Z</updated>
<entry>
<title>mm: Convert i_mmap_lock to a mutex</title>
<updated>2011-05-25T15:39:18Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2011-05-25T00:12:06Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=3d48ae45e72390ddf8cc5256ac32ed6f7a19cbea'/>
<id>urn:sha1:3d48ae45e72390ddf8cc5256ac32ed6f7a19cbea</id>
<content type='text'>
Straightforward conversion of i_mmap_lock to a mutex.

Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Acked-by: Hugh Dickins &lt;hughd@google.com&gt;
Cc: Benjamin Herrenschmidt &lt;benh@kernel.crashing.org&gt;
Cc: David Miller &lt;davem@davemloft.net&gt;
Cc: Martin Schwidefsky &lt;schwidefsky@de.ibm.com&gt;
Cc: Russell King &lt;rmk@arm.linux.org.uk&gt;
Cc: Paul Mundt &lt;lethal@linux-sh.org&gt;
Cc: Jeff Dike &lt;jdike@addtoit.com&gt;
Cc: Richard Weinberger &lt;richard@nod.at&gt;
Cc: Tony Luck &lt;tony.luck@intel.com&gt;
Cc: KAMEZAWA Hiroyuki &lt;kamezawa.hiroyu@jp.fujitsu.com&gt;
Cc: Mel Gorman &lt;mel@csn.ul.ie&gt;
Cc: KOSAKI Motohiro &lt;kosaki.motohiro@jp.fujitsu.com&gt;
Cc: Nick Piggin &lt;npiggin@kernel.dk&gt;
Cc: Namhyung Kim &lt;namhyung@gmail.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>truncate: new helpers</title>
<updated>2009-09-24T12:41:47Z</updated>
<author>
<name>npiggin@suse.de</name>
<email>npiggin@suse.de</email>
</author>
<published>2009-08-20T16:35:05Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=25d9e2d15286281ec834b829a4aaf8969011f1cd'/>
<id>urn:sha1:25d9e2d15286281ec834b829a4aaf8969011f1cd</id>
<content type='text'>
Introduce new truncate helpers truncate_pagecache and inode_newsize_ok.
vmtruncate is also consolidated from mm/memory.c and mm/nommu.c into
mm/truncate.c.

Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Nick Piggin &lt;npiggin@suse.de&gt;
Signed-off-by: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
</content>
</entry>
<entry>
<title>[PATCH] swap: swap_lock replace list+device</title>
<updated>2005-09-05T07:05:42Z</updated>
<author>
<name>Hugh Dickins</name>
<email>hugh@veritas.com</email>
</author>
<published>2005-09-03T22:54:41Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=5d337b9194b1ce3b6fd5f3cb2799455ed2f9a3d1'/>
<id>urn:sha1:5d337b9194b1ce3b6fd5f3cb2799455ed2f9a3d1</id>
<content type='text'>
The idea of a swap_device_lock per device, and a swap_list_lock over them all,
is appealing; but in practice almost every holder of swap_device_lock must
already hold swap_list_lock, which defeats the purpose of the split.

The only exceptions have been swap_duplicate, valid_swaphandles and an
untrodden path in try_to_unuse (plus a few places added in this series).
valid_swaphandles doesn't show up high in profiles, but swap_duplicate does
demand attention.  However, with the hold time in get_swap_pages so much
reduced, I've not yet found a load and set of swap device priorities to show
even swap_duplicate benefitting from the split.  Certainly the split is mere
overhead in the common case of a single swap device.

So, replace swap_list_lock and swap_device_lock by spinlock_t swap_lock
(generally we seem to prefer an _ in the name, and not hide in a macro).

If someone can show a regression in swap_duplicate, then probably we should
add a hashlock for the swap_map entries alone (shorts being anatomic), so as
to help the case of the single swap device too.

Signed-off-by: Hugh Dickins &lt;hugh@veritas.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] Convert i_shared_sem back to a spinlock</title>
<updated>2004-05-22T15:02:36Z</updated>
<author>
<name>Andrew Morton</name>
<email>akpm@osdl.org</email>
</author>
<published>2004-05-22T15:02:36Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=c08689623e4e807489a2727b5362a75c11ce6342'/>
<id>urn:sha1:c08689623e4e807489a2727b5362a75c11ce6342</id>
<content type='text'>
Having a semaphore in there causes modest performance regressions on heavily
mmap-intensive workloads on some hardware.  Specifically, up to 30% in SDET on
NUMAQ and big PPC64.

So switch it back to being a spinlock.  This does mean that unmap_vmas() needs
to be told whether or not it is allowed to schedule away; that's simple to do
via the zap_details structure.

This change means that there will be high scheduling latencies when someone
truncates a large file which is currently mmapped, but nobody does that
anyway.  The scheduling points in unmap_vmas() are mainly for munmap() and
exit(), and they still will work OK for that.

From: Hugh Dickins &lt;hugh@veritas.com&gt;

  Sorry, my premature optimizations (trying to pass down NULL zap_details
  except when needed) have caught you out doubly: unmap_mapping_range_list was
  NULLing the details even though atomic was set; and if it hadn't, then
  zap_pte_range would have missed free_swap_and_cache and pte_clear when pte
  not present.  Moved the optimization into zap_pte_range itself.  Plus
  massive documentation update.

From: Hugh Dickins &lt;hugh@veritas.com&gt;

  Here's a second patch to add to the first: mremap's cows can't come home
  without releasing the i_mmap_lock, better move the whole "Subtle point"
  locking from move_vma into move_page_tables.  And it's possible for the file
  that was behind an anonymous page to be truncated while we drop that lock,
  don't want to abort mremap because of VM_FAULT_SIGBUS.

  (Eek, should we be checking do_swap_page of a vm_file area against the
  truncate_count sequence?  Technically yes, but I doubt we need bother.)


- We cannot hold i_mmap_lock across move_one_page() because
  move_one_page() needs to perform __GFP_WAIT allocations of pagetable pages.

- Move the cond_resched() out so we test it once per page rather than only
  when move_one_page() returns -EAGAIN.
</content>
</entry>
<entry>
<title>[PATCH] update Kanoj Sarcar email address in docs</title>
<updated>2003-10-05T04:32:15Z</updated>
<author>
<name>Rusty Russell</name>
<email>trivial@rustcorp.com.au</email>
</author>
<published>2003-10-05T04:32:15Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=6d9298c77f70ccc4d3e3c1247c2f5ec3afebb6a0'/>
<id>urn:sha1:6d9298c77f70ccc4d3e3c1247c2f5ec3afebb6a0</id>
<content type='text'>
From:  Ed L Cashin &lt;ecashin@uga.edu&gt;

(Acked by Kanoj Sarcar &lt;kanojsarcar@yahoo.com&gt;)
</content>
</entry>
<entry>
<title>[PATCH] 2.5.63 loose pedantry; loose -&gt; lose where appropriate.</title>
<updated>2003-02-24T13:02:46Z</updated>
<author>
<name>Steven Cole</name>
<email>elenstev@mesatop.com</email>
</author>
<published>2003-02-24T13:02:46Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=5419be6f55efa93959119a929cf625282e3f163f'/>
<id>urn:sha1:5419be6f55efa93959119a929cf625282e3f163f</id>
<content type='text'>
This patch replaces "loose" with "lose" where appropriate.
There remain 56 correct uses of "loose" in the 2.5 kernel source.
</content>
</entry>
<entry>
<title>[PATCH] turn i_shared_lock into a semaphore</title>
<updated>2003-01-11T02:39:36Z</updated>
<author>
<name>Andrew Morton</name>
<email>akpm@digeo.com</email>
</author>
<published>2003-01-11T02:39:36Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d9be9136602e3decfd7a0375725a8eba1c5079ec'/>
<id>urn:sha1:d9be9136602e3decfd7a0375725a8eba1c5079ec</id>
<content type='text'>
i_shared_lock is held for a very long time during vmtruncate() and causes
high scheduling latencies when truncating a file which is mmapped.  I've seen
100 milliseconds.

So turn it into a semaphore.  It nests inside mmap_sem.

This change is also needed by the shared pagetable patch, which needs to
unshare pte's on the vmtruncate path - lots of pagetable pages need to
be allocated and they are using __GFP_WAIT.

The patch also makes unmap_vma() static.
</content>
</entry>
<entry>
<title>cachetlb.txt, locking, fork.c, mremap.c, mprotect.c, memory.c:</title>
<updated>2002-04-23T12:19:51Z</updated>
<author>
<name>Kanoj Sarcar</name>
<email>kanoj@vger.kernel.org</email>
</author>
<published>2002-04-23T12:19:51Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=30746bbd9545ec11346d515e419878ea265bb4d7'/>
<id>urn:sha1:30746bbd9545ec11346d515e419878ea265bb4d7</id>
<content type='text'>
  Make sure that flush_tlb_range is called with PTL held.
  Also, make sure no new threads can start up in user mode
  while a tlb_gather_mmu is in progress.
</content>
</entry>
<entry>
<title>v2.4.2.4 -&gt; v2.4.2.5</title>
<updated>2002-02-05T02:03:57Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@athlon.transmeta.com</email>
</author>
<published>2002-02-05T02:03:57Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=cc80f8f99c1ba16d54b0af64cb3911cd0146259e'/>
<id>urn:sha1:cc80f8f99c1ba16d54b0af64cb3911cd0146259e</id>
<content type='text'>
  - Rik van Riel and others: mm rw-semaphore (ps/top ok when swapping)
  - IDE: 256 sectors at a time is legal, but apparently confuses some
  drives. Max out at 255 sectors instead.
  - Petko Manolov: USB pegasus driver update
  - make the boottime memory map printout at least almost readable.
  - USB driver updates
  - pte_alloc()/pmd_alloc() need page_table_lock.
</content>
</entry>
<entry>
<title>Import changeset</title>
<updated>2002-02-05T01:40:40Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@athlon.transmeta.com</email>
</author>
<published>2002-02-05T01:40:40Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=7a2deb32924142696b8174cdf9b38cd72a11fc96'/>
<id>urn:sha1:7a2deb32924142696b8174cdf9b38cd72a11fc96</id>
<content type='text'>
</content>
</entry>
</feed>
