<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/kernel/locking/mutex.c, branch v3.18.128</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.18.128</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.18.128'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2016-06-20T03:47:43Z</updated>
<entry>
<title>locking/ww_mutex: Report recursive ww_mutex locking early</title>
<updated>2016-06-20T03:47:43Z</updated>
<author>
<name>Chris Wilson</name>
<email>chris@chris-wilson.co.uk</email>
</author>
<published>2016-05-26T20:08:17Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=33fae4cc30d0451479f620af8846c5d717f08b7b'/>
<id>urn:sha1:33fae4cc30d0451479f620af8846c5d717f08b7b</id>
<content type='text'>
[ Upstream commit 0422e83d84ae24b933e4b0d4c1e0f0b4ae8a0a3b ]

Recursive locking for ww_mutexes was originally conceived as an
exception. However, it is heavily used by the DRM atomic modesetting
code. Currently, the recursive deadlock is checked only after we have
queued up for the busy-spin and, as we never release the lock, we spin
until kicked, whereupon the deadlock is discovered and reported.

A simple solution for the now common problem is to move the recursive
deadlock discovery to the first action when taking the ww_mutex.
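
A minimal sketch of the idea (following the upstream patch, assuming
the check lands at the top of the v3.18-era __mutex_lock_common(),
everything else elided): re-locking a ww_mutex with the same acquire
context now fails with -EALREADY before ever queueing for the spin:

  if (use_ww_ctx) {
          struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);

          /*
           * Recursive locking with the same acquire context is a
           * caller bug; report it before queueing for the busy-spin.
           */
          if (unlikely(ww_ctx == READ_ONCE(ww-&gt;ctx)))
                  return -EALREADY;
  }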

Suggested-by: Maarten Lankhorst &lt;maarten.lankhorst@linux.intel.com&gt;
Signed-off-by: Chris Wilson &lt;chris@chris-wilson.co.uk&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: Maarten Lankhorst &lt;maarten.lankhorst@linux.intel.com&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1464293297-19777-1-git-send-email-chris@chris-wilson.co.uk
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Sasha Levin &lt;sasha.levin@oracle.com&gt;
</content>
</entry>
<entry>
<title>locking/Documentation: Move locking related docs into Documentation/locking/</title>
<updated>2014-08-13T08:32:03Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>davidlohr@hp.com</email>
</author>
<published>2014-07-30T20:41:55Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=214e0aed639ef40987bf6159fad303171a6de31e'/>
<id>urn:sha1:214e0aed639ef40987bf6159fad303171a6de31e</id>
<content type='text'>
Specifically:
  Documentation/locking/lockdep-design.txt
  Documentation/locking/lockstat.txt
  Documentation/locking/mutex-design.txt
  Documentation/locking/rt-mutex-design.txt
  Documentation/locking/rt-mutex.txt
  Documentation/locking/spinlocks.txt
  Documentation/locking/ww-mutex-design.txt

Signed-off-by: Davidlohr Bueso &lt;davidlohr@hp.com&gt;
Acked-by: Randy Dunlap &lt;rdunlap@infradead.org&gt;
Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: jason.low2@hp.com
Cc: aswin@hp.com
Cc: Alexei Starovoitov &lt;ast@plumgrid.com&gt;
Cc: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Chris Mason &lt;clm@fb.com&gt;
Cc: Dan Streetman &lt;ddstreet@ieee.org&gt;
Cc: David Airlie &lt;airlied@linux.ie&gt;
Cc: Davidlohr Bueso &lt;davidlohr@hp.com&gt;
Cc: David S. Miller &lt;davem@davemloft.net&gt;
Cc: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Cc: Heiko Carstens &lt;heiko.carstens@de.ibm.com&gt;
Cc: Jason Low &lt;jason.low2@hp.com&gt;
Cc: Josef Bacik &lt;jbacik@fusionio.com&gt;
Cc: Kees Cook &lt;keescook@chromium.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Lubomir Rintel &lt;lkundrak@v3.sk&gt;
Cc: Masanari Iida &lt;standby24x7@gmail.com&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Randy Dunlap &lt;rdunlap@infradead.org&gt;
Cc: Tim Chen &lt;tim.c.chen@linux.intel.com&gt;
Cc: Vineet Gupta &lt;vgupta@synopsys.com&gt;
Cc: fengguang.wu@intel.com
Link: http://lkml.kernel.org/r/1406752916-3341-6-git-send-email-davidlohr@hp.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking/mutexes: Refactor optimistic spinning code</title>
<updated>2014-08-13T08:32:01Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>davidlohr@hp.com</email>
</author>
<published>2014-07-30T20:41:53Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=76916515d9d84e6552ee5e218e0ed566ad75e600'/>
<id>urn:sha1:76916515d9d84e6552ee5e218e0ed566ad75e600</id>
<content type='text'>
When we fail to acquire the mutex in the fastpath, we end up calling
__mutex_lock_common(). A *lot* goes on in this function. Move out the
optimistic spinning code into mutex_optimistic_spin() and simplify
the former a bit. Furthermore, this is similar to what we have in
rwsems. No logical changes.
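
A sketch of the resulting shape (helper name and signature as in the
upstream patch, bodies elided):

  static bool mutex_optimistic_spin(struct mutex *lock,
                                    struct ww_acquire_ctx *ww_ctx,
                                    const bool use_ww_ctx)
  {
          ... /* owner-spinning and OSQ logic moves here unchanged */
  }

  /* __mutex_lock_common() then reduces to a single call: */
  if (mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx)) {
          /* got the lock, yay! */
          preempt_enable();
          return 0;
  }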

Signed-off-by: Davidlohr Bueso &lt;davidlohr@hp.com&gt;
Acked-by: Jason Low &lt;jason.low2@hp.com&gt;
Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: aswin@hp.com
Cc: mingo@kernel.org
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Link: http://lkml.kernel.org/r/1406752916-3341-4-git-send-email-davidlohr@hp.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking/mutexes: Document quick lock release when unlocking</title>
<updated>2014-08-13T08:31:59Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>davidlohr@hp.com</email>
</author>
<published>2014-07-30T20:41:51Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=42fa566bd74aa7b95413fb00611ec983b488222d'/>
<id>urn:sha1:42fa566bd74aa7b95413fb00611ec983b488222d</id>
<content type='text'>
When unlocking, we always want to reach the slowpath with the lock's
counter indicating that it is unlocked -- whether as returned by the
asm fastpath call or by explicitly setting it. While doing so, at
least in theory, we can optimize and allow faster lock stealing.
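
In code, the documented behaviour sits at the top of the unlock
slowpath (a sketch assuming the v3.18-era helper names
__mutex_unlock_common_slowpath(), __mutex_slowpath_needs_to_unlock()
and spin_lock_mutex(); the comment is the real substance of the patch):

  /*
   * Release the lock before taking wait_lock and doing the other
   * wakeup-related duties, so that (at least in theory) another
   * task can steal the lock in the meantime.
   */
  if (__mutex_slowpath_needs_to_unlock())
          atomic_set(&amp;lock-&gt;count, 1);

  spin_lock_mutex(&amp;lock-&gt;wait_lock, flags);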

Signed-off-by: Davidlohr Bueso &lt;davidlohr@hp.com&gt;
Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: jason.low2@hp.com
Cc: aswin@hp.com
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Link: http://lkml.kernel.org/r/1406752916-3341-2-git-send-email-davidlohr@hp.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking/mutexes: Standardize arguments in lock/unlock slowpaths</title>
<updated>2014-08-13T08:31:58Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>davidlohr@hp.com</email>
</author>
<published>2014-07-30T20:41:50Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=242489cfe97d44290e7f88b12591fab6c0819045'/>
<id>urn:sha1:242489cfe97d44290e7f88b12591fab6c0819045</id>
<content type='text'>
Just as on the locking end, when unlocking, go ahead and obtain the
proper data structure immediately after the previous (asm-end) call
exits and there are (probably) pending waiters. This simplifies some
of the layering a bit.
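
Concretely (a sketch following the upstream patch): the asm end still
hands the slowpath an atomic_t *, and the thin wrapper now converts it
to the struct mutex * immediately, so the common slowpath only ever
deals with proper mutexes:

  __visible void
  __mutex_unlock_slowpath(atomic_t *lock_count)
  {
          struct mutex *lock = container_of(lock_count, struct mutex, count);

          __mutex_unlock_common_slowpath(lock, 1);
  }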

Signed-off-by: Davidlohr Bueso &lt;davidlohr@hp.com&gt;
Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: jason.low2@hp.com
Cc: aswin@hp.com
Cc: mingo@kernel.org
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1406752916-3341-1-git-send-email-davidlohr@hp.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>arch, locking: Ciao arch_mutex_cpu_relax()</title>
<updated>2014-07-17T10:32:47Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>davidlohr@hp.com</email>
</author>
<published>2014-06-29T22:09:33Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=3a6bfbc91df04b081a44d419e0260bad54abddf7'/>
<id>urn:sha1:3a6bfbc91df04b081a44d419e0260bad54abddf7</id>
<content type='text'>
The arch_mutex_cpu_relax() function, introduced by 34b133f, is
hacky and ugly. It was added a few years ago to address the fact
that common cpu_relax() calls include yielding on s390, and thus
impact the optimistic spinning functionality of mutexes. Nowadays
we use this function well beyond mutexes: rwsem, qrwlock, mcs and
lockref. Since the macro that defines the call is in the mutex header,
any users must include mutex.h and the naming is misleading as well.

This patch (i) renames the call to cpu_relax_lowlatency ("relax, but
only if you can do it with very low latency") and (ii) defines it in
each arch's asm/processor.h local header, just like for regular cpu_relax
functions. On all archs except s390, cpu_relax_lowlatency is simply
cpu_relax, and thus we can take it out of mutex.h. While this can seem
redundant, I believe it is a good choice as it allows us to move
arch-specific logic out of the generic locking primitives and enables
future(?) archs to transparently define it, similarly to System Z.
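
Per architecture the definition is trivial (a sketch; each arch
carries its own copy in asm/processor.h):

  /* Most architectures: */
  #define cpu_relax_lowlatency() cpu_relax()

  /* s390, where cpu_relax() can yield to the hypervisor: */
  #define cpu_relax_lowlatency() barrier()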

Signed-off-by: Davidlohr Bueso &lt;davidlohr@hp.com&gt;
Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Anton Blanchard &lt;anton@samba.org&gt;
Cc: Aurelien Jacquiot &lt;a-jacquiot@ti.com&gt;
Cc: Benjamin Herrenschmidt &lt;benh@kernel.crashing.org&gt;
Cc: Bharat Bhushan &lt;r65777@freescale.com&gt;
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Cc: Chen Liqin &lt;liqin.linux@gmail.com&gt;
Cc: Chris Metcalf &lt;cmetcalf@tilera.com&gt;
Cc: Christian Borntraeger &lt;borntraeger@de.ibm.com&gt;
Cc: Chris Zankel &lt;chris@zankel.net&gt;
Cc: David Howells &lt;dhowells@redhat.com&gt;
Cc: David S. Miller &lt;davem@davemloft.net&gt;
Cc: Deepthi Dharwar &lt;deepthi@linux.vnet.ibm.com&gt;
Cc: Dominik Dingel &lt;dingel@linux.vnet.ibm.com&gt;
Cc: Fenghua Yu &lt;fenghua.yu@intel.com&gt;
Cc: Geert Uytterhoeven &lt;geert@linux-m68k.org&gt;
Cc: Guan Xuetao &lt;gxt@mprc.pku.edu.cn&gt;
Cc: Haavard Skinnemoen &lt;hskinnemoen@gmail.com&gt;
Cc: Hans-Christian Egtvedt &lt;egtvedt@samfundet.no&gt;
Cc: Heiko Carstens &lt;heiko.carstens@de.ibm.com&gt;
Cc: Helge Deller &lt;deller@gmx.de&gt;
Cc: Hirokazu Takata &lt;takata@linux-m32r.org&gt;
Cc: Ivan Kokshaysky &lt;ink@jurassic.park.msu.ru&gt;
Cc: James E.J. Bottomley &lt;jejb@parisc-linux.org&gt;
Cc: James Hogan &lt;james.hogan@imgtec.com&gt;
Cc: Jason Wang &lt;jasowang@redhat.com&gt;
Cc: Jesper Nilsson &lt;jesper.nilsson@axis.com&gt;
Cc: Joe Perches &lt;joe@perches.com&gt;
Cc: Jonas Bonn &lt;jonas@southpole.se&gt;
Cc: Joseph Myers &lt;joseph@codesourcery.com&gt;
Cc: Kees Cook &lt;keescook@chromium.org&gt;
Cc: Koichi Yasutake &lt;yasutake.koichi@jp.panasonic.com&gt;
Cc: Lennox Wu &lt;lennox.wu@gmail.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Mark Salter &lt;msalter@redhat.com&gt;
Cc: Martin Schwidefsky &lt;schwidefsky@de.ibm.com&gt;
Cc: Matt Turner &lt;mattst88@gmail.com&gt;
Cc: Max Filippov &lt;jcmvbkbc@gmail.com&gt;
Cc: Michael Neuling &lt;mikey@neuling.org&gt;
Cc: Michal Simek &lt;monstr@monstr.eu&gt;
Cc: Mikael Starvik &lt;starvik@axis.com&gt;
Cc: Nicolas Pitre &lt;nico@linaro.org&gt;
Cc: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Cc: Paul Burton &lt;paul.burton@imgtec.com&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
Cc: Paul Mackerras &lt;paulus@samba.org&gt;
Cc: Qais Yousef &lt;qais.yousef@imgtec.com&gt;
Cc: Qiaowei Ren &lt;qiaowei.ren@intel.com&gt;
Cc: Rafael Wysocki &lt;rafael.j.wysocki@intel.com&gt;
Cc: Ralf Baechle &lt;ralf@linux-mips.org&gt;
Cc: Richard Henderson &lt;rth@twiddle.net&gt;
Cc: Richard Kuo &lt;rkuo@codeaurora.org&gt;
Cc: Russell King &lt;linux@arm.linux.org.uk&gt;
Cc: Steven Miao &lt;realmz6@gmail.com&gt;
Cc: Steven Rostedt &lt;srostedt@redhat.com&gt;
Cc: Stratos Karafotis &lt;stratosk@semaphore.gr&gt;
Cc: Tim Chen &lt;tim.c.chen@linux.intel.com&gt;
Cc: Tony Luck &lt;tony.luck@intel.com&gt;
Cc: Vasily Kulikov &lt;segoon@openwall.com&gt;
Cc: Vineet Gupta &lt;vgupta@synopsys.com&gt;
Cc: Vineet Gupta &lt;Vineet.Gupta1@synopsys.com&gt;
Cc: Waiman Long &lt;Waiman.Long@hp.com&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: Wolfram Sang &lt;wsa@the-dreams.de&gt;
Cc: adi-buildroot-devel@lists.sourceforge.net
Cc: linux390@de.ibm.com
Cc: linux-alpha@vger.kernel.org
Cc: linux-am33-list@redhat.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-c6x-dev@linux-c6x.org
Cc: linux-cris-kernel@axis.com
Cc: linux-hexagon@vger.kernel.org
Cc: linux-ia64@vger.kernel.org
Cc: linux@lists.openrisc.net
Cc: linux-m32r-ja@ml.linux-m32r.org
Cc: linux-m32r@ml.linux-m32r.org
Cc: linux-m68k@lists.linux-m68k.org
Cc: linux-metag@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-parisc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: linux-xtensa@linux-xtensa.org
Cc: sparclinux@vger.kernel.org
Link: http://lkml.kernel.org/r/1404079773.2619.4.camel@buesod1.americas.hpqcorp.net
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>Merge branch 'locking/urgent' into locking/core, before applying larger changes and to refresh the branch with fixes</title>
<updated>2014-07-17T09:45:29Z</updated>
<author>
<name>Ingo Molnar</name>
<email>mingo@kernel.org</email>
</author>
<published>2014-07-17T09:45:29Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b5e4111f027c4be85dbe97e090530d03c55c4cf4'/>
<id>urn:sha1:b5e4111f027c4be85dbe97e090530d03c55c4cf4</id>
<content type='text'>
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking/spinlocks/mcs: Introduce and use init macro and function for osq locks</title>
<updated>2014-07-16T11:28:05Z</updated>
<author>
<name>Jason Low</name>
<email>jason.low2@hp.com</email>
</author>
<published>2014-07-14T17:27:50Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=4d9d951e6b5df85ccfca2c5bd8b4f5c71d256b65'/>
<id>urn:sha1:4d9d951e6b5df85ccfca2c5bd8b4f5c71d256b65</id>
<content type='text'>
Currently, we initialize the osq lock by directly setting the lock's
values. It would be preferable if we used an init macro to do the
initialization, as we do with other locks.

This patch introduces and uses a macro and function for initializing the osq lock.
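
The macro and helper introduced (a sketch matching the upstream
patch):

  #define OSQ_LOCK_UNLOCKED { ATOMIC_INIT(OSQ_UNLOCKED_VAL) }

  static inline void osq_lock_init(struct optimistic_spin_queue *lock)
  {
          atomic_set(&amp;lock-&gt;tail, OSQ_UNLOCKED_VAL);
  }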

Signed-off-by: Jason Low &lt;jason.low2@hp.com&gt;
Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Scott Norton &lt;scott.norton@hp.com&gt;
Cc: "Paul E. McKenney" &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Dave Chinner &lt;david@fromorbit.com&gt;
Cc: Waiman Long &lt;waiman.long@hp.com&gt;
Cc: Davidlohr Bueso &lt;davidlohr@hp.com&gt;
Cc: Rik van Riel &lt;riel@redhat.com&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Tim Chen &lt;tim.c.chen@linux.intel.com&gt;
Cc: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Cc: Aswin Chandramouleeswaran &lt;aswin@hp.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Chris Mason &lt;clm@fb.com&gt;
Cc: Josef Bacik &lt;jbacik@fusionio.com&gt;
Link: http://lkml.kernel.org/r/1405358872-3732-4-git-send-email-jason.low2@hp.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead</title>
<updated>2014-07-16T11:28:04Z</updated>
<author>
<name>Jason Low</name>
<email>jason.low2@hp.com</email>
</author>
<published>2014-07-14T17:27:49Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=90631822c5d307b5410500806e8ac3e63928aa3e'/>
<id>urn:sha1:90631822c5d307b5410500806e8ac3e63928aa3e</id>
<content type='text'>
The cancellable MCS spinlock is currently used to queue threads that are
doing optimistic spinning. It uses per-cpu nodes, where a thread obtaining
the lock would access and queue the local node corresponding to the CPU that
it's running on. Currently, the cancellable MCS lock is implemented by using
pointers to these nodes.

In this patch, instead of operating on pointers to the per-cpu nodes, we
store the CPU numbers to which the per-cpu nodes correspond in an
atomic_t. A similar concept is used with the qspinlock.

By operating on the CPU # of the nodes via atomic_t instead of on
pointers to those nodes, we can reduce the footprint of the cancellable
MCS spinlock by 32 bits (on 64-bit systems).
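
The resulting structure and encoding (a sketch; names follow the
upstream patch, with CPU numbers stored off by one so that 0 can mean
"empty"):

  struct optimistic_spin_queue {
          /*
           * Stores an encoded value of the CPU # of the tail node in
           * the queue; OSQ_UNLOCKED_VAL when the queue is empty.
           */
          atomic_t tail;
  };

  #define OSQ_UNLOCKED_VAL (0)

  /* Encode CPU # as cpu_nr + 1, reserving 0 for "no tail". */
  static inline int encode_cpu(int cpu_nr)
  {
          return cpu_nr + 1;
  }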

Signed-off-by: Jason Low &lt;jason.low2@hp.com&gt;
Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Scott Norton &lt;scott.norton@hp.com&gt;
Cc: "Paul E. McKenney" &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Dave Chinner &lt;david@fromorbit.com&gt;
Cc: Waiman Long &lt;waiman.long@hp.com&gt;
Cc: Davidlohr Bueso &lt;davidlohr@hp.com&gt;
Cc: Rik van Riel &lt;riel@redhat.com&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Tim Chen &lt;tim.c.chen@linux.intel.com&gt;
Cc: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Cc: Aswin Chandramouleeswaran &lt;aswin@hp.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Chris Mason &lt;clm@fb.com&gt;
Cc: Heiko Carstens &lt;heiko.carstens@de.ibm.com&gt;
Cc: Josef Bacik &lt;jbacik@fusionio.com&gt;
Link: http://lkml.kernel.org/r/1405358872-3732-3-git-send-email-jason.low2@hp.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking/mutexes: Optimize mutex trylock slowpath</title>
<updated>2014-07-05T09:25:42Z</updated>
<author>
<name>Jason Low</name>
<email>jason.low2@hp.com</email>
</author>
<published>2014-06-11T18:37:23Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=72d5305dcb3637913c2c37e847a4de9028e49244'/>
<id>urn:sha1:72d5305dcb3637913c2c37e847a4de9028e49244</id>
<content type='text'>
The mutex_trylock() function calls into __mutex_trylock_fastpath() when
trying to obtain the mutex. On 32-bit x86, in the !__HAVE_ARCH_CMPXCHG
case, __mutex_trylock_fastpath() calls directly into __mutex_trylock_slowpath()
regardless of whether or not the mutex is locked.

In __mutex_trylock_slowpath(), we then acquire the wait_lock spinlock, xchg()
lock-&gt;count with -1, then set lock-&gt;count back to 0 if there are no waiters,
and return true if the prev lock count was 1.

However, if the mutex is already locked, then there isn't much point
in attempting all of the above expensive operations. In this patch, we only
attempt the above trylock operations if the mutex is unlocked.
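
The guard is a cheap check at the top of the slowpath (a sketch
following the upstream patch; locking helpers as in the v3.18-era
mutex.c):

  static inline int __mutex_trylock_slowpath(atomic_t *lock_count)
  {
          struct mutex *lock = container_of(lock_count, struct mutex, count);
          unsigned long flags;
          int prev;

          /* No point doing the expensive work on a locked mutex. */
          if (mutex_is_locked(lock))
                  return 0;

          spin_lock_mutex(&amp;lock-&gt;wait_lock, flags);
          prev = atomic_xchg(&amp;lock-&gt;count, -1);
          ...
  }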

Signed-off-by: Jason Low &lt;jason.low2@hp.com&gt;
Reviewed-by: Davidlohr Bueso &lt;davidlohr@hp.com&gt;
Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: akpm@linux-foundation.org
Cc: tim.c.chen@linux.intel.com
Cc: paulmck@linux.vnet.ibm.com
Cc: rostedt@goodmis.org
Cc: Waiman.Long@hp.com
Cc: scott.norton@hp.com
Cc: aswin@hp.com
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Link: http://lkml.kernel.org/r/1402511843-4721-5-git-send-email-jason.low2@hp.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
</feed>
