<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/include/linux/slub_def.h, branch v3.0.85</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.0.85</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.0.85'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2011-05-21T09:53:53Z</updated>
<entry>
<title>slub: Deal with hypothetical case of PAGE_SIZE &gt; 2M</title>
<updated>2011-05-21T09:53:53Z</updated>
<author>
<name>Christoph Lameter</name>
<email>cl@linux.com</email>
</author>
<published>2011-05-20T14:42:48Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=3e0c2ab67e48f77c2da0a5c826aac397792a214e'/>
<id>urn:sha1:3e0c2ab67e48f77c2da0a5c826aac397792a214e</id>
<content type='text'>
kmalloc_index() currently returns -1 if PAGE_SIZE is larger than 2M,
which seems to cause some concern since the callers do not check for -1.

Insert a BUG() and add a comment at the -1 explaining that the code
cannot be reached.
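
The pattern can be sketched in plain C (illustrative code, not the
kernel's; BUG() is modeled as a write through a null pointer):

```c
/* Sketch of the fix described above, with illustrative names: the
 * unreachable fallthrough gets a BUG() so that callers which never
 * check for -1 cannot silently use it as a cache index. */
#define BUG() (*(volatile int *)0 = 0)   /* models the kernel's BUG() */

static int kmalloc_index_sketch(unsigned long size)
{
    unsigned long cap = 8;   /* smallest kmalloc cache: 8 bytes */
    int i = 3;               /* log2(8) */

    while (size > cap) {
        cap *= 2;
        i += 1;
        if (i == 22) {       /* past 2M: cannot be reached */
            BUG();
            return -1;       /* keeps the compiler happy */
        }
    }
    return i;
}
```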

Signed-off-by: Christoph Lameter &lt;cl@linux.com&gt;
Signed-off-by: Pekka Enberg &lt;penberg@kernel.org&gt;
</content>
</entry>
<entry>
<title>slub: Remove CONFIG_CMPXCHG_LOCAL ifdeffery</title>
<updated>2011-05-07T17:25:38Z</updated>
<author>
<name>Christoph Lameter</name>
<email>cl@linux.com</email>
</author>
<published>2011-05-05T20:23:54Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=1759415e630e5db0dd2390df9f94892cbfb9a8a2'/>
<id>urn:sha1:1759415e630e5db0dd2390df9f94892cbfb9a8a2</id>
<content type='text'>
Remove the #ifdefs. This means that irqsafe_cpu_cmpxchg_double() is used
everywhere.

There may be performance implications since:

A. We now have to manage a transaction ID for all arches.

B. The interrupt holdoff for arches not supporting CONFIG_CMPXCHG_LOCAL is reduced
to a very short irqoff section.

This change introduces no additional irqoff/irqon sequences: even in the
fallback case we only do one disable and enable, as before.

Signed-off-by: Christoph Lameter &lt;cl@linux.com&gt;
Signed-off-by: Pekka Enberg &lt;penberg@kernel.org&gt;
</content>
</entry>
<entry>
<title>slub: Add statistics for this_cmpxchg_double failures</title>
<updated>2011-03-22T18:48:04Z</updated>
<author>
<name>Christoph Lameter</name>
<email>cl@linux.com</email>
</author>
<published>2011-03-22T18:35:00Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=4fdccdfbb4652a7bbac8adbce7449eb093775118'/>
<id>urn:sha1:4fdccdfbb4652a7bbac8adbce7449eb093775118</id>
<content type='text'>
Add some statistics for debugging.

Signed-off-by: Christoph Lameter &lt;cl@linux.com&gt;
Signed-off-by: Pekka Enberg &lt;penberg@kernel.org&gt;
</content>
</entry>
<entry>
<title>Merge branch 'slub/lockless' into for-linus</title>
<updated>2011-03-20T16:13:26Z</updated>
<author>
<name>Pekka Enberg</name>
<email>penberg@kernel.org</email>
</author>
<published>2011-03-20T16:13:26Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=e8c500c2b64b6e237e67ecba7249e72363c47047'/>
<id>urn:sha1:e8c500c2b64b6e237e67ecba7249e72363c47047</id>
<content type='text'>
Conflicts:
	include/linux/slub_def.h
</content>
</entry>
<entry>
<title>slub: automatically reserve bytes at the end of slab</title>
<updated>2011-03-11T16:06:34Z</updated>
<author>
<name>Lai Jiangshan</name>
<email>laijs@cn.fujitsu.com</email>
</author>
<published>2011-03-10T07:21:48Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=ab9a0f196f2f4f080df54402493ea3dc31b5243e'/>
<id>urn:sha1:ab9a0f196f2f4f080df54402493ea3dc31b5243e</id>
<content type='text'>
There is no dedicated "struct" for slub's slab; it shares struct page.
But struct page is very small, and it is insufficient when we need
to add some metadata to a slab.

So we add a field "reserved" to struct kmem_cache: when a slab
is allocated, kmem_cache-&gt;reserved bytes are automatically reserved
at the end of the slab for the slab's metadata.

Changed from v1:
	Export the reserved field via sysfs

Acked-by: Christoph Lameter &lt;cl@linux.com&gt;
Signed-off-by: Lai Jiangshan &lt;laijs@cn.fujitsu.com&gt;
Signed-off-by: Pekka Enberg &lt;penberg@kernel.org&gt;
</content>
</entry>
<entry>
<title>Lockless (and preemptless) fastpaths for slub</title>
<updated>2011-03-11T15:42:49Z</updated>
<author>
<name>Christoph Lameter</name>
<email>cl@linux.com</email>
</author>
<published>2011-02-25T17:38:54Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=8a5ec0ba42c4919e2d8f4c3138cc8b987fdb0b79'/>
<id>urn:sha1:8a5ec0ba42c4919e2d8f4c3138cc8b987fdb0b79</id>
<content type='text'>
Use the this_cpu_cmpxchg_double functionality to implement a lockless
allocation algorithm on arches that support fast this_cpu_ops.

Each of the per cpu pointers is paired with a transaction id that ensures
that updates of the per cpu information can only occur in sequence on
a certain cpu.

A transaction id is a "long" integer comprised of an event number and
the cpu number. The event number is incremented for every change to the
per cpu state. The cmpxchg instruction can thus verify, for an update,
that nothing interfered: we are updating the per cpu structure for the
processor where we picked up the information, and we are still on that
processor when we perform the update.

This results in a significant decrease of the overhead in the fastpaths. It
also makes it easy to adopt the fast path for realtime kernels, since it
is lockless and does not require the per cpu area to stay current over
the whole critical section. It is only important that the per cpu area is
current at the beginning of the critical section and at the end.

So there is no need even to disable preemption.
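
The tid layout described above can be sketched in plain C (illustrative
names and an assumed 8-bit cpu field, not the kernel's actual encoding):

```c
/* Sketch of the transaction id described above: one long packs an
 * event number above a cpu number, here assumed to occupy the low
 * 8 bits. All names are illustrative, not the kernel's. */
#define TID_STEP 256UL   /* one event-number increment, above 8 cpu bits */

static unsigned long tid_init(unsigned int cpu)
{
    return cpu;                    /* event number starts at zero */
}

static unsigned long tid_next(unsigned long tid)
{
    return tid + TID_STEP;         /* bump the event, keep the cpu bits */
}

static unsigned int tid_cpu(unsigned long tid)
{
    return (unsigned int)(tid % TID_STEP);
}
```

A cmpxchg that compares both the freelist pointer and the tid then fails
whenever the event number or the cpu number changed in between.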

Test results show that the fastpath cycle count is reduced by up to ~ 40%
(alloc/free test goes from ~140 cycles down to ~80). The slowpath for kfree
adds a few cycles.

Sadly this does nothing for the slowpath, which is where the main
performance issues in slub are, but best-case performance rises
significantly. (For that, see the more complex slub patches that
require cmpxchg_double.)

Kmalloc: alloc/free test

Before:

10000 times kmalloc(8)/kfree -&gt; 134 cycles
10000 times kmalloc(16)/kfree -&gt; 152 cycles
10000 times kmalloc(32)/kfree -&gt; 144 cycles
10000 times kmalloc(64)/kfree -&gt; 142 cycles
10000 times kmalloc(128)/kfree -&gt; 142 cycles
10000 times kmalloc(256)/kfree -&gt; 132 cycles
10000 times kmalloc(512)/kfree -&gt; 132 cycles
10000 times kmalloc(1024)/kfree -&gt; 135 cycles
10000 times kmalloc(2048)/kfree -&gt; 135 cycles
10000 times kmalloc(4096)/kfree -&gt; 135 cycles
10000 times kmalloc(8192)/kfree -&gt; 144 cycles
10000 times kmalloc(16384)/kfree -&gt; 754 cycles

After:

10000 times kmalloc(8)/kfree -&gt; 78 cycles
10000 times kmalloc(16)/kfree -&gt; 78 cycles
10000 times kmalloc(32)/kfree -&gt; 82 cycles
10000 times kmalloc(64)/kfree -&gt; 88 cycles
10000 times kmalloc(128)/kfree -&gt; 79 cycles
10000 times kmalloc(256)/kfree -&gt; 79 cycles
10000 times kmalloc(512)/kfree -&gt; 85 cycles
10000 times kmalloc(1024)/kfree -&gt; 82 cycles
10000 times kmalloc(2048)/kfree -&gt; 82 cycles
10000 times kmalloc(4096)/kfree -&gt; 85 cycles
10000 times kmalloc(8192)/kfree -&gt; 82 cycles
10000 times kmalloc(16384)/kfree -&gt; 706 cycles

Kmalloc: Repeatedly allocate then free test

Before:

10000 times kmalloc(8) -&gt; 211 cycles kfree -&gt; 113 cycles
10000 times kmalloc(16) -&gt; 174 cycles kfree -&gt; 115 cycles
10000 times kmalloc(32) -&gt; 235 cycles kfree -&gt; 129 cycles
10000 times kmalloc(64) -&gt; 222 cycles kfree -&gt; 120 cycles
10000 times kmalloc(128) -&gt; 343 cycles kfree -&gt; 139 cycles
10000 times kmalloc(256) -&gt; 827 cycles kfree -&gt; 147 cycles
10000 times kmalloc(512) -&gt; 1048 cycles kfree -&gt; 272 cycles
10000 times kmalloc(1024) -&gt; 2043 cycles kfree -&gt; 528 cycles
10000 times kmalloc(2048) -&gt; 4002 cycles kfree -&gt; 571 cycles
10000 times kmalloc(4096) -&gt; 7740 cycles kfree -&gt; 628 cycles
10000 times kmalloc(8192) -&gt; 8062 cycles kfree -&gt; 850 cycles
10000 times kmalloc(16384) -&gt; 8895 cycles kfree -&gt; 1249 cycles

After:

10000 times kmalloc(8) -&gt; 190 cycles kfree -&gt; 129 cycles
10000 times kmalloc(16) -&gt; 76 cycles kfree -&gt; 123 cycles
10000 times kmalloc(32) -&gt; 126 cycles kfree -&gt; 124 cycles
10000 times kmalloc(64) -&gt; 181 cycles kfree -&gt; 128 cycles
10000 times kmalloc(128) -&gt; 310 cycles kfree -&gt; 140 cycles
10000 times kmalloc(256) -&gt; 809 cycles kfree -&gt; 165 cycles
10000 times kmalloc(512) -&gt; 1005 cycles kfree -&gt; 269 cycles
10000 times kmalloc(1024) -&gt; 1999 cycles kfree -&gt; 527 cycles
10000 times kmalloc(2048) -&gt; 3967 cycles kfree -&gt; 570 cycles
10000 times kmalloc(4096) -&gt; 7658 cycles kfree -&gt; 637 cycles
10000 times kmalloc(8192) -&gt; 8111 cycles kfree -&gt; 859 cycles
10000 times kmalloc(16384) -&gt; 8791 cycles kfree -&gt; 1173 cycles

Signed-off-by: Christoph Lameter &lt;cl@linux.com&gt;
Signed-off-by: Pekka Enberg &lt;penberg@kernel.org&gt;
</content>
</entry>
<entry>
<title>slub: min_partial needs to be in first cacheline</title>
<updated>2011-03-11T15:42:49Z</updated>
<author>
<name>Christoph Lameter</name>
<email>cl@linux.com</email>
</author>
<published>2011-02-25T17:38:51Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=1a757fe5d4234293d6a3acccd7196f1386443956'/>
<id>urn:sha1:1a757fe5d4234293d6a3acccd7196f1386443956</id>
<content type='text'>
It is used in unfreeze_slab(), which is a performance critical
function.

Signed-off-by: Christoph Lameter &lt;cl@linux.com&gt;
Signed-off-by: Pekka Enberg &lt;penberg@kernel.org&gt;
</content>
</entry>
<entry>
<title>slub tracing: move trace calls out of always inlined functions to reduce kernel code size</title>
<updated>2010-11-06T07:04:33Z</updated>
<author>
<name>Richard Kennedy</name>
<email>richard@rsk.demon.co.uk</email>
</author>
<published>2010-10-21T09:29:19Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=4a92379bdfb48680a5e6775dd53a586df7b6b0b1'/>
<id>urn:sha1:4a92379bdfb48680a5e6775dd53a586df7b6b0b1</id>
<content type='text'>
Having the trace calls defined in the always inlined kmalloc functions
in include/linux/slub_def.h causes a lot of code duplication, as the
trace functions get instantiated for each kmalloc call site. This
duplication can simply be removed by pushing the trace calls down into
the functions in slub.c.

On my x86_64 build this patch shrinks the code size of the kernel by
approx 36K and also shrinks the code size of many modules -- too many to
list here ;)

size vmlinux (2.6.36) reports
       text        data     bss     dec     hex filename
    5410611	 743172	 828928	6982711	 6a8c37	vmlinux
    5373738	 744244	 828928	6946910	 6a005e	vmlinux + patch

The resulting kernel has had some testing &amp; kmalloc trace still seems to
work.

This patch
- moves trace_kmalloc out of the inlined kmalloc() and pushes it down
into kmem_cache_alloc_trace() so that it only gets instantiated once.

- renames kmem_cache_alloc_notrace() to kmem_cache_alloc_trace() to
indicate that it now does have tracing. (Maybe this would better be
called something like kmalloc_kmem_cache?)

- adds a new function kmalloc_order() to handle allocation and tracing
of large allocations of page order.

- removes tracing from the inlined kmalloc_large(), replacing it with a
call to kmalloc_order();

- moves tracing out of the inlined kmalloc_node() and pushes it down into
kmem_cache_alloc_node_trace()

- renames kmem_cache_alloc_node_notrace() to
kmem_cache_alloc_node_trace()

- removes the include of trace/events/kmem.h from slub_def.h.
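
The code-size technique above can be sketched in plain C (all names and
the toy backing store are illustrative, not the kernel's):

```c
/* Sketch of the split described above: the always-inline wrapper stays
 * tiny and is expanded at every call site, while the trace hook lives
 * in exactly one out-of-line function. */
static unsigned long trace_count;    /* stand-in for trace_kmalloc() */
static char heap_sketch[4096];       /* toy backing store */
static unsigned long heap_used;

/* Out of line: instantiated exactly once, and it carries the tracing. */
void *kmem_cache_alloc_trace_sketch(unsigned long size)
{
    void *p = heap_sketch + heap_used;
    heap_used += size;
    trace_count += 1;                /* the single trace call site */
    return p;
}

/* Inlined at every call site, but now free of trace machinery. */
static inline void *kmalloc_sketch(unsigned long size)
{
    return kmem_cache_alloc_trace_sketch(size);
}
```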

v2
- keep kmalloc_order_trace inline when !CONFIG_TRACE

Signed-off-by: Richard Kennedy &lt;richard@rsk.demon.co.uk&gt;
Signed-off-by: Pekka Enberg &lt;penberg@kernel.org&gt;
</content>
</entry>
<entry>
<title>slub: Enable sysfs support for !CONFIG_SLUB_DEBUG</title>
<updated>2010-10-06T13:54:36Z</updated>
<author>
<name>Christoph Lameter</name>
<email>cl@linux.com</email>
</author>
<published>2010-10-05T18:57:26Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=ab4d5ed5eeda4f57c50d14131ce1b1da75d0c938'/>
<id>urn:sha1:ab4d5ed5eeda4f57c50d14131ce1b1da75d0c938</id>
<content type='text'>
Currently, disabling CONFIG_SLUB_DEBUG also disables SYSFS support, meaning
that the slabs cannot be tuned without DEBUG.

Make SYSFS support independent of CONFIG_SLUB_DEBUG.

Signed-off-by: Christoph Lameter &lt;cl@linux.com&gt;
Signed-off-by: Pekka Enberg &lt;penberg@kernel.org&gt;
</content>
</entry>
<entry>
<title>slub: reduce differences between SMP and NUMA</title>
<updated>2010-10-02T07:44:10Z</updated>
<author>
<name>Christoph Lameter</name>
<email>cl@linux.com</email>
</author>
<published>2010-09-28T13:10:26Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=7340cc84141d5236c5dd003359ee921513cd9b84'/>
<id>urn:sha1:7340cc84141d5236c5dd003359ee921513cd9b84</id>
<content type='text'>
Reduce the #ifdefs and simplify bootstrap by making SMP and NUMA as much alike
as possible. This means that there will be an additional indirection to get to
the kmem_cache_node field under SMP.

Acked-by: David Rientjes &lt;rientjes@google.com&gt;
Signed-off-by: Christoph Lameter &lt;cl@linux.com&gt;
Signed-off-by: Pekka Enberg &lt;penberg@kernel.org&gt;
</content>
</entry>
</feed>
