<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/arch/tile/kernel/single_step.c, branch v4.9.139</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v4.9.139</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v4.9.139'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2016-01-18T19:49:30Z</updated>
<entry>
<title>arch/tile: move user_exit() to early kernel entry sequence</title>
<updated>2016-01-18T19:49:30Z</updated>
<author>
<name>Chris Metcalf</name>
<email>cmetcalf@ezchip.com</email>
</author>
<published>2015-12-23T22:13:04Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=1bb50cad45f4a3fb9a339006d76efc50f70eed5b'/>
<id>urn:sha1:1bb50cad45f4a3fb9a339006d76efc50f70eed5b</id>
<content type='text'>
This ensures that we always notify context tracking that we
have exited from user space no matter how we enter the kernel.
It is similar to how arm64 handles context tracking, for example.

This allows the removal of all the exception_enter() calls that
were added in commit 49e4e15619cd ("tile: support CONTEXT_TRACKING and
thus NOHZ_FULL").

Signed-off-by: Chris Metcalf &lt;cmetcalf@ezchip.com&gt;
</content>
</entry>
<entry>
<title>tile: support CONTEXT_TRACKING and thus NOHZ_FULL</title>
<updated>2015-04-17T18:01:10Z</updated>
<author>
<name>Chris Metcalf</name>
<email>cmetcalf@ezchip.com</email>
</author>
<published>2015-03-23T18:23:58Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=49e4e15619cd7cd9fc275d460fae2a95c1337fcc'/>
<id>urn:sha1:49e4e15619cd7cd9fc275d460fae2a95c1337fcc</id>
<content type='text'>
Add the TIF_NOHZ flag appropriately.

Add a call to user_exit() on entry to do_work_pending() and on entry
to syscalls via do_syscall_trace_enter(), and also at the top of
do_syscall_trace_exit(), just because that's what x86 does.

Add a call to user_enter() at the bottom of do_work_pending() once we
have no more work to do before returning to userspace.

Wrap all the trap code in exception_enter() / exception_exit().
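
As a rough sketch of the wrapping pattern (the handler name below is made
up for illustration; the real changes touch the tile trap and fault
handlers):

	#include &lt;linux/context_tracking.h&gt;
	#include &lt;linux/ptrace.h&gt;

	void do_example_trap(struct pt_regs *regs, int fault_num)
	{
		enum ctx_state prev_state = exception_enter();

		/* ... existing trap handling runs here ... */

		exception_exit(prev_state);
	}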

Signed-off-by: Chris Metcalf &lt;cmetcalf@ezchip.com&gt;
Acked-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
</content>
</entry>
<entry>
<title>tile: Use the more common pr_warn instead of pr_warning</title>
<updated>2014-11-11T20:51:42Z</updated>
<author>
<name>Joe Perches</name>
<email>joe@perches.com</email>
</author>
<published>2014-10-31T17:50:46Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=f47436734dc89ece62654d4db8d08163a89dd7ca'/>
<id>urn:sha1:f47436734dc89ece62654d4db8d08163a89dd7ca</id>
<content type='text'>
And other message logging neatening.

Other miscellanea:

o coalesce formats
o realign arguments
o standardize a couple of macros
o use __func__ instead of embedding the function name

Signed-off-by: Joe Perches &lt;joe@perches.com&gt;
Signed-off-by: Chris Metcalf &lt;cmetcalf@tilera.com&gt;
</content>
</entry>
<entry>
<title>tile: Replace __get_cpu_var uses</title>
<updated>2014-08-26T17:45:54Z</updated>
<author>
<name>Christoph Lameter</name>
<email>cl@linux.com</email>
</author>
<published>2014-08-17T17:30:50Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b4f501916ce2ae80c28017814d71d1bf83679271'/>
<id>urn:sha1:b4f501916ce2ae80c28017814d71d1bf83679271</id>
<content type='text'>
__get_cpu_var() is used for multiple purposes in the kernel source. One of
them is address calculation via the form &amp;__get_cpu_var(x).  This calculates
the address for the instance of the percpu variable of the current processor
based on an offset.

Other use cases are for storing and retrieving data from the current
processor's percpu area.  __get_cpu_var() can be used as an lvalue when
writing data or on the right side of an assignment.

__get_cpu_var() is defined as:

#define __get_cpu_var(var) (*this_cpu_ptr(&amp;(var)))

__get_cpu_var() only ever does an address calculation.  However, store
and retrieve operations could use a segment prefix (or a global register on
other platforms) to avoid the address calculation.

this_cpu_write() and this_cpu_read() can directly take an offset into a
percpu area and use optimized assembly code to read and write per cpu
variables.

This patch converts __get_cpu_var into either an explicit address
calculation using this_cpu_ptr() or into a use of this_cpu operations that
use the offset.  Thereby address calculations are avoided and fewer registers
are used when code is generated.

At the end of the patch set all uses of __get_cpu_var have been removed so
the macro is removed too.

The patch set includes passes over all arches as well.  Once these operations
are used throughout, specialized macros can be defined in non-x86
arches as well in order to optimize per cpu access by, e.g., using a global
register that may be set to the per cpu base.

Transformations done to __get_cpu_var():

1. Determine the address of the percpu instance of the current processor.

	DEFINE_PER_CPU(int, y);
	int *x = &amp;__get_cpu_var(y);

    Converts to

	int *x = this_cpu_ptr(&amp;y);

2. Same as #1 but this time an array structure is involved.

	DEFINE_PER_CPU(int, y[20]);
	int *x = __get_cpu_var(y);

    Converts to

	int *x = this_cpu_ptr(y);

3. Retrieve the content of the current processor's instance of a per cpu
variable.

	DEFINE_PER_CPU(int, y);
	int x = __get_cpu_var(y);

   Converts to

	int x = __this_cpu_read(y);

4. Retrieve the content of a percpu struct

	DEFINE_PER_CPU(struct mystruct, y);
	struct mystruct x = __get_cpu_var(y);

   Converts to

	memcpy(&amp;x, this_cpu_ptr(&amp;y), sizeof(x));

5. Assignment to a per cpu variable

	DEFINE_PER_CPU(int, y);
	__get_cpu_var(y) = x;

   Converts to

	__this_cpu_write(y, x);

6. Increment/Decrement etc of a per cpu variable

	DEFINE_PER_CPU(int, y);
	__get_cpu_var(y)++;

   Converts to

	__this_cpu_inc(y);

Acked-by: Chris Metcalf &lt;cmetcalf@tilera.com&gt;
Signed-off-by: Christoph Lameter &lt;cl@linux.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>tile: remove support for TILE64</title>
<updated>2013-09-03T18:53:29Z</updated>
<author>
<name>Chris Metcalf</name>
<email>cmetcalf@tilera.com</email>
</author>
<published>2013-08-15T20:23:24Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d7c9661115fd23b4dabb710b3080dd9919dfa891'/>
<id>urn:sha1:d7c9661115fd23b4dabb710b3080dd9919dfa891</id>
<content type='text'>
This chip is no longer being actively developed for (it was superseded
by the TILEPro64 in 2008), and in any case the existing compiler and
toolchain in the community do not support it.  It's unlikely that the
kernel works with TILE64 at this point as the configuration has not been
tested in years.  The support is also awkward as it requires maintaining
a significant number of ifdefs.  So, just remove it altogether.

Signed-off-by: Chris Metcalf &lt;cmetcalf@tilera.com&gt;
</content>
</entry>
<entry>
<title>tile: fast-path unaligned memory access for tilegx</title>
<updated>2013-08-13T20:04:10Z</updated>
<author>
<name>Chris Metcalf</name>
<email>cmetcalf@tilera.com</email>
</author>
<published>2013-08-06T20:04:13Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=2f9ac29eec71a696cb0dcc5fb82c0f8d4dac28c9'/>
<id>urn:sha1:2f9ac29eec71a696cb0dcc5fb82c0f8d4dac28c9</id>
<content type='text'>
This change enables unaligned userspace memory access via a kernel
fast path on tilegx.  The kernel tracks user PC/instruction pairs
per-thread using a direct-mapped cache in userspace.  The cache
maps those PC/instruction pairs to JIT'ed instruction sequences that
load or store using byte-wide load/store instructions and then
synthesize 2-, 4- or 8-byte load or store results.  Once an
instruction has been seen to generate an unaligned access,
subsequent hits on that instruction typically incur an overhead
of only around 50 cycles if the cache and TLB are hot.

We support the prctl() PR_GET_UNALIGN / PR_SET_UNALIGN calls to
enable or disable unaligned fixups on a per-process basis.
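
For example, a userspace program might toggle the policy roughly as
follows (an illustrative sketch using the generic prctl() flags, not
code from this patch):

	#include &lt;stdio.h&gt;
	#include &lt;sys/prctl.h&gt;

	int main(void)
	{
		unsigned int policy = 0;

		/* Query the current per-process unaligned-access policy. */
		if (prctl(PR_GET_UNALIGN, &amp;policy) == 0)
			printf("unaligned policy: %#x\n", policy);

		/* Ask for SIGBUS on unaligned accesses instead of a fixup. */
		return prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS);
	}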

To do this we pull some of the tilepro unaligned support out of the
single_step.c file; tilepro uses instruction disassembly for both
single-step and unaligned access support.  Since tilegx actually has
hardware singlestep support, though, it's cleaner to keep the tilegx
unaligned access code in a separate file.  While we're at it,
properly rename the tilepro-specific types, etc., to have tilepro
suffixes instead of generic tile suffixes.

Signed-off-by: Chris Metcalf &lt;cmetcalf@tilera.com&gt;
</content>
</entry>
<entry>
<title>arch/tile: support building big-endian kernel</title>
<updated>2012-05-25T16:48:22Z</updated>
<author>
<name>Chris Metcalf</name>
<email>cmetcalf@tilera.com</email>
</author>
<published>2012-03-29T17:30:31Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=1efea40d4172a2a475ccb29b59d6221e9d0c174b'/>
<id>urn:sha1:1efea40d4172a2a475ccb29b59d6221e9d0c174b</id>
<content type='text'>
The toolchain supports big-endian mode now, so add support for building
the kernel to run big-endian as well.

Signed-off-by: Chris Metcalf &lt;cmetcalf@tilera.com&gt;
</content>
</entry>
<entry>
<title>VM: add "vm_mmap()" helper function</title>
<updated>2012-04-21T00:29:13Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2012-04-21T00:13:58Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=6be5ceb02e98eaf6cfc4f8b12a896d04023f340d'/>
<id>urn:sha1:6be5ceb02e98eaf6cfc4f8b12a896d04023f340d</id>
<content type='text'>
This continues the theme started with vm_brk() and vm_munmap():
vm_mmap() does the same thing as do_mmap(), but additionally does the
required VM locking.
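
Roughly, a conversion at a call site looks like this (an illustrative
sketch; variable names abbreviated):

	/* Before: the caller took mmap_sem around do_mmap() itself. */
	down_write(&amp;current-&gt;mm-&gt;mmap_sem);
	addr = do_mmap(file, 0, len, PROT_READ, MAP_SHARED, 0);
	up_write(&amp;current-&gt;mm-&gt;mmap_sem);

	/* After: vm_mmap() takes and drops the lock internally. */
	addr = vm_mmap(file, 0, len, PROT_READ, MAP_SHARED, 0);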

This uninlines do_mmap() (and rewrites it to be clearer), which sadly
means duplicating it in mm/mmap.c and mm/nommu.c.  But that way we don't
have to export our internal do_mmap_pgoff() function.

Some day we hopefully don't have to export do_mmap() either, if all
modular users can become the simpler vm_mmap() instead.  We're actually
very close to that already, with the notable exception of the (broken)
use in i810, and a couple of stragglers in binfmt_elf.

Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>arch/tile: return SIGBUS for addresses that are unaligned AND invalid</title>
<updated>2012-04-02T16:13:56Z</updated>
<author>
<name>Chris Metcalf</name>
<email>cmetcalf@tilera.com</email>
</author>
<published>2012-03-30T20:24:41Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=cdd8e16feba87a3fc2bb1885d36f895a2a3288bf'/>
<id>urn:sha1:cdd8e16feba87a3fc2bb1885d36f895a2a3288bf</id>
<content type='text'>
Previously we were returning SIGSEGV in this case.  It seems cleaner
to return SIGBUS since the hardware figures out alignment traps
before TLB violations, so SIGBUS is the "more correct" signal.

Signed-off-by: Chris Metcalf &lt;cmetcalf@tilera.com&gt;
</content>
</entry>
<entry>
<title>Disintegrate asm/system.h for Tile</title>
<updated>2012-03-28T17:30:03Z</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2012-03-28T17:30:03Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=bd119c69239322caafdb64517a806037d0d0c70a'/>
<id>urn:sha1:bd119c69239322caafdb64517a806037d0d0c70a</id>
<content type='text'>
Disintegrate asm/system.h for Tile.

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
Acked-by: Chris Metcalf &lt;cmetcalf@tilera.com&gt;
</content>
</entry>
</feed>
