<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/include/linux/dma-direct.h, branch v5.3.2</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v5.3.2</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v5.3.2'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2019-07-16T20:15:46Z</updated>
<entry>
<title>dma-direct: Force unencrypted DMA under SME for certain DMA masks</title>
<updated>2019-07-16T20:15:46Z</updated>
<author>
<name>Tom Lendacky</name>
<email>thomas.lendacky@amd.com</email>
</author>
<published>2019-07-10T19:01:19Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=9087c37584fb7d8315877bb55f85e4268cc0b4f4'/>
<id>urn:sha1:9087c37584fb7d8315877bb55f85e4268cc0b4f4</id>
<content type='text'>
If a device doesn't support DMA to a physical address that includes the
encryption bit (currently bit 47, so 48-bit DMA), then the DMA must
occur to unencrypted memory. SWIOTLB is used to satisfy that requirement
if an IOMMU is not active (i.e. not enabled, or configured in passthrough
mode).

However, commit fafadcd16595 ("swiotlb: don't dip into swiotlb pool for
coherent allocations") modified the coherent allocation support in
SWIOTLB to use the DMA direct coherent allocation support. When an IOMMU
is not active, this resulted in dma_alloc_coherent() failing for devices
that didn't support DMA addresses that included the encryption bit.

Addressing this requires changes to the force_dma_unencrypted() function
in kernel/dma/direct.c. Since the function is now non-trivial and
SME/SEV specific, update the DMA direct support to add an arch override
for the force_dma_unencrypted() function. The arch override is selected
when CONFIG_AMD_MEM_ENCRYPT is set. The arch override function resides in
the arch/x86/mm/mem_encrypt.c file and forces unencrypted DMA when either
SEV is active or SME is active and the device does not support DMA to
physical addresses that include the encryption bit.
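
As a sketch, the override (assuming the existing sev_active()/sme_active()
helpers and sme_me_mask, as used elsewhere in this series) comes down to
something like:

  bool force_dma_unencrypted(struct device *dev)
  {
      /* For SEV, all DMA must be to unencrypted addresses. */
      if (sev_active())
          return true;

      /*
       * For SME, DMA must be unencrypted if the device cannot
       * reach addresses that include the encryption mask.
       */
      if (sme_active()) {
          u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
          u64 dma_dev_mask = min_not_zero(dev-&gt;coherent_dma_mask,
                                          dev-&gt;bus_dma_mask);

          if (dma_dev_mask &lt;= dma_enc_mask)
              return true;
      }

      return false;
  }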

Fixes: fafadcd16595 ("swiotlb: don't dip into swiotlb pool for coherent allocations")
Suggested-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Tom Lendacky &lt;thomas.lendacky@amd.com&gt;
Acked-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
[hch: moved the force_dma_unencrypted declaration to dma-mapping.h,
      fold the s390 fix from Halil Pasic]
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>dma-mapping: bypass indirect calls for dma-direct</title>
<updated>2018-12-13T20:06:18Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2018-12-06T21:39:32Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=356da6d0cde3323236977fce54c1f9612a742036'/>
<id>urn:sha1:356da6d0cde3323236977fce54c1f9612a742036</id>
<content type='text'>
Avoid expensive indirect calls in the fast-path DMA mapping operations
by calling the dma_direct_* helpers directly if we are using the
directly mapped DMA operations.
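
The gist, sketched (in this series dma_is_direct() just tests for a NULL
ops pointer, and the dma_direct_* helpers are the non-indirect versions):

  static inline bool dma_is_direct(const struct dma_map_ops *ops)
  {
      return likely(!ops);
  }

  static inline dma_addr_t dma_map_page_attrs(struct device *dev,
          struct page *page, size_t offset, size_t size,
          enum dma_data_direction dir, unsigned long attrs)
  {
      const struct dma_map_ops *ops = get_dma_ops(dev);

      /* direct call instead of an indirect ops-&gt;map_page call */
      if (dma_is_direct(ops))
          return dma_direct_map_page(dev, page, offset, size, dir, attrs);
      return ops-&gt;map_page(dev, page, offset, size, dir, attrs);
  }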

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Acked-by: Jesper Dangaard Brouer &lt;brouer@redhat.com&gt;
Tested-by: Jesper Dangaard Brouer &lt;brouer@redhat.com&gt;
Tested-by: Tony Luck &lt;tony.luck@intel.com&gt;
</content>
</entry>
<entry>
<title>dma-direct: merge swiotlb_dma_ops into the dma_direct code</title>
<updated>2018-12-13T20:06:17Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2018-12-03T10:43:54Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=55897af63091ebc2c3f239c6a6666f748113ac50'/>
<id>urn:sha1:55897af63091ebc2c3f239c6a6666f748113ac50</id>
<content type='text'>
While the dma-direct code is (relatively) clean and simple, we actually
have to use the swiotlb ops for the mapping on many architectures due
to devices with addressing limits.  Instead of keeping two
implementations around, this commit allows the dma-direct
implementation to call the swiotlb bounce buffering functions and
thus share the guts of the mapping implementation.  This also
simplifies the dma-mapping setup on a few architectures, where we
no longer have to differentiate which implementation to use.
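
Sketched with the helper names used in this series (illustrative, not
the literal diff):

  dma_addr_t dma_direct_map_page(struct device *dev, struct page *page,
          unsigned long offset, size_t size,
          enum dma_data_direction dir, unsigned long attrs)
  {
      phys_addr_t phys = page_to_phys(page) + offset;
      dma_addr_t dma_addr = phys_to_dma(dev, phys);

      /* bounce through swiotlb if the device cannot address the page */
      if (unlikely(!dma_direct_possible(dev, dma_addr, size)) &amp;&amp;
          !swiotlb_map(dev, &amp;phys, &amp;dma_addr, size, dir, attrs))
          return DMA_MAPPING_ERROR;

      if (!dev_is_dma_coherent(dev) &amp;&amp; !(attrs &amp; DMA_ATTR_SKIP_CPU_SYNC))
          arch_sync_dma_for_device(dev, phys, size, dir);
      return dma_addr;
  }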

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Acked-by: Jesper Dangaard Brouer &lt;brouer@redhat.com&gt;
Tested-by: Jesper Dangaard Brouer &lt;brouer@redhat.com&gt;
Tested-by: Tony Luck &lt;tony.luck@intel.com&gt;
</content>
</entry>
<entry>
<title>swiotlb: remove dma_mark_clean</title>
<updated>2018-12-13T20:06:14Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2018-12-06T15:06:04Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=68c608345cc569bcfa1c1b2add4c00c343ecf933'/>
<id>urn:sha1:68c608345cc569bcfa1c1b2add4c00c343ecf933</id>
<content type='text'>
Instead of providing a special dma_mark_clean hook just for ia64, switch
ia64 to use the normal arch_sync_dma_for_cpu hooks instead.

This means that we now also set the PG_arch_1 bit for pages in the
swiotlb buffer, which isn't strictly needed as we will never execute code
out of the swiotlb buffer, but is otherwise harmless.
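
On ia64 the replacement hook amounts to marking the pages for deferred
icache flushing, roughly:

  void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
          size_t size, enum dma_data_direction dir)
  {
      unsigned long pfn = PHYS_PFN(paddr);

      /* flag each page; the icache flush happens later via PG_arch_1 */
      do {
          set_bit(PG_arch_1, &amp;pfn_to_page(pfn)-&gt;flags);
      } while (++pfn &lt;= PHYS_PFN(paddr + size - 1));
  }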

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Acked-by: Jesper Dangaard Brouer &lt;brouer@redhat.com&gt;
Tested-by: Jesper Dangaard Brouer &lt;brouer@redhat.com&gt;
Tested-by: Tony Luck &lt;tony.luck@intel.com&gt;
</content>
</entry>
<entry>
<title>dma-direct: remove the mapping_error dma_map_ops method</title>
<updated>2018-12-06T14:56:36Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2018-11-21T17:52:35Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b0cbeae4944924640bf550b75487729a20204c14'/>
<id>urn:sha1:b0cbeae4944924640bf550b75487729a20204c14</id>
<content type='text'>
The dma-direct code already returns (~(dma_addr_t)0x0) on mapping
failures, so we can switch over to returning DMA_MAPPING_ERROR and let
the core dma-mapping code handle the rest.
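
With that, the generic error check reduces to a comparison against a
single sentinel, along the lines of:

  #define DMA_MAPPING_ERROR  (~(dma_addr_t)0)

  static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
  {
      if (dma_addr == DMA_MAPPING_ERROR)
          return -ENOMEM;
      return 0;
  }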

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Acked-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>dma-direct: provide page based alloc/free helpers</title>
<updated>2018-12-01T16:55:02Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2018-11-04T16:27:56Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b18814e767a445534ab9ccba02e82a31208f85d6'/>
<id>urn:sha1:b18814e767a445534ab9ccba02e82a31208f85d6</id>
<content type='text'>
Some architectures support remapping highmem into DMA coherent
allocations.  To use the common code for them, we need variants of
dma_direct_{alloc,free}_pages that do not use kernel virtual addresses.
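
The new helpers work in terms of struct page, leaving any kernel virtual
mapping to the caller; their shape is roughly:

  struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
          dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs);
  void __dma_direct_free_pages(struct device *dev, size_t size,
          struct page *page);

A highmem-remapping architecture can then grab pages with the former and
establish its own mapping instead of relying on page_address().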

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
</content>
</entry>
<entry>
<title>dma-direct: Make DIRECT_MAPPING_ERROR viable for SWIOTLB</title>
<updated>2018-11-21T17:47:52Z</updated>
<author>
<name>Robin Murphy</name>
<email>robin.murphy@arm.com</email>
</author>
<published>2018-11-21T16:00:50Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b34087157dd76e8d96e5e52808134a791ac61e57'/>
<id>urn:sha1:b34087157dd76e8d96e5e52808134a791ac61e57</id>
<content type='text'>
With the overflow buffer removed, we no longer have a unique address
which is guaranteed not to be a valid DMA target to use as an error
token. The DIRECT_MAPPING_ERROR value of 0 tries to at least represent
an unlikely DMA target, but unfortunately there are already SWIOTLB
users with DMA-able memory at physical address 0 which now gets falsely
treated as a mapping failure and leads to all manner of misbehaviour.

The best we can do to mitigate that is to flip DIRECT_MAPPING_ERROR to
the other commonly used error value of all-bits-set, since the last
single byte of memory is by far the least likely valid DMA target.
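
The change itself boils down to a one-liner:

  -#define DIRECT_MAPPING_ERROR  0
  +#define DIRECT_MAPPING_ERROR  (~(dma_addr_t)0)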

Fixes: dff8d6c1ed58 ("swiotlb: remove the overflow buffer")
Reported-by: John Stultz &lt;john.stultz@linaro.org&gt;
Tested-by: John Stultz &lt;john.stultz@linaro.org&gt;
Acked-by: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Signed-off-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>swiotlb: remove the overflow buffer</title>
<updated>2018-10-19T06:43:46Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2018-08-16T12:30:39Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=dff8d6c1ed584de65aac40494d3e7468c50980c3'/>
<id>urn:sha1:dff8d6c1ed584de65aac40494d3e7468c50980c3</id>
<content type='text'>
Like all other DMA mapping drivers, just return an error code instead
of an actual memory buffer.  The reason for the overflow buffer was
that at the time swiotlb was invented there was no way to check for
DMA mapping errors, but this has long since been fixed.
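
A bounce failure now simply fails the mapping; sketched against the
swiotlb map path of the time (map_single()/SWIOTLB_MAP_ERROR):

      map = map_single(dev, phys, size, dir, attrs);
      if (map == SWIOTLB_MAP_ERROR)
          return DIRECT_MAPPING_ERROR;  /* was: fall back to the overflow buffer */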

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Acked-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Reviewed-by: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
</content>
</entry>
<entry>
<title>dma-direct: implement complete bus_dma_mask handling</title>
<updated>2018-10-01T14:28:03Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2018-09-20T12:04:08Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b4ebe6063204da58e48600b810a97c29ae9e5d12'/>
<id>urn:sha1:b4ebe6063204da58e48600b810a97c29ae9e5d12</id>
<content type='text'>
Instead of rejecting devices with a too-small bus_dma_mask, we can
handle them by taking the bus_dma_mask into account for allocations
and bounce buffering decisions.
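
Concretely, the addressability check takes the smaller of the two masks,
roughly:

  static inline bool dma_capable(struct device *dev, dma_addr_t addr,
          size_t size)
  {
      if (!dev-&gt;dma_mask)
          return false;

      /* honour both the device mask and any bus limitation */
      return addr + size - 1 &lt;=
          min_not_zero(*dev-&gt;dma_mask, dev-&gt;bus_dma_mask);
  }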

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>dma-direct: add an explicit dma_direct_get_required_mask</title>
<updated>2018-10-01T14:27:15Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2018-09-20T11:26:13Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=a20bb058375147cb639c7aa17ef86ad68b32d847'/>
<id>urn:sha1:a20bb058375147cb639c7aa17ef86ad68b32d847</id>
<content type='text'>
This is somewhat modelled after the powerpc version, and differs from
the legacy fallback in using fls64 instead of pointlessly splitting the
address into low and high dwords, and in taking (__)phys_to_dma into
account.
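
The resulting helper is short; with phys_to_dma_direct() as a small
wrapper that picks __phys_to_dma() or phys_to_dma() as appropriate, it
looks roughly like:

  u64 dma_direct_get_required_mask(struct device *dev)
  {
      u64 max_dma = phys_to_dma_direct(dev, (max_pfn - 1) &lt;&lt; PAGE_SHIFT);

      /* smallest all-ones mask that covers max_dma */
      return (1ULL &lt;&lt; (fls64(max_dma) - 1)) * 2 - 1;
  }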

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Acked-by: Benjamin Herrenschmidt &lt;benh@kernel.crashing.org&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
</content>
</entry>
</feed>
