<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/kernel/dma, branch v5.15.91</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v5.15.91</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v5.15.91'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2022-10-05T08:39:40Z</updated>
<entry>
<title>swiotlb: max mapping size takes min align mask into account</title>
<updated>2022-10-05T08:39:40Z</updated>
<author>
<name>Tianyu Lan</name>
<email>Tianyu.Lan@microsoft.com</email>
</author>
<published>2022-05-10T14:21:09Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=bfe5dc2101ba014bd4937b76a4b02e7f8aa8f816'/>
<id>urn:sha1:bfe5dc2101ba014bd4937b76a4b02e7f8aa8f816</id>
<content type='text'>
commit 82806744fd7dde603b64c151eeddaa4ee62193fd upstream.

swiotlb_find_slots() skips slots according to the io tlb
alignment mask calculated from the min align mask and the
original physical address offset. This affects the maximum
mapping size: when the original offset is non-zero, a mapping
can no longer reach IO_TLB_SEGSIZE * IO_TLB_SIZE. This causes
boot failures in Hyper-V Isolation VMs where swiotlb force is
enabled. The SCSI layer uses the return value of
dma_max_mapping_size(), which ultimately calls
swiotlb_max_mapping_size(), to set the maximum segment size.
The Hyper-V storage driver sets the min align mask to 4k - 1,
so the SCSI layer may pass a 256k request buffer with a 0~4k
offset, and swiotlb_find_slots() then cannot find a 256k
bounce buffer at that offset, leaving the driver unable to
obtain a bounce buffer via the DMA API. Make
swiotlb_max_mapping_size() take the min align mask into
account.
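
The resulting logic, roughly (an illustrative sketch of the
change described above, not the verbatim patch):

    size_t swiotlb_max_mapping_size(struct device *dev)
    {
            int min_align_mask = dma_get_min_align_mask(dev);
            int min_align = 0;

            /* reserve the worst-case padding introduced by the
             * device's min align mask */
            if (min_align_mask)
                    min_align = roundup(min_align_mask, IO_TLB_SIZE);

            return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE - min_align;
    }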

Signed-off-by: Tianyu Lan &lt;Tianyu.Lan@microsoft.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Rishabh Bhatnagar &lt;risbhat@amazon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>swiotlb: avoid potential left shift overflow</title>
<updated>2022-09-15T09:30:07Z</updated>
<author>
<name>Chao Gao</name>
<email>chao.gao@intel.com</email>
</author>
<published>2022-08-19T08:45:37Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=4f8d658848087b5efd3c84658fb145474ca0d336'/>
<id>urn:sha1:4f8d658848087b5efd3c84658fb145474ca0d336</id>
<content type='text'>
[ Upstream commit 3f0461613ebcdc8c4073e235053d06d5aa58750f ]

The second operand passed to slot_addr() is declared as int or
unsigned int at all call sites. The left shift used to compute the
offset of a slot can overflow if the swiotlb size is larger than 4G.

Convert the macro to an inline function and declare the second argument as
phys_addr_t to avoid the potential overflow.
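
A rough sketch of that conversion (illustrative only; the actual
patch is in the commit linked above):

    /* old macro: the shift happens in the (possibly 32-bit)
     * type of idx and can overflow for a large swiotlb */
    #define slot_addr(start, idx)  ((start) + ((idx) &lt;&lt; IO_TLB_SHIFT))

    /* new inline function: idx is widened to phys_addr_t, so
     * the shift is carried out in the full physical-address width */
    static inline phys_addr_t slot_addr(phys_addr_t start, phys_addr_t idx)
    {
            return start + (idx &lt;&lt; IO_TLB_SHIFT);
    }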

Fixes: 26a7e094783d ("swiotlb: refactor swiotlb_tbl_map_single")
Signed-off-by: Chao Gao &lt;chao.gao@intel.com&gt;
Reviewed-by: Dongli Zhang &lt;dongli.zhang@oracle.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>swiotlb: fail map correctly with failed io_tlb_default_mem</title>
<updated>2022-08-17T12:24:07Z</updated>
<author>
<name>Robin Murphy</name>
<email>robin.murphy@arm.com</email>
</author>
<published>2022-07-12T06:46:45Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=1008e81163e8fadb3b92dce4eee3ea96e3b7dcd7'/>
<id>urn:sha1:1008e81163e8fadb3b92dce4eee3ea96e3b7dcd7</id>
<content type='text'>
[ Upstream commit c51ba246cb172c9e947dc6fb8868a1eaf0b2a913 ]

In the failure case of trying to use a buffer which we'd previously
failed to allocate, the "!mem" condition is no longer sufficient since
io_tlb_default_mem became static and assigned by default. Update the
condition to work as intended per the rest of that conversion.
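
The gist of the updated condition, sketched from the description
above (surrounding code and the return value shown here are
illustrative, not the exact hunk):

    /* io_tlb_default_mem is now a static object, so the pointer
     * is never NULL; check whether it was actually populated */
    if (!mem || !mem-&gt;nslabs)
            return (phys_addr_t)DMA_MAPPING_ERROR;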

Fixes: 463e862ac63e ("swiotlb: Convert io_default_tlb_mem to static allocation")
Signed-off-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>dma-direct: use the correct size for dma_set_encrypted()</title>
<updated>2022-06-29T07:03:31Z</updated>
<author>
<name>Dexuan Cui</name>
<email>decui@microsoft.com</email>
</author>
<published>2022-06-22T19:14:24Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=cced9ce619ef483616b9e5ef4da6287ba2292a29'/>
<id>urn:sha1:cced9ce619ef483616b9e5ef4da6287ba2292a29</id>
<content type='text'>
commit 3be4562584bba603f33863a00c1c32eecf772ee6 upstream.

The third parameter of dma_set_encrypted() is a size in bytes rather than
the number of pages.
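
In the dma-direct free paths this means passing the byte count
straight through, along these lines (illustrative sketch, not the
exact hunk):

    /* dma_set_encrypted() takes a size in bytes */
    if (dma_set_encrypted(dev, cpu_addr, size))  /* was: 1 &lt;&lt; page_order */
            return;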

Fixes: 4d0564785bb0 ("dma-direct: factor out dma_set_{de,en}crypted helpers")
Signed-off-by: Dexuan Cui &lt;decui@microsoft.com&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>dma-debug: make things less spammy under memory pressure</title>
<updated>2022-06-22T12:21:55Z</updated>
<author>
<name>Rob Clark</name>
<email>robdclark@chromium.org</email>
</author>
<published>2022-06-01T14:51:16Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=e4e166f10e7026681bbab3fa1eb2dd2a0914453d'/>
<id>urn:sha1:e4e166f10e7026681bbab3fa1eb2dd2a0914453d</id>
<content type='text'>
[ Upstream commit e19f8fa6ce1ca9b8b934ba7d2e8f34c95abc6e60 ]

Limit the error message to avoid flooding the console.  If many
threads hit this at once, they could all have gotten past the
dma_debug_disabled() check before reaching the point of allocation
failure, resulting in the same error message spamming the log
repeatedly.  Use pr_err_once() to limit that.
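
The change boils down to swapping pr_err() for pr_err_once() in the
allocation-failure path, along these lines (sketch; the message text
shown is indicative):

    /* print at most once, even if many threads race into this
     * failure path at the same time */
    pr_err_once("debugging out of memory - disabling\n");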

Signed-off-by: Rob Clark &lt;robdclark@chromium.org&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>dma-direct: don't over-decrypt memory</title>
<updated>2022-06-09T08:23:03Z</updated>
<author>
<name>Robin Murphy</name>
<email>robin.murphy@arm.com</email>
</author>
<published>2022-05-20T17:10:13Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=a48a7f89494f5335f9e7a799ba9de68c1948f671'/>
<id>urn:sha1:a48a7f89494f5335f9e7a799ba9de68c1948f671</id>
<content type='text'>
[ Upstream commit 4a37f3dd9a83186cb88d44808ab35b78375082c9 ]

The original x86 sev_alloc() only called set_memory_decrypted() on
memory returned by alloc_pages_node(), so the page order calculation
fell out of that logic. However, the common dma-direct code has several
potential allocators, not all of which are guaranteed to round up the
underlying allocation to a power-of-two size, so carrying over that
calculation for the encryption/decryption size was a mistake. Fix it by
rounding to a *number* of pages, rather than an order.

Until recently there was an even worse interaction with DMA_DIRECT_REMAP
where we could have ended up decrypting part of the next adjacent
vmalloc area, only averted by no architecture actually supporting both
configs at once. Don't ask how I found that one out...
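
A sketch of the fixed helper (illustrative; PFN_UP() rounds a byte
count up to a whole number of pages, whereas get_order() rounds up
to a power-of-two order):

    static int dma_set_decrypted(struct device *dev, void *vaddr, size_t size)
    {
            if (!force_dma_unencrypted(dev))
                    return 0;
            /* decrypt exactly the pages backing the buffer */
            return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
    }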

Fixes: c10f07aa27da ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
Signed-off-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Acked-by: David Rientjes &lt;rientjes@google.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>dma-direct: always leak memory that can't be re-encrypted</title>
<updated>2022-06-09T08:23:03Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2021-11-09T14:41:01Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=5beb74d11eabd745d75bc91947b2721d2bd95deb'/>
<id>urn:sha1:5beb74d11eabd745d75bc91947b2721d2bd95deb</id>
<content type='text'>
[ Upstream commit a90cf30437489343b8386ae87b4827b6d6c3ed50 ]

We must never let unencrypted memory go back into the general page pool.
So if we fail to set it back to encrypted when freeing DMA memory, leak
the memory instead and warn the user.
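
In other words, the free path should bail out rather than free
(a minimal sketch; names follow kernel/dma/direct.c but the
surrounding code is elided):

    /* never hand unencrypted pages back to the page allocator:
     * if re-encryption fails, warn and leak the buffer instead */
    if (dma_set_encrypted(dev, cpu_addr, size))
            return;
    __dma_direct_free_pages(dev, page, size);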

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>dma-direct: don't call dma_set_decrypted for remapped allocations</title>
<updated>2022-06-09T08:23:03Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2021-10-21T07:20:39Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=9ba801c80c47c1283a570248d8ea3d139fb190d7'/>
<id>urn:sha1:9ba801c80c47c1283a570248d8ea3d139fb190d7</id>
<content type='text'>
[ Upstream commit 5570449b6876f215d49ac4db9ccce6ff7aa1e20a ]

Remapped allocations handle the encrypted bit through the pgprot
passed to vmap, so there is no need to call dma_set_decrypted().
Note that this case is currently entirely theoretical, as no valid
kernel configuration supports both remapped allocations and memory
encryption at the same time.
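
The allocation path then splits roughly like this (sketch of the
described behaviour, not the full function):

    if (remap) {
            /* the "decrypted" attribute travels in the pgprot
             * handed to vmap, so no dma_set_decrypted() here */
            ret = dma_common_contiguous_remap(page, size,
                            dma_pgprot(dev, PAGE_KERNEL, attrs),
                            __builtin_return_address(0));
            if (!ret)
                    goto out_free_pages;
    } else {
            ret = page_address(page);
            if (dma_set_decrypted(dev, ret, size))
                    goto out_free_pages;
    }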

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>dma-direct: factor out dma_set_{de,en}crypted helpers</title>
<updated>2022-06-09T08:23:03Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2021-10-18T11:18:34Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=82b3f045aff5b6e40526d48f78b8fd9b0488aa34'/>
<id>urn:sha1:82b3f045aff5b6e40526d48f78b8fd9b0488aa34</id>
<content type='text'>
[ Upstream commit 4d0564785bb03841e4b5c5b31aa4ecd1eb0d01bb ]

Factor out helpers that make dealing with memory encryption a
little less cumbersome.
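
The helpers gather the force_dma_unencrypted() check and the
set_memory_{de,en}crypted() calls in one place, roughly:

    static int dma_set_decrypted(struct device *dev, void *vaddr, size_t size)
    {
            if (!force_dma_unencrypted(dev))
                    return 0;
            return set_memory_decrypted((unsigned long)vaddr,
                                        1 &lt;&lt; get_order(size));
    }

    /* dma_set_encrypted() is the mirror image, calling
     * set_memory_encrypted() instead */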

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>dma-direct: don't fail on highmem CMA pages in dma_direct_alloc_pages</title>
<updated>2022-06-09T08:22:56Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2022-04-23T17:20:24Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=6635e6ba1649ee5f68baf59cc01bf0151d0b4fd8'/>
<id>urn:sha1:6635e6ba1649ee5f68baf59cc01bf0151d0b4fd8</id>
<content type='text'>
[ Upstream commit 92826e967535db2eb117db227b1191aaf98e4bb3 ]

When dma_direct_alloc_pages encounters a highmem page, it currently
just gives up.  What it should do instead is fall back to allocating
memory with the page allocator - without this, platforms with a
global highmem CMA pool fail all dma_alloc_pages allocations.
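
Sketched out, the fallback looks something like this (a simplified
illustration of the described behaviour, not the actual hunk):

    page = dma_alloc_contiguous(dev, size, gfp);
    if (page) {
            if (PageHighMem(page)) {
                    /* don't give up: drop the highmem CMA page and
                     * fall back to the page allocator instead */
                    dma_free_contiguous(dev, page, size);
                    page = NULL;
            }
    }
    if (!page)
            page = alloc_pages_node(dev_to_node(dev), gfp,
                                    get_order(size));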

Fixes: efa70f2fdc84 ("dma-mapping: add a new dma_alloc_pages API")
Reported-by: Mark O'Neill &lt;mao@tumblingdice.co.uk&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
</feed>
