<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/include/linux/scatterlist.h, branch v6.2.7</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v6.2.7</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v6.2.7'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2022-07-26T11:27:47Z</updated>
<entry>
<title>lib/scatterlist: add flag for indicating P2PDMA segments in an SGL</title>
<updated>2022-07-26T11:27:47Z</updated>
<author>
<name>Logan Gunthorpe</name>
<email>logang@deltatee.com</email>
</author>
<published>2022-07-08T16:50:52Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=42399301203e3cddef36cde457228f9247618313'/>
<id>urn:sha1:42399301203e3cddef36cde457228f9247618313</id>
<content type='text'>
Introduce a dma_flags field in struct scatterlist. These flags will be
used by dma_[un]map_sg_p2pdma() to determine when a given SGL segment's
dma_address points to a PCI bus address. dma_unmap_sg_p2pdma() will need
to perform different cleanup when a segment is marked as a bus address.

The dma_flags field will fit in the existing padding on 64BIT systems
(assuming CONFIG_NEED_SG_DMA_LENGTH is also set).

The new bit will only be used when CONFIG_PCI_P2PDMA is set; this means
PCI P2PDMA will require CONFIG_64BIT. This should be acceptable as the
majority of P2PDMA use cases are restricted to newer root complexes and
generally require the extra address space for the memory BARs used in
the transactions.
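
As a minimal sketch, assuming the flag name and helper shape this
series goes on to add (CONFIG_PCI_P2PDMA guards elided):

  #define SG_DMA_BUS_ADDRESS	(1 &lt;&lt; 0)

  /* Mark this segment's dma_address as a PCI bus address. */
  static inline void sg_dma_mark_bus_address(struct scatterlist *sg)
  {
  	sg-&gt;dma_flags |= SG_DMA_BUS_ADDRESS;
  }

  /* True if dma_address holds a PCI bus address rather than an IOVA. */
  static inline bool sg_is_dma_bus_address(struct scatterlist *sg)
  {
  	return sg-&gt;dma_flags &amp; SG_DMA_BUS_ADDRESS;
  }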

Signed-off-by: Logan Gunthorpe &lt;logang@deltatee.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>lib/scatterlist: cleanup macros into static inline functions</title>
<updated>2021-12-22T08:21:43Z</updated>
<author>
<name>Logan Gunthorpe</name>
<email>logang@deltatee.com</email>
</author>
<published>2021-11-17T21:53:48Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=f857acfc457ea63fa5b862d77f055665d863acfe'/>
<id>urn:sha1:f857acfc457ea63fa5b862d77f055665d863acfe</id>
<content type='text'>
Convert the sg_is_chain(), sg_is_last() and sg_chain_ptr() macros
into static inline functions. There's no reason for these to be macros,
and static inline functions are generally preferred these days.

Also introduce the SG_PAGE_LINK_MASK define so the P2PDMA work, which is
adding another bit to this mask, can do so more easily.
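
The converted helpers take roughly this shape:

  #define SG_CHAIN		0x01UL
  #define SG_END		0x02UL
  #define SG_PAGE_LINK_MASK	(SG_CHAIN | SG_END)

  static inline bool sg_is_chain(struct scatterlist *sg)
  {
  	return sg-&gt;page_link &amp; SG_CHAIN;
  }

  static inline bool sg_is_last(struct scatterlist *sg)
  {
  	return sg-&gt;page_link &amp; SG_END;
  }

  static inline struct scatterlist *sg_chain_ptr(struct scatterlist *sg)
  {
  	return (struct scatterlist *)(sg-&gt;page_link &amp; ~SG_PAGE_LINK_MASK);
  }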

Suggested-by: Jason Gunthorpe &lt;jgg@nvidia.com&gt;
Signed-off-by: Logan Gunthorpe &lt;logang@deltatee.com&gt;
Reviewed-by: Chaitanya Kulkarni &lt;kch@nvidia.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>lib/scatterlist: Fix wrong update of orig_nents</title>
<updated>2021-08-24T22:52:40Z</updated>
<author>
<name>Maor Gottlieb</name>
<email>maorg@nvidia.com</email>
</author>
<published>2021-08-24T14:25:30Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=3e302dbc6774a27edaea39a1d5107f0c12e35cf2'/>
<id>urn:sha1:3e302dbc6774a27edaea39a1d5107f0c12e35cf2</id>
<content type='text'>
orig_nents should represent the number of entries with pages,
but __sg_alloc_table_from_pages sets orig_nents to the total number
of entries in the table. This is wrong when the API is used for
dynamic allocation, where not all the table entries are mapped with
pages. It wasn't observed until now, since RDMA umem, which uses this
API in the dynamic form, doesn't use orig_nents implicitly or
explicitly through the scatterlist APIs.

Fix it by changing the append API to track the SG append table
state and by providing an API to free the append table according to
the total number of entries in the table.
Now all APIs set orig_nents to the number of entries with pages.
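
A sketch of the resulting API shape, assuming the struct and function
names this series introduces:

  struct sg_append_table {
  	struct sg_table sgt;		/* the underlying table */
  	struct scatterlist *prv;	/* last populated SGE in the table */
  	unsigned int total_nents;	/* total entries in the table */
  };

  int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
  			struct page **pages, unsigned int n_pages,
  			unsigned int offset, unsigned long size,
  			unsigned int max_segment, unsigned int left_pages,
  			gfp_t gfp_mask);

  /* Frees based on total_nents rather than sgt.orig_nents. */
  void sg_free_append_table(struct sg_append_table *sgt_append);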

Fixes: 07da1223ec93 ("lib/scatterlist: Add support in dynamic allocation of SG table from pages")
Link: https://lore.kernel.org/r/20210824142531.3877007-3-maorg@nvidia.com
Signed-off-by: Maor Gottlieb &lt;maorg@nvidia.com&gt;
Signed-off-by: Leon Romanovsky &lt;leonro@nvidia.com&gt;
Signed-off-by: Jason Gunthorpe &lt;jgg@nvidia.com&gt;
</content>
</entry>
<entry>
<title>lib/scatterlist: Provide a dedicated function to support table append</title>
<updated>2021-08-24T18:21:14Z</updated>
<author>
<name>Maor Gottlieb</name>
<email>maorg@nvidia.com</email>
</author>
<published>2021-08-24T14:25:29Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=90e7a6de62781c27d6a111fccfb19b807f9b6887'/>
<id>urn:sha1:90e7a6de62781c27d6a111fccfb19b807f9b6887</id>
<content type='text'>
RDMA is the only in-kernel user that uses __sg_alloc_table_from_pages
to append pages dynamically. In the next patch, that mode will be
extended and that function will get more parameters, so separate it
into a dedicated function to make the change clearer.

Link: https://lore.kernel.org/r/20210824142531.3877007-2-maorg@nvidia.com
Signed-off-by: Maor Gottlieb &lt;maorg@nvidia.com&gt;
Signed-off-by: Leon Romanovsky &lt;leonro@nvidia.com&gt;
Signed-off-by: Jason Gunthorpe &lt;jgg@nvidia.com&gt;
</content>
</entry>
<entry>
<title>lib: fix spelling mistakes in header files</title>
<updated>2021-07-08T18:48:20Z</updated>
<author>
<name>Zhen Lei</name>
<email>thunder.leizhen@huawei.com</email>
</author>
<published>2021-07-08T01:07:34Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=c23c80822fbdf69c1aacbca50b8339972697f850'/>
<id>urn:sha1:c23c80822fbdf69c1aacbca50b8339972697f850</id>
<content type='text'>
Fix some spelling mistakes in comments found by "codespell":
Hoever ==&gt; However
poiter ==&gt; pointer
representaion ==&gt; representation
uppon ==&gt; upon
independend ==&gt; independent
aquired ==&gt; acquired
mis-match ==&gt; mismatch
scrach ==&gt; scratch
struture ==&gt; structure
Analagous ==&gt; Analogous
interation ==&gt; iteration

And some were discovered manually by Joe Perches and Christoph Lameter:
stroed ==&gt; stored
arch independent ==&gt; an architecture independent
A example structure for ==&gt; Example structure for

Link: https://lkml.kernel.org/r/20210609150027.14805-2-thunder.leizhen@huawei.com
Signed-off-by: Zhen Lei &lt;thunder.leizhen@huawei.com&gt;
Cc: Christoph Lameter &lt;cl@gentwo.de&gt;
Cc: Masami Hiramatsu &lt;mhiramat@kernel.org&gt;
Cc: Dennis Zhou &lt;dennis@kernel.org&gt;
Cc: Tejun Heo &lt;tj@kernel.org&gt;
Cc: Joe Perches &lt;joe@perches.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>drm: Remove SCATTERLIST_MAX_SEGMENT</title>
<updated>2020-11-02T13:42:57Z</updated>
<author>
<name>Jason Gunthorpe</name>
<email>jgg@nvidia.com</email>
</author>
<published>2020-10-28T19:15:26Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=7a60c2dd0f575ab14a457e99582af0ca1e072a74'/>
<id>urn:sha1:7a60c2dd0f575ab14a457e99582af0ca1e072a74</id>
<content type='text'>
Since commit 9a40401cfa13 ("lib/scatterlist: Do not limit max_segment to
PAGE_ALIGNED values") the max_segment input to sg_alloc_table_from_pages()
does not have to be any special value. The new algorithm will never
create a segment larger than the limit the user provides. Thus
eliminate this confusing constant.

- vmwgfx should use the HW capability, not mix in the OS page size, when
  calling dma_set_max_seg_size()

- i915 uses i915_sg_segment_size() both for sg_alloc_table_from_pages
  and for some open-coded sgl construction. This doesn't change the value,
  since rounddown(UINT_MAX, PAGE_SIZE) == SCATTERLIST_MAX_SEGMENT

- drm_prime_pages_to_sg uses it as a default if max_segment is zero;
  UINT_MAX is fine to use directly (see the sketch after this list).
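
For the drm_prime_pages_to_sg case, the change amounts to something
like the following sketch (not the verbatim diff):

  if (max_segment == 0)
  	max_segment = UINT_MAX;	/* was SCATTERLIST_MAX_SEGMENT */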

Cc: Gerd Hoffmann &lt;kraxel@redhat.com&gt;
Cc: Daniel Vetter &lt;daniel.vetter@ffwll.ch&gt;
Cc: Thomas Hellstrom &lt;thellstrom@vmware.com&gt;
Cc: Qian Cai &lt;cai@lca.pw&gt;
Cc: "Ursulin, Tvrtko" &lt;tvrtko.ursulin@intel.com&gt;
Suggested-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Jason Gunthorpe &lt;jgg@nvidia.com&gt;
Signed-off-by: Daniel Vetter &lt;daniel.vetter@ffwll.ch&gt;
Link: https://patchwork.freedesktop.org/patch/msgid/0-v1-44733fccd781+13d-rm_scatterlist_max_jgg@nvidia.com
</content>
</entry>
<entry>
<title>lib/scatterlist: Add support in dynamic allocation of SG table from pages</title>
<updated>2020-10-05T23:45:45Z</updated>
<author>
<name>Maor Gottlieb</name>
<email>maorg@nvidia.com</email>
</author>
<published>2020-10-04T15:43:37Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=07da1223ec939982497db3caccd6215b55acc35c'/>
<id>urn:sha1:07da1223ec939982497db3caccd6215b55acc35c</id>
<content type='text'>
Extend __sg_alloc_table_from_pages to support dynamic allocation of an
SG table from pages. It should be used by drivers that can't supply
all the pages at one time.

This function returns the last populated SGE in the table. Users
should pass it back as an argument on the second and subsequent calls.
As before, nents will be equal to the number of populated SGEs
(chunks).

With this new extension, drivers can benefit from the optimization of
merging contiguous pages without needing to allocate all the pages in
advance and hold them in a large buffer.

E.g. the InfiniBand driver allocates a single page to hold the page
pointers; for a 1TB memory registration, the temporary buffer then
consumes only 4KB instead of 2GB.
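
A sketch of the calling pattern described above, assuming the extended
__sg_alloc_table_from_pages() signature from this patch (variable
names hypothetical; IS_ERR() error handling elided):

  struct scatterlist *last;

  /* First call: no previous SGE yet, left_pages pages still to come. */
  last = __sg_alloc_table_from_pages(sgt, pages, n_pages, 0, size,
  				     max_segment, NULL, left_pages,
  				     GFP_KERNEL);

  /* Later calls: feed the returned SGE back via the prv argument. */
  last = __sg_alloc_table_from_pages(sgt, more_pages, n_more, 0,
  				     more_size, max_segment, last,
  				     remaining, GFP_KERNEL);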

Link: https://lore.kernel.org/r/20201004154340.1080481-2-leon@kernel.org
Signed-off-by: Maor Gottlieb &lt;maorg@nvidia.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Leon Romanovsky &lt;leonro@nvidia.com&gt;
Signed-off-by: Jason Gunthorpe &lt;jgg@nvidia.com&gt;
</content>
</entry>
<entry>
<title>scatterlist: protect parameters of the sg_table related macros</title>
<updated>2020-07-06T14:07:25Z</updated>
<author>
<name>Marek Szyprowski</name>
<email>m.szyprowski@samsung.com</email>
</author>
<published>2020-06-30T08:16:02Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=68d237056e007c88031d80900cdba0945121a287'/>
<id>urn:sha1:68d237056e007c88031d80900cdba0945121a287</id>
<content type='text'>
Add brackets to protect parameters of the recently added sg_table related
macros from side-effects.
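
To illustrate the hazard being fixed (hypothetical caller, not the
verbatim diff):

  /* Unprotected parameter: */
  #define for_each_sgtable_sg(sgt, sg, i) \
  	for_each_sg(sgt-&gt;sgl, sg, sgt-&gt;orig_nents, i)

  /* A call like for_each_sgtable_sg(&amp;obj-&gt;table, sg, i) expands
   * sgt-&gt;sgl into &amp;obj-&gt;table-&gt;sgl, which binds as &amp;(obj-&gt;table-&gt;sgl)
   * and fails to compile; writing (sgt)-&gt;sgl expands into
   * (&amp;obj-&gt;table)-&gt;sgl, as intended. */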

Fixes: 709d6d73c756 ("scatterlist: add generic wrappers for iterating over sgtable objects")
Signed-off-by: Marek Szyprowski &lt;m.szyprowski@samsung.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>scatterlist: add generic wrappers for iterating over sgtable objects</title>
<updated>2020-05-13T13:48:17Z</updated>
<author>
<name>Marek Szyprowski</name>
<email>m.szyprowski@samsung.com</email>
</author>
<published>2020-05-13T13:32:09Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=709d6d73c756107fb8a292a9f957d630097425fa'/>
<id>urn:sha1:709d6d73c756107fb8a292a9f957d630097425fa</id>
<content type='text'>
struct sg_table is a common structure used for describing a memory
buffer. It consists of a scatterlist with memory pages and DMA addresses
(sgl entry), as well as the number of scatterlist entries: CPU pages
(orig_nents entry) and DMA mapped pages (nents entry).

It turned out that it was a common mistake to misuse the nents and
orig_nents entries, calling the scatterlist iterating functions with
the wrong number of entries.

To avoid such issues, let's introduce common wrappers operating
directly on struct sg_table objects, which take care of the proper use
of the nents and orig_nents entries.

While touching this, let's clarify some ambiguities in the comments
for the existing for_each helpers.
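
Two of the wrappers show the split, roughly as added by this patch:

  /* CPU-side iteration walks all orig_nents entries... */
  #define for_each_sgtable_sg(sgt, sg, i)	\
  	for_each_sg((sgt)-&gt;sgl, sg, (sgt)-&gt;orig_nents, i)

  /* ...while DMA-side iteration walks the nents set up by dma_map_sg(). */
  #define for_each_sgtable_dma_sg(sgt, sg, i)	\
  	for_each_sg((sgt)-&gt;sgl, sg, (sgt)-&gt;nents, i)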

Signed-off-by: Marek Szyprowski &lt;m.szyprowski@samsung.com&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>scsi: lib/sg_pool.c: improve APIs for allocating sg pool</title>
<updated>2019-06-20T19:21:33Z</updated>
<author>
<name>Ming Lei</name>
<email>ming.lei@redhat.com</email>
</author>
<published>2019-04-28T07:39:30Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=4635873c561ac57b66adfcc2487c38106b1c916c'/>
<id>urn:sha1:4635873c561ac57b66adfcc2487c38106b1c916c</id>
<content type='text'>
sg_alloc_table_chained() currently allows the caller to provide one
preallocated SGL and simply uses it if the requested number of entries
isn't bigger than the size of that SGL. This is used to inline an SGL
for an IO request.

However, the scattergather code only allows the size of the first
preallocated SGL to be SG_CHUNK_SIZE (128). This means a substantial
amount of memory (4KB) is claimed for the SGL of each IO request. If
the I/O is small, it would be prudent to allocate a smaller SGL.

Introduce an extra parameter to sg_alloc_table_chained() and
sg_free_table_chained() for specifying size of the preallocated SGL.

Both __sg_free_table() and __sg_alloc_table() assume that each SGL has the
same size except for the last one.  Change the code to allow both functions
to accept a variable size for the 1st preallocated SGL.
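
Sketched from the description above, the resulting prototypes would
look like:

  int sg_alloc_table_chained(struct sg_table *table, int nents,
  			     struct scatterlist *first_chunk,
  			     unsigned nents_first_chunk);

  void sg_free_table_chained(struct sg_table *table,
  			     unsigned nents_first_chunk);

Presumably, existing callers keep the old behaviour by passing
SG_CHUNK_SIZE as nents_first_chunk.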

[mkp: attempted to clarify commit desc]

Cc: Christoph Hellwig &lt;hch@lst.de&gt;
Cc: Bart Van Assche &lt;bvanassche@acm.org&gt;
Cc: Ewan D. Milne &lt;emilne@redhat.com&gt;
Cc: Hannes Reinecke &lt;hare@suse.com&gt;
Cc: Sagi Grimberg &lt;sagi@grimberg.me&gt;
Cc: Chuck Lever &lt;chuck.lever@oracle.com&gt;
Cc: netdev@vger.kernel.org
Cc: linux-nvme@lists.infradead.org
Suggested-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Ming Lei &lt;ming.lei@redhat.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Martin K. Petersen &lt;martin.petersen@oracle.com&gt;
</content>
</entry>
</feed>
