<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/include/linux/mlx5/driver.h, branch v4.19.54</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v4.19.54</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v4.19.54'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2019-04-17T06:38:42Z</updated>
<entry>
<title>net/mlx5e: Add a lock on tir list</title>
<updated>2019-04-17T06:38:42Z</updated>
<author>
<name>Yuval Avnery</name>
<email>yuvalav@mellanox.com</email>
</author>
<published>2019-03-11T04:18:24Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=c297e881457882b2121d2c539fd7e75a0977f8cd'/>
<id>urn:sha1:c297e881457882b2121d2c539fd7e75a0977f8cd</id>
<content type='text'>
[ Upstream commit 80a2a9026b24c6bd34b8d58256973e22270bedec ]

Refreshing tirs loops over a global list of tirs while netdevs are
adding and removing tirs from that list, so a lock is required.
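
A hedged sketch of the locking pattern (the type, lock, and helper names
here are illustrative, not the driver's actual identifiers):

#include &lt;linux/list.h&gt;
#include &lt;linux/mutex.h&gt;

/* Illustrative type; the real one lives in the mlx5e driver. */
struct mlx5e_tir {
	struct list_head list;
};

static DEFINE_MUTEX(tirs_list_lock);	/* illustrative name */

static void refresh_one_tir(struct mlx5e_tir *tir);	/* placeholder */

static void refresh_all_tirs(struct list_head *tirs_list)
{
	struct mlx5e_tir *tir;

	/* Serialize traversal against concurrent add/remove by netdevs. */
	mutex_lock(&amp;tirs_list_lock);
	list_for_each_entry(tir, tirs_list, list)
		refresh_one_tir(tir);
	mutex_unlock(&amp;tirs_list_lock);
}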

Fixes: 724b2aa15126 ("net/mlx5e: TIRs management refactoring")
Signed-off-by: Yuval Avnery &lt;yuvalav@mellanox.com&gt;
Signed-off-by: Saeed Mahameed &lt;saeedm@mellanox.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>net/mlx5: EQ, Use the right place to store/read IRQ affinity hint</title>
<updated>2019-02-12T18:47:01Z</updated>
<author>
<name>Saeed Mahameed</name>
<email>saeedm@mellanox.com</email>
</author>
<published>2018-11-19T18:52:31Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=1b63e37679cb38e2e69e09e722c57ac80d42051c'/>
<id>urn:sha1:1b63e37679cb38e2e69e09e722c57ac80d42051c</id>
<content type='text'>
[ Upstream commit 1e86ace4c140fd5a693e266c9b23409358f25381 ]

Currently the CPU affinity hint mask for completion EQs is stored and
read from the wrong place. Since reading and storing are done from the
same index, there is no actual issue with that, but the internal irq_info
for completion EQs starts at the MLX5_EQ_VEC_COMP_BASE offset in the
irq_info array. This patch changes the code to use the correct offset to
store and read the IRQ affinity hint.
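
A hedged before/after sketch (struct and field names are simplified from
the driver; treat them as illustrative):

/* Before (sketch): completion vector i indexed irq_info directly. */
cpumask_set_cpu(cpu, priv-&gt;irq_info[i].mask);

/* After (sketch): completion vectors start at MLX5_EQ_VEC_COMP_BASE in
 * the irq_info array, so store and read must both apply that offset. */
cpumask_set_cpu(cpu, priv-&gt;irq_info[MLX5_EQ_VEC_COMP_BASE + i].mask);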

Signed-off-by: Saeed Mahameed &lt;saeedm@mellanox.com&gt;
Reviewed-by: Leon Romanovsky &lt;leonro@mellanox.com&gt;
Reviewed-by: Tariq Toukan &lt;tariqt@mellanox.com&gt;
Signed-off-by: Leon Romanovsky &lt;leonro@mellanox.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>net/mlx5: WQ, fixes for fragmented WQ buffers API</title>
<updated>2018-10-11T01:26:16Z</updated>
<author>
<name>Tariq Toukan</name>
<email>tariqt@mellanox.com</email>
</author>
<published>2018-08-21T11:41:41Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=37fdffb217a45609edccbb8b407d031143f551c0'/>
<id>urn:sha1:37fdffb217a45609edccbb8b407d031143f551c0</id>
<content type='text'>
The mlx5e netdevice used to calculate fragment edges by a call to
mlx5_wq_cyc_get_frag_size(). This calculation did not give the correct
indication for queues smaller than PAGE_SIZE (broken by default on
PowerPC, where PAGE_SIZE == 64KB). Here it is replaced by the correct
new calls/API.

Since (TX/RX) Work Queue buffers are fragmented, we introduce changes to
the core driver API: a function that takes a stride index and returns
the index of the last stride on the same fragment, and an additional
wrapping function that returns the number of physically contiguous
strides that can be written to the work queue, as sketched below.
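
A hedged sketch of the two helpers (the mlx5_frag_buf_ctrl field names
strides_offset, frag_sz_m1, and sz_m1 follow the driver's
fragment-buffer control struct; exact details are illustrative):

/* Index of the last stride on the same fragment as stride ix (sketch). */
static inline u32 last_contig_stride_idx(struct mlx5_frag_buf_ctrl *fbc,
					 u32 ix)
{
	u32 last = (ix + fbc-&gt;strides_offset) | fbc-&gt;frag_sz_m1;

	return min_t(u32, last - fbc-&gt;strides_offset, fbc-&gt;sz_m1);
}

/* Number of physically contiguous strides writable from ix (sketch). */
static inline u32 contig_strides(struct mlx5_frag_buf_ctrl *fbc, u32 ix)
{
	return last_contig_stride_idx(fbc, ix) - ix + 1;
}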

This obsoletes the following API functions, and their buggy
usage in EN driver:
* mlx5_wq_cyc_get_frag_size()
* mlx5_wq_cyc_ctr2fragix()

The new API improves modularity and hides the details of this
calculation from the mlx5e netdevice and mlx5_ib RDMA drivers.

The new calculation is also more efficient, and improves performance
as follows:

Packet rate test: pktgen, UDP / IPv4, 64byte, single ring, 8K ring size.

Before: 16,477,619 pps
After:  17,085,793 pps

3.7% improvement

Fixes: 3a2f70331226 ("net/mlx5: Use order-0 allocations for all WQ types")
Signed-off-by: Tariq Toukan &lt;tariqt@mellanox.com&gt;
Reviewed-by: Eran Ben Elisha &lt;eranbe@mellanox.com&gt;
Signed-off-by: Saeed Mahameed &lt;saeedm@mellanox.com&gt;
</content>
</entry>
<entry>
<title>net/mlx5: Use u16 for Work Queue buffer strides offset</title>
<updated>2018-09-06T00:08:33Z</updated>
<author>
<name>Tariq Toukan</name>
<email>tariqt@mellanox.com</email>
</author>
<published>2018-08-21T13:07:58Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=a09036221092989b88c55d24d1f12ceb1d7d361f'/>
<id>urn:sha1:a09036221092989b88c55d24d1f12ceb1d7d361f</id>
<content type='text'>
The minimal stride size is 16 bytes.
Hence, the number of strides in a fragment (of PAGE_SIZE)
is &lt;= PAGE_SIZE / 16 &lt;= 4K (even with 64KB pages, 64K / 16 = 4K).

A u16 is sufficient to represent this.

Fixes: d7037ad73daa ("net/mlx5: Fix QP fragmented buffer allocation")
Signed-off-by: Tariq Toukan &lt;tariqt@mellanox.com&gt;
Reviewed-by: Eran Ben Elisha &lt;eranbe@mellanox.com&gt;
Signed-off-by: Saeed Mahameed &lt;saeedm@mellanox.com&gt;
</content>
</entry>
<entry>
<title>net/mlx5: Use u16 for Work Queue buffer fragment size</title>
<updated>2018-09-06T00:08:33Z</updated>
<author>
<name>Tariq Toukan</name>
<email>tariqt@mellanox.com</email>
</author>
<published>2018-08-21T13:04:41Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=8d71e818506718e8d7032ce824b5c74a17d4f7a5'/>
<id>urn:sha1:8d71e818506718e8d7032ce824b5c74a17d4f7a5</id>
<content type='text'>
The minimal stride size is 16 bytes.
Hence, the number of strides in a fragment (of PAGE_SIZE)
is &lt;= PAGE_SIZE / 16 &lt;= 4K (even with 64KB pages, 64K / 16 = 4K).

A u16 is sufficient to represent this.

Fixes: 388ca8be0037 ("IB/mlx5: Implement fragmented completion queue (CQ)")
Signed-off-by: Tariq Toukan &lt;tariqt@mellanox.com&gt;
Reviewed-by: Eran Ben Elisha &lt;eranbe@mellanox.com&gt;
Signed-off-by: Saeed Mahameed &lt;saeedm@mellanox.com&gt;
</content>
</entry>
<entry>
<title>net/mlx5: Fix use-after-free in self-healing flow</title>
<updated>2018-09-06T00:08:33Z</updated>
<author>
<name>Jack Morgenstein</name>
<email>jackm@dev.mellanox.co.il</email>
</author>
<published>2018-08-05T06:19:33Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=76d5581c870454be5f1f1a106c57985902e7ea20'/>
<id>urn:sha1:76d5581c870454be5f1f1a106c57985902e7ea20</id>
<content type='text'>
When the mlx5 health mechanism detects a problem while the driver
is in the middle of init_one or remove_one, the driver needs to prevent
the health mechanism from scheduling future work; if future work
is scheduled, a use-after-free results: the system WQ tries to run the
work item (which has already been freed) at the scheduled future time.

Prevent this by disabling work item scheduling in the health mechanism
when the driver is in the middle of init_one() or remove_one().
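
A hedged sketch of the guard (the lock, flag, and field names follow the
driver's health struct but should be treated as illustrative):

/* Scheduling side: refuse new work once draining has begun. */
spin_lock_irqsave(&amp;health-&gt;wq_lock, flags);
if (!test_bit(MLX5_DROP_NEW_HEALTH_WORK, &amp;health-&gt;flags))
	queue_work(health-&gt;wq, &amp;health-&gt;work);
spin_unlock_irqrestore(&amp;health-&gt;wq_lock, flags);

/* init_one()/remove_one() side: raise the flag, then flush any work
 * that is already queued. */
spin_lock_irqsave(&amp;health-&gt;wq_lock, flags);
set_bit(MLX5_DROP_NEW_HEALTH_WORK, &amp;health-&gt;flags);
spin_unlock_irqrestore(&amp;health-&gt;wq_lock, flags);
cancel_work_sync(&amp;health-&gt;work);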

Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Signed-off-by: Jack Morgenstein &lt;jackm@dev.mellanox.co.il&gt;
Reviewed-by: Feras Daoud &lt;ferasda@mellanox.com&gt;
Signed-off-by: Saeed Mahameed &lt;saeedm@mellanox.com&gt;
</content>
</entry>
<entry>
<title>Merge branch 'linus/master' into rdma.git for-next</title>
<updated>2018-08-16T20:21:29Z</updated>
<author>
<name>Jason Gunthorpe</name>
<email>jgg@mellanox.com</email>
</author>
<published>2018-08-16T20:13:03Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=0a3173a5f09bc58a3638ecfd0a80bdbae55e123c'/>
<id>urn:sha1:0a3173a5f09bc58a3638ecfd0a80bdbae55e123c</id>
<content type='text'>
rdma.git merge resolution for the 4.19 merge window

Conflicts:
 drivers/infiniband/core/rdma_core.c
   - Use the rdma code and revise with the new spelling for
     atomic_fetch_add_unless
 drivers/nvme/host/rdma.c
   - Replace max_sge with max_send_sge in new blk code
 drivers/nvme/target/rdma.c
   - Use the blk code and revise to use NULL for ib_post_recv when
     appropriate
   - Replace max_sge with max_recv_sge in new blk code
 net/rds/ib_send.c
   - Use the net code and revise to use NULL for ib_post_recv when
     appropriate

Signed-off-by: Jason Gunthorpe &lt;jgg@mellanox.com&gt;
</content>
</entry>
<entry>
<title>Merge tag 'v4.18' into rdma.git for-next</title>
<updated>2018-08-16T19:12:00Z</updated>
<author>
<name>Jason Gunthorpe</name>
<email>jgg@mellanox.com</email>
</author>
<published>2018-08-16T19:08:18Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=89982f7ccee2fcd8fea7936b81eec6defbf0f131'/>
<id>urn:sha1:89982f7ccee2fcd8fea7936b81eec6defbf0f131</id>
<content type='text'>
Resolve merge conflicts from the -rc cycle against the rdma.git tree:

Conflicts:
 drivers/infiniband/core/uverbs_cmd.c
  - New ifs added to ib_uverbs_ex_create_flow in -rc and for-next
  - Merge removal of file-&gt;ucontext in for-next with new code in -rc
 drivers/infiniband/core/uverbs_main.c
  - for-next removed code from ib_uverbs_write() that was modified
    in for-rc

Signed-off-by: Jason Gunthorpe &lt;jgg@mellanox.com&gt;
</content>
</entry>
<entry>
<title>RDMA/netdev: Use priv_destructor for netdev cleanup</title>
<updated>2018-08-03T02:27:43Z</updated>
<author>
<name>Jason Gunthorpe</name>
<email>jgg@mellanox.com</email>
</author>
<published>2018-07-29T08:34:56Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=9f49a5b5c21d58aa84e16cfdc5e99e49faefcb7a'/>
<id>urn:sha1:9f49a5b5c21d58aa84e16cfdc5e99e49faefcb7a</id>
<content type='text'>
Now that the unregister_netdev flow for IPoIB no longer relies on
external code, we can introduce the use of priv_destructor and
needs_free_netdev.

The rdma_netdev flow is switched to use the netdev common priv_destructor
instead of the special free_rdma_netdev, and the IPoIB ULP is adjusted:
 - priv_destructor needs to switch to point to the ULP's destructor
   which will then call the rdma_ndev's in the right order
 - We need to be careful around the error unwind of register_netdev
   as it sometimes calls priv_destructor on failure
 - ULPs need to use ndo_init/uninit to ensure proper ordering
   of failures around register_netdev

Switching to priv_destructor is a necessary prerequisite to using
the rtnl new_link mechanism.
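
A hedged sketch of the wiring (the ULP callback and saved-destructor
names are illustrative; priv_destructor and needs_free_netdev are the
real struct net_device fields):

/* Saved rdma_netdev destructor, captured at setup time (illustrative). */
static void (*saved_rdma_destructor)(struct net_device *ndev);

static void ipoib_ulp_destructor(struct net_device *ndev)
{
	ipoib_ulp_cleanup(ndev);	/* placeholder for ULP teardown */
	/* Chain to the rdma_ndev's destructor in the right order. */
	if (saved_rdma_destructor)
		saved_rdma_destructor(ndev);
}

/* At netdev setup time: */
ndev-&gt;priv_destructor = ipoib_ulp_destructor;
ndev-&gt;needs_free_netdev = true;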

The VNIC user for rdma_netdev should also be revised, but that is left for
another patch.

Signed-off-by: Jason Gunthorpe &lt;jgg@mellanox.com&gt;
Signed-off-by: Denis Drozdov &lt;denisd@mellanox.com&gt;
Signed-off-by: Leon Romanovsky &lt;leonro@mellanox.com&gt;
</content>
</entry>
<entry>
<title>net/mlx5e: Vxlan, move vxlan logic to core driver</title>
<updated>2018-07-27T22:46:13Z</updated>
<author>
<name>Saeed Mahameed</name>
<email>saeedm@mellanox.com</email>
</author>
<published>2018-05-09T20:28:00Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=358aa5ce288aa1085f0f3ef9f315119563fa6541'/>
<id>urn:sha1:358aa5ce288aa1085f0f3ef9f315119563fa6541</id>
<content type='text'>
Move the vxlan logic and objects to the mlx5 core driver, since they
are going to be used from different mlx5 interfaces, e.g. the mlx5e PF
NIC netdev and mlx5e E-Switch representors.

Signed-off-by: Saeed Mahameed &lt;saeedm@mellanox.com&gt;
Reviewed-by: Or Gerlitz &lt;ogerlitz@mellanox.com&gt;
</content>
</entry>
</feed>
