path: root/net/rds
Age | Commit message | Author
9 days | net/rds: Trigger rds_send_ping() more than once | Gerd Rausch

Even though a peer may have already received a non-zero value for "RDS_EXTHDR_NPATHS" from a node in the past, the current peer may not. Therefore it is important to initiate another rds_send_ping() after a re-connect to any peer: It is unknown at that time if we're still talking to the same instance of RDS kernel modules on the other side. Otherwise, the peer may just operate on a single lane ("c_npaths == 0"), not knowing that more lanes are available.

However, if "c_with_sport_idx" is supported, we also need to check that the connection we accepted on lane#0 meets the proper source port modulo requirement, as we fan out: Since the exchange of "RDS_EXTHDR_NPATHS" and "RDS_EXTHDR_SPORT_IDX" is asynchronous, initially we have no choice but to accept an incoming connection (via "accept") in the first slot ("cp_index == 0") for backwards compatibility. But that very connection may have come from a different lane with "cp_index != 0", since the peer thought that we already understood and handled "c_with_sport_idx" properly, as indicated by a previous exchange before a module was reloaded.

In short: If a module gets reloaded, we recover from that, but do *not* allow a downgrade to support fewer lanes. Downgrades would require us to merge messages from separate lanes, which is rather tricky with the current RDS design. Each lane has its own sequence number space and all messages would need to be re-sequenced as we merge, all while handling "RDS_FLAG_RETRANSMITTED" and "cp_retrans" properly.

Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20260203055723.1085751-9-achender@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
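A hedged sketch of the lane#0 source-port check described above; RDS_MPATH_WORKERS is the real lane count (8), but the helper itself is illustrative, not the actual patch:

```
/* With "c_with_sport_idx", the low bits of the client's TCP source
 * port encode its cp_index, so a connection that truly belongs on
 * lane 0 must satisfy sport % RDS_MPATH_WORKERS == 0.
 */
static bool rds_tcp_sport_fits_lane0(u16 sport_host_order)
{
	return (sport_host_order % RDS_MPATH_WORKERS) == 0;
}
```

A connection accepted into slot 0 that fails this test must have come from a fanned-out lane on a peer that remembered a previous "RDS_EXTHDR_SPORT_IDX" exchange.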
9 days | net/rds: Use the first lane until RDS_EXTHDR_NPATHS arrives | Gerd Rausch

Instead of just blocking the sender until "c_npaths" is known (it gets updated upon the receipt of an MPRDS PONG message), simply use the first lane (cp_index#0).

But just using the first lane isn't enough: As soon as we enqueue messages on a different lane, we'd run the risk of out-of-order delivery of RDS messages. Earlier messages enqueued on "cp_index == 0" could be delivered later than more recent messages enqueued on "cp_index > 0", mostly because of possible head-of-line blocking issues causing the first lane to be slower.

To avoid that, we simply take a snapshot of "cp_next_tx_seq" at the time we're about to fan out to more lanes. Then we delay the transmission of messages enqueued on other lanes with "cp_index > 0" until cp_index#0 has caught up with the delivery of new messages (from "cp_send_queue") as well as in-flight messages (from "cp_retrans") that haven't been acknowledged by the receiver yet.

We also add a new counter "mprds_catchup_tx0_retries" to keep track of how many times "rds_send_xmit" had to suspend activities because it was waiting for the first lane to catch up.

Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20260203055723.1085751-8-achender@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
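A hedged sketch of that catch-up gate, using the real rds_conn_path fields named above (cp_next_tx_seq, cp_send_queue, cp_retrans); the helper and the snapshot field c_fanout_seq are illustrative assumptions:

```
/* Lanes with cp_index > 0 stay quiet until lane 0 has drained both its
 * send queue and its unacked retransmit queue past the fan-out snapshot.
 */
static bool rds_lane_may_transmit(struct rds_conn_path *cp,
				  struct rds_conn_path *cp0)
{
	if (cp->cp_index == 0)
		return true;
	return cp0->cp_next_tx_seq >= cp->cp_conn->c_fanout_seq && /* hypothetical snapshot */
	       list_empty(&cp0->cp_send_queue) &&
	       list_empty(&cp0->cp_retrans);
}
```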
9 days | net/rds: Update struct rds_statistics to use u64 instead of uint64_t | Allison Henderson

Quick clean-up to avoid checkpatch errors when adding members to this struct ("Prefer kernel type 'u64' over 'uint64_t'"). No functional changes added.

Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20260203055723.1085751-7-achender@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
9 days | net/rds: Clear reconnect pending bit | Håkon Bugge

When canceling the reconnect worker, care must be taken to reset the reconnect-pending bit. If the reconnect worker has not yet been scheduled before it is canceled, the reconnect-pending bit will stay on forever.

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20260203055723.1085751-6-achender@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
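A hedged fragment of the usual kernel pattern this implies (RDS_RECONNECT_PENDING and the cp_conn_w worker are real names; the surrounding control flow is illustrative):

```
/* If the worker was still pending when canceled, it never ran and thus
 * never cleared the bit itself; clear it here so a future reconnect can
 * be scheduled again.
 */
if (cancel_delayed_work_sync(&cp->cp_conn_w))
	clear_bit(RDS_RECONNECT_PENDING, &cp->cp_flags);
```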
9 days | net/rds: Kick-start TCP receiver after accept | Gerd Rausch

In cases where the server (the node with the higher IP address) in an RDS/TCP connection is overwhelmed, it is possible that the socket that was just accepted is chock-full of messages, up to the limit of what the socket receive buffer permits.

Subsequently, "rds_tcp_data_ready" won't be called anymore, because there is no more space to receive additional messages. Nor was it called prior to the point of calling "rds_tcp_set_callbacks", because the "sk_data_ready" pointer didn't even point to "rds_tcp_data_ready" yet.

We fix this by simply kick-starting the receive worker for all cases where the socket state is neither "TCP_CLOSE_WAIT" nor "TCP_CLOSE".

Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20260203055723.1085751-5-achender@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
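A hedged fragment of the kick-start (cp_recv_w is the real receive work item; the exact call site is illustrative):

```
/* Data may already sit in the freshly accepted socket, and
 * sk_data_ready only fires on *new* data; prod the receive worker
 * once by hand.
 */
if (sk->sk_state != TCP_CLOSE_WAIT && sk->sk_state != TCP_CLOSE)
	queue_delayed_work(cp->cp_wq, &cp->cp_recv_w, 0);
```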
9 days | net/rds: rds_tcp_conn_path_shutdown must not discard messages | Gerd Rausch

RDS/TCP differs from RDS/RDMA in that message acknowledgment is done based on TCP sequence numbers: As soon as the last byte of a message has been acknowledged by the TCP stack of a peer, rds_tcp_write_space() goes on to discard prior messages from the send queue. Which is fine, for as long as the receiver never throws any messages away.

The dequeuing of messages in RDS/TCP is done either from the "sk_data_ready" callback pointing to rds_tcp_data_ready() (the most common case), or from the receive worker pointing to rds_tcp_recv_path(), which is called for as long as the connection is "RDS_CONN_UP". However, as soon as rds_conn_path_drop() is called for whatever reason, including "DR_USER_RESET", "cp_state" transitions to "RDS_CONN_ERROR", and rds_tcp_restore_callbacks() ends up restoring the callbacks and thereby disabling message receipt. So messages already acknowledged to the sender were dropped.

Furthermore, the "->shutdown" callback was always called with an invalid parameter ("RCV_SHUTDOWN | SEND_SHUTDOWN == 3") instead of the correct pre-increment value ("SHUT_RDWR == 2"). inet_shutdown() returns "-EINVAL" in such cases, rendering this call a NOOP.

So we change rds_tcp_conn_path_shutdown() to do the proper "->shutdown(SHUT_WR)" call in order to signal EOF to the peer and make it transition to "TCP_CLOSE_WAIT" (RFC 793). This should make the peer also enter rds_tcp_conn_path_shutdown() and do the same. This allows us to dequeue all messages already received and acknowledged to the peer. We do so until we know that the receive queue no longer has data (skb_queue_empty()) and that we couldn't have any data in flight anymore, because the socket transitioned to any of the states "CLOSING", "TIME_WAIT", "CLOSE_WAIT", "LAST_ACK", or "CLOSE" (RFC 793).

However, if we do just that, we suddenly see duplicate RDS messages being delivered to the application. So what gives? Turns out that with MPRDS and its multitude of backend connections, retransmitted messages ("RDS_FLAG_RETRANSMITTED") can outrace the dequeuing of their original counterparts. And the duplicate check implemented in rds_recv_local() only discards duplicates if flag "RDS_FLAG_RETRANSMITTED" is set. Rather curious, because a duplicate is a duplicate; it shouldn't matter which copy is looked at and delivered first.

To avoid this entire situation, we simply make the sender discard messages from the send queue right from within rds_tcp_conn_path_shutdown(), just like rds_tcp_write_space() would have done, were it called in time or still called. This makes sure that we no longer have messages that we know the receiver already dequeued sitting in our send queue, and therefore avoids the entire "RDS_FLAG_RETRANSMITTED" fiasco.

Now we have gotten rid of the duplicate RDS message delivery, but we still run into cases where RDS messages are dropped. This time it is due to the delayed setting of the socket callbacks in rds_tcp_accept_one() via either rds_tcp_reset_callbacks() or rds_tcp_set_callbacks(). By the time rds_tcp_accept_one() gets there, the socket may already have transitioned into state "TCP_CLOSE_WAIT", but rds_tcp_state_change() was never called. Subsequently, "->shutdown(SHUT_WR)" did not happen either, so the peer ends up getting stuck in state "TCP_FIN_WAIT2". We fix that by checking for states "TCP_CLOSE_WAIT", "TCP_LAST_ACK", or "TCP_CLOSE" and dropping the freshly accepted socket in that case.

This problem is observable by running "rds-stress --reset" frequently on either or both sides of an RDS connection while other "rds-stress" processes are exchanging data. Those "rds-stress" processes reported out-of-sequence errors, with the expected sequence number being smaller than the one actually received (due to the dropped messages).

Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20260203055723.1085751-4-achender@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
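A hedged sketch of the shutdown/drain logic described above; kernel_sock_shutdown() and the TCPF_* state masks are real kernel APIs, while the helper packaging is illustrative:

```
/* True once nothing is queued and the socket state rules out further
 * in-flight data (RFC 793).
 */
static bool rds_tcp_rx_drained(struct sock *sk)
{
	return skb_queue_empty(&sk->sk_receive_queue) &&
	       ((1 << sk->sk_state) & (TCPF_CLOSING | TCPF_TIME_WAIT |
				       TCPF_CLOSE_WAIT | TCPF_LAST_ACK |
				       TCPF_CLOSE));
}

/* Signal EOF so the peer moves to TCP_CLOSE_WAIT and runs its own
 * shutdown path; then keep dequeuing until rds_tcp_rx_drained().
 */
kernel_sock_shutdown(sock, SHUT_WR);
```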
9 days | net/rds: Encode cp_index in TCP source port | Gerd Rausch

Upon "sendmsg", RDS/TCP selects a backend connection based on a hash calculated from the source port ("RDS_MPATH_HASH"). However, "rds_tcp_accept_one" accepts connections in the order they arrive, which is non-deterministic. Therefore the mapping of the sender's "cp->cp_index" to that of the receiver changes if the backend connections are dropped and reconnected. However, connection state that's preserved across reconnects (e.g. "cp_next_rx_seq") relies on that sender<->receiver mapping to never change.

So we make sure that client and server of the TCP connection have the exact same "cp->cp_index" across reconnects by encoding "cp->cp_index" in the lower three bits of the client's TCP source port.

A new extension "RDS_EXTHDR_SPORT_IDX" is introduced that allows the server to tell the difference between clients that do the "cp->cp_index" encoding and legacy clients that pick source ports randomly.

Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20260203055723.1085751-3-achender@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
9 days | net/rds: new extension header: rdma bytes | Shamir Rabinovitch

Introduce a new extension header type RDSV3_EXTHDR_RDMA_BYTES for an RDMA initiator to exchange rdma byte counts with its target. Currently, RDMA operations cannot precisely account how many bytes a peer just transferred via RDMA, which limits per-connection statistics and future policy (e.g., monitoring or rate/cgroup accounting of RDMA traffic).

In this patch we expand rds_message_add_extension to accept multiple extensions, add a new flag to the RDS header, RDS_FLAG_EXTHDR_EXTENSION, along with a new extension to the RDS header: rds_ext_header_rdma_bytes.

Signed-off-by: Shamir Rabinovitch <shamir.rabinovitch@oracle.com>
Signed-off-by: Guangyu Sun <guangyu.sun@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20260203055723.1085751-2-achender@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2026-01-23 | net/rds: rds_tcp_accept_one ought to not discard messages | Gerd Rausch

RDS/TCP differs from RDS/RDMA in that message acknowledgment is done based on TCP sequence numbers: As soon as the last byte of a message has been acknowledged by the TCP stack of a peer, "rds_tcp_write_space()" goes on to discard prior messages from the send queue. Which is fine, for as long as the receiver never throws any messages away.

Unfortunately, that is *not* the case since the introduction of MPRDS: commit "RDS: TCP: Enable multipath RDS for TCP". A new function "rds_tcp_accept_one_path" was introduced, which is allowed to return "NULL" if no connection path is currently available. Unfortunately, this happens after the "->accept()" call, and the new socket often already contains messages, since the peer already transitioned to "RDS_CONN_UP" on behalf of "TCP_ESTABLISHED".

That's also the case after this [1]: commit 1a0e100fb2c96 "RDS: TCP: Force every connection to be initiated by numerically smaller IP address", which tried to address the situation of pending data by only transitioning connections from a smaller IP address to "RDS_CONN_UP". But even in those cases, and in particular if the "RDS_EXTHDR_NPATHS" handshake has not occurred yet, and therefore we're working with "c_npaths <= 1", "c_conn[0]" may be in a state distinct from "RDS_CONN_DOWN", and therefore all messages on the just-accepted socket will be tossed away.

This fix changes "rds_tcp_accept_one":

* If connected from a peer with a larger IP address, the new socket will continue to get closed right away. With commit [1] above, there should not be any messages in the socket receive buffer, since the peer never transitioned to "RDS_CONN_UP". Therefore it should be okay to not make any efforts to dispatch the socket receive buffer.

* If connected from a peer with a smaller IP address, we call "rds_tcp_accept_one_path" to find a free slot/"path". If found, business goes on as usual. If none was found, we save/stash the newly accepted socket into "rds_tcp_accepted_sock", in order to not lose any messages that may have arrived already. We then return from "rds_tcp_accept_one" with "-ENOBUFS".

Later on, when a slot/"path" does become available again (e.g. state transitioned to "RDS_CONN_DOWN", or an HS extension header was received with "c_npaths > 1"), we call "rds_tcp_conn_slots_available", which simply re-issues a "rds_tcp_accept_one_path" worker callback and picks up the new socket from "rds_tcp_accepted_sock", thereby continuing where it left off with "-ENOBUFS" last time. Since a new slot has become available, those messages won't be lost, since processing proceeds as if that slot had been available the first time around.

Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Jack Vogel <jack.vogel@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20260122055213.83608-3-achender@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
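A hedged sketch of the stash-and-resume flow: rds_tcp_accepted_sock and the -ENOBUFS return come from the commit text, while the surrounding control flow is illustrative:

```
cp = rds_tcp_accept_one_path(conn);
if (!cp) {
	/* no free slot: park the socket instead of closing it, so the
	 * messages it already holds are not thrown away
	 */
	rtn->rds_tcp_accepted_sock = new_sock;
	return -ENOBUFS;	/* rds_tcp_conn_slots_available() retries */
}
```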
2026-01-23 | net/rds: No shortcut out of RDS_CONN_ERROR | Gerd Rausch

RDS connections carry a state ("rds_conn_path::cp_state"), and transitions from one state to another are conditional upon an expected state ("rds_conn_path_transition"). There is one exception to this conditionality, which is "RDS_CONN_ERROR": it can be enforced by "rds_conn_path_drop" regardless of what state the connection is currently in. But as soon as a connection enters state "RDS_CONN_ERROR", the connection handling code expects it to go through the shutdown path.

The RDS/TCP multipath changes added a shortcut out of "RDS_CONN_ERROR" straight back to "RDS_CONN_CONNECTING" via "rds_tcp_accept_one_path" (e.g. after "rds_tcp_state_change"). A subsequent "rds_tcp_reset_callbacks" can then transition the state to "RDS_CONN_RESETTING" with a shutdown worker queued. That'll trip up "rds_conn_init_shutdown", which was never adjusted to handle "RDS_CONN_RESETTING" and subsequently drops the connection with the dreaded "DR_INV_CONN_STATE", which leaves "RDS_SHUTDOWN_WORK_QUEUED" on forever.

So we do two things here:

a) Don't shortcut "RDS_CONN_ERROR", but take the longer path through the shutdown code.

b) Add "RDS_CONN_RESETTING" to the expected states in "rds_conn_init_shutdown" so that we won't error out and get stuck if we ever hit weird state transitions like this again.

Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20260122055213.83608-2-achender@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
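For reference, the conditional transition is a compare-and-swap on cp_state; a hedged, simplified rendering of the real helper:

```
/* Move cp_state from curr to new only if it is actually in curr;
 * returns non-zero on success.  RDS_CONN_ERROR is the one state that
 * rds_conn_path_drop() sets unconditionally.
 */
static int rds_conn_path_transition(struct rds_conn_path *cp,
				    int curr, int new)
{
	return atomic_cmpxchg(&cp->cp_state, curr, new) == curr;
}
```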
2026-01-13 | net/rds: Give each connection path its own workqueue | Allison Henderson

RDS was written to require ordered workqueues for "cp->cp_wq": Work is executed in the order scheduled, one item at a time. If these workqueues are shared across connections, then work executed on behalf of one connection blocks work scheduled for a different and unrelated connection.

Luckily we don't need to share these workqueues. While it obviously makes sense to limit the number of workers (processes) that ought to be allocated on a system, a workqueue that doesn't have a rescue worker attached has a tiny footprint compared to the connection as a whole: A workqueue costs ~900 bytes, including the workqueue_struct, pool_workqueue, workqueue_attrs, wq_node_nr_active and the node_nr_active flex array. Each connection can have up to 8 (RDS_MPATH_WORKERS) paths for a worst case of ~7 KBytes per connection, while an RDS/IB connection totals ~5 MBytes. So we're getting a significant performance gain (90% of connections fail over under 3 seconds vs. 40%) for a less than 0.02% overhead.

RDS doesn't even benefit from the additional rescue workers: of all the reasons that RDS blocks workers, allocation under memory pressure is the least of our concerns. And even if RDS was stalling due to the memory-reclaim process, the work executed by the rescue workers is highly unlikely to free up any memory. If anything, they might try to allocate even more.

By giving each connection path its own workqueue, we allow RDS to better utilize the unbound workers that the system has available.

Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20260109224843.128076-3-achender@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
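A hedged fragment of the per-path allocation (cp_wq is the real field; the name format is illustrative, and WQ_UNBOUND with max_active of 1 preserves the ordering requirement described above):

```
/* small unbound, ordered queue; no WQ_MEM_RECLAIM, hence no rescuer
 * thread and only the ~900-byte footprint cited above
 */
cp->cp_wq = alloc_workqueue("rds_cp_wq#%d", WQ_UNBOUND, 1, cp->cp_index);
if (!cp->cp_wq)
	return -ENOMEM;
```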
2026-01-13 | net/rds: Add per cp work queue | Allison Henderson

This patch adds a per-connection workqueue which can be initialized and used independently of the globally shared rds_wq.

This patch is the first in a series that aims to address TCP ack timeouts during the TCP socket shutdown sequence. This initial refactoring lays the groundwork needed to alleviate queue congestion during heavy reads and writes. The independently managed queues will allow shutdowns and reconnects to respond more quickly before the peer(s) time out waiting for the proper acks.

Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20260109224843.128076-2-achender@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-11-04 | net: Convert proto_ops connect() callbacks to use sockaddr_unsized | Kees Cook

Update all struct proto_ops connect() callback function prototypes from "struct sockaddr *" to "struct sockaddr_unsized *" to avoid lying to the compiler about object sizes. Calls into struct proto handlers gain casts that will be removed in the struct proto conversion patch. No binary changes expected.

Signed-off-by: Kees Cook <kees@kernel.org>
Link: https://patch.msgid.link/20251104002617.2752303-3-kees@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-04 | net: Convert proto_ops bind() callbacks to use sockaddr_unsized | Kees Cook

Update all struct proto_ops bind() callback function prototypes from "struct sockaddr *" to "struct sockaddr_unsized *" to avoid lying to the compiler about object sizes. Calls into struct proto handlers gain casts that will be removed in the struct proto conversion patch. No binary changes expected.

Signed-off-by: Kees Cook <kees@kernel.org>
Link: https://patch.msgid.link/20251104002617.2752303-2-kees@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-09-22 | net: WQ_PERCPU added to alloc_workqueue users | Marco Crivellari

Currently, if a user enqueues a work item using schedule_delayed_work(), the wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to schedule_work(), which uses system_wq, and queue_work(), which again makes use of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt in via WQ_UNBOUND. This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they're needed and reducing noise when CPUs are isolated.

This change adds a new WQ_PERCPU flag to the network subsystem, to explicitly request the use of the per-CPU behavior. Both flags coexist for one release cycle to allow callers to transition their calls. Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn't explicitly specify WQ_UNBOUND must now use WQ_PERCPU. All existing users have been updated accordingly.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Link: https://patch.msgid.link/20250918142427.309519-4-marco.crivellari@suse.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
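A hedged before/after fragment of the conversion (queue name illustrative):

```
/* before: per-CPU behavior was the unstated default */
wq = alloc_workqueue("krdsd", 0, 0);

/* after: per-CPU behavior is requested explicitly */
wq = alloc_workqueue("krdsd", WQ_PERCPU, 0);
```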
2025-09-18 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski

Cross-merge networking fixes after downstream PR (net-6.17-rc7).

No conflicts.

Adjacent changes:

drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
  9536fbe10c9d ("net/mlx5e: Add PSP steering in local NIC RX")
  7601a0a46216 ("net/mlx5e: Add a miss level for ipsec crypto offload")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-09-15 | rds: ib: Increment i_fastreg_wrs before bailing out | Håkon Bugge

We need to increment i_fastreg_wrs before we bail out from rds_ib_post_reg_frmr(). We have a fixed budget of how many FRWR operations can be outstanding using the dedicated QP used for memory registrations and de-registrations. This budget is enforced by the atomic_t i_fastreg_wrs. If we bail out early in rds_ib_post_reg_frmr(), we will "leak" the possibility of posting an FRWR operation, and if that accumulates, no FRWR operation can be carried out.

Fixes: 1659185fb4d0 ("RDS: IB: Support Fastreg MR (FRMR) memory registration mode")
Fixes: 3a2886cca703 ("net/rds: Keep track of and wait for FRWR segments in use upon shutdown")
Cc: stable@vger.kernel.org
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20250911133336.451212-1-haakon.bugge@oracle.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
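A hedged sketch of the error-path fix (i_fastreg_wrs is the real atomic_t budget; the label and flow are illustrative):

```
out:
	/* give the budget back on every early exit, or the counter
	 * slowly leaks capacity until no FRWR can be posted at all
	 */
	atomic_inc(&ic->i_fastreg_wrs);
	return ret;
```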
2025-09-08 | rds: ib: Remove unused extern definition | Håkon Bugge

In the old days, RDS used FMR (Fast Memory Registration) to register IB MRs to be used by RDMA. A newer and better verbs-based registration/de-registration method called FRWR (Fast Registration Work Request) was added to RDS by commit 1659185fb4d0 ("RDS: IB: Support Fastreg MR (FRMR) memory registration mode") in 2016. Detection and enablement of FRWR was done in commit 2cb2912d6563 ("RDS: IB: add Fastreg MR (FRMR) detection support"). But said commit added an extern bool prefer_frmr, which was not used by said commit, nor by later commits. Hence, remove it.

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20250905101958.4028647-1-haakon.bugge@oracle.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-08-22 | rds: Fix endianness annotations for RDS extension headers | Ujwal Kundur

Per the RDS 3.1 spec [1], the RDS extension headers EXTHDR_NPATHS and EXTHDR_GEN_NUM are be16 and be32 values respectively, exchanged during normal operation over the wire (RDS ping/pong). This contrasts with their declarations as host-endian unsigned ints. Fix the annotations across occurrences.

Flagged by Sparse.

[1] https://oss.oracle.com/projects/rds/dist/documentation/rds-3.1-spec.html

Signed-off-by: Ujwal Kundur <ujwal.kundur@gmail.com>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20250820175550.498-5-ujwal.kundur@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
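A hedged sketch of the annotated values (struct names are illustrative; the fix changes the declared types from host-endian u16/u32 to __be16/__be32):

```
struct rds_ext_header_npaths {
	__be16	h_npaths;	/* was u16: host-endian to Sparse */
};

struct rds_ext_header_gen_num {
	__be32	h_gen_num;	/* was u32 */
};
```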
2025-08-22 | rds: Fix endianness annotation for RDS_MPATH_HASH | Ujwal Kundur

jhash_1word accepts host-endian inputs while rs_bound_port is a be16 value (sockaddr_in6.sin6_port). Use ntohs() for consistency.

Flagged by Sparse.

Signed-off-by: Ujwal Kundur <ujwal.kundur@gmail.com>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20250820175550.498-4-ujwal.kundur@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
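A hedged sketch of the macro shape after the fix (the real definition lives in net/rds/rds.h):

```
#define RDS_MPATH_HASH(rs, n) \
	(jhash_1word(ntohs((rs)->rs_bound_port), 0) % (n))
```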
2025-08-22 | rds: Fix endianness annotation of jhash wrappers | Ujwal Kundur

__ipv6_addr_jhash (a wrapper around jhash2()) and __inet_ehashfn (a wrapper around jhash_3words()) work with u32 (host-endian) values but accept big-endian inputs. Declare the local variables as big endian to avoid unnecessary casts.

Flagged by Sparse.

Signed-off-by: Ujwal Kundur <ujwal.kundur@gmail.com>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20250820175550.498-3-ujwal.kundur@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-08-22 | rds: Replace POLLERR with EPOLLERR | Ujwal Kundur

Both constants are 1<<3, but EPOLLERR uses the correct annotations.

Flagged by Sparse.

Signed-off-by: Ujwal Kundur <ujwal.kundur@gmail.com>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20250820175550.498-2-ujwal.kundur@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-08-12 | RDS: remove redundant __GFP_NOWARN | Qianfeng Rong

GFP_NOWAIT already includes __GFP_NOWARN, so let's remove the redundant __GFP_NOWARN.

Signed-off-by: Qianfeng Rong <rongqianfeng@vivo.com>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20250810072944.438574-3-rongqianfeng@vivo.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-15 | don't open-code kernel_accept() in rds_tcp_accept_one() | Al Viro

rds_tcp_accept_one() starts with a pretty much verbatim copy of kernel_accept(). Might as well use the real thing...

That code went into mainline in 2009; kernel_accept() had been added in Aug 2006. The copyright on rds/tcp_listen.c is "Copyright (c) 2006 Oracle", so it's entirely possible that it predates the introduction of kernel_accept().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Link: https://patch.msgid.link/20250713180134.GC1880847@ZenIV
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
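kernel_accept() already implements the open-coded block; a hedged fragment of the reduced call site (variable names illustrative):

```
/* accept without blocking; new_sock inherits the listening socket's
 * type and ops
 */
ret = kernel_accept(lsock, &new_sock, O_NONBLOCK);
if (ret < 0)
	return ret;
```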
2025-06-21 | rds: Correct spelling | Simon Horman

Correct spelling as flagged by codespell. With these changes in place, codespell only flags false positives in net/rds.

Signed-off-by: Simon Horman <horms@kernel.org>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20250619-rds-minor-v1-2-86d2ee3a98b9@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-21 | rds: Correct endian annotation of port and addr assignments | Simon Horman

Correct the endianness annotation of port assignments: A host byte order value (RDS_TCP_PORT) is correctly converted to network byte order (big endian) using htons. But it is then cast back to host byte order before assigning to a variable that expects a big-endian value. Address this by dropping the cast. This is not a bug because, while the endian annotation is changed by this patch, the assigned value is unchanged.

Also correct the endianness of an address assignment: A host byte order value (INADDR_ANY) is incorrectly assigned as-is to a variable that expects a big-endian value. Address this by converting the value to network byte order (big endian). This is not a bug because INADDR_ANY is 0, which is isomorphic with regard to endian conversions. IOW, while the endian annotation is changed by this patch, the assigned value is unchanged.

The incorrect endian annotations appear to date back to IPv4-only code added by commit 70041088e3b9 ("RDS: Add TCP transport to RDS").

Flagged by Sparse.

Signed-off-by: Simon Horman <horms@kernel.org>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20250619-rds-minor-v1-1-86d2ee3a98b9@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
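A hedged fragment of the two assignments after the fix (RDS_TCP_PORT is the real constant; the struct variable is illustrative):

```
sin.sin_port = htons(RDS_TCP_PORT);	 /* keep the htons() result as-is */
sin.sin_addr.s_addr = htonl(INADDR_ANY); /* annotate the 0 constant too */
```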
2025-05-26 | replace strncpy with strscpy_pad | Baris Can Goral

The strncpy() function is actively dangerous to use since it may not NUL-terminate the destination string, resulting in potential memory content exposures, unbounded reads, or crashes. Link: https://github.com/KSPP/linux/issues/90

In addition, strscpy_pad is more appropriate because it also zero-fills any remaining space in the destination if the source is shorter than the provided buffer size.

Signed-off-by: Baris Can Goral <goralbaris@gmail.com>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20250521161036.14489-1-goralbaris@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
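A hedged before/after fragment of the pattern (buffer names illustrative):

```
char name[32];

/* before: may leave name unterminated */
strncpy(name, src, sizeof(name));

/* after: always NUL-terminated, trailing space zero-filled */
strscpy_pad(name, src, sizeof(name));
```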
2025-05-15 | rds: Use nested-BH locking for rds_page_remainder | Sebastian Andrzej Siewior

rds_page_remainder is a per-CPU variable and relies on disabled BH for its locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT, this data structure requires explicit locking.

Add a local_lock_t to the data structure and use local_lock_nested_bh() for locking. This change adds only lockdep coverage and does not alter the functional behaviour for !PREEMPT_RT.

Cc: Allison Henderson <allison.henderson@oracle.com>
Cc: linux-rdma@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://patch.msgid.link/20250512092736.229935-16-bigeasy@linutronix.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
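A hedged sketch of the lock placement (rds_page_remainder and its r_page/r_offset fields are real; the lock field name is an assumption):

```
struct rds_page_remainder {
	struct page	*r_page;
	unsigned long	r_offset;
	local_lock_t	bh_lock;	/* lockdep coverage on PREEMPT_RT */
};

local_lock_nested_bh(&rds_page_remainder.bh_lock);
/* ... carve the allocation out of the per-CPU remainder ... */
local_unlock_nested_bh(&rds_page_remainder.bh_lock);
```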
2025-05-15 | rds: Acquire per-CPU pointer within BH disabled section | Sebastian Andrzej Siewior

rds_page_remainder_alloc() obtains the current CPU with get_cpu() while disabling preemption. Then the CPU number is used to access the per-CPU data structure via per_cpu().

This can be optimized by relying on local_bh_disable() to provide a stable CPU number / avoid migration, and then using this_cpu_ptr() to retrieve the data structure.

Cc: Allison Henderson <allison.henderson@oracle.com>
Cc: linux-rdma@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://patch.msgid.link/20250512092736.229935-15-bigeasy@linutronix.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
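A hedged fragment of the converted access pattern; with BH disabled the CPU is stable, so this_cpu_ptr() suffices:

```
local_bh_disable();
rem = this_cpu_ptr(&rds_page_remainder);
/* ... use rem; no get_cpu()/per_cpu() pair needed ... */
local_bh_enable();
```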
2025-05-15 | rds: Disable only bottom halves in rds_page_remainder_alloc() | Sebastian Andrzej Siewior

rds_page_remainder_alloc() is invoked from a preemptible context or a tasklet. There is no need to disable interrupts for locking. Use local_bh_disable() instead of local_irq_save() for locking.

Cc: Allison Henderson <allison.henderson@oracle.com>
Cc: linux-rdma@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://patch.msgid.link/20250512092736.229935-14-bigeasy@linutronix.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-27 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski

Cross-merge networking fixes after downstream PR (net-6.14-rc5).

Conflicts:

drivers/net/ethernet/cadence/macb_main.c
  fa52f15c745c ("net: cadence: macb: Synchronize stats calculations")
  75696dd0fd72 ("net: cadence: macb: Convert to get_stats64")
https://lore.kernel.org/20250224125848.68ee63e5@canb.auug.org.au

Adjacent changes:

drivers/net/ethernet/intel/ice/ice_sriov.c
  79990cf5e7ad ("ice: Fix deinitializing VF in error path")
  a203163274a4 ("ice: simplify VF MSI-X managing")

net/ipv4/tcp.c
  18912c520674 ("tcp: devmem: don't write truncated dmabuf CMSGs to userspace")
  297d389e9e5b ("net: prefix devmem specific helpers")

net/mptcp/subflow.c
  8668860b0ad3 ("mptcp: reset when MPTCP opts are dropped after join")
  c3349a22c200 ("mptcp: consolidate subflow cleanup")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-21 | net: better track kernel sockets lifetime | Eric Dumazet

While kernel sockets are dismantled during pernet_operations->exit(), their freeing can be delayed by any tx packets still held in qdisc or device queues, due to prior skb_set_owner_w() calls. This then triggers the following warning from ref_tracker_dir_exit() [1].

To fix this, make sure that kernel sockets own a reference on net->passive.

Add a sk_net_refcnt_upgrade() helper, used whenever a kernel socket is converted to a refcounted one.

[1]
[ 136.263918][ T35] ref_tracker: net notrefcnt@ffff8880638f01e0 has 1/2 users at
[ 136.263918][ T35]      sk_alloc+0x2b3/0x370
[ 136.263918][ T35]      inet6_create+0x6ce/0x10f0
[ 136.263918][ T35]      __sock_create+0x4c0/0xa30
[ 136.263918][ T35]      inet_ctl_sock_create+0xc2/0x250
[ 136.263918][ T35]      igmp6_net_init+0x39/0x390
[ 136.263918][ T35]      ops_init+0x31e/0x590
[ 136.263918][ T35]      setup_net+0x287/0x9e0
[ 136.263918][ T35]      copy_net_ns+0x33f/0x570
[ 136.263918][ T35]      create_new_namespaces+0x425/0x7b0
[ 136.263918][ T35]      unshare_nsproxy_namespaces+0x124/0x180
[ 136.263918][ T35]      ksys_unshare+0x57d/0xa70
[ 136.263918][ T35]      __x64_sys_unshare+0x38/0x40
[ 136.263918][ T35]      do_syscall_64+0xf3/0x230
[ 136.263918][ T35]      entry_SYSCALL_64_after_hwframe+0x77/0x7f
[ 136.263918][ T35]
[ 136.343488][ T35] ref_tracker: net notrefcnt@ffff8880638f01e0 has 1/2 users at
[ 136.343488][ T35]      sk_alloc+0x2b3/0x370
[ 136.343488][ T35]      inet6_create+0x6ce/0x10f0
[ 136.343488][ T35]      __sock_create+0x4c0/0xa30
[ 136.343488][ T35]      inet_ctl_sock_create+0xc2/0x250
[ 136.343488][ T35]      ndisc_net_init+0xa7/0x2b0
[ 136.343488][ T35]      ops_init+0x31e/0x590
[ 136.343488][ T35]      setup_net+0x287/0x9e0
[ 136.343488][ T35]      copy_net_ns+0x33f/0x570
[ 136.343488][ T35]      create_new_namespaces+0x425/0x7b0
[ 136.343488][ T35]      unshare_nsproxy_namespaces+0x124/0x180
[ 136.343488][ T35]      ksys_unshare+0x57d/0xa70
[ 136.343488][ T35]      __x64_sys_unshare+0x38/0x40
[ 136.343488][ T35]      do_syscall_64+0xf3/0x230
[ 136.343488][ T35]      entry_SYSCALL_64_after_hwframe+0x77/0x7f

Fixes: 0cafd77dcd03 ("net: add a refcount tracker for kernel sockets")
Reported-by: syzbot+30a19e01a97420719891@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/netdev/67b72aeb.050a0220.14d86d.0283.GAE@google.com/T/#u
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250220131854.4048077-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-21 | net/rds: Replace deprecated strncpy() with strscpy_pad() | Thorsten Blum

strncpy() is deprecated for NUL-terminated destination buffers. Use strscpy_pad() instead and remove the manual NUL-termination. Compile-tested only.

Link: https://github.com/KSPP/linux/issues/90

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: Kees Cook <kees@kernel.org>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Tested-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20250219224730.73093-2-thorsten.blum@linux.dev
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-01-09 | rds: sysctl: rds_tcp_{rcv,snd}buf: avoid using current->nsproxy | Matthieu Baerts (NGI0)

As mentioned in a previous commit of this series, using the 'net' structure via 'current' is not recommended for different reasons:

- Inconsistency: getting info from the reader's/writer's netns vs only from the opener's netns.

- current->nsproxy can be NULL in some cases, resulting in an 'Oops' (null-ptr-deref), e.g. when the current task is exiting, as spotted by syzbot [1] using acct(2).

The per-netns structure can be obtained from the table->data using container_of(), then the 'net' one can be retrieved from the listen socket (if available).

Fixes: c6a58ffed536 ("RDS: TCP: Add sysctl tunables for sndbuf/rcvbuf on rds-tcp socket")
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/67769ecb.050a0220.3a8527.003f.GAE@google.com [1]
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Link: https://patch.msgid.link/20250108-net-sysctl-current-nsproxy-v1-9-5df34b2083e8@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
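A hedged sketch of the container_of() retrieval (struct and field names assumed from net/rds/tcp.c; error handling elided):

```
struct rds_tcp_net *rtn = container_of(ctl->data, struct rds_tcp_net,
				       sndbuf_size);
struct net *net = rtn->rds_tcp_listen_sock ?
		  sock_net(rtn->rds_tcp_listen_sock->sk) : NULL;
```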
2024-10-03 | net/rds: remove unused struct 'rds_ib_dereg_odp_mr' | Dr. David Alan Gilbert

'rds_ib_dereg_odp_mr' has been unused since the original commit 2eafa1746f17 ("net/rds: Handle ODP mr registration/unregistration"). Remove it.

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Reviewed-by: Simon Horman <horms@kernel.org>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Link: https://patch.msgid.link/20240930134358.48647-1-linux@treblig.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-08-09 | net: rds: add option for GCOV profiling | Vegard Nossum

To better our unit tests we need code coverage to be part of the kernel. This patch borrows heavily from how CONFIG_GCOV_PROFILE_FTRACE is implemented.

Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2024-08-01 | RDS: IB: Remove unused declarations | Yue Haibing

Commit f4f943c958a2 ("RDS: IB: ack more receive completions to improve performance") removed the rds_ib_recv_tasklet_fn() implementation but not the declaration. And commit ec16227e1414 ("RDS/IB: Infiniband transport") declared but never implemented other functions.

Signed-off-by: Yue Haibing <yuehaibing@huawei.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://patch.msgid.link/20240731063630.3592046-1-yuehaibing@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-07-24 | sysctl: treewide: constify the ctl_table argument of proc_handlers | Joel Granados

const qualify the struct ctl_table argument in the proc_handler function signatures. This is a prerequisite to moving the static ctl_table structs into .rodata data, which will ensure that proc_handler function pointers cannot be modified.

This patch has been generated by the following coccinelle script:

```
virtual patch

@r1@
identifier ctl, write, buffer, lenp, ppos;
identifier func !~ "appldata_(timer|interval)_handler|sched_(rt|rr)_handler|rds_tcp_skbuf_handler|proc_sctp_do_(hmac_alg|rto_min|rto_max|udp_port|alpha_beta|auth|probe_interval)";
@@

int func(
- struct ctl_table *ctl
+ const struct ctl_table *ctl
  ,int write, void *buffer, size_t *lenp, loff_t *ppos);

@r2@
identifier func, ctl, write, buffer, lenp, ppos;
@@

int func(
- struct ctl_table *ctl
+ const struct ctl_table *ctl
  ,int write, void *buffer, size_t *lenp, loff_t *ppos)
{ ... }

@r3@
identifier func;
@@

int func(
- struct ctl_table *
+ const struct ctl_table *
  ,int , void *, size_t *, loff_t *);

@r4@
identifier func, ctl;
@@

int func(
- struct ctl_table *ctl
+ const struct ctl_table *ctl
  ,int , void *, size_t *, loff_t *);

@r5@
identifier func, write, buffer, lenp, ppos;
@@

int func(
- struct ctl_table *
+ const struct ctl_table *
  ,int write, void *buffer, size_t *lenp, loff_t *ppos);
```

* Code formatting was adjusted in xfs_sysctl.c to comply with code conventions. The xfs_stats_clear_proc_handler, xfs_panic_mask_proc_handler and xfs_deprecated_dointvec_minmax were adjusted.

* The ctl_table argument in proc_watchdog_common was const qualified. This is called from a proc_handler itself and is calling back into another proc_handler, making it necessary to change it as part of the proc_handler migration.

Co-developed-by: Thomas Weißschuh <linux@weissschuh.net>
Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Co-developed-by: Joel Granados <j.granados@samsung.com>
Signed-off-by: Joel Granados <j.granados@samsung.com>
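A hedged sketch of a converted handler signature (handler name illustrative; note the script deliberately excluded rds_tcp_skbuf_handler, which was handled separately):

```
static int example_sysctl_handler(const struct ctl_table *ctl, int write,
				  void *buffer, size_t *lenp, loff_t *ppos)
{
	/* ctl now points at read-only table data */
	return proc_dointvec(ctl, write, buffer, lenp, ppos);
}
```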
2024-06-19 | rds: Simplify the allocation of slab caches | Hongfu Li

Use the new KMEM_CACHE() macro instead of direct kmem_cache_create to simplify the creation of SLAB caches.

Signed-off-by: Hongfu Li <lihongfu@kylinos.cn>
Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
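A hedged example of the pattern (cache and struct names from net/rds; flags illustrative):

```
/* KMEM_CACHE() derives the cache name, object size and alignment
 * directly from the struct definition.
 */
rds_conn_slab = KMEM_CACHE(rds_connection, 0);
if (!rds_conn_slab)
	return -ENOMEM;
```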
2024-05-13 | net: change proto and proto_ops accept type | Jens Axboe

Rather than pass in flags, an error pointer, and whether this is a kernel invocation or not, add a struct proto_accept_arg struct as the argument. This then holds all of these arguments, and prepares accept for being able to pass back more information. No functional changes in this patch.

Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-05-03 | net: rds: Remove the now superfluous sentinel elements from ctl_table array | Joel Granados

This commit comes at the tail end of a greater effort to remove the empty elements at the end of the ctl_table arrays (sentinels), which will reduce the overall build-time size of the kernel and run-time memory bloat by ~64 bytes per sentinel (further information: https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/).

* Remove sentinel element from ctl_table structs.

Signed-off-by: Joel Granados <j.granados@samsung.com>
Acked-by: Allison Henderson <allison.henderson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2024-03-29 | net/rds: fix possible cp null dereference | Mahmoud Adam

cp might be NULL; calling cp->cp_conn would produce a NULL dereference.

[Simon Horman adds:]

Analysis:

* cp is a parameter of __rds_rdma_map and is not reassigned.

* The following call-sites pass a NULL cp argument to __rds_rdma_map():
  - rds_get_mr()
  - rds_get_mr_for_dest()

* Prior to the code above, the following assumes that cp may be NULL (which is indicative, but could itself be unnecessary):

	trans_private = rs->rs_transport->get_mr(
		sg, nents, rs, &mr->r_key,
		cp ? cp->cp_conn : NULL, args->vec.addr, args->vec.bytes,
		need_odp ? ODP_ZEROBASED : ODP_NOT_NEEDED);

* The code modified by this patch is guarded by IS_ERR(trans_private), where trans_private is assigned as per the previous point in this analysis. The only implementation of get_mr that I could locate is rds_ib_get_mr(), which can return an ERR_PTR if the conn (4th) argument is NULL.

* ret is set to PTR_ERR(trans_private). rds_ib_get_mr can return ERR_PTR(-ENODEV) if the conn (4th) argument is NULL. Thus ret may be -ENODEV, in which case the code in question will execute.

Conclusion:

* cp may be NULL at the point where this patch adds a check; this patch does seem to address a possible bug.

Fixes: c055fc00c07b ("net/rds: fix WARNING in rds_conn_connect_if_down")
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Mahmoud Adam <mngyadam@amazon.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/r/20240326153132.55580-1-mngyadam@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
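A hedged sketch of the guarded error path this fix implies (the surrounding flow is simplified; rds_conn_connect_if_down() is the real helper from the Fixes commit):

```
if (IS_ERR(trans_private)) {
	ret = PTR_ERR(trans_private);
	/* kick off a connect only when a path is actually known */
	if (ret == -ENODEV && cp)
		rds_conn_connect_if_down(cp->cp_conn);
	goto out;
}
```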
2024-03-19 | rds: introduce acquire/release ordering in acquire/release_in_xmit() | Yewon Choi

acquire/release_in_xmit() work as a bit lock in rds_send_xmit(), so they are expected to ensure acquire/release memory ordering semantics. However, test_and_set_bit/clear_bit() don't imply such semantics; on top of this, the following smp_mb__after_atomic() does not guarantee release ordering (the memory barrier actually should be placed before clear_bit()). Instead, we use clear_bit_unlock/test_and_set_bit_lock() here.

Fixes: 0f4b1c7e89e6 ("rds: fix rds_send_xmit() serialization")
Fixes: 1f9ecd7eacfd ("RDS: Pass rds_conn_path to rds_send_xmit()")
Signed-off-by: Yewon Choi <woni9911@gmail.com>
Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
Link: https://lore.kernel.org/r/ZfQUxnNTO9AJmzwc@libra05
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
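A hedged, simplified rendering of the corrected helpers (RDS_IN_XMIT and the cp_waitq wakeup mirror net/rds/send.c):

```
static int acquire_in_xmit(struct rds_conn_path *cp)
{
	/* lock variant provides acquire ordering on success */
	return test_and_set_bit_lock(RDS_IN_XMIT, &cp->cp_flags) == 0;
}

static void release_in_xmit(struct rds_conn_path *cp)
{
	clear_bit_unlock(RDS_IN_XMIT, &cp->cp_flags); /* release semantics */
	smp_mb__after_atomic();
	if (waitqueue_active(&cp->cp_waitq))
		wake_up_all(&cp->cp_waitq);
}
```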
2024-03-07 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski

Cross-merge networking fixes after downstream PR.

No conflicts.

Adjacent changes:

net/core/page_pool_user.c
  0b11b1c5c320 ("netdev: let netlink core handle -EMSGSIZE errors")
  429679dcf7d9 ("page_pool: fix netlink dump stop/resume")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-06 | net/rds: fix WARNING in rds_conn_connect_if_down | Edward Adam Davis

If the connection isn't established yet, get_mr() will fail, so trigger the connection after get_mr().

Fixes: 584a8279a44a ("RDS: RDMA: return appropriate error on rdma map failures")
Reported-and-tested-by: syzbot+d4faee732755bba9838e@syzkaller.appspotmail.com
Signed-off-by: Edward Adam Davis <eadavis@qq.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2024-02-15 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski

Cross-merge networking fixes after downstream PR.

No conflicts.

Adjacent changes:

net/core/dev.c
  9f30831390ed ("net: add rcu safety to rtnl_prop_list_size()")
  723de3ebef03 ("net: free altname using an RCU callback")

net/unix/garbage.c
  11498715f266 ("af_unix: Remove io_uring code for GC.")
  25236c91b5ab ("af_unix: Fix task hung while purging oob_skb in GC.")

drivers/net/ethernet/renesas/ravb_main.c
  ed4adc07207d ("net: ravb: Count packets instead of descriptors in GbEth RX path")
  c2da9408579d ("ravb: Add Rx checksum offload support for GbEth")

net/mptcp/protocol.c
  bdd70eb68913 ("mptcp: drop the push_pending field")
  28e5c1380506 ("mptcp: annotate lockless accesses around read-mostly fields")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-02-13 | net:rds: Fix possible deadlock in rds_message_put | Allison Henderson

The functions rds_still_queued and rds_clear_recv_queue lock a given socket in order to safely iterate over the incoming rds messages. However, calling rds_inc_put while under this lock creates a potential deadlock: rds_inc_put may eventually call rds_message_purge, which will lock m_rs_lock. This is the incorrect locking order, since m_rs_lock is meant to be locked before the socket.

To fix this, we move the message item to a local list or variable that won't need rs_recv_lock protection. Then we can safely call rds_inc_put on any item stored locally after rs_recv_lock is released.

Fixes: bdbe6fbc6a2f ("RDS: recv.c")
Reported-by: syzbot+f9db6ff27b9bfdcfeca0@syzkaller.appspotmail.com
Reported-by: syzbot+dcd73ff9291e6d34b3ab@syzkaller.appspotmail.com
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Link: https://lore.kernel.org/r/20240209022854.200292-1-allison.henderson@oracle.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
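A hedged sketch of the deferred-put pattern (rs_recv_lock and the i_item list member are real names; the local-list handling is illustrative):

```
LIST_HEAD(to_put);

write_lock_irqsave(&rs->rs_recv_lock, flags);
list_move_tail(&inc->i_item, &to_put); /* no rds_inc_put() under the lock */
write_unlock_irqrestore(&rs->rs_recv_lock, flags);

/* safe now: rds_message_purge() may take m_rs_lock */
list_for_each_entry_safe(inc, tmp, &to_put, i_item) {
	list_del_init(&inc->i_item);
	rds_inc_put(inc);
}
```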
2024-01-30 | net: rds: Simplify the allocation of slab caches in rds_conn_init | Kunwu Chan

Use the new KMEM_CACHE() macro instead of direct kmem_cache_create to simplify the creation of SLAB caches.

Signed-off-by: Kunwu Chan <chentao@kylinos.cn>
Link: https://lore.kernel.org/r/20240124075801.471330-1-chentao@kylinos.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-01-22 | net/rds: Fix UBSAN: array-index-out-of-bounds in rds_cmsg_recv | Sharath Srinivasan

Syzcaller UBSAN crash occurs in rds_cmsg_recv(), which reads inc->i_rx_lat_trace[j + 1] with index 4 (3 + 1), but with an array size of 4 (RDS_RX_MAX_TRACES). Here 'j' is assigned from rs->rs_rx_trace[i] and in turn from trace.rx_trace_pos[i] in rds_recv_track_latency(), with both arrays sized 3 (RDS_MSG_RX_DGRAM_TRACE_MAX). So fix the off-by-one bounds check in rds_recv_track_latency() to prevent a potential crash in rds_cmsg_recv().

Found by syzcaller:

=================================================================
UBSAN: array-index-out-of-bounds in net/rds/recv.c:585:39
index 4 is out of range for type 'u64 [4]'
CPU: 1 PID: 8058 Comm: syz-executor228 Not tainted 6.6.0-gd2f51b3516da #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x136/0x150 lib/dump_stack.c:106
 ubsan_epilogue lib/ubsan.c:217 [inline]
 __ubsan_handle_out_of_bounds+0xd5/0x130 lib/ubsan.c:348
 rds_cmsg_recv+0x60d/0x700 net/rds/recv.c:585
 rds_recvmsg+0x3fb/0x1610 net/rds/recv.c:716
 sock_recvmsg_nosec net/socket.c:1044 [inline]
 sock_recvmsg+0xe2/0x160 net/socket.c:1066
 __sys_recvfrom+0x1b6/0x2f0 net/socket.c:2246
 __do_sys_recvfrom net/socket.c:2264 [inline]
 __se_sys_recvfrom net/socket.c:2260 [inline]
 __x64_sys_recvfrom+0xe0/0x1b0 net/socket.c:2260
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x40/0x110 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x63/0x6b
==================================================================

Fixes: 3289025aedc0 ("RDS: add receive message trace used by application")
Reported-by: Chenyuan Yang <chenyuan0y@gmail.com>
Closes: https://lore.kernel.org/linux-rdma/CALGdzuoVdq-wtQ4Az9iottBqC5cv9ZhcE5q8N7LfYFvkRsOVcw@mail.gmail.com/
Signed-off-by: Sharath Srinivasan <sharath.srinivasan@oracle.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
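A hedged sketch of the tightened check in rds_recv_track_latency() (variable names per the commit text):

```
/* rx_trace_pos[] entries later index i_rx_lat_trace[pos + 1], so any
 * value >= RDS_MSG_RX_DGRAM_TRACE_MAX would overflow the u64[4] array.
 */
if (trace.rx_trace_pos[i] >= RDS_MSG_RX_DGRAM_TRACE_MAX)
	return -EFAULT;
```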
2023-12-11 | ipv6: annotate data-races around np->mcast_oif | Eric Dumazet

np->mcast_oif is read locklessly in some contexts. Make all accesses to this field lockless, adding appropriate annotations. This also makes setsockopt(IPV6_MULTICAST_IF) lockless.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>