diff options
| author | Linus Torvalds <torvalds@linux-foundation.org> | 2026-02-11 19:31:52 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2026-02-11 19:31:52 -0800 |
| commit | 37a93dd5c49b5fda807fd204edf2547c3493319c (patch) | |
| tree | ce1ef5a642b9ea3d7242156438eb96dc5607a752 /drivers/net/ethernet/intel | |
| parent | 098b6e44cbaa2d526d06af90c862d13fb414a0ec (diff) | |
| parent | 83310d613382f74070fc8b402f3f6c2af8439ead (diff) | |
Merge tag 'net-next-7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Paolo Abeni:
"Core & protocols:
- A significant effort all around the stack to guide the compiler to
make the right choice when inlining code, avoiding unneeded calls
for small helpers and stack-canary overhead in the fast path.
This generates better and faster code with very small or no text
size increase, as in many cases the call generated more code than
the actual inlined helper.
- Extend the AccECN implementation so that it is now functionally
complete, and allow user space to enable it on a per-network-namespace
basis.
- Add support for memory providers with large (above 4K) rx buffers.
Paired with hw-gro, larger rx buffer sizes reduce the number of
buffers traversing the stack, decreasing single stream CPU usage
by up to ~30%.
- Do not add the HBH header to Big TCP GSO packets. This simplifies the
RX path, the TX path and the NIC drivers, and is possible because
user-space taps can now correctly interpret such packets without
the HBH hint.
- Allow IPv6 routes to be configured with a gateway address that is
resolved out of a different interface than the one specified,
aligning IPv6 to IPv4 behavior.
- Multi-queue aware sch_cake. This makes it possible to scale the
rate shaper of sch_cake across multiple CPUs, while still enforcing
a single global rate on the interface.
- Add support for the nbcon (new buffer console) infrastructure to
netconsole, enabling lock-free, priority-based console operations
that are safer in crash scenarios.
- Improve the TCP IPv6 output path to cache the flow information,
saving CPU cycles, reducing cache line misses and stack use.
- Improve netfilter packet tracker to resolve clashes for most
protocols, avoiding unneeded drops on rare occasions.
- Add IP6IP6 tunneling acceleration to the flowtable infrastructure.
- Reduce tcp socket size by one cache line.
- Notify neighbour changes atomically, avoiding inconsistencies
between the notification sequence and the actual state sequence.
- Add vsock namespace support, allowing complete isolation of vsocks
across different network namespaces (see the sketch after this list).
- Improve xsk generic performance with cache-alignment-oriented
optimizations.
- Support netconsole automatic target recovery, allowing netconsole
to reestablish targets when the underlying low-level interface comes
back online.
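A quick note on the vsock namespace item above: the socket API itself is
unchanged, only the scoping is. A minimal user-space sketch using the
standard AF_VSOCK calls (the CID and port below are illustrative):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>

    int main(void)
    {
            struct sockaddr_vm addr = {
                    .svm_family = AF_VSOCK,
                    .svm_cid = VMADDR_CID_HOST, /* illustrative peer CID */
                    .svm_port = 1234,           /* illustrative port */
            };
            int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

            if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                    perror("vsock");
                    return 1;
            }
            /* With vsock namespace support, this connection is confined
             * to the network namespace of the calling process. */
            return 0;
    }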
Driver API:
- Support for switching the working mode (automatic vs manual) of a
DPLL device via netlink.
- Introduce a PHY port representation to expose multiple front-facing
media ports over a single MAC.
- Introduce "rx-polarity" and "tx-polarity" device tree properties,
to generalize polarity inversion requirements for differential
signaling.
- Add helper to create, prepare and enable managed clocks.
Device drivers:
- Add Huawei hinic3 PF Ethernet driver.
- Add DWMAC glue driver for Motorcomm YT6801 PCIe ethernet
controller.
- Add Ethernet driver for MaxLinear MxL862xx switches.
- Remove parallel-port Ethernet driver.
- Convert existing driver timestamp configuration reporting to
hwtstamp_get and remove legacy ioctl().
- Convert existing drivers to .get_rx_ring_count(), simplifying the RX
ring count retrieval. Also remove the legacy fallback path.
- Ethernet high-speed NICs:
- Broadcom (bnxt, bng):
- bnxt: add FW interface update to support FEC stats histogram
and NVRAM defragmentation
- bng: add TSO and H/W GRO support
- nVidia/Mellanox (mlx5):
- improve latency of channel restart operations, reducing the
used H/W resources
- add TSO support for UDP over GRE over VLAN
- add flow counters support for hardware steering (HWS) rules
- use a static memory area to store headers for H/W GRO,
leading to 12% RX tput improvement
- Intel (100G, ice, idpf):
- ice: reorganize the Tx and Rx ring layouts for cacheline
locality and use the __cacheline_group* macros on the new
layouts
- ice: introduce Synchronous Ethernet (SyncE) support
- Meta (fbnic):
- add debugfs for firmware mailbox and Tx/Rx ring vectors
- Ethernet virtual:
- geneve: introduce GRO/GSO support for double UDP encapsulation
- Ethernet NICs consumer, and embedded:
- Synopsys (stmmac):
- some code refactoring and cleanups
- RealTek (r8169):
- add support for RTL8127ATF (10G Fiber SFP)
- add DASH and LTR support
- Airoha:
- AN8811HB 2.5 Gbps PHY support
- Freescale (fec):
- add XDP zero-copy support
- Thunderbolt:
- add get link setting support to allow bonding
- Renesas:
- add support for RZ/G3L GBETH SoC
- Ethernet switches:
- Maxlinear:
- support R(G)MII slow rate configuration
- add support for Intel GSW150
- Motorcomm (yt921x):
- add DCB/QoS support
- TI:
- icssm-prueth: support bridging (STP/RSTP) via the switchdev
framework
- Ethernet PHYs:
- Realtek:
- enable SGMII and 2500Base-X in-band auto-negotiation
- simplify and reunify C22/C45 drivers
- Micrel: convert bindings to DT schema
- CAN:
- move skb headroom content into skb extensions, making CAN
metadata access more robust
- CAN drivers:
- rcar_canfd:
- add support for FD-only mode
- add support for the RZ/T2H SoC
- sja1000: cleanup the CAN state handling
- WiFi:
- implement EPPKE/802.1X over auth frames support
- split up drop reasons better, removing generic RX_DROP
- additional FTM capabilities: 6 GHz support, supported number of
spatial streams and supported number of LTF repetitions
- better mac80211 iterators to enumerate resources
- initial UHR (Wi-Fi 8) support for cfg80211/mac80211
- WiFi drivers:
- Qualcomm/Atheros:
- ath11k: support for Channel Frequency Response measurement
- ath12k: a significant driver refactor to support multi-wiphy
devices and pave the way for future device support in the
same driver (rather than splitting to ath13k)
- ath12k: support for the QCC2072 chipset
- Intel:
- iwlwifi: partial Neighbor Awareness Networking (NAN) support
- iwlwifi: initial support for U-NII-9 and IEEE 802.11bn
- RealTek (rtw89):
- preparations for RTL8922DE support
- Bluetooth:
- implement setsockopt(BT_PHY) to set the connection packet type/PHY
(see the sketch below)
- set link_policy on incoming ACL connections
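A minimal sketch of the new BT_PHY setter noted above, assuming the
existing BT_PHY socket option and BT_PHY_* bit definitions from the
Bluetooth headers (the chosen mask is illustrative):

    #include <stdint.h>
    #include <sys/socket.h>
    #include <bluetooth/bluetooth.h> /* SOL_BLUETOOTH, BT_PHY, BT_PHY_* */

    /* Restrict an open connection socket 'fd' to the LE 2M PHY. */
    static int set_le_2m_phy(int fd)
    {
            uint32_t phys = BT_PHY_LE_2M_TX | BT_PHY_LE_2M_RX;

            return setsockopt(fd, SOL_BLUETOOTH, BT_PHY, &phys,
                              sizeof(phys));
    }

Previously BT_PHY was read-only via getsockopt(); this update makes the
option settable as well.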
- Bluetooth drivers:
- btusb: add support for MediaTek MT7920, Realtek RTL8761BU and 8851BE
- btqca: add WCN6855 firmware priority selection feature"
* tag 'net-next-7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1254 commits)
bnge/bng_re: Add a new HSI
net: macb: Fix tx/rx malfunction after phy link down and up
af_unix: Fix memleak of newsk in unix_stream_connect().
net: ti: icssg-prueth: Add optional dependency on HSR
net: dsa: add basic initial driver for MxL862xx switches
net: mdio: add unlocked mdiodev C45 bus accessors
net: dsa: add tag format for MxL862xx switches
dt-bindings: net: dsa: add MaxLinear MxL862xx
selftests: drivers: net: hw: Modify toeplitz.c to poll for packets
octeontx2-pf: Unregister devlink on probe failure
net: renesas: rswitch: fix forwarding offload statemachine
ionic: Rate limit unknown xcvr type messages
tcp: inet6_csk_xmit() optimization
tcp: populate inet->cork.fl.u.ip6 in tcp_v6_syn_recv_sock()
tcp: populate inet->cork.fl.u.ip6 in tcp_v6_connect()
ipv6: inet6_csk_xmit() and inet6_csk_update_pmtu() use inet->cork.fl.u.ip6
ipv6: use inet->cork.fl.u.ip6 and np->final in ip6_datagram_dst_update()
ipv6: use np->final in inet6_sk_rebuild_header()
ipv6: add daddr/final storage in struct ipv6_pinfo
net: stmmac: qcom-ethqos: fix qcom_ethqos_serdes_powerup()
...
Diffstat (limited to 'drivers/net/ethernet/intel')
38 files changed, 2928 insertions, 1570 deletions
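Much of the ice churn below converts per-ring packet/byte counters to the
kernel's u64_stats API, so 32-bit readers cannot observe torn values. A
minimal sketch of the reader pattern the diff adopts (the struct is a
stand-in for ice_ring_stats):

    #include <linux/u64_stats_sync.h>

    struct ring_stats {                     /* stand-in for ice_ring_stats */
            struct u64_stats_sync syncp;
            u64_stats_t pkts;
            u64_stats_t bytes;
    };

    static void fetch_ring_stats(const struct ring_stats *stats,
                                 u64 *pkts, u64 *bytes)
    {
            unsigned int start;

            do {
                    start = u64_stats_fetch_begin(&stats->syncp);
                    *pkts = u64_stats_read(&stats->pkts);
                    *bytes = u64_stats_read(&stats->bytes);
            } while (u64_stats_fetch_retry(&stats->syncp, start));
    }

Writers pair u64_stats_update_begin()/u64_stats_update_end() with
u64_stats_add(), as in the reworked ice_update_tx_ring_stats().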
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 00f75d87c73f..def7efa15447 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -957,9 +957,6 @@ u16 ice_get_avail_rxq_count(struct ice_pf *pf); int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked); void ice_update_vsi_stats(struct ice_vsi *vsi); void ice_update_pf_stats(struct ice_pf *pf); -void -ice_fetch_u64_stats_per_ring(struct u64_stats_sync *syncp, - struct ice_q_stats stats, u64 *pkts, u64 *bytes); int ice_up(struct ice_vsi *vsi); int ice_down(struct ice_vsi *vsi); int ice_down_up(struct ice_vsi *vsi); diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index eadb1e3d12b3..afbff8aa9ceb 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -1414,8 +1414,8 @@ static void ice_qp_reset_stats(struct ice_vsi *vsi, u16 q_idx) if (!vsi_stat) return; - memset(&vsi_stat->rx_ring_stats[q_idx]->rx_stats, 0, - sizeof(vsi_stat->rx_ring_stats[q_idx]->rx_stats)); + memset(&vsi_stat->rx_ring_stats[q_idx]->stats, 0, + sizeof(vsi_stat->rx_ring_stats[q_idx]->stats)); memset(&vsi_stat->tx_ring_stats[q_idx]->stats, 0, sizeof(vsi_stat->tx_ring_stats[q_idx]->stats)); if (vsi->xdp_rings) diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 785bf5cc1b25..64e798b8f18f 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -204,42 +204,6 @@ bool ice_is_generic_mac(struct ice_hw *hw) } /** - * ice_is_pf_c827 - check if pf contains c827 phy - * @hw: pointer to the hw struct - * - * Return: true if the device has c827 phy. - */ -static bool ice_is_pf_c827(struct ice_hw *hw) -{ - struct ice_aqc_get_link_topo cmd = {}; - u8 node_part_number; - u16 node_handle; - int status; - - if (hw->mac_type != ICE_MAC_E810) - return false; - - if (hw->device_id != ICE_DEV_ID_E810C_QSFP) - return true; - - cmd.addr.topo_params.node_type_ctx = - FIELD_PREP(ICE_AQC_LINK_TOPO_NODE_TYPE_M, ICE_AQC_LINK_TOPO_NODE_TYPE_PHY) | - FIELD_PREP(ICE_AQC_LINK_TOPO_NODE_CTX_M, ICE_AQC_LINK_TOPO_NODE_CTX_PORT); - cmd.addr.topo_params.index = 0; - - status = ice_aq_get_netlist_node(hw, &cmd, &node_part_number, - &node_handle); - - if (status || node_part_number != ICE_AQC_GET_LINK_TOPO_NODE_NR_C827) - return false; - - if (node_handle == E810C_QSFP_C827_0_HANDLE || node_handle == E810C_QSFP_C827_1_HANDLE) - return true; - - return false; -} - -/** * ice_clear_pf_cfg - Clear PF configuration * @hw: pointer to the hardware structure * @@ -958,30 +922,31 @@ static void ice_get_itr_intrl_gran(struct ice_hw *hw) } /** - * ice_wait_for_fw - wait for full FW readiness + * ice_wait_fw_load - wait for PHY firmware loading to complete * @hw: pointer to the hardware structure - * @timeout: milliseconds that can elapse before timing out + * @timeout: milliseconds that can elapse before timing out, 0 to bypass waiting * - * Return: 0 on success, -ETIMEDOUT on timeout. 
+ * Return: + * * 0 on success + * * negative on timeout */ -static int ice_wait_for_fw(struct ice_hw *hw, u32 timeout) +static int ice_wait_fw_load(struct ice_hw *hw, u32 timeout) { - int fw_loading; - u32 elapsed = 0; + int fw_loading_reg; - while (elapsed <= timeout) { - fw_loading = rd32(hw, GL_MNG_FWSM) & GL_MNG_FWSM_FW_LOADING_M; + if (!timeout) + return 0; - /* firmware was not yet loaded, we have to wait more */ - if (fw_loading) { - elapsed += 100; - msleep(100); - continue; - } + fw_loading_reg = rd32(hw, GL_MNG_FWSM) & GL_MNG_FWSM_FW_LOADING_M; + /* notify the user only once if PHY FW is still loading */ + if (fw_loading_reg) + dev_info(ice_hw_to_dev(hw), "Link initialization is blocked by PHY FW initialization. Link initialization will continue after PHY FW initialization completes.\n"); + else return 0; - } - return -ETIMEDOUT; + return rd32_poll_timeout(hw, GL_MNG_FWSM, fw_loading_reg, + !(fw_loading_reg & GL_MNG_FWSM_FW_LOADING_M), + 10000, timeout * 1000); } static int __fwlog_send_cmd(void *priv, struct libie_aq_desc *desc, void *buf, @@ -1171,12 +1136,10 @@ int ice_init_hw(struct ice_hw *hw) * due to necessity of loading FW from an external source. * This can take even half a minute. */ - if (ice_is_pf_c827(hw)) { - status = ice_wait_for_fw(hw, 30000); - if (status) { - dev_err(ice_hw_to_dev(hw), "ice_wait_for_fw timed out"); - goto err_unroll_fltr_mgmt_struct; - } + status = ice_wait_fw_load(hw, 30000); + if (status) { + dev_err(ice_hw_to_dev(hw), "ice_wait_fw_load timed out"); + goto err_unroll_fltr_mgmt_struct; } hw->lane_num = ice_get_phy_lane_number(hw); diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 53b54e395a2e..baf02512d041 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c @@ -5,6 +5,7 @@ #include "ice_lib.h" #include "ice_trace.h" #include <linux/dpll.h> +#include <linux/property.h> #define ICE_CGU_STATE_ACQ_ERR_THRESHOLD 50 #define ICE_DPLL_PIN_IDX_INVALID 0xff @@ -529,6 +530,94 @@ ice_dpll_pin_disable(struct ice_hw *hw, struct ice_dpll_pin *pin, } /** + * ice_dpll_pin_store_state - updates the state of pin in SW bookkeeping + * @pin: pointer to a pin + * @parent: parent pin index + * @state: pin state (connected or disconnected) + */ +static void +ice_dpll_pin_store_state(struct ice_dpll_pin *pin, int parent, bool state) +{ + pin->state[parent] = state ? 
DPLL_PIN_STATE_CONNECTED : + DPLL_PIN_STATE_DISCONNECTED; +} + +/** + * ice_dpll_rclk_update_e825c - updates the state of rclk pin on e825c device + * @pf: private board struct + * @pin: pointer to a pin + * + * Update struct holding pin states info, states are separate for each parent + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - OK + * * negative - error + */ +static int ice_dpll_rclk_update_e825c(struct ice_pf *pf, + struct ice_dpll_pin *pin) +{ + u8 rclk_bits; + int err; + u32 reg; + + if (pf->dplls.rclk.num_parents > ICE_SYNCE_CLK_NUM) + return -EINVAL; + + err = ice_read_cgu_reg(&pf->hw, ICE_CGU_R10, ®); + if (err) + return err; + + rclk_bits = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, reg); + ice_dpll_pin_store_state(pin, ICE_SYNCE_CLK0, rclk_bits == + (pf->ptp.port.port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C)); + + err = ice_read_cgu_reg(&pf->hw, ICE_CGU_R11, ®); + if (err) + return err; + + rclk_bits = FIELD_GET(ICE_CGU_R11_SYNCE_S_BYP_CLK, reg); + ice_dpll_pin_store_state(pin, ICE_SYNCE_CLK1, rclk_bits == + (pf->ptp.port.port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C)); + + return 0; +} + +/** + * ice_dpll_rclk_update - updates the state of rclk pin on a device + * @pf: private board struct + * @pin: pointer to a pin + * @port_num: port number + * + * Update struct holding pin states info, states are separate for each parent + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - OK + * * negative - error + */ +static int ice_dpll_rclk_update(struct ice_pf *pf, struct ice_dpll_pin *pin, + u8 port_num) +{ + int ret; + + for (u8 parent = 0; parent < pf->dplls.rclk.num_parents; parent++) { + u8 p = parent; + + ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &p, &port_num, + &pin->flags[parent], NULL); + if (ret) + return ret; + + ice_dpll_pin_store_state(pin, parent, + ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN & + pin->flags[parent]); + } + + return 0; +} + +/** * ice_dpll_sw_pins_update - update status of all SW pins * @pf: private board struct * @@ -668,22 +757,14 @@ ice_dpll_pin_state_update(struct ice_pf *pf, struct ice_dpll_pin *pin, } break; case ICE_DPLL_PIN_TYPE_RCLK_INPUT: - for (parent = 0; parent < pf->dplls.rclk.num_parents; - parent++) { - u8 p = parent; - - ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &p, - &port_num, - &pin->flags[parent], - NULL); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) { + ret = ice_dpll_rclk_update_e825c(pf, pin); + if (ret) + goto err; + } else { + ret = ice_dpll_rclk_update(pf, pin, port_num); if (ret) goto err; - if (ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN & - pin->flags[parent]) - pin->state[parent] = DPLL_PIN_STATE_CONNECTED; - else - pin->state[parent] = - DPLL_PIN_STATE_DISCONNECTED; } break; case ICE_DPLL_PIN_TYPE_SOFTWARE: @@ -1843,6 +1924,40 @@ ice_dpll_phase_offset_get(const struct dpll_pin *pin, void *pin_priv, } /** + * ice_dpll_synce_update_e825c - setting PHY recovered clock pins on e825c + * @hw: Pointer to the HW struct + * @ena: true if enable, false in disable + * @port_num: port number + * @output: output pin, we have two in E825C + * + * DPLL subsystem callback. Set proper signals to recover clock from port. 
+ * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +static int ice_dpll_synce_update_e825c(struct ice_hw *hw, bool ena, + u32 port_num, enum ice_synce_clk output) +{ + int err; + + /* configure the mux to deliver proper signal to DPLL from the MUX */ + err = ice_tspll_cfg_bypass_mux_e825c(hw, ena, port_num, output); + if (err) + return err; + + err = ice_tspll_cfg_synce_ethdiv_e825c(hw, output); + if (err) + return err; + + dev_dbg(ice_hw_to_dev(hw), "CLK_SYNCE%u recovered clock: pin %s\n", + output, str_enabled_disabled(ena)); + + return 0; +} + +/** * ice_dpll_output_esync_set - callback for setting embedded sync * @pin: pointer to a pin * @pin_priv: private data pointer passed on pin registration @@ -2263,6 +2378,28 @@ ice_dpll_sw_input_ref_sync_get(const struct dpll_pin *pin, void *pin_priv, state, extack); } +static int +ice_dpll_pin_get_parent_num(struct ice_dpll_pin *pin, + const struct dpll_pin *parent) +{ + int i; + + for (i = 0; i < pin->num_parents; i++) + if (pin->pf->dplls.inputs[pin->parent_idx[i]].pin == parent) + return i; + + return -ENOENT; +} + +static int +ice_dpll_pin_get_parent_idx(struct ice_dpll_pin *pin, + const struct dpll_pin *parent) +{ + int num = ice_dpll_pin_get_parent_num(pin, parent); + + return num < 0 ? num : pin->parent_idx[num]; +} + /** * ice_dpll_rclk_state_on_pin_set - set a state on rclk pin * @pin: pointer to a pin @@ -2286,35 +2423,45 @@ ice_dpll_rclk_state_on_pin_set(const struct dpll_pin *pin, void *pin_priv, enum dpll_pin_state state, struct netlink_ext_ack *extack) { - struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv; bool enable = state == DPLL_PIN_STATE_CONNECTED; + struct ice_dpll_pin *p = pin_priv; struct ice_pf *pf = p->pf; + struct ice_hw *hw; int ret = -EINVAL; - u32 hw_idx; + int hw_idx; + + hw = &pf->hw; if (ice_dpll_is_reset(pf, extack)) return -EBUSY; mutex_lock(&pf->dplls.lock); - hw_idx = parent->idx - pf->dplls.base_rclk_idx; - if (hw_idx >= pf->dplls.num_inputs) + hw_idx = ice_dpll_pin_get_parent_idx(p, parent_pin); + if (hw_idx < 0) goto unlock; + hw_idx -= pf->dplls.base_rclk_idx; if ((enable && p->state[hw_idx] == DPLL_PIN_STATE_CONNECTED) || (!enable && p->state[hw_idx] == DPLL_PIN_STATE_DISCONNECTED)) { NL_SET_ERR_MSG_FMT(extack, "pin:%u state:%u on parent:%u already set", - p->idx, state, parent->idx); + p->idx, state, + ice_dpll_pin_get_parent_num(p, parent_pin)); goto unlock; } - ret = ice_aq_set_phy_rec_clk_out(&pf->hw, hw_idx, enable, - &p->freq); + + ret = hw->mac_type == ICE_MAC_GENERIC_3K_E825 ? 
+ ice_dpll_synce_update_e825c(hw, enable, + pf->ptp.port.port_num, + (enum ice_synce_clk)hw_idx) : + ice_aq_set_phy_rec_clk_out(hw, hw_idx, enable, &p->freq); if (ret) NL_SET_ERR_MSG_FMT(extack, "err:%d %s failed to set pin state:%u for pin:%u on parent:%u", ret, - libie_aq_str(pf->hw.adminq.sq_last_status), - state, p->idx, parent->idx); + libie_aq_str(hw->adminq.sq_last_status), + state, p->idx, + ice_dpll_pin_get_parent_num(p, parent_pin)); unlock: mutex_unlock(&pf->dplls.lock); @@ -2344,17 +2491,17 @@ ice_dpll_rclk_state_on_pin_get(const struct dpll_pin *pin, void *pin_priv, enum dpll_pin_state *state, struct netlink_ext_ack *extack) { - struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv; + struct ice_dpll_pin *p = pin_priv; struct ice_pf *pf = p->pf; int ret = -EINVAL; - u32 hw_idx; + int hw_idx; if (ice_dpll_is_reset(pf, extack)) return -EBUSY; mutex_lock(&pf->dplls.lock); - hw_idx = parent->idx - pf->dplls.base_rclk_idx; - if (hw_idx >= pf->dplls.num_inputs) + hw_idx = ice_dpll_pin_get_parent_idx(p, parent_pin); + if (hw_idx < 0) goto unlock; ret = ice_dpll_pin_state_update(pf, p, ICE_DPLL_PIN_TYPE_RCLK_INPUT, @@ -2814,7 +2961,8 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin); + if (!IS_ERR_OR_NULL(pins[i].pin)) + dpll_pin_put(pins[i].pin, &pins[i].tracker); } /** @@ -2836,11 +2984,15 @@ static int ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, int start_idx, int count, u64 clock_id) { + u32 pin_index; int i, ret; for (i = 0; i < count; i++) { - pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, - &pins[i].prop); + pin_index = start_idx; + if (start_idx != DPLL_PIN_IDX_UNSPEC) + pin_index += i; + pins[i].pin = dpll_pin_get(clock_id, pin_index, THIS_MODULE, + &pins[i].prop, &pins[i].tracker); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); goto release_pins; @@ -2851,7 +3003,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, release_pins: while (--i >= 0) - dpll_pin_put(pins[i].pin); + dpll_pin_put(pins[i].pin, &pins[i].tracker); return ret; } @@ -2944,6 +3096,7 @@ unregister_pins: /** * ice_dpll_deinit_direct_pins - deinitialize direct pins + * @pf: board private structure * @cgu: if cgu is present and controlled by this NIC * @pins: pointer to pins array * @count: number of pins @@ -2955,7 +3108,8 @@ unregister_pins: * Release pins resources to the dpll subsystem. 
*/ static void -ice_dpll_deinit_direct_pins(bool cgu, struct ice_dpll_pin *pins, int count, +ice_dpll_deinit_direct_pins(struct ice_pf *pf, bool cgu, + struct ice_dpll_pin *pins, int count, const struct dpll_pin_ops *ops, struct dpll_device *first, struct dpll_device *second) @@ -3024,77 +3178,230 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) { struct ice_dpll_pin *rclk = &pf->dplls.rclk; struct ice_vsi *vsi = ice_get_main_vsi(pf); - struct dpll_pin *parent; + struct ice_dpll_pin *parent; int i; for (i = 0; i < rclk->num_parents; i++) { - parent = pf->dplls.inputs[rclk->parent_idx[i]].pin; - if (!parent) + parent = &pf->dplls.inputs[rclk->parent_idx[i]]; + if (IS_ERR_OR_NULL(parent->pin)) continue; - dpll_pin_on_pin_unregister(parent, rclk->pin, + dpll_pin_on_pin_unregister(parent->pin, rclk->pin, &ice_dpll_rclk_ops, rclk); } if (WARN_ON_ONCE(!vsi || !vsi->netdev)) return; dpll_netdev_pin_clear(vsi->netdev); - dpll_pin_put(rclk->pin); + dpll_pin_put(rclk->pin, &rclk->tracker); +} + +static bool ice_dpll_is_fwnode_pin(struct ice_dpll_pin *pin) +{ + return !IS_ERR_OR_NULL(pin->fwnode); +} + +static void ice_dpll_pin_notify_work(struct work_struct *work) +{ + struct ice_dpll_pin_work *w = container_of(work, + struct ice_dpll_pin_work, + work); + struct ice_dpll_pin *pin, *parent = w->pin; + struct ice_pf *pf = parent->pf; + int ret; + + wait_for_completion(&pf->dplls.dpll_init); + if (!test_bit(ICE_FLAG_DPLL, pf->flags)) + goto out; /* DPLL initialization failed */ + + switch (w->action) { + case DPLL_PIN_CREATED: + if (!IS_ERR_OR_NULL(parent->pin)) { + /* We have already our pin registered */ + goto out; + } + + /* Grab reference on fwnode pin */ + parent->pin = fwnode_dpll_pin_find(parent->fwnode, + &parent->tracker); + if (IS_ERR_OR_NULL(parent->pin)) { + dev_err(ice_pf_to_dev(pf), + "Cannot get fwnode pin reference\n"); + goto out; + } + + /* Register rclk pin */ + pin = &pf->dplls.rclk; + ret = dpll_pin_on_pin_register(parent->pin, pin->pin, + &ice_dpll_rclk_ops, pin); + if (ret) { + dev_err(ice_pf_to_dev(pf), + "Failed to register pin: %pe\n", ERR_PTR(ret)); + dpll_pin_put(parent->pin, &parent->tracker); + parent->pin = NULL; + goto out; + } + break; + case DPLL_PIN_DELETED: + if (IS_ERR_OR_NULL(parent->pin)) { + /* We have already our pin unregistered */ + goto out; + } + + /* Unregister rclk pin */ + pin = &pf->dplls.rclk; + dpll_pin_on_pin_unregister(parent->pin, pin->pin, + &ice_dpll_rclk_ops, pin); + + /* Drop fwnode pin reference */ + dpll_pin_put(parent->pin, &parent->tracker); + parent->pin = NULL; + break; + default: + break; + } +out: + kfree(w); +} + +static int ice_dpll_pin_notify(struct notifier_block *nb, unsigned long action, + void *data) +{ + struct ice_dpll_pin *pin = container_of(nb, struct ice_dpll_pin, nb); + struct dpll_pin_notifier_info *info = data; + struct ice_dpll_pin_work *work; + + if (action != DPLL_PIN_CREATED && action != DPLL_PIN_DELETED) + return NOTIFY_DONE; + + /* Check if the reported pin is this one */ + if (pin->fwnode != info->fwnode) + return NOTIFY_DONE; /* Not this pin */ + + work = kzalloc(sizeof(*work), GFP_KERNEL); + if (!work) + return NOTIFY_DONE; + + INIT_WORK(&work->work, ice_dpll_pin_notify_work); + work->action = action; + work->pin = pin; + + queue_work(pin->pf->dplls.wq, &work->work); + + return NOTIFY_OK; } /** - * ice_dpll_init_rclk_pins - initialize recovered clock pin + * ice_dpll_init_pin_common - initialize pin * @pf: board private structure * @pin: pin to register * @start_idx: on which index shall allocation 
start in dpll subsystem * @ops: callback ops registered with the pins * - * Allocate resource for recovered clock pin in dpll subsystem. Register the - * pin with the parents it has in the info. Register pin with the pf's main vsi - * netdev. + * Allocate resource for given pin in dpll subsystem. Register the pin with + * the parents it has in the info. * * Return: * * 0 - success * * negative - registration failure reason */ static int -ice_dpll_init_rclk_pins(struct ice_pf *pf, struct ice_dpll_pin *pin, - int start_idx, const struct dpll_pin_ops *ops) +ice_dpll_init_pin_common(struct ice_pf *pf, struct ice_dpll_pin *pin, + int start_idx, const struct dpll_pin_ops *ops) { - struct ice_vsi *vsi = ice_get_main_vsi(pf); - struct dpll_pin *parent; + struct ice_dpll_pin *parent; int ret, i; - if (WARN_ON((!vsi || !vsi->netdev))) - return -EINVAL; - ret = ice_dpll_get_pins(pf, pin, start_idx, ICE_DPLL_RCLK_NUM_PER_PF, - pf->dplls.clock_id); + ret = ice_dpll_get_pins(pf, pin, start_idx, 1, pf->dplls.clock_id); if (ret) return ret; - for (i = 0; i < pf->dplls.rclk.num_parents; i++) { - parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[i]].pin; - if (!parent) { - ret = -ENODEV; - goto unregister_pins; + + for (i = 0; i < pin->num_parents; i++) { + parent = &pf->dplls.inputs[pin->parent_idx[i]]; + if (IS_ERR_OR_NULL(parent->pin)) { + if (!ice_dpll_is_fwnode_pin(parent)) { + ret = -ENODEV; + goto unregister_pins; + } + parent->pin = fwnode_dpll_pin_find(parent->fwnode, + &parent->tracker); + if (IS_ERR_OR_NULL(parent->pin)) { + dev_info(ice_pf_to_dev(pf), + "Mux pin not registered yet\n"); + continue; + } } - ret = dpll_pin_on_pin_register(parent, pf->dplls.rclk.pin, - ops, &pf->dplls.rclk); + ret = dpll_pin_on_pin_register(parent->pin, pin->pin, ops, pin); if (ret) goto unregister_pins; } - dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin); return 0; unregister_pins: while (i) { - parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[--i]].pin; - dpll_pin_on_pin_unregister(parent, pf->dplls.rclk.pin, - &ice_dpll_rclk_ops, &pf->dplls.rclk); + parent = &pf->dplls.inputs[pin->parent_idx[--i]]; + if (IS_ERR_OR_NULL(parent->pin)) + continue; + dpll_pin_on_pin_unregister(parent->pin, pin->pin, ops, pin); } - ice_dpll_release_pins(pin, ICE_DPLL_RCLK_NUM_PER_PF); + ice_dpll_release_pins(pin, 1); + return ret; } /** + * ice_dpll_init_rclk_pin - initialize recovered clock pin + * @pf: board private structure + * @start_idx: on which index shall allocation start in dpll subsystem + * @ops: callback ops registered with the pins + * + * Allocate resource for recovered clock pin in dpll subsystem. Register the + * pin with the parents it has in the info. 
+ * + * Return: + * * 0 - success + * * negative - registration failure reason + */ +static int +ice_dpll_init_rclk_pin(struct ice_pf *pf, int start_idx, + const struct dpll_pin_ops *ops) +{ + struct ice_vsi *vsi = ice_get_main_vsi(pf); + int ret; + + ret = ice_dpll_init_pin_common(pf, &pf->dplls.rclk, start_idx, ops); + if (ret) + return ret; + + dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin); + + return 0; +} + +static void +ice_dpll_deinit_fwnode_pin(struct ice_dpll_pin *pin) +{ + unregister_dpll_notifier(&pin->nb); + flush_workqueue(pin->pf->dplls.wq); + if (!IS_ERR_OR_NULL(pin->pin)) { + dpll_pin_put(pin->pin, &pin->tracker); + pin->pin = NULL; + } + fwnode_handle_put(pin->fwnode); + pin->fwnode = NULL; +} + +static void +ice_dpll_deinit_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, + int start_idx) +{ + int i; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) + ice_dpll_deinit_fwnode_pin(&pins[start_idx + i]); + destroy_workqueue(pf->dplls.wq); +} + +/** * ice_dpll_deinit_pins - deinitialize direct pins * @pf: board private structure * @cgu: if cgu is controlled by this pf @@ -3113,6 +3420,8 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) struct ice_dpll *dp = &d->pps; ice_dpll_deinit_rclk_pin(pf); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + ice_dpll_deinit_fwnode_pins(pf, pf->dplls.inputs, 0); if (cgu) { ice_dpll_unregister_pins(dp->dpll, inputs, &ice_dpll_input_ops, num_inputs); @@ -3127,12 +3436,12 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) &ice_dpll_output_ops, num_outputs); ice_dpll_release_pins(outputs, num_outputs); if (!pf->dplls.generic) { - ice_dpll_deinit_direct_pins(cgu, pf->dplls.ufl, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.ufl, ICE_DPLL_PIN_SW_NUM, &ice_dpll_pin_ufl_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); - ice_dpll_deinit_direct_pins(cgu, pf->dplls.sma, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.sma, ICE_DPLL_PIN_SW_NUM, &ice_dpll_pin_sma_ops, pf->dplls.pps.dpll, @@ -3141,6 +3450,141 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) } } +static struct fwnode_handle * +ice_dpll_pin_node_get(struct ice_pf *pf, const char *name) +{ + struct fwnode_handle *fwnode = dev_fwnode(ice_pf_to_dev(pf)); + int index; + + index = fwnode_property_match_string(fwnode, "dpll-pin-names", name); + if (index < 0) + return ERR_PTR(-ENOENT); + + return fwnode_find_reference(fwnode, "dpll-pins", index); +} + +static int +ice_dpll_init_fwnode_pin(struct ice_dpll_pin *pin, const char *name) +{ + struct ice_pf *pf = pin->pf; + int ret; + + pin->fwnode = ice_dpll_pin_node_get(pf, name); + if (IS_ERR(pin->fwnode)) { + dev_err(ice_pf_to_dev(pf), + "Failed to find %s firmware node: %pe\n", name, + pin->fwnode); + pin->fwnode = NULL; + return -ENODEV; + } + + dev_dbg(ice_pf_to_dev(pf), "Found fwnode node for %s\n", name); + + pin->pin = fwnode_dpll_pin_find(pin->fwnode, &pin->tracker); + if (IS_ERR_OR_NULL(pin->pin)) { + dev_info(ice_pf_to_dev(pf), + "DPLL pin for %pfwp not registered yet\n", + pin->fwnode); + pin->pin = NULL; + } + + pin->nb.notifier_call = ice_dpll_pin_notify; + ret = register_dpll_notifier(&pin->nb); + if (ret) { + dev_err(ice_pf_to_dev(pf), + "Failed to subscribe for DPLL notifications\n"); + + if (!IS_ERR_OR_NULL(pin->pin)) { + dpll_pin_put(pin->pin, &pin->tracker); + pin->pin = NULL; + } + fwnode_handle_put(pin->fwnode); + pin->fwnode = NULL; + + return ret; + } + + return ret; +} + +/** + * ice_dpll_init_fwnode_pins - initialize pins from device tree + * @pf: board private 
structure + * @pins: pointer to pins array + * @start_idx: starting index for pins + * @count: number of pins to initialize + * + * Initialize input pins for E825 RCLK support. The parent pins (rclk0, rclk1) + * are expected to be defined by the system firmware (ACPI). This function + * allocates them in the dpll subsystem and stores their indices for later + * registration with the rclk pin. + * + * Return: + * * 0 - success + * * negative - initialization failure reason + */ +static int +ice_dpll_init_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, + int start_idx) +{ + char pin_name[8]; + int i, ret; + + pf->dplls.wq = create_singlethread_workqueue("ice_dpll_wq"); + if (!pf->dplls.wq) + return -ENOMEM; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) { + pins[start_idx + i].pf = pf; + snprintf(pin_name, sizeof(pin_name), "rclk%u", i); + ret = ice_dpll_init_fwnode_pin(&pins[start_idx + i], pin_name); + if (ret) + goto error; + } + + return 0; +error: + while (i--) + ice_dpll_deinit_fwnode_pin(&pins[start_idx + i]); + + destroy_workqueue(pf->dplls.wq); + + return ret; +} + +/** + * ice_dpll_init_pins_e825 - init pins and register pins with a dplls + * @pf: board private structure + * @cgu: if cgu is present and controlled by this NIC + * + * Initialize directly connected pf's pins within pf's dplls in a Linux dpll + * subsystem. + * + * Return: + * * 0 - success + * * negative - initialization failure reason + */ +static int ice_dpll_init_pins_e825(struct ice_pf *pf) +{ + int ret; + + ret = ice_dpll_init_fwnode_pins(pf, pf->dplls.inputs, 0); + if (ret) + return ret; + + ret = ice_dpll_init_rclk_pin(pf, DPLL_PIN_IDX_UNSPEC, + &ice_dpll_rclk_ops); + if (ret) { + /* Inform DPLL notifier works that DPLL init was finished + * unsuccessfully (ICE_DPLL_FLAG not set). 
+ */ + complete_all(&pf->dplls.dpll_init); + ice_dpll_deinit_fwnode_pins(pf, pf->dplls.inputs, 0); + } + + return ret; +} + /** * ice_dpll_init_pins - init pins and register pins with a dplls * @pf: board private structure @@ -3155,21 +3599,24 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) */ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) { + const struct dpll_pin_ops *output_ops; + const struct dpll_pin_ops *input_ops; int ret, count; + input_ops = &ice_dpll_input_ops; + output_ops = &ice_dpll_output_ops; + ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.inputs, 0, - pf->dplls.num_inputs, - &ice_dpll_input_ops, - pf->dplls.eec.dpll, pf->dplls.pps.dpll); + pf->dplls.num_inputs, input_ops, + pf->dplls.eec.dpll, + pf->dplls.pps.dpll); if (ret) return ret; count = pf->dplls.num_inputs; if (cgu) { ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.outputs, - count, - pf->dplls.num_outputs, - &ice_dpll_output_ops, - pf->dplls.eec.dpll, + count, pf->dplls.num_outputs, + output_ops, pf->dplls.eec.dpll, pf->dplls.pps.dpll); if (ret) goto deinit_inputs; @@ -3205,30 +3652,30 @@ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) } else { count += pf->dplls.num_outputs + 2 * ICE_DPLL_PIN_SW_NUM; } - ret = ice_dpll_init_rclk_pins(pf, &pf->dplls.rclk, count + pf->hw.pf_id, - &ice_dpll_rclk_ops); + + ret = ice_dpll_init_rclk_pin(pf, count + pf->ptp.port.port_num, + &ice_dpll_rclk_ops); if (ret) goto deinit_ufl; return 0; deinit_ufl: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.ufl, - ICE_DPLL_PIN_SW_NUM, - &ice_dpll_pin_ufl_ops, - pf->dplls.pps.dpll, pf->dplls.eec.dpll); + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.ufl, ICE_DPLL_PIN_SW_NUM, + &ice_dpll_pin_ufl_ops, pf->dplls.pps.dpll, + pf->dplls.eec.dpll); deinit_sma: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.sma, - ICE_DPLL_PIN_SW_NUM, - &ice_dpll_pin_sma_ops, - pf->dplls.pps.dpll, pf->dplls.eec.dpll); + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.sma, ICE_DPLL_PIN_SW_NUM, + &ice_dpll_pin_sma_ops, pf->dplls.pps.dpll, + pf->dplls.eec.dpll); deinit_outputs: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.outputs, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.outputs, pf->dplls.num_outputs, - &ice_dpll_output_ops, pf->dplls.pps.dpll, + output_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); deinit_inputs: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.inputs, pf->dplls.num_inputs, - &ice_dpll_input_ops, pf->dplls.pps.dpll, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.inputs, + pf->dplls.num_inputs, + input_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); return ret; } @@ -3239,15 +3686,15 @@ deinit_inputs: * @d: pointer to ice_dpll * @cgu: if cgu is present and controlled by this NIC * - * If cgu is owned unregister the dpll from dpll subsystem. - * Release resources of dpll device from dpll subsystem. + * If cgu is owned, unregister the DPLL from DPLL subsystem. + * Release resources of DPLL device from DPLL subsystem. */ static void ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) { if (cgu) dpll_device_unregister(d->dpll, d->ops, d); - dpll_device_put(d->dpll); + dpll_device_put(d->dpll, &d->tracker); } /** @@ -3257,8 +3704,8 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) * @cgu: if cgu is present and controlled by this NIC * @type: type of dpll being initialized * - * Allocate dpll instance for this board in dpll subsystem, if cgu is controlled - * by this NIC, register dpll with the callback ops. 
+ * Allocate DPLL instance for this board in dpll subsystem, if cgu is controlled + * by this NIC, register DPLL with the callback ops. * * Return: * * 0 - success @@ -3271,7 +3718,8 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, u64 clock_id = pf->dplls.clock_id; int ret; - d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE); + d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, + &d->tracker); if (IS_ERR(d->dpll)) { ret = PTR_ERR(d->dpll); dev_err(ice_pf_to_dev(pf), @@ -3287,7 +3735,8 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ice_dpll_update_state(pf, d, true); ret = dpll_device_register(d->dpll, type, ops, d); if (ret) { - dpll_device_put(d->dpll); + dpll_device_put(d->dpll, &d->tracker); + d->dpll = NULL; return ret; } d->ops = ops; @@ -3506,6 +3955,26 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf, } /** + * ice_dpll_init_info_pin_on_pin_e825c - initializes rclk pin information + * @pf: board private structure + * + * Init information for rclk pin, cache them in pf->dplls.rclk. + * + * Return: + * * 0 - success + */ +static int ice_dpll_init_info_pin_on_pin_e825c(struct ice_pf *pf) +{ + struct ice_dpll_pin *rclk_pin = &pf->dplls.rclk; + + rclk_pin->prop.type = DPLL_PIN_TYPE_SYNCE_ETH_PORT; + rclk_pin->prop.capabilities |= DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE; + rclk_pin->pf = pf; + + return 0; +} + +/** * ice_dpll_init_info_rclk_pin - initializes rclk pin information * @pf: board private structure * @@ -3631,7 +4100,10 @@ ice_dpll_init_pins_info(struct ice_pf *pf, enum ice_dpll_pin_type pin_type) case ICE_DPLL_PIN_TYPE_OUTPUT: return ice_dpll_init_info_direct_pins(pf, pin_type); case ICE_DPLL_PIN_TYPE_RCLK_INPUT: - return ice_dpll_init_info_rclk_pin(pf); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + return ice_dpll_init_info_pin_on_pin_e825c(pf); + else + return ice_dpll_init_info_rclk_pin(pf); case ICE_DPLL_PIN_TYPE_SOFTWARE: return ice_dpll_init_info_sw_pins(pf); default: @@ -3654,6 +4126,50 @@ static void ice_dpll_deinit_info(struct ice_pf *pf) } /** + * ice_dpll_init_info_e825c - prepare pf's dpll information structure for e825c + * device + * @pf: board private structure + * + * Acquire (from HW) and set basic DPLL information (on pf->dplls struct). 
+ * + * Return: + * * 0 - success + * * negative - init failure reason + */ +static int ice_dpll_init_info_e825c(struct ice_pf *pf) +{ + struct ice_dplls *d = &pf->dplls; + int ret = 0; + int i; + + d->clock_id = ice_generate_clock_id(pf); + d->num_inputs = ICE_SYNCE_CLK_NUM; + + d->inputs = kcalloc(d->num_inputs, sizeof(*d->inputs), GFP_KERNEL); + if (!d->inputs) + return -ENOMEM; + + ret = ice_get_cgu_rclk_pin_info(&pf->hw, &d->base_rclk_idx, + &pf->dplls.rclk.num_parents); + if (ret) + goto deinit_info; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) + pf->dplls.rclk.parent_idx[i] = d->base_rclk_idx + i; + + ret = ice_dpll_init_pins_info(pf, ICE_DPLL_PIN_TYPE_RCLK_INPUT); + if (ret) + goto deinit_info; + dev_dbg(ice_pf_to_dev(pf), + "%s - success, inputs: %u, outputs: %u, rclk-parents: %u\n", + __func__, d->num_inputs, d->num_outputs, d->rclk.num_parents); + return 0; +deinit_info: + ice_dpll_deinit_info(pf); + return ret; +} + +/** * ice_dpll_init_info - prepare pf's dpll information structure * @pf: board private structure * @cgu: if cgu is present and controlled by this NIC @@ -3772,14 +4288,16 @@ void ice_dpll_deinit(struct ice_pf *pf) ice_dpll_deinit_worker(pf); ice_dpll_deinit_pins(pf, cgu); - ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu); - ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu); + if (!IS_ERR_OR_NULL(pf->dplls.pps.dpll)) + ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu); + if (!IS_ERR_OR_NULL(pf->dplls.eec.dpll)) + ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu); ice_dpll_deinit_info(pf); mutex_destroy(&pf->dplls.lock); } /** - * ice_dpll_init - initialize support for dpll subsystem + * ice_dpll_init_e825 - initialize support for dpll subsystem * @pf: board private structure * * Set up the device dplls, register them and pins connected within Linux dpll @@ -3788,7 +4306,43 @@ void ice_dpll_deinit(struct ice_pf *pf) * * Context: Initializes pf->dplls.lock mutex. */ -void ice_dpll_init(struct ice_pf *pf) +static void ice_dpll_init_e825(struct ice_pf *pf) +{ + struct ice_dplls *d = &pf->dplls; + int err; + + mutex_init(&d->lock); + init_completion(&d->dpll_init); + + err = ice_dpll_init_info_e825c(pf); + if (err) + goto err_exit; + err = ice_dpll_init_pins_e825(pf); + if (err) + goto deinit_info; + set_bit(ICE_FLAG_DPLL, pf->flags); + complete_all(&d->dpll_init); + + return; + +deinit_info: + ice_dpll_deinit_info(pf); +err_exit: + mutex_destroy(&d->lock); + dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err); +} + +/** + * ice_dpll_init_e810 - initialize support for dpll subsystem + * @pf: board private structure + * + * Set up the device dplls, register them and pins connected within Linux dpll + * subsystem. Allow userspace to obtain state of DPLL and handling of DPLL + * configuration requests. + * + * Context: Initializes pf->dplls.lock mutex. 
+ */ +static void ice_dpll_init_e810(struct ice_pf *pf) { bool cgu = ice_is_feature_supported(pf, ICE_F_CGU); struct ice_dplls *d = &pf->dplls; @@ -3828,3 +4382,15 @@ err_exit: mutex_destroy(&d->lock); dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err); } + +void ice_dpll_init(struct ice_pf *pf) +{ + switch (pf->hw.mac_type) { + case ICE_MAC_GENERIC_3K_E825: + ice_dpll_init_e825(pf); + break; + default: + ice_dpll_init_e810(pf); + break; + } +} diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h index c0da03384ce9..ae42cdea0ee1 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.h +++ b/drivers/net/ethernet/intel/ice/ice_dpll.h @@ -20,9 +20,16 @@ enum ice_dpll_pin_sw { ICE_DPLL_PIN_SW_NUM }; +struct ice_dpll_pin_work { + struct work_struct work; + unsigned long action; + struct ice_dpll_pin *pin; +}; + /** ice_dpll_pin - store info about pins * @pin: dpll pin structure * @pf: pointer to pf, which has registered the dpll_pin + * @tracker: reference count tracker * @idx: ice pin private idx * @num_parents: hols number of parent pins * @parent_idx: hold indexes of parent pins @@ -37,6 +44,9 @@ enum ice_dpll_pin_sw { struct ice_dpll_pin { struct dpll_pin *pin; struct ice_pf *pf; + dpll_tracker tracker; + struct fwnode_handle *fwnode; + struct notifier_block nb; u8 idx; u8 num_parents; u8 parent_idx[ICE_DPLL_RCLK_NUM_MAX]; @@ -58,6 +68,7 @@ struct ice_dpll_pin { /** ice_dpll - store info required for DPLL control * @dpll: pointer to dpll dev * @pf: pointer to pf, which has registered the dpll_device + * @tracker: reference count tracker * @dpll_idx: index of dpll on the NIC * @input_idx: currently selected input index * @prev_input_idx: previously selected input index @@ -76,6 +87,7 @@ struct ice_dpll_pin { struct ice_dpll { struct dpll_device *dpll; struct ice_pf *pf; + dpll_tracker tracker; u8 dpll_idx; u8 input_idx; u8 prev_input_idx; @@ -114,7 +126,9 @@ struct ice_dpll { struct ice_dplls { struct kthread_worker *kworker; struct kthread_delayed_work work; + struct workqueue_struct *wq; struct mutex lock; + struct completion dpll_init; struct ice_dpll eec; struct ice_dpll pps; struct ice_dpll_pin *inputs; @@ -143,3 +157,19 @@ static inline void ice_dpll_deinit(struct ice_pf *pf) { } #endif #endif + +#define ICE_CGU_R10 0x28 +#define ICE_CGU_R10_SYNCE_CLKO_SEL GENMASK(8, 5) +#define ICE_CGU_R10_SYNCE_CLKODIV_M1 GENMASK(13, 9) +#define ICE_CGU_R10_SYNCE_CLKODIV_LOAD BIT(14) +#define ICE_CGU_R10_SYNCE_DCK_RST BIT(15) +#define ICE_CGU_R10_SYNCE_ETHCLKO_SEL GENMASK(18, 16) +#define ICE_CGU_R10_SYNCE_ETHDIV_M1 GENMASK(23, 19) +#define ICE_CGU_R10_SYNCE_ETHDIV_LOAD BIT(24) +#define ICE_CGU_R10_SYNCE_DCK2_RST BIT(25) +#define ICE_CGU_R10_SYNCE_S_REF_CLK GENMASK(31, 27) + +#define ICE_CGU_R11 0x2C +#define ICE_CGU_R11_SYNCE_S_BYP_CLK GENMASK(6, 1) + +#define ICE_CGU_BYPASS_MUX_OFFSET_E825C 3 diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index 3565a5d96c6d..c6bc29cfb8e6 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -33,8 +33,8 @@ static int ice_q_stats_len(struct net_device *netdev) { struct ice_netdev_priv *np = netdev_priv(netdev); - return ((np->vsi->alloc_txq + np->vsi->alloc_rxq) * - (sizeof(struct ice_q_stats) / sizeof(u64))); + /* One packets and one bytes count per queue */ + return ((np->vsi->alloc_txq + np->vsi->alloc_rxq) * 2); } #define ICE_PF_STATS_LEN ARRAY_SIZE(ice_gstrings_pf_stats) @@ -1942,25 +1942,35 
@@ __ice_get_ethtool_stats(struct net_device *netdev, rcu_read_lock(); ice_for_each_alloc_txq(vsi, j) { + u64 pkts, bytes; + tx_ring = READ_ONCE(vsi->tx_rings[j]); - if (tx_ring && tx_ring->ring_stats) { - data[i++] = tx_ring->ring_stats->stats.pkts; - data[i++] = tx_ring->ring_stats->stats.bytes; - } else { + if (!tx_ring || !tx_ring->ring_stats) { data[i++] = 0; data[i++] = 0; + continue; } + + ice_fetch_tx_ring_stats(tx_ring, &pkts, &bytes); + + data[i++] = pkts; + data[i++] = bytes; } ice_for_each_alloc_rxq(vsi, j) { + u64 pkts, bytes; + rx_ring = READ_ONCE(vsi->rx_rings[j]); - if (rx_ring && rx_ring->ring_stats) { - data[i++] = rx_ring->ring_stats->stats.pkts; - data[i++] = rx_ring->ring_stats->stats.bytes; - } else { + if (!rx_ring || !rx_ring->ring_stats) { data[i++] = 0; data[i++] = 0; + continue; } + + ice_fetch_rx_ring_stats(rx_ring, &pkts, &bytes); + + data[i++] = pkts; + data[i++] = bytes; } rcu_read_unlock(); @@ -3378,7 +3388,6 @@ process_link: */ rx_rings[i].next_to_use = 0; rx_rings[i].next_to_clean = 0; - rx_rings[i].next_to_alloc = 0; *vsi->rx_rings[i] = rx_rings[i]; } kfree(rx_rings); diff --git a/drivers/net/ethernet/intel/ice/ice_irq.c b/drivers/net/ethernet/intel/ice/ice_irq.c index 30801fd375f0..1d9b2d646474 100644 --- a/drivers/net/ethernet/intel/ice/ice_irq.c +++ b/drivers/net/ethernet/intel/ice/ice_irq.c @@ -106,9 +106,10 @@ static struct ice_irq_entry *ice_get_irq_res(struct ice_pf *pf, #define ICE_RDMA_AEQ_MSIX 1 static int ice_get_default_msix_amount(struct ice_pf *pf) { - return ICE_MIN_LAN_OICR_MSIX + num_online_cpus() + + return ICE_MIN_LAN_OICR_MSIX + netif_get_num_default_rss_queues() + (test_bit(ICE_FLAG_FD_ENA, pf->flags) ? ICE_FDIR_MSIX : 0) + - (ice_is_rdma_ena(pf) ? num_online_cpus() + ICE_RDMA_AEQ_MSIX : 0); + (ice_is_rdma_ena(pf) ? netif_get_num_default_rss_queues() + + ICE_RDMA_AEQ_MSIX : 0); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index d47af94f31a9..d921269e1fe7 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -159,12 +159,14 @@ static void ice_vsi_set_num_desc(struct ice_vsi *vsi) static u16 ice_get_rxq_count(struct ice_pf *pf) { - return min(ice_get_avail_rxq_count(pf), num_online_cpus()); + return min(ice_get_avail_rxq_count(pf), + netif_get_num_default_rss_queues()); } static u16 ice_get_txq_count(struct ice_pf *pf) { - return min(ice_get_avail_txq_count(pf), num_online_cpus()); + return min(ice_get_avail_txq_count(pf), + netif_get_num_default_rss_queues()); } /** @@ -911,13 +913,15 @@ static void ice_vsi_set_rss_params(struct ice_vsi *vsi) if (vsi->type == ICE_VSI_CHNL) vsi->rss_size = min_t(u16, vsi->num_rxq, max_rss_size); else - vsi->rss_size = min_t(u16, num_online_cpus(), + vsi->rss_size = min_t(u16, + netif_get_num_default_rss_queues(), max_rss_size); vsi->rss_lut_type = ICE_LUT_PF; break; case ICE_VSI_SF: vsi->rss_table_size = ICE_LUT_VSI_SIZE; - vsi->rss_size = min_t(u16, num_online_cpus(), max_rss_size); + vsi->rss_size = min_t(u16, netif_get_num_default_rss_queues(), + max_rss_size); vsi->rss_lut_type = ICE_LUT_VSI; break; case ICE_VSI_VF: @@ -3431,20 +3435,6 @@ out: } /** - * ice_update_ring_stats - Update ring statistics - * @stats: stats to be updated - * @pkts: number of processed packets - * @bytes: number of processed bytes - * - * This function assumes that caller has acquired a u64_stats_sync lock. 
- */ -static void ice_update_ring_stats(struct ice_q_stats *stats, u64 pkts, u64 bytes) -{ - stats->bytes += bytes; - stats->pkts += pkts; -} - -/** * ice_update_tx_ring_stats - Update Tx ring specific counters * @tx_ring: ring to update * @pkts: number of processed packets @@ -3453,7 +3443,8 @@ static void ice_update_ring_stats(struct ice_q_stats *stats, u64 pkts, u64 bytes void ice_update_tx_ring_stats(struct ice_tx_ring *tx_ring, u64 pkts, u64 bytes) { u64_stats_update_begin(&tx_ring->ring_stats->syncp); - ice_update_ring_stats(&tx_ring->ring_stats->stats, pkts, bytes); + u64_stats_add(&tx_ring->ring_stats->pkts, pkts); + u64_stats_add(&tx_ring->ring_stats->bytes, bytes); u64_stats_update_end(&tx_ring->ring_stats->syncp); } @@ -3466,11 +3457,48 @@ void ice_update_tx_ring_stats(struct ice_tx_ring *tx_ring, u64 pkts, u64 bytes) void ice_update_rx_ring_stats(struct ice_rx_ring *rx_ring, u64 pkts, u64 bytes) { u64_stats_update_begin(&rx_ring->ring_stats->syncp); - ice_update_ring_stats(&rx_ring->ring_stats->stats, pkts, bytes); + u64_stats_add(&rx_ring->ring_stats->pkts, pkts); + u64_stats_add(&rx_ring->ring_stats->bytes, bytes); u64_stats_update_end(&rx_ring->ring_stats->syncp); } /** + * ice_fetch_tx_ring_stats - Fetch Tx ring packet and byte counters + * @ring: ring to update + * @pkts: number of processed packets + * @bytes: number of processed bytes + */ +void ice_fetch_tx_ring_stats(const struct ice_tx_ring *ring, + u64 *pkts, u64 *bytes) +{ + unsigned int start; + + do { + start = u64_stats_fetch_begin(&ring->ring_stats->syncp); + *pkts = u64_stats_read(&ring->ring_stats->pkts); + *bytes = u64_stats_read(&ring->ring_stats->bytes); + } while (u64_stats_fetch_retry(&ring->ring_stats->syncp, start)); +} + +/** + * ice_fetch_rx_ring_stats - Fetch Rx ring packet and byte counters + * @ring: ring to read + * @pkts: number of processed packets + * @bytes: number of processed bytes + */ +void ice_fetch_rx_ring_stats(const struct ice_rx_ring *ring, + u64 *pkts, u64 *bytes) +{ + unsigned int start; + + do { + start = u64_stats_fetch_begin(&ring->ring_stats->syncp); + *pkts = u64_stats_read(&ring->ring_stats->pkts); + *bytes = u64_stats_read(&ring->ring_stats->bytes); + } while (u64_stats_fetch_retry(&ring->ring_stats->syncp, start)); +} + +/** * ice_is_dflt_vsi_in_use - check if the default forwarding VSI is being used * @pi: port info of the switch with default VSI * @@ -3961,6 +3989,9 @@ void ice_init_feature_support(struct ice_pf *pf) break; } + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + ice_set_feature_support(pf, ICE_F_PHY_RCLK); + if (pf->hw.mac_type == ICE_MAC_E830) { ice_set_feature_support(pf, ICE_F_MBX_LIMIT); ice_set_feature_support(pf, ICE_F_GCS); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 2cb1eb98b9da..49454d98dcfe 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -92,6 +92,12 @@ void ice_update_tx_ring_stats(struct ice_tx_ring *ring, u64 pkts, u64 bytes); void ice_update_rx_ring_stats(struct ice_rx_ring *ring, u64 pkts, u64 bytes); +void ice_fetch_tx_ring_stats(const struct ice_tx_ring *ring, + u64 *pkts, u64 *bytes); + +void ice_fetch_rx_ring_stats(const struct ice_rx_ring *ring, + u64 *pkts, u64 *bytes); + void ice_write_intrl(struct ice_q_vector *q_vector, u8 intrl); void ice_write_itr(struct ice_ring_container *rc, u16 itr); void ice_set_q_vector_intrl(struct ice_q_vector *q_vector); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c 
b/drivers/net/ethernet/intel/ice/ice_main.c index d04605d3e61a..4da37caa3ec9 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -159,8 +159,8 @@ static void ice_check_for_hang_subtask(struct ice_pf *pf) * prev_pkt would be negative if there was no * pending work. */ - packets = ring_stats->stats.pkts & INT_MAX; - if (ring_stats->tx_stats.prev_pkt == packets) { + packets = ice_stats_read(ring_stats, pkts) & INT_MAX; + if (ring_stats->tx.prev_pkt == packets) { /* Trigger sw interrupt to revive the queue */ ice_trigger_sw_intr(hw, tx_ring->q_vector); continue; @@ -170,7 +170,7 @@ static void ice_check_for_hang_subtask(struct ice_pf *pf) * to ice_get_tx_pending() */ smp_rmb(); - ring_stats->tx_stats.prev_pkt = + ring_stats->tx.prev_pkt = ice_get_tx_pending(tx_ring) ? packets : -1; } } @@ -6824,58 +6824,132 @@ int ice_up(struct ice_vsi *vsi) return err; } +struct ice_vsi_tx_stats { + u64 pkts; + u64 bytes; + u64 tx_restart_q; + u64 tx_busy; + u64 tx_linearize; +}; + +struct ice_vsi_rx_stats { + u64 pkts; + u64 bytes; + u64 rx_non_eop_descs; + u64 rx_page_failed; + u64 rx_buf_failed; +}; + /** - * ice_fetch_u64_stats_per_ring - get packets and bytes stats per ring - * @syncp: pointer to u64_stats_sync - * @stats: stats that pkts and bytes count will be taken from - * @pkts: packets stats counter - * @bytes: bytes stats counter + * ice_fetch_u64_tx_stats - get Tx stats from a ring + * @ring: the Tx ring to copy stats from + * @copy: temporary storage for the ring statistics * - * This function fetches stats from the ring considering the atomic operations - * that needs to be performed to read u64 values in 32 bit machine. + * Fetch the u64 stats from the ring using u64_stats_fetch. This ensures each + * stat value is self-consistent, though not necessarily consistent w.r.t + * other stats. */ -void -ice_fetch_u64_stats_per_ring(struct u64_stats_sync *syncp, - struct ice_q_stats stats, u64 *pkts, u64 *bytes) +static void ice_fetch_u64_tx_stats(struct ice_tx_ring *ring, + struct ice_vsi_tx_stats *copy) { + struct ice_ring_stats *stats = ring->ring_stats; unsigned int start; do { - start = u64_stats_fetch_begin(syncp); - *pkts = stats.pkts; - *bytes = stats.bytes; - } while (u64_stats_fetch_retry(syncp, start)); + start = u64_stats_fetch_begin(&stats->syncp); + copy->pkts = u64_stats_read(&stats->pkts); + copy->bytes = u64_stats_read(&stats->bytes); + copy->tx_restart_q = u64_stats_read(&stats->tx_restart_q); + copy->tx_busy = u64_stats_read(&stats->tx_busy); + copy->tx_linearize = u64_stats_read(&stats->tx_linearize); + } while (u64_stats_fetch_retry(&stats->syncp, start)); +} + +/** + * ice_fetch_u64_rx_stats - get Rx stats from a ring + * @ring: the Rx ring to copy stats from + * @copy: temporary storage for the ring statistics + * + * Fetch the u64 stats from the ring using u64_stats_fetch. This ensures each + * stat value is self-consistent, though not necessarily consistent w.r.t + * other stats. 
+ */ +static void ice_fetch_u64_rx_stats(struct ice_rx_ring *ring, + struct ice_vsi_rx_stats *copy) +{ + struct ice_ring_stats *stats = ring->ring_stats; + unsigned int start; + + do { + start = u64_stats_fetch_begin(&stats->syncp); + copy->pkts = u64_stats_read(&stats->pkts); + copy->bytes = u64_stats_read(&stats->bytes); + copy->rx_non_eop_descs = + u64_stats_read(&stats->rx_non_eop_descs); + copy->rx_page_failed = u64_stats_read(&stats->rx_page_failed); + copy->rx_buf_failed = u64_stats_read(&stats->rx_buf_failed); + } while (u64_stats_fetch_retry(&stats->syncp, start)); } /** * ice_update_vsi_tx_ring_stats - Update VSI Tx ring stats counters * @vsi: the VSI to be updated - * @vsi_stats: the stats struct to be updated + * @vsi_stats: accumulated stats for this VSI * @rings: rings to work on * @count: number of rings */ -static void -ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi, - struct rtnl_link_stats64 *vsi_stats, - struct ice_tx_ring **rings, u16 count) +static void ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi, + struct ice_vsi_tx_stats *vsi_stats, + struct ice_tx_ring **rings, u16 count) { + struct ice_vsi_tx_stats copy = {}; u16 i; for (i = 0; i < count; i++) { struct ice_tx_ring *ring; - u64 pkts = 0, bytes = 0; ring = READ_ONCE(rings[i]); if (!ring || !ring->ring_stats) continue; - ice_fetch_u64_stats_per_ring(&ring->ring_stats->syncp, - ring->ring_stats->stats, &pkts, - &bytes); - vsi_stats->tx_packets += pkts; - vsi_stats->tx_bytes += bytes; - vsi->tx_restart += ring->ring_stats->tx_stats.restart_q; - vsi->tx_busy += ring->ring_stats->tx_stats.tx_busy; - vsi->tx_linearize += ring->ring_stats->tx_stats.tx_linearize; + + ice_fetch_u64_tx_stats(ring, ©); + + vsi_stats->pkts += copy.pkts; + vsi_stats->bytes += copy.bytes; + vsi_stats->tx_restart_q += copy.tx_restart_q; + vsi_stats->tx_busy += copy.tx_busy; + vsi_stats->tx_linearize += copy.tx_linearize; + } +} + +/** + * ice_update_vsi_rx_ring_stats - Update VSI Rx ring stats counters + * @vsi: the VSI to be updated + * @vsi_stats: accumulated stats for this VSI + * @rings: rings to work on + * @count: number of rings + */ +static void ice_update_vsi_rx_ring_stats(struct ice_vsi *vsi, + struct ice_vsi_rx_stats *vsi_stats, + struct ice_rx_ring **rings, u16 count) +{ + struct ice_vsi_rx_stats copy = {}; + u16 i; + + for (i = 0; i < count; i++) { + struct ice_rx_ring *ring; + + ring = READ_ONCE(rings[i]); + if (!ring || !ring->ring_stats) + continue; + + ice_fetch_u64_rx_stats(ring, ©); + + vsi_stats->pkts += copy.pkts; + vsi_stats->bytes += copy.bytes; + vsi_stats->rx_non_eop_descs += copy.rx_non_eop_descs; + vsi_stats->rx_page_failed += copy.rx_page_failed; + vsi_stats->rx_buf_failed += copy.rx_buf_failed; } } @@ -6886,50 +6960,34 @@ ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi, static void ice_update_vsi_ring_stats(struct ice_vsi *vsi) { struct rtnl_link_stats64 *net_stats, *stats_prev; - struct rtnl_link_stats64 *vsi_stats; + struct ice_vsi_tx_stats tx_stats = {}; + struct ice_vsi_rx_stats rx_stats = {}; struct ice_pf *pf = vsi->back; - u64 pkts, bytes; - int i; - - vsi_stats = kzalloc(sizeof(*vsi_stats), GFP_ATOMIC); - if (!vsi_stats) - return; - - /* reset non-netdev (extended) stats */ - vsi->tx_restart = 0; - vsi->tx_busy = 0; - vsi->tx_linearize = 0; - vsi->rx_buf_failed = 0; - vsi->rx_page_failed = 0; rcu_read_lock(); /* update Tx rings counters */ - ice_update_vsi_tx_ring_stats(vsi, vsi_stats, vsi->tx_rings, + ice_update_vsi_tx_ring_stats(vsi, &tx_stats, vsi->tx_rings, vsi->num_txq); /* update Rx rings counters 
*/ - ice_for_each_rxq(vsi, i) { - struct ice_rx_ring *ring = READ_ONCE(vsi->rx_rings[i]); - struct ice_ring_stats *ring_stats; - - ring_stats = ring->ring_stats; - ice_fetch_u64_stats_per_ring(&ring_stats->syncp, - ring_stats->stats, &pkts, - &bytes); - vsi_stats->rx_packets += pkts; - vsi_stats->rx_bytes += bytes; - vsi->rx_buf_failed += ring_stats->rx_stats.alloc_buf_failed; - vsi->rx_page_failed += ring_stats->rx_stats.alloc_page_failed; - } + ice_update_vsi_rx_ring_stats(vsi, &rx_stats, vsi->rx_rings, + vsi->num_rxq); /* update XDP Tx rings counters */ if (ice_is_xdp_ena_vsi(vsi)) - ice_update_vsi_tx_ring_stats(vsi, vsi_stats, vsi->xdp_rings, + ice_update_vsi_tx_ring_stats(vsi, &tx_stats, vsi->xdp_rings, vsi->num_xdp_txq); rcu_read_unlock(); + /* Save non-netdev (extended) stats */ + vsi->tx_restart = tx_stats.tx_restart_q; + vsi->tx_busy = tx_stats.tx_busy; + vsi->tx_linearize = tx_stats.tx_linearize; + vsi->rx_buf_failed = rx_stats.rx_buf_failed; + vsi->rx_page_failed = rx_stats.rx_page_failed; + net_stats = &vsi->net_stats; stats_prev = &vsi->net_stats_prev; @@ -6939,18 +6997,16 @@ static void ice_update_vsi_ring_stats(struct ice_vsi *vsi) * let's skip this round. */ if (likely(pf->stat_prev_loaded)) { - net_stats->tx_packets += vsi_stats->tx_packets - stats_prev->tx_packets; - net_stats->tx_bytes += vsi_stats->tx_bytes - stats_prev->tx_bytes; - net_stats->rx_packets += vsi_stats->rx_packets - stats_prev->rx_packets; - net_stats->rx_bytes += vsi_stats->rx_bytes - stats_prev->rx_bytes; + net_stats->tx_packets += tx_stats.pkts - stats_prev->tx_packets; + net_stats->tx_bytes += tx_stats.bytes - stats_prev->tx_bytes; + net_stats->rx_packets += rx_stats.pkts - stats_prev->rx_packets; + net_stats->rx_bytes += rx_stats.bytes - stats_prev->rx_bytes; } - stats_prev->tx_packets = vsi_stats->tx_packets; - stats_prev->tx_bytes = vsi_stats->tx_bytes; - stats_prev->rx_packets = vsi_stats->rx_packets; - stats_prev->rx_bytes = vsi_stats->rx_bytes; - - kfree(vsi_stats); + stats_prev->tx_packets = tx_stats.pkts; + stats_prev->tx_bytes = tx_stats.bytes; + stats_prev->rx_packets = rx_stats.pkts; + stats_prev->rx_bytes = rx_stats.bytes; } /** diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c index 272683001476..22c3986b910a 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp.c @@ -1296,6 +1296,38 @@ void ice_ptp_link_change(struct ice_pf *pf, bool linkup) if (pf->hw.reset_ongoing) return; + if (hw->mac_type == ICE_MAC_GENERIC_3K_E825) { + int pin, err; + + if (!test_bit(ICE_FLAG_DPLL, pf->flags)) + return; + + mutex_lock(&pf->dplls.lock); + for (pin = 0; pin < ICE_SYNCE_CLK_NUM; pin++) { + enum ice_synce_clk clk_pin; + bool active; + u8 port_num; + + port_num = ptp_port->port_num; + clk_pin = (enum ice_synce_clk)pin; + err = ice_tspll_bypass_mux_active_e825c(hw, + port_num, + &active, + clk_pin); + if (WARN_ON_ONCE(err)) { + mutex_unlock(&pf->dplls.lock); + return; + } + + err = ice_tspll_cfg_synce_ethdiv_e825c(hw, clk_pin); + if (active && WARN_ON_ONCE(err)) { + mutex_unlock(&pf->dplls.lock); + return; + } + } + mutex_unlock(&pf->dplls.lock); + } + switch (hw->mac_type) { case ICE_MAC_E810: case ICE_MAC_E830: diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c index 35680dbe4a7f..61c0a0d93ea8 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c @@ -5903,7 +5903,14 @@ int ice_get_cgu_rclk_pin_info(struct 
ice_hw *hw, u8 *base_idx, u8 *pin_num) *base_idx = SI_REF1P; else ret = -ENODEV; - + break; + case ICE_DEV_ID_E825C_BACKPLANE: + case ICE_DEV_ID_E825C_QSFP: + case ICE_DEV_ID_E825C_SFP: + case ICE_DEV_ID_E825C_SGMII: + *pin_num = ICE_SYNCE_CLK_NUM; + *base_idx = 0; + ret = 0; break; default: ret = -ENODEV; diff --git a/drivers/net/ethernet/intel/ice/ice_tspll.c b/drivers/net/ethernet/intel/ice/ice_tspll.c index 66320a4ab86f..fd4b58eb9bc0 100644 --- a/drivers/net/ethernet/intel/ice/ice_tspll.c +++ b/drivers/net/ethernet/intel/ice/ice_tspll.c @@ -624,3 +624,220 @@ int ice_tspll_init(struct ice_hw *hw) return err; } + +/** + * ice_tspll_bypass_mux_active_e825c - check if the given port is set active + * @hw: Pointer to the HW struct + * @port: Number of the port + * @active: Output flag showing if port is active + * @output: Output pin, we have two in E825C + * + * Check if the given port is selected as the recovered clock source for the + * given output. + * + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_bypass_mux_active_e825c(struct ice_hw *hw, u8 port, bool *active, + enum ice_synce_clk output) +{ + u8 active_clk; + u32 val; + int err; + + switch (output) { + case ICE_SYNCE_CLK0: + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &val); + if (err) + return err; + active_clk = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, val); + break; + case ICE_SYNCE_CLK1: + err = ice_read_cgu_reg(hw, ICE_CGU_R11, &val); + if (err) + return err; + active_clk = FIELD_GET(ICE_CGU_R11_SYNCE_S_BYP_CLK, val); + break; + default: + return -EINVAL; + } + + if (active_clk == port % hw->ptp.ports_per_phy + + ICE_CGU_BYPASS_MUX_OFFSET_E825C) + *active = true; + else + *active = false; + + return 0; +} + +/** + * ice_tspll_cfg_bypass_mux_e825c - configure reference clock mux + * @hw: Pointer to the HW struct + * @ena: true to enable the reference, false to disable + * @port_num: Number of the port + * @output: Output pin, we have two in E825C + * + * Set reference clock source and output clock selection.
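+ * When @ena is false, the mux falls back to the ICE_CGU_NET_REF_CLK0 default + * reference instead of the given port's recovered clock.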
+ * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_cfg_bypass_mux_e825c(struct ice_hw *hw, bool ena, u32 port_num, + enum ice_synce_clk output) +{ + u8 first_mux; + int err; + u32 r10; + + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &r10); + if (err) + return err; + + if (!ena) + first_mux = ICE_CGU_NET_REF_CLK0; + else + first_mux = port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C; + + r10 &= ~(ICE_CGU_R10_SYNCE_DCK_RST | ICE_CGU_R10_SYNCE_DCK2_RST); + + switch (output) { + case ICE_SYNCE_CLK0: + r10 &= ~(ICE_CGU_R10_SYNCE_ETHCLKO_SEL | + ICE_CGU_R10_SYNCE_ETHDIV_LOAD | + ICE_CGU_R10_SYNCE_S_REF_CLK); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_S_REF_CLK, first_mux); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_ETHCLKO_SEL, + ICE_CGU_REF_CLK_BYP0_DIV); + break; + case ICE_SYNCE_CLK1: + { + u32 val; + + err = ice_read_cgu_reg(hw, ICE_CGU_R11, &val); + if (err) + return err; + val &= ~ICE_CGU_R11_SYNCE_S_BYP_CLK; + val |= FIELD_PREP(ICE_CGU_R11_SYNCE_S_BYP_CLK, first_mux); + err = ice_write_cgu_reg(hw, ICE_CGU_R11, val); + if (err) + return err; + r10 &= ~(ICE_CGU_R10_SYNCE_CLKODIV_LOAD | + ICE_CGU_R10_SYNCE_CLKO_SEL); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_CLKO_SEL, + ICE_CGU_REF_CLK_BYP1_DIV); + break; + } + default: + return -EINVAL; + } + + err = ice_write_cgu_reg(hw, ICE_CGU_R10, r10); + if (err) + return err; + + return 0; +} + +/** + * ice_tspll_get_div_e825c - get the divider for the given speed + * @link_speed: link speed of the port + * @divider: output value, calculated divider + * + * Get CGU divider value based on the link speed. + * + * Return: + * * 0 - success + * * negative - error + */ +static int ice_tspll_get_div_e825c(u16 link_speed, unsigned int *divider) +{ + switch (link_speed) { + case ICE_AQ_LINK_SPEED_100GB: + case ICE_AQ_LINK_SPEED_50GB: + case ICE_AQ_LINK_SPEED_25GB: + *divider = 10; + break; + case ICE_AQ_LINK_SPEED_40GB: + case ICE_AQ_LINK_SPEED_10GB: + *divider = 4; + break; + case ICE_AQ_LINK_SPEED_5GB: + case ICE_AQ_LINK_SPEED_2500MB: + case ICE_AQ_LINK_SPEED_1000MB: + *divider = 2; + break; + case ICE_AQ_LINK_SPEED_100MB: + *divider = 1; + break; + default: + return -EOPNOTSUPP; + } + + return 0; +} + +/** + * ice_tspll_cfg_synce_ethdiv_e825c - set the divider on the mux + * @hw: Pointer to the HW struct + * @output: Output pin, we have two in E825C + * + * Set the correct CGU divider for RCLKA or RCLKB. 
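+ * The divider value is derived from the current link speed and programmed in + * two steps: write the value, then set the corresponding LOAD bit.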
+ * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_cfg_synce_ethdiv_e825c(struct ice_hw *hw, + enum ice_synce_clk output) +{ + unsigned int divider; + u16 link_speed; + u32 val; + int err; + + link_speed = hw->port_info->phy.link_info.link_speed; + if (!link_speed) + return 0; + + err = ice_tspll_get_div_e825c(link_speed, &divider); + if (err) + return err; + + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &val); + if (err) + return err; + + /* programmable divider value (from 2 to 16) minus 1 for ETHCLKOUT */ + switch (output) { + case ICE_SYNCE_CLK0: + val &= ~(ICE_CGU_R10_SYNCE_ETHDIV_M1 | + ICE_CGU_R10_SYNCE_ETHDIV_LOAD); + val |= FIELD_PREP(ICE_CGU_R10_SYNCE_ETHDIV_M1, divider - 1); + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + val |= ICE_CGU_R10_SYNCE_ETHDIV_LOAD; + break; + case ICE_SYNCE_CLK1: + val &= ~(ICE_CGU_R10_SYNCE_CLKODIV_M1 | + ICE_CGU_R10_SYNCE_CLKODIV_LOAD); + val |= FIELD_PREP(ICE_CGU_R10_SYNCE_CLKODIV_M1, divider - 1); + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + val |= ICE_CGU_R10_SYNCE_CLKODIV_LOAD; + break; + default: + return -EINVAL; + } + + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + + return 0; +} diff --git a/drivers/net/ethernet/intel/ice/ice_tspll.h b/drivers/net/ethernet/intel/ice/ice_tspll.h index c0b1232cc07c..d650867004d1 100644 --- a/drivers/net/ethernet/intel/ice/ice_tspll.h +++ b/drivers/net/ethernet/intel/ice/ice_tspll.h @@ -21,11 +21,22 @@ struct ice_tspll_params_e82x { u32 frac_n_div; }; +#define ICE_CGU_NET_REF_CLK0 0x0 +#define ICE_CGU_REF_CLK_BYP0 0x5 +#define ICE_CGU_REF_CLK_BYP0_DIV 0x0 +#define ICE_CGU_REF_CLK_BYP1 0x4 +#define ICE_CGU_REF_CLK_BYP1_DIV 0x1 + #define ICE_TSPLL_CK_REFCLKFREQ_E825 0x1F #define ICE_TSPLL_NDIVRATIO_E825 5 #define ICE_TSPLL_FBDIV_INTGR_E825 256 int ice_tspll_cfg_pps_out_e825c(struct ice_hw *hw, bool enable); int ice_tspll_init(struct ice_hw *hw); - +int ice_tspll_bypass_mux_active_e825c(struct ice_hw *hw, u8 port, bool *active, + enum ice_synce_clk output); +int ice_tspll_cfg_bypass_mux_e825c(struct ice_hw *hw, bool ena, u32 port_num, + enum ice_synce_clk output); +int ice_tspll_cfg_synce_ethdiv_e825c(struct ice_hw *hw, + enum ice_synce_clk output); #endif /* _ICE_TSPLL_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index ad76768a4232..6fa201a14f51 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -379,7 +379,7 @@ static bool ice_clean_tx_irq(struct ice_tx_ring *tx_ring, int napi_budget) if (netif_tx_queue_stopped(txring_txq(tx_ring)) && !test_bit(ICE_VSI_DOWN, vsi->state)) { netif_tx_wake_queue(txring_txq(tx_ring)); - ++tx_ring->ring_stats->tx_stats.restart_q; + ice_stats_inc(tx_ring->ring_stats, tx_restart_q); } } @@ -499,7 +499,7 @@ int ice_setup_tx_ring(struct ice_tx_ring *tx_ring) tx_ring->next_to_use = 0; tx_ring->next_to_clean = 0; - tx_ring->ring_stats->tx_stats.prev_pkt = -1; + tx_ring->ring_stats->tx.prev_pkt = -1; return 0; err: @@ -574,7 +574,6 @@ rx_skip_free: PAGE_SIZE); memset(rx_ring->desc, 0, size); - rx_ring->next_to_alloc = 0; rx_ring->next_to_clean = 0; rx_ring->next_to_use = 0; } @@ -849,7 +848,7 @@ bool ice_alloc_rx_bufs(struct ice_rx_ring *rx_ring, unsigned int cleaned_count) addr = libeth_rx_alloc(&fq, ntu); if (addr == DMA_MAPPING_ERROR) { - rx_ring->ring_stats->rx_stats.alloc_page_failed++; + ice_stats_inc(rx_ring->ring_stats,
rx_page_failed); break; } @@ -863,7 +862,7 @@ bool ice_alloc_rx_bufs(struct ice_rx_ring *rx_ring, unsigned int cleaned_count) addr = libeth_rx_alloc(&hdr_fq, ntu); if (addr == DMA_MAPPING_ERROR) { - rx_ring->ring_stats->rx_stats.alloc_page_failed++; + ice_stats_inc(rx_ring->ring_stats, rx_page_failed); libeth_rx_recycle_slow(fq.fqes[ntu].netmem); break; @@ -1045,7 +1044,7 @@ construct_skb: /* exit if we failed to retrieve a buffer */ if (!skb) { libeth_xdp_return_buff_slow(xdp); - rx_ring->ring_stats->rx_stats.alloc_buf_failed++; + ice_stats_inc(rx_ring->ring_stats, rx_buf_failed); continue; } @@ -1087,35 +1086,36 @@ static void __ice_update_sample(struct ice_q_vector *q_vector, struct dim_sample *sample, bool is_tx) { - u64 packets = 0, bytes = 0; + u64 total_packets = 0, total_bytes = 0, pkts, bytes; if (is_tx) { struct ice_tx_ring *tx_ring; ice_for_each_tx_ring(tx_ring, *rc) { - struct ice_ring_stats *ring_stats; - - ring_stats = tx_ring->ring_stats; - if (!ring_stats) + if (!tx_ring->ring_stats) continue; - packets += ring_stats->stats.pkts; - bytes += ring_stats->stats.bytes; + + ice_fetch_tx_ring_stats(tx_ring, &pkts, &bytes); + + total_packets += pkts; + total_bytes += bytes; } } else { struct ice_rx_ring *rx_ring; ice_for_each_rx_ring(rx_ring, *rc) { - struct ice_ring_stats *ring_stats; - - ring_stats = rx_ring->ring_stats; - if (!ring_stats) + if (!rx_ring->ring_stats) continue; - packets += ring_stats->stats.pkts; - bytes += ring_stats->stats.bytes; + + ice_fetch_rx_ring_stats(rx_ring, &pkts, &bytes); + + total_packets += pkts; + total_bytes += bytes; } } - dim_update_sample(q_vector->total_events, packets, bytes, sample); + dim_update_sample(q_vector->total_events, + total_packets, total_bytes, sample); sample->comp_ctr = 0; /* if dim settings get stale, like when not updated for 1 @@ -1362,7 +1362,7 @@ static int __ice_maybe_stop_tx(struct ice_tx_ring *tx_ring, unsigned int size) /* A reprieve! 
- use start_queue because it doesn't call schedule */ netif_tx_start_queue(txring_txq(tx_ring)); - ++tx_ring->ring_stats->tx_stats.restart_q; + ice_stats_inc(tx_ring->ring_stats, tx_restart_q); return 0; } @@ -2156,15 +2156,12 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring) ice_trace(xmit_frame_ring, tx_ring, skb); - if (unlikely(ipv6_hopopt_jumbo_remove(skb))) - goto out_drop; - count = ice_xmit_desc_count(skb); if (ice_chk_linearize(skb, count)) { if (__skb_linearize(skb)) goto out_drop; count = ice_txd_use_count(skb->len); - tx_ring->ring_stats->tx_stats.tx_linearize++; + ice_stats_inc(tx_ring->ring_stats, tx_linearize); } /* need: 1 descriptor per page * PAGE_SIZE/ICE_MAX_DATA_PER_TXD, @@ -2175,7 +2172,7 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring) */ if (ice_maybe_stop_tx(tx_ring, count + ICE_DESCS_PER_CACHE_LINE + ICE_DESCS_FOR_CTX_DESC)) { - tx_ring->ring_stats->tx_stats.tx_busy++; + ice_stats_inc(tx_ring->ring_stats, tx_busy); return NETDEV_TX_BUSY; } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h index e440c55d9e9f..b6547e1b7c42 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx.h @@ -129,34 +129,65 @@ struct ice_tx_offload_params { u8 header_len; }; -struct ice_q_stats { - u64 pkts; - u64 bytes; -}; - -struct ice_txq_stats { - u64 restart_q; - u64 tx_busy; - u64 tx_linearize; - int prev_pkt; /* negative if no pending Tx descriptors */ -}; - -struct ice_rxq_stats { - u64 non_eop_descs; - u64 alloc_page_failed; - u64 alloc_buf_failed; -}; - struct ice_ring_stats { struct rcu_head rcu; /* to avoid race on free */ - struct ice_q_stats stats; struct u64_stats_sync syncp; - union { - struct ice_txq_stats tx_stats; - struct ice_rxq_stats rx_stats; - }; + struct_group(stats, + u64_stats_t pkts; + u64_stats_t bytes; + union { + struct_group(tx, + u64_stats_t tx_restart_q; + u64_stats_t tx_busy; + u64_stats_t tx_linearize; + /* negative if no pending Tx descriptors */ + int prev_pkt; + ); + struct_group(rx, + u64_stats_t rx_non_eop_descs; + u64_stats_t rx_page_failed; + u64_stats_t rx_buf_failed; + ); + }; + ); }; +/** + * ice_stats_read - Read a single ring stat value + * @stats: pointer to ring_stats structure for a queue + * @member: the ice_ring_stats member to read + * + * Shorthand for reading a single 64-bit stat value from struct + * ice_ring_stats. + * + * Return: the value of the requested stat. + */ +#define ice_stats_read(stats, member) ({ \ + struct ice_ring_stats *__stats = (stats); \ + unsigned int start; \ + u64 val; \ + do { \ + start = u64_stats_fetch_begin(&__stats->syncp); \ + val = u64_stats_read(&__stats->member); \ + } while (u64_stats_fetch_retry(&__stats->syncp, start)); \ + val; \ +}) + +/** + * ice_stats_inc - Increment a single ring stat value + * @stats: pointer to the ring_stats structure for a queue + * @member: the ice_ring_stats member to increment + * + * Shorthand for incrementing a single 64-bit stat value in struct + * ice_ring_stats. 
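+ * + * The increment runs inside a u64_stats_update_begin()/u64_stats_update_end() + * section, so ice_stats_read() sees consistent 64-bit values even on 32-bit + * kernels. Illustrative use: + * + * ice_stats_inc(tx_ring->ring_stats, tx_busy); + * busy = ice_stats_read(tx_ring->ring_stats, tx_busy);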
+ */ +#define ice_stats_inc(stats, member) do { \ + struct ice_ring_stats *__stats = (stats); \ + u64_stats_update_begin(&__stats->syncp); \ + u64_stats_inc(&__stats->member); \ + u64_stats_update_end(&__stats->syncp); \ +} while (0) + enum ice_ring_state_t { ICE_TX_XPS_INIT_DONE, ICE_TX_NBITS, @@ -236,34 +267,49 @@ struct ice_tstamp_ring { } ____cacheline_internodealigned_in_smp; struct ice_rx_ring { - /* CL1 - 1st cacheline starts here */ + __cacheline_group_begin_aligned(read_mostly); void *desc; /* Descriptor ring memory */ struct page_pool *pp; struct net_device *netdev; /* netdev ring maps to */ - struct ice_vsi *vsi; /* Backreference to associated VSI */ struct ice_q_vector *q_vector; /* Backreference to associated vector */ u8 __iomem *tail; - u16 q_index; /* Queue number of ring */ - - u16 count; /* Number of descriptors */ - u16 reg_idx; /* HW register index of the ring */ - u16 next_to_alloc; union { struct libeth_fqe *rx_fqes; struct xdp_buff **xdp_buf; }; - /* CL2 - 2nd cacheline starts here */ - struct libeth_fqe *hdr_fqes; + u16 count; /* Number of descriptors */ + u8 ptp_rx; + + u8 flags; +#define ICE_RX_FLAGS_CRC_STRIP_DIS BIT(2) +#define ICE_RX_FLAGS_MULTIDEV BIT(3) +#define ICE_RX_FLAGS_RING_GCS BIT(4) + + u32 truesize; + struct page_pool *hdr_pp; + struct libeth_fqe *hdr_fqes; + + struct bpf_prog *xdp_prog; + struct ice_tx_ring *xdp_ring; + struct xsk_buff_pool *xsk_pool; + + /* stats structs */ + struct ice_ring_stats *ring_stats; + struct ice_rx_ring *next; /* pointer to next ring in q_vector */ + + u32 hdr_truesize; + + struct xdp_rxq_info xdp_rxq; + __cacheline_group_end_aligned(read_mostly); + __cacheline_group_begin_aligned(read_write); union { struct libeth_xdp_buff_stash xdp; struct libeth_xdp_buff *xsk; }; - - /* CL3 - 3rd cacheline starts here */ union { struct ice_pkt_ctx pkt_ctx; struct { @@ -271,75 +317,78 @@ struct ice_rx_ring { __be16 vlan_proto; }; }; - struct bpf_prog *xdp_prog; /* used in interrupt processing */ u16 next_to_use; u16 next_to_clean; + __cacheline_group_end_aligned(read_write); - u32 hdr_truesize; - u32 truesize; - - /* stats structs */ - struct ice_ring_stats *ring_stats; - + __cacheline_group_begin_aligned(cold); struct rcu_head rcu; /* to avoid race on free */ - /* CL4 - 4th cacheline starts here */ + struct ice_vsi *vsi; /* Backreference to associated VSI */ struct ice_channel *ch; - struct ice_tx_ring *xdp_ring; - struct ice_rx_ring *next; /* pointer to next ring in q_vector */ - struct xsk_buff_pool *xsk_pool; - u16 rx_hdr_len; - u16 rx_buf_len; + dma_addr_t dma; /* physical address of ring */ + u16 q_index; /* Queue number of ring */ + u16 reg_idx; /* HW register index of the ring */ u8 dcb_tc; /* Traffic class of ring */ - u8 ptp_rx; -#define ICE_RX_FLAGS_CRC_STRIP_DIS BIT(2) -#define ICE_RX_FLAGS_MULTIDEV BIT(3) -#define ICE_RX_FLAGS_RING_GCS BIT(4) - u8 flags; - /* CL5 - 5th cacheline starts here */ - struct xdp_rxq_info xdp_rxq; + + u16 rx_hdr_len; + u16 rx_buf_len; + __cacheline_group_end_aligned(cold); } ____cacheline_internodealigned_in_smp; struct ice_tx_ring { - /* CL1 - 1st cacheline starts here */ - struct ice_tx_ring *next; /* pointer to next ring in q_vector */ + __cacheline_group_begin_aligned(read_mostly); void *desc; /* Descriptor ring memory */ struct device *dev; /* Used for DMA mapping */ u8 __iomem *tail; struct ice_tx_buf *tx_buf; + struct ice_q_vector *q_vector; /* Backreference to associated vector */ struct net_device *netdev; /* netdev ring maps to */ struct ice_vsi *vsi; /* Backreference to associated VSI 
*/ - /* CL2 - 2nd cacheline starts here */ - dma_addr_t dma; /* physical address of ring */ - struct xsk_buff_pool *xsk_pool; - u16 next_to_use; - u16 next_to_clean; - u16 q_handle; /* Queue handle per TC */ - u16 reg_idx; /* HW register index of the ring */ + u16 count; /* Number of descriptors */ u16 q_index; /* Queue number of ring */ - u16 xdp_tx_active; + + u8 flags; +#define ICE_TX_FLAGS_RING_XDP BIT(0) +#define ICE_TX_FLAGS_RING_VLAN_L2TAG1 BIT(1) +#define ICE_TX_FLAGS_RING_VLAN_L2TAG2 BIT(2) +#define ICE_TX_FLAGS_TXTIME BIT(3) + + struct xsk_buff_pool *xsk_pool; + /* stats structs */ struct ice_ring_stats *ring_stats; - /* CL3 - 3rd cacheline starts here */ + struct ice_tx_ring *next; /* pointer to next ring in q_vector */ + + struct ice_tstamp_ring *tstamp_ring; + struct ice_ptp_tx *tx_tstamps; + __cacheline_group_end_aligned(read_mostly); + + __cacheline_group_begin_aligned(read_write); + u16 next_to_use; + u16 next_to_clean; + + u16 xdp_tx_active; + spinlock_t tx_lock; + __cacheline_group_end_aligned(read_write); + + __cacheline_group_begin_aligned(cold); struct rcu_head rcu; /* to avoid race on free */ DECLARE_BITMAP(xps_state, ICE_TX_NBITS); /* XPS Config State */ struct ice_channel *ch; - struct ice_ptp_tx *tx_tstamps; - spinlock_t tx_lock; - u32 txq_teid; /* Added Tx queue TEID */ - /* CL4 - 4th cacheline starts here */ - struct ice_tstamp_ring *tstamp_ring; -#define ICE_TX_FLAGS_RING_XDP BIT(0) -#define ICE_TX_FLAGS_RING_VLAN_L2TAG1 BIT(1) -#define ICE_TX_FLAGS_RING_VLAN_L2TAG2 BIT(2) -#define ICE_TX_FLAGS_TXTIME BIT(3) - u8 flags; + + dma_addr_t dma; /* physical address of ring */ + u16 q_handle; /* Queue handle per TC */ + u16 reg_idx; /* HW register index of the ring */ u8 dcb_tc; /* Traffic class of ring */ + u16 quanta_prof_id; + u32 txq_teid; /* Added Tx queue TEID */ + __cacheline_group_end_aligned(cold); } ____cacheline_internodealigned_in_smp; static inline bool ice_ring_ch_enabled(struct ice_tx_ring *ring) diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c index 956da38d63b0..e695a664e53d 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c @@ -20,9 +20,6 @@ void ice_release_rx_desc(struct ice_rx_ring *rx_ring, u16 val) rx_ring->next_to_use = val; - /* update next to alloc since we have filled the ring */ - rx_ring->next_to_alloc = val; - /* QRX_TAIL will be updated with any tail value, but hardware ignores * the lower 3 bits. This makes it so we only bump tail on meaningful * boundaries. 
Also, this allows us to bump tail on intervals of 8 up to @@ -480,7 +477,7 @@ dma_unmap: return ICE_XDP_CONSUMED; busy: - xdp_ring->ring_stats->tx_stats.tx_busy++; + ice_stats_inc(xdp_ring->ring_stats, tx_busy); return ICE_XDP_CONSUMED; } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h index 6a3f10f7a53f..f17990b68b62 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h @@ -38,7 +38,7 @@ ice_is_non_eop(const struct ice_rx_ring *rx_ring, if (likely(ice_test_staterr(rx_desc->wb.status_error0, ICE_RXD_EOF))) return false; - rx_ring->ring_stats->rx_stats.non_eop_descs++; + ice_stats_inc(rx_ring->ring_stats, rx_non_eop_descs); return true; } diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index 6a2ec8389a8f..1e82f4c40b32 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -349,6 +349,12 @@ enum ice_clk_src { NUM_ICE_CLK_SRC }; +enum ice_synce_clk { + ICE_SYNCE_CLK0, + ICE_SYNCE_CLK1, + ICE_SYNCE_CLK_NUM +}; + struct ice_ts_func_info { /* Function specific info */ enum ice_tspll_freq time_ref; diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c index 989ff1fd9110..953e68ed0f9a 100644 --- a/drivers/net/ethernet/intel/ice/ice_xsk.c +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c @@ -497,7 +497,7 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp, return ICE_XDP_TX; busy: - xdp_ring->ring_stats->tx_stats.tx_busy++; + ice_stats_inc(xdp_ring->ring_stats, tx_busy); return ICE_XDP_CONSUMED; } @@ -659,7 +659,7 @@ construct_skb: xsk_buff_free(first); first = NULL; - rx_ring->ring_stats->rx_stats.alloc_buf_failed++; + ice_stats_inc(rx_ring->ring_stats, rx_buf_failed); continue; } diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 1bf7934d4e28..b206fba092c8 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -8,6 +8,8 @@ struct idpf_adapter; struct idpf_vport; struct idpf_vport_max_q; +struct idpf_q_vec_rsrc; +struct idpf_rss_data; #include <net/pkt_sched.h> #include <linux/aer.h> @@ -201,7 +203,8 @@ struct idpf_vport_max_q { struct idpf_reg_ops { void (*ctlq_reg_init)(struct idpf_adapter *adapter, struct idpf_ctlq_create_info *cq); - int (*intr_reg_init)(struct idpf_vport *vport); + int (*intr_reg_init)(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); void (*mb_intr_reg_init)(struct idpf_adapter *adapter); void (*reset_reg_init)(struct idpf_adapter *adapter); void (*trigger_reset)(struct idpf_adapter *adapter, @@ -288,54 +291,88 @@ struct idpf_fsteer_fltr { }; /** - * struct idpf_vport - Handle for netdevices and queue resources - * @num_txq: Number of allocated TX queues - * @num_complq: Number of allocated completion queues + * struct idpf_q_vec_rsrc - handle for queue and vector resources + * @dev: device pointer for DMA mapping + * @q_vectors: array of queue vectors + * @q_vector_idxs: starting index of queue vectors + * @num_q_vectors: number of IRQ vectors allocated + * @noirq_v_idx: ID of the NOIRQ vector + * @noirq_dyn_ctl_ena: value to write to the above to enable it + * @noirq_dyn_ctl: register to enable/disable the vector for NOIRQ queues + * @txq_grps: array of TX queue groups * @txq_desc_count: TX queue descriptor count - * @complq_desc_count: Completion queue descriptor count - * @compln_clean_budget: Work budget for 
completion clean - * @num_txq_grp: Number of TX queue groups - * @txq_grps: Array of TX queue groups - * @txq_model: Split queue or single queue queuing model - * @txqs: Used only in hotpath to get to the right queue very fast - * @crc_enable: Enable CRC insertion offload - * @xdpsq_share: whether XDPSQ sharing is enabled - * @num_xdp_txq: number of XDPSQs + * @complq_desc_count: completion queue descriptor count + * @txq_model: split queue or single queue queuing model + * @num_txq: number of allocated TX queues + * @num_complq: number of allocated completion queues + * @num_txq_grp: number of TX queue groups * @xdp_txq_offset: index of the first XDPSQ (== number of regular SQs) - * @xdp_prog: installed XDP program - * @num_rxq: Number of allocated RX queues - * @num_bufq: Number of allocated buffer queues + * @num_rxq_grp: number of RX queues in a group + * @rxq_model: splitq queue or single queue queuing model + * @rxq_grps: total number of RX groups. Number of groups * number of RX per + * group will yield total number of RX queues. + * @num_rxq: number of allocated RX queues + * @num_bufq: number of allocated buffer queues * @rxq_desc_count: RX queue descriptor count. *MUST* have enough descriptors * to complete all buffer descriptors for all buffer queues in * the worst case. - * @num_bufqs_per_qgrp: Buffer queues per RX queue in a given grouping - * @bufq_desc_count: Buffer queue descriptor count - * @num_rxq_grp: Number of RX queues in a group - * @rxq_grps: Total number of RX groups. Number of groups * number of RX per - * group will yield total number of RX queues. - * @rxq_model: Splitq queue or single queue queuing model - * @rx_ptype_lkup: Lookup table for ptypes on RX + * @bufq_desc_count: buffer queue descriptor count + * @num_bufqs_per_qgrp: buffer queues per RX queue in a given grouping + * @base_rxd: true if the driver should use base descriptors instead of flex + */ +struct idpf_q_vec_rsrc { + struct device *dev; + struct idpf_q_vector *q_vectors; + u16 *q_vector_idxs; + u16 num_q_vectors; + u16 noirq_v_idx; + u32 noirq_dyn_ctl_ena; + void __iomem *noirq_dyn_ctl; + + struct idpf_txq_group *txq_grps; + u32 txq_desc_count; + u32 complq_desc_count; + u32 txq_model; + u16 num_txq; + u16 num_complq; + u16 num_txq_grp; + u16 xdp_txq_offset; + + u16 num_rxq_grp; + u32 rxq_model; + struct idpf_rxq_group *rxq_grps; + u16 num_rxq; + u16 num_bufq; + u32 rxq_desc_count; + u32 bufq_desc_count[IDPF_MAX_BUFQS_PER_RXQ_GRP]; + u8 num_bufqs_per_qgrp; + bool base_rxd; +}; + +/** + * struct idpf_vport - Handle for netdevices and queue resources + * @dflt_qv_rsrc: contains default queue and vector resources + * @txqs: Used only in hotpath to get to the right queue very fast + * @num_txq: Number of allocated TX queues + * @num_xdp_txq: number of XDPSQs + * @xdpsq_share: whether XDPSQ sharing is enabled + * @xdp_prog: installed XDP program * @vdev_info: IDC vport device info pointer * @adapter: back pointer to associated adapter * @netdev: Associated net_device. Each vport should have one and only one * associated netdev. * @flags: See enum idpf_vport_flags - * @vport_type: Default SRIOV, SIOV, etc. + * @compln_clean_budget: Work budget for completion clean * @vport_id: Device given vport identifier + * @vport_type: Default SRIOV, SIOV, etc. 
* @idx: Software index in adapter vports struct - * @default_vport: Use this vport if one isn't specified - * @base_rxd: True if the driver should use base descriptors instead of flex - * @num_q_vectors: Number of IRQ vectors allocated - * @q_vectors: Array of queue vectors - * @q_vector_idxs: Starting index of queue vectors - * @noirq_dyn_ctl: register to enable/disable the vector for NOIRQ queues - * @noirq_dyn_ctl_ena: value to write to the above to enable it - * @noirq_v_idx: ID of the NOIRQ vector * @max_mtu: device given max possible MTU * @default_mac_addr: device will give a default MAC to use * @rx_itr_profile: RX profiles for Dynamic Interrupt Moderation * @tx_itr_profile: TX profiles for Dynamic Interrupt Moderation * @port_stats: per port csum, header split, and other offload stats + * @default_vport: Use this vport if one isn't specified + * @crc_enable: Enable CRC insertion offload * @link_up: True if link is up * @tx_tstamp_caps: Capabilities negotiated for Tx timestamping * @tstamp_config: The Tx tstamp config @@ -343,57 +380,31 @@ struct idpf_fsteer_fltr { * @tstamp_stats: Tx timestamping statistics */ struct idpf_vport { - u16 num_txq; - u16 num_complq; - u32 txq_desc_count; - u32 complq_desc_count; - u32 compln_clean_budget; - u16 num_txq_grp; - struct idpf_txq_group *txq_grps; - u32 txq_model; + struct idpf_q_vec_rsrc dflt_qv_rsrc; struct idpf_tx_queue **txqs; - bool crc_enable; - - bool xdpsq_share; + u16 num_txq; u16 num_xdp_txq; - u16 xdp_txq_offset; + bool xdpsq_share; struct bpf_prog *xdp_prog; - u16 num_rxq; - u16 num_bufq; - u32 rxq_desc_count; - u8 num_bufqs_per_qgrp; - u32 bufq_desc_count[IDPF_MAX_BUFQS_PER_RXQ_GRP]; - u16 num_rxq_grp; - struct idpf_rxq_group *rxq_grps; - u32 rxq_model; - struct libeth_rx_pt *rx_ptype_lkup; - struct iidc_rdma_vport_dev_info *vdev_info; struct idpf_adapter *adapter; struct net_device *netdev; DECLARE_BITMAP(flags, IDPF_VPORT_FLAGS_NBITS); - u16 vport_type; + u32 compln_clean_budget; u32 vport_id; + u16 vport_type; u16 idx; - bool default_vport; - bool base_rxd; - - u16 num_q_vectors; - struct idpf_q_vector *q_vectors; - u16 *q_vector_idxs; - - void __iomem *noirq_dyn_ctl; - u32 noirq_dyn_ctl_ena; - u16 noirq_v_idx; u16 max_mtu; u8 default_mac_addr[ETH_ALEN]; u16 rx_itr_profile[IDPF_DIM_PROFILE_SLOTS]; u16 tx_itr_profile[IDPF_DIM_PROFILE_SLOTS]; - struct idpf_port_stats port_stats; + struct idpf_port_stats port_stats; + bool default_vport; + bool crc_enable; bool link_up; struct idpf_ptp_vport_tx_tstamp_caps *tx_tstamp_caps; @@ -550,10 +561,37 @@ struct idpf_vector_lifo { }; /** + * struct idpf_queue_id_reg_chunk - individual queue ID and register chunk + * @qtail_reg_start: queue tail register offset + * @qtail_reg_spacing: queue tail register spacing + * @type: queue type of the queues in the chunk + * @start_queue_id: starting queue ID in the chunk + * @num_queues: number of queues in the chunk + */ +struct idpf_queue_id_reg_chunk { + u64 qtail_reg_start; + u32 qtail_reg_spacing; + u32 type; + u32 start_queue_id; + u32 num_queues; +}; + +/** + * struct idpf_queue_id_reg_info - queue ID and register chunk info received + * over the mailbox + * @num_chunks: number of chunks + * @queue_chunks: array of chunks + */ +struct idpf_queue_id_reg_info { + u16 num_chunks; + struct idpf_queue_id_reg_chunk *queue_chunks; +}; + +/** * struct idpf_vport_config - Vport configuration data * @user_config: see struct idpf_vport_user_config_data * @max_q: Maximum possible queues - * @req_qs_chunks: Queue chunk data for requested queues + * 
@qid_reg_info: Struct to store the queue ID and register info * @mac_filter_list_lock: Lock to protect mac filters * @flow_steer_list_lock: Lock to protect fsteer filters * @flags: See enum idpf_vport_config_flags @@ -561,7 +599,7 @@ struct idpf_vector_lifo { struct idpf_vport_config { struct idpf_vport_user_config_data user_config; struct idpf_vport_max_q max_q; - struct virtchnl2_add_queues *req_qs_chunks; + struct idpf_queue_id_reg_info qid_reg_info; spinlock_t mac_filter_list_lock; spinlock_t flow_steer_list_lock; DECLARE_BITMAP(flags, IDPF_VPORT_CONFIG_FLAGS_NBITS); @@ -603,6 +641,8 @@ struct idpf_vc_xn_manager; * @vport_params_reqd: Vport params requested * @vport_params_recvd: Vport params received * @vport_ids: Array of device given vport identifiers + * @singleq_pt_lkup: Lookup table for singleq RX ptypes + * @splitq_pt_lkup: Lookup table for splitq RX ptypes * @vport_config: Vport config parameters * @max_vports: Maximum vports that can be allocated * @num_alloc_vports: Current number of vports allocated @@ -661,6 +701,9 @@ struct idpf_adapter { struct virtchnl2_create_vport **vport_params_recvd; u32 *vport_ids; + struct libeth_rx_pt *singleq_pt_lkup; + struct libeth_rx_pt *splitq_pt_lkup; + struct idpf_vport_config **vport_config; u16 max_vports; u16 num_alloc_vports; diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_dev.c index 3a04a6bd0d7c..a4625638cf3f 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_dev.c +++ b/drivers/net/ethernet/intel/idpf/idpf_dev.c @@ -70,11 +70,13 @@ static void idpf_mb_intr_reg_init(struct idpf_adapter *adapter) /** * idpf_intr_reg_init - Initialize interrupt registers * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources */ -static int idpf_intr_reg_init(struct idpf_vport *vport) +static int idpf_intr_reg_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_adapter *adapter = vport->adapter; - int num_vecs = vport->num_q_vectors; + u16 num_vecs = rsrc->num_q_vectors; struct idpf_vec_regs *reg_vals; int num_regs, i, err = 0; u32 rx_itr, tx_itr, val; @@ -86,15 +88,15 @@ static int idpf_intr_reg_init(struct idpf_vport *vport) if (!reg_vals) return -ENOMEM; - num_regs = idpf_get_reg_intr_vecs(vport, reg_vals); + num_regs = idpf_get_reg_intr_vecs(adapter, reg_vals); if (num_regs < num_vecs) { err = -EINVAL; goto free_reg_vals; } for (i = 0; i < num_vecs; i++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[i]; - u16 vec_id = vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC; + struct idpf_q_vector *q_vector = &rsrc->q_vectors[i]; + u16 vec_id = rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC; struct idpf_intr_reg *intr = &q_vector->intr_reg; u32 spacing; @@ -123,12 +125,12 @@ static int idpf_intr_reg_init(struct idpf_vport *vport) /* Data vector for NOIRQ queues */ - val = reg_vals[vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC].dyn_ctl_reg; - vport->noirq_dyn_ctl = idpf_get_reg_addr(adapter, val); + val = reg_vals[rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC].dyn_ctl_reg; + rsrc->noirq_dyn_ctl = idpf_get_reg_addr(adapter, val); val = PF_GLINT_DYN_CTL_WB_ON_ITR_M | PF_GLINT_DYN_CTL_INTENA_MSK_M | FIELD_PREP(PF_GLINT_DYN_CTL_ITR_INDX_M, IDPF_NO_ITR_UPDATE_IDX); - vport->noirq_dyn_ctl_ena = val; + rsrc->noirq_dyn_ctl_ena = val; free_reg_vals: kfree(reg_vals); diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c index 2efa3c08aba5..1d78a621d65b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c +++ 
b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c @@ -18,7 +18,7 @@ static u32 idpf_get_rx_ring_count(struct net_device *netdev) idpf_vport_ctrl_lock(netdev); vport = idpf_netdev_to_vport(netdev); - num_rxq = vport->num_rxq; + num_rxq = vport->dflt_qv_rsrc.num_rxq; idpf_vport_ctrl_unlock(netdev); return num_rxq; @@ -503,7 +503,7 @@ static int idpf_set_rxfh(struct net_device *netdev, } if (test_bit(IDPF_VPORT_UP, np->state)) - err = idpf_config_rss(vport); + err = idpf_config_rss(vport, rss_data); unlock_mutex: idpf_vport_ctrl_unlock(netdev); @@ -644,8 +644,8 @@ static void idpf_get_ringparam(struct net_device *netdev, ring->rx_max_pending = IDPF_MAX_RXQ_DESC; ring->tx_max_pending = IDPF_MAX_TXQ_DESC; - ring->rx_pending = vport->rxq_desc_count; - ring->tx_pending = vport->txq_desc_count; + ring->rx_pending = vport->dflt_qv_rsrc.rxq_desc_count; + ring->tx_pending = vport->dflt_qv_rsrc.txq_desc_count; kring->tcp_data_split = idpf_vport_get_hsplit(vport); @@ -669,8 +669,9 @@ static int idpf_set_ringparam(struct net_device *netdev, { struct idpf_vport_user_config_data *config_data; u32 new_rx_count, new_tx_count; + struct idpf_q_vec_rsrc *rsrc; struct idpf_vport *vport; - int i, err = 0; + int err = 0; u16 idx; idpf_vport_ctrl_lock(netdev); @@ -704,8 +705,9 @@ static int idpf_set_ringparam(struct net_device *netdev, netdev_info(netdev, "Requested Tx descriptor count rounded up to %u\n", new_tx_count); - if (new_tx_count == vport->txq_desc_count && - new_rx_count == vport->rxq_desc_count && + rsrc = &vport->dflt_qv_rsrc; + if (new_tx_count == rsrc->txq_desc_count && + new_rx_count == rsrc->rxq_desc_count && kring->tcp_data_split == idpf_vport_get_hsplit(vport)) goto unlock_mutex; @@ -724,10 +726,10 @@ static int idpf_set_ringparam(struct net_device *netdev, /* Since we adjusted the RX completion queue count, the RX buffer queue * descriptor count needs to be adjusted as well */ - for (i = 0; i < vport->num_bufqs_per_qgrp; i++) - vport->bufq_desc_count[i] = + for (unsigned int i = 0; i < rsrc->num_bufqs_per_qgrp; i++) + rsrc->bufq_desc_count[i] = IDPF_RX_BUFQ_DESC_COUNT(new_rx_count, - vport->num_bufqs_per_qgrp); + rsrc->num_bufqs_per_qgrp); err = idpf_initiate_soft_reset(vport, IDPF_SR_Q_DESC_CHANGE); @@ -1104,7 +1106,7 @@ static void idpf_add_port_stats(struct idpf_vport *vport, u64 **data) static void idpf_collect_queue_stats(struct idpf_vport *vport) { struct idpf_port_stats *pstats = &vport->port_stats; - int i, j; + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; /* zero out port stats since they're actually tracked in per * queue stats; this is only for reporting @@ -1120,22 +1122,22 @@ static void idpf_collect_queue_stats(struct idpf_vport *vport) u64_stats_set(&pstats->tx_dma_map_errs, 0); u64_stats_update_end(&pstats->stats_sync); - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rxq_grp = &vport->rxq_grps[i]; + for (unsigned int i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rxq_grp = &rsrc->rxq_grps[i]; u16 num_rxq; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) num_rxq = rxq_grp->splitq.num_rxq_sets; else num_rxq = rxq_grp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++) { + for (unsigned int j = 0; j < num_rxq; j++) { u64 hw_csum_err, hsplit, hsplit_hbo, bad_descs; struct idpf_rx_queue_stats *stats; struct idpf_rx_queue *rxq; unsigned int start; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) rxq = &rxq_grp->splitq.rxq_sets[j]->rxq; else 
rxq = rxq_grp->singleq.rxqs[j]; @@ -1162,10 +1164,10 @@ static void idpf_collect_queue_stats(struct idpf_vport *vport) } } - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *txq_grp = &vport->txq_grps[i]; + for (unsigned int i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i]; - for (j = 0; j < txq_grp->num_txq; j++) { + for (unsigned int j = 0; j < txq_grp->num_txq; j++) { u64 linearize, qbusy, skb_drops, dma_map_errs; struct idpf_tx_queue *txq = txq_grp->txqs[j]; struct idpf_tx_queue_stats *stats; @@ -1208,9 +1210,9 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, { struct idpf_netdev_priv *np = netdev_priv(netdev); struct idpf_vport_config *vport_config; + struct idpf_q_vec_rsrc *rsrc; struct idpf_vport *vport; unsigned int total = 0; - unsigned int i, j; bool is_splitq; u16 qtype; @@ -1228,12 +1230,13 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, idpf_collect_queue_stats(vport); idpf_add_port_stats(vport, &data); - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *txq_grp = &vport->txq_grps[i]; + rsrc = &vport->dflt_qv_rsrc; + for (unsigned int i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i]; qtype = VIRTCHNL2_QUEUE_TYPE_TX; - for (j = 0; j < txq_grp->num_txq; j++, total++) { + for (unsigned int j = 0; j < txq_grp->num_txq; j++, total++) { struct idpf_tx_queue *txq = txq_grp->txqs[j]; if (!txq) @@ -1253,10 +1256,10 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, idpf_add_empty_queue_stats(&data, VIRTCHNL2_QUEUE_TYPE_TX); total = 0; - is_splitq = idpf_is_queue_model_split(vport->rxq_model); + is_splitq = idpf_is_queue_model_split(rsrc->rxq_model); - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rxq_grp = &vport->rxq_grps[i]; + for (unsigned int i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rxq_grp = &rsrc->rxq_grps[i]; u16 num_rxq; qtype = VIRTCHNL2_QUEUE_TYPE_RX; @@ -1266,7 +1269,7 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, else num_rxq = rxq_grp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++, total++) { + for (unsigned int j = 0; j < num_rxq; j++, total++) { struct idpf_rx_queue *rxq; if (is_splitq) @@ -1298,15 +1301,16 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, struct idpf_q_vector *idpf_find_rxq_vec(const struct idpf_vport *vport, u32 q_num) { + const struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; int q_grp, q_idx; - if (!idpf_is_queue_model_split(vport->rxq_model)) - return vport->rxq_grps->singleq.rxqs[q_num]->q_vector; + if (!idpf_is_queue_model_split(rsrc->rxq_model)) + return rsrc->rxq_grps->singleq.rxqs[q_num]->q_vector; q_grp = q_num / IDPF_DFLT_SPLITQ_RXQ_PER_GROUP; q_idx = q_num % IDPF_DFLT_SPLITQ_RXQ_PER_GROUP; - return vport->rxq_grps[q_grp].splitq.rxq_sets[q_idx]->rxq.q_vector; + return rsrc->rxq_grps[q_grp].splitq.rxq_sets[q_idx]->rxq.q_vector; } /** @@ -1319,14 +1323,15 @@ struct idpf_q_vector *idpf_find_rxq_vec(const struct idpf_vport *vport, struct idpf_q_vector *idpf_find_txq_vec(const struct idpf_vport *vport, u32 q_num) { + const struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; int q_grp; - if (!idpf_is_queue_model_split(vport->txq_model)) + if (!idpf_is_queue_model_split(rsrc->txq_model)) return vport->txqs[q_num]->q_vector; q_grp = q_num / IDPF_DFLT_SPLITQ_TXQ_PER_GROUP; - return vport->txq_grps[q_grp].complq->q_vector; + return rsrc->txq_grps[q_grp].complq->q_vector; } /** @@ -1363,7 +1368,8 @@ static 
int idpf_get_q_coalesce(struct net_device *netdev, u32 q_num) { const struct idpf_netdev_priv *np = netdev_priv(netdev); - const struct idpf_vport *vport; + struct idpf_q_vec_rsrc *rsrc; + struct idpf_vport *vport; int err = 0; idpf_vport_ctrl_lock(netdev); @@ -1372,16 +1378,17 @@ static int idpf_get_q_coalesce(struct net_device *netdev, if (!test_bit(IDPF_VPORT_UP, np->state)) goto unlock_mutex; - if (q_num >= vport->num_rxq && q_num >= vport->num_txq) { + rsrc = &vport->dflt_qv_rsrc; + if (q_num >= rsrc->num_rxq && q_num >= rsrc->num_txq) { err = -EINVAL; goto unlock_mutex; } - if (q_num < vport->num_rxq) + if (q_num < rsrc->num_rxq) __idpf_get_q_coalesce(ec, idpf_find_rxq_vec(vport, q_num), VIRTCHNL2_QUEUE_TYPE_RX); - if (q_num < vport->num_txq) + if (q_num < rsrc->num_txq) __idpf_get_q_coalesce(ec, idpf_find_txq_vec(vport, q_num), VIRTCHNL2_QUEUE_TYPE_TX); @@ -1549,8 +1556,9 @@ static int idpf_set_coalesce(struct net_device *netdev, struct idpf_netdev_priv *np = netdev_priv(netdev); struct idpf_vport_user_config_data *user_config; struct idpf_q_coalesce *q_coal; + struct idpf_q_vec_rsrc *rsrc; struct idpf_vport *vport; - int i, err = 0; + int err = 0; user_config = &np->adapter->vport_config[np->vport_idx]->user_config; @@ -1560,14 +1568,15 @@ static int idpf_set_coalesce(struct net_device *netdev, if (!test_bit(IDPF_VPORT_UP, np->state)) goto unlock_mutex; - for (i = 0; i < vport->num_txq; i++) { + rsrc = &vport->dflt_qv_rsrc; + for (unsigned int i = 0; i < rsrc->num_txq; i++) { q_coal = &user_config->q_coalesce[i]; err = idpf_set_q_coalesce(vport, q_coal, ec, i, false); if (err) goto unlock_mutex; } - for (i = 0; i < vport->num_rxq; i++) { + for (unsigned int i = 0; i < rsrc->num_rxq; i++) { q_coal = &user_config->q_coalesce[i]; err = idpf_set_q_coalesce(vport, q_coal, ec, i, true); if (err) @@ -1748,6 +1757,7 @@ static void idpf_get_ts_stats(struct net_device *netdev, struct ethtool_ts_stats *ts_stats) { struct idpf_netdev_priv *np = netdev_priv(netdev); + struct idpf_q_vec_rsrc *rsrc; struct idpf_vport *vport; unsigned int start; @@ -1763,8 +1773,9 @@ static void idpf_get_ts_stats(struct net_device *netdev, if (!test_bit(IDPF_VPORT_UP, np->state)) goto exit; - for (u16 i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *txq_grp = &vport->txq_grps[i]; + rsrc = &vport->dflt_qv_rsrc; + for (u16 i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i]; for (u16 j = 0; j < txq_grp->num_txq; j++) { struct idpf_tx_queue *txq = txq_grp->txqs[j]; diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index 131a8121839b..94da5fbd56f1 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -545,7 +545,9 @@ static int idpf_del_mac_filter(struct idpf_vport *vport, if (test_bit(IDPF_VPORT_UP, np->state)) { int err; - err = idpf_add_del_mac_filters(vport, np, false, async); + err = idpf_add_del_mac_filters(np->adapter, vport_config, + vport->default_mac_addr, + np->vport_id, false, async); if (err) return err; } @@ -614,7 +616,9 @@ static int idpf_add_mac_filter(struct idpf_vport *vport, return err; if (test_bit(IDPF_VPORT_UP, np->state)) - err = idpf_add_del_mac_filters(vport, np, true, async); + err = idpf_add_del_mac_filters(np->adapter, vport_config, + vport->default_mac_addr, + np->vport_id, true, async); return err; } @@ -662,7 +666,8 @@ static void idpf_restore_mac_filters(struct idpf_vport *vport) 
spin_unlock_bh(&vport_config->mac_filter_list_lock); - idpf_add_del_mac_filters(vport, netdev_priv(vport->netdev), + idpf_add_del_mac_filters(vport->adapter, vport_config, + vport->default_mac_addr, vport->vport_id, true, false); } @@ -686,7 +691,8 @@ static void idpf_remove_mac_filters(struct idpf_vport *vport) spin_unlock_bh(&vport_config->mac_filter_list_lock); - idpf_add_del_mac_filters(vport, netdev_priv(vport->netdev), + idpf_add_del_mac_filters(vport->adapter, vport_config, + vport->default_mac_addr, vport->vport_id, false, false); } @@ -975,6 +981,10 @@ static void idpf_remove_features(struct idpf_vport *vport) static void idpf_vport_stop(struct idpf_vport *vport, bool rtnl) { struct idpf_netdev_priv *np = netdev_priv(vport->netdev); + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; + struct idpf_adapter *adapter = vport->adapter; + struct idpf_queue_id_reg_info *chunks; + u32 vport_id = vport->vport_id; if (!test_bit(IDPF_VPORT_UP, np->state)) return; @@ -985,24 +995,26 @@ static void idpf_vport_stop(struct idpf_vport *vport, bool rtnl) netif_carrier_off(vport->netdev); netif_tx_disable(vport->netdev); - idpf_send_disable_vport_msg(vport); + chunks = &adapter->vport_config[vport->idx]->qid_reg_info; + + idpf_send_disable_vport_msg(adapter, vport_id); idpf_send_disable_queues_msg(vport); - idpf_send_map_unmap_queue_vector_msg(vport, false); + idpf_send_map_unmap_queue_vector_msg(adapter, rsrc, vport_id, false); /* Normally we ask for queues in create_vport, but if the number of * initially requested queues have changed, for example via ethtool * set channels, we do delete queues and then add the queues back * instead of deleting and reallocating the vport. */ if (test_and_clear_bit(IDPF_VPORT_DEL_QUEUES, vport->flags)) - idpf_send_delete_queues_msg(vport); + idpf_send_delete_queues_msg(adapter, chunks, vport_id); idpf_remove_features(vport); vport->link_up = false; - idpf_vport_intr_deinit(vport); - idpf_xdp_rxq_info_deinit_all(vport); - idpf_vport_queues_rel(vport); - idpf_vport_intr_rel(vport); + idpf_vport_intr_deinit(vport, rsrc); + idpf_xdp_rxq_info_deinit_all(rsrc); + idpf_vport_queues_rel(vport, rsrc); + idpf_vport_intr_rel(rsrc); clear_bit(IDPF_VPORT_UP, np->state); if (rtnl) @@ -1046,9 +1058,6 @@ static void idpf_decfg_netdev(struct idpf_vport *vport) struct idpf_adapter *adapter = vport->adapter; u16 idx = vport->idx; - kfree(vport->rx_ptype_lkup); - vport->rx_ptype_lkup = NULL; - if (test_and_clear_bit(IDPF_VPORT_REG_NETDEV, adapter->vport_config[idx]->flags)) { unregister_netdev(vport->netdev); @@ -1065,6 +1074,7 @@ static void idpf_decfg_netdev(struct idpf_vport *vport) */ static void idpf_vport_rel(struct idpf_vport *vport) { + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; struct idpf_adapter *adapter = vport->adapter; struct idpf_vport_config *vport_config; struct idpf_vector_info vec_info; @@ -1073,12 +1083,12 @@ static void idpf_vport_rel(struct idpf_vport *vport) u16 idx = vport->idx; vport_config = adapter->vport_config[vport->idx]; - idpf_deinit_rss_lut(vport); rss_data = &vport_config->user_config.rss_data; + idpf_deinit_rss_lut(rss_data); kfree(rss_data->rss_key); rss_data->rss_key = NULL; - idpf_send_destroy_vport_msg(vport); + idpf_send_destroy_vport_msg(adapter, vport->vport_id); /* Release all max queues allocated to the adapter's pool */ max_q.max_rxq = vport_config->max_q.max_rxq; @@ -1089,24 +1099,21 @@ static void idpf_vport_rel(struct idpf_vport *vport) /* Release all the allocated vectors on the stack */ vec_info.num_req_vecs = 0; - 
vec_info.num_curr_vecs = vport->num_q_vectors; + vec_info.num_curr_vecs = rsrc->num_q_vectors; vec_info.default_vport = vport->default_vport; - idpf_req_rel_vector_indexes(adapter, vport->q_vector_idxs, &vec_info); + idpf_req_rel_vector_indexes(adapter, rsrc->q_vector_idxs, &vec_info); - kfree(vport->q_vector_idxs); - vport->q_vector_idxs = NULL; + kfree(rsrc->q_vector_idxs); + rsrc->q_vector_idxs = NULL; + + idpf_vport_deinit_queue_reg_chunks(vport_config); kfree(adapter->vport_params_recvd[idx]); adapter->vport_params_recvd[idx] = NULL; kfree(adapter->vport_params_reqd[idx]); adapter->vport_params_reqd[idx] = NULL; - if (adapter->vport_config[idx]) { - kfree(adapter->vport_config[idx]->req_qs_chunks); - adapter->vport_config[idx]->req_qs_chunks = NULL; - } - kfree(vport->rx_ptype_lkup); - vport->rx_ptype_lkup = NULL; + kfree(vport); adapter->num_alloc_vports--; } @@ -1155,7 +1162,7 @@ static void idpf_vport_dealloc(struct idpf_vport *vport) */ static bool idpf_is_hsplit_supported(const struct idpf_vport *vport) { - return idpf_is_queue_model_split(vport->rxq_model) && + return idpf_is_queue_model_split(vport->dflt_qv_rsrc.rxq_model) && idpf_is_cap_ena_all(vport->adapter, IDPF_HSPLIT_CAPS, IDPF_CAP_HSPLIT); } @@ -1224,6 +1231,7 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter, { struct idpf_rss_data *rss_data; u16 idx = adapter->next_vport; + struct idpf_q_vec_rsrc *rsrc; struct idpf_vport *vport; u16 num_max_q; int err; @@ -1271,11 +1279,15 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter, vport->default_vport = adapter->num_alloc_vports < idpf_get_default_vports(adapter); - vport->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL); - if (!vport->q_vector_idxs) + rsrc = &vport->dflt_qv_rsrc; + rsrc->dev = &adapter->pdev->dev; + rsrc->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL); + if (!rsrc->q_vector_idxs) goto free_vport; - idpf_vport_init(vport, max_q); + err = idpf_vport_init(vport, max_q); + if (err) + goto free_vector_idxs; /* LUT and key are both initialized here. Key is not strictly dependent * on how many queues we have. 
If we change number of queues and soft @@ -1286,13 +1298,13 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter, rss_data = &adapter->vport_config[idx]->user_config.rss_data; rss_data->rss_key = kzalloc(rss_data->rss_key_size, GFP_KERNEL); if (!rss_data->rss_key) - goto free_vector_idxs; + goto free_qreg_chunks; - /* Initialize default rss key */ + /* Initialize default RSS key */ netdev_rss_key_fill((void *)rss_data->rss_key, rss_data->rss_key_size); - /* Initialize default rss LUT */ - err = idpf_init_rss_lut(vport); + /* Initialize default RSS LUT */ + err = idpf_init_rss_lut(vport, rss_data); if (err) goto free_rss_key; @@ -1308,8 +1320,10 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter, free_rss_key: kfree(rss_data->rss_key); +free_qreg_chunks: + idpf_vport_deinit_queue_reg_chunks(adapter->vport_config[idx]); free_vector_idxs: - kfree(vport->q_vector_idxs); + kfree(rsrc->q_vector_idxs); free_vport: kfree(vport); @@ -1346,7 +1360,8 @@ void idpf_statistics_task(struct work_struct *work) struct idpf_vport *vport = adapter->vports[i]; if (vport && !test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags)) - idpf_send_get_stats_msg(vport); + idpf_send_get_stats_msg(netdev_priv(vport->netdev), + &vport->port_stats); } queue_delayed_work(adapter->stats_wq, &adapter->stats_task, @@ -1369,7 +1384,7 @@ void idpf_mbx_task(struct work_struct *work) queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task, usecs_to_jiffies(300)); - idpf_recv_mb_msg(adapter); + idpf_recv_mb_msg(adapter, adapter->hw.arq); } /** @@ -1417,9 +1432,10 @@ static void idpf_restore_features(struct idpf_vport *vport) */ static int idpf_set_real_num_queues(struct idpf_vport *vport) { - int err, txq = vport->num_txq - vport->num_xdp_txq; + int err, txq = vport->dflt_qv_rsrc.num_txq - vport->num_xdp_txq; - err = netif_set_real_num_rx_queues(vport->netdev, vport->num_rxq); + err = netif_set_real_num_rx_queues(vport->netdev, + vport->dflt_qv_rsrc.num_rxq); if (err) return err; @@ -1429,10 +1445,8 @@ static int idpf_set_real_num_queues(struct idpf_vport *vport) /** * idpf_up_complete - Complete interface up sequence * @vport: virtual port structure - * - * Returns 0 on success, negative on failure. 
*/ -static int idpf_up_complete(struct idpf_vport *vport) +static void idpf_up_complete(struct idpf_vport *vport) { struct idpf_netdev_priv *np = netdev_priv(vport->netdev); @@ -1442,30 +1456,26 @@ static int idpf_up_complete(struct idpf_vport *vport) } set_bit(IDPF_VPORT_UP, np->state); - - return 0; } /** * idpf_rx_init_buf_tail - Write initial buffer ring tail value - * @vport: virtual port struct + * @rsrc: pointer to queue and vector resources */ -static void idpf_rx_init_buf_tail(struct idpf_vport *vport) +static void idpf_rx_init_buf_tail(struct idpf_q_vec_rsrc *rsrc) { - int i, j; + for (unsigned int i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *grp = &rsrc->rxq_grps[i]; - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *grp = &vport->rxq_grps[i]; - - if (idpf_is_queue_model_split(vport->rxq_model)) { - for (j = 0; j < vport->num_bufqs_per_qgrp; j++) { + if (idpf_is_queue_model_split(rsrc->rxq_model)) { + for (unsigned int j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { const struct idpf_buf_queue *q = &grp->splitq.bufq_sets[j].bufq; writel(q->next_to_alloc, q->tail); } } else { - for (j = 0; j < grp->singleq.num_rxq; j++) { + for (unsigned int j = 0; j < grp->singleq.num_rxq; j++) { const struct idpf_rx_queue *q = grp->singleq.rxqs[j]; @@ -1483,7 +1493,12 @@ static void idpf_rx_init_buf_tail(struct idpf_vport *vport) static int idpf_vport_open(struct idpf_vport *vport, bool rtnl) { struct idpf_netdev_priv *np = netdev_priv(vport->netdev); + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; struct idpf_adapter *adapter = vport->adapter; + struct idpf_vport_config *vport_config; + struct idpf_queue_id_reg_info *chunks; + struct idpf_rss_data *rss_data; + u32 vport_id = vport->vport_id; int err; if (test_bit(IDPF_VPORT_UP, np->state)) @@ -1495,48 +1510,51 @@ static int idpf_vport_open(struct idpf_vport *vport, bool rtnl) /* we do not allow interface up just yet */ netif_carrier_off(vport->netdev); - err = idpf_vport_intr_alloc(vport); + err = idpf_vport_intr_alloc(vport, rsrc); if (err) { dev_err(&adapter->pdev->dev, "Failed to allocate interrupts for vport %u: %d\n", vport->vport_id, err); goto err_rtnl_unlock; } - err = idpf_vport_queues_alloc(vport); + err = idpf_vport_queues_alloc(vport, rsrc); if (err) goto intr_rel; - err = idpf_vport_queue_ids_init(vport); + vport_config = adapter->vport_config[vport->idx]; + chunks = &vport_config->qid_reg_info; + + err = idpf_vport_queue_ids_init(vport, rsrc, chunks); if (err) { dev_err(&adapter->pdev->dev, "Failed to initialize queue ids for vport %u: %d\n", vport->vport_id, err); goto queues_rel; } - err = idpf_vport_intr_init(vport); + err = idpf_vport_intr_init(vport, rsrc); if (err) { dev_err(&adapter->pdev->dev, "Failed to initialize interrupts for vport %u: %d\n", vport->vport_id, err); goto queues_rel; } - err = idpf_queue_reg_init(vport); + err = idpf_queue_reg_init(vport, rsrc, chunks); if (err) { dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for vport %u: %d\n", vport->vport_id, err); goto intr_deinit; } - err = idpf_rx_bufs_init_all(vport); + err = idpf_rx_bufs_init_all(vport, rsrc); if (err) { dev_err(&adapter->pdev->dev, "Failed to initialize RX buffers for vport %u: %d\n", vport->vport_id, err); goto intr_deinit; } - idpf_rx_init_buf_tail(vport); + idpf_rx_init_buf_tail(rsrc); - err = idpf_xdp_rxq_info_init_all(vport); + err = idpf_xdp_rxq_info_init_all(rsrc); if (err) { netdev_err(vport->netdev, "Failed to initialize XDP RxQ info for vport %u: %pe\n", @@ -1544,16 +1562,17 
@@ static int idpf_vport_open(struct idpf_vport *vport, bool rtnl) goto intr_deinit; } - idpf_vport_intr_ena(vport); + idpf_vport_intr_ena(vport, rsrc); - err = idpf_send_config_queues_msg(vport); + err = idpf_send_config_queues_msg(adapter, rsrc, vport_id); if (err) { dev_err(&adapter->pdev->dev, "Failed to configure queues for vport %u, %d\n", vport->vport_id, err); goto rxq_deinit; } - err = idpf_send_map_unmap_queue_vector_msg(vport, true); + err = idpf_send_map_unmap_queue_vector_msg(adapter, rsrc, vport_id, + true); if (err) { dev_err(&adapter->pdev->dev, "Failed to map queue vectors for vport %u: %d\n", vport->vport_id, err); @@ -1567,7 +1586,7 @@ static int idpf_vport_open(struct idpf_vport *vport, bool rtnl) goto unmap_queue_vectors; } - err = idpf_send_enable_vport_msg(vport); + err = idpf_send_enable_vport_msg(adapter, vport_id); if (err) { dev_err(&adapter->pdev->dev, "Failed to enable vport %u: %d\n", vport->vport_id, err); @@ -1577,19 +1596,15 @@ static int idpf_vport_open(struct idpf_vport *vport, bool rtnl) idpf_restore_features(vport); - err = idpf_config_rss(vport); + rss_data = &vport_config->user_config.rss_data; + err = idpf_config_rss(vport, rss_data); if (err) { dev_err(&adapter->pdev->dev, "Failed to configure RSS for vport %u: %d\n", vport->vport_id, err); goto disable_vport; } - err = idpf_up_complete(vport); - if (err) { - dev_err(&adapter->pdev->dev, "Failed to complete interface up for vport %u: %d\n", - vport->vport_id, err); - goto disable_vport; - } + idpf_up_complete(vport); if (rtnl) rtnl_unlock(); @@ -1597,19 +1612,19 @@ static int idpf_vport_open(struct idpf_vport *vport, bool rtnl) return 0; disable_vport: - idpf_send_disable_vport_msg(vport); + idpf_send_disable_vport_msg(adapter, vport_id); disable_queues: idpf_send_disable_queues_msg(vport); unmap_queue_vectors: - idpf_send_map_unmap_queue_vector_msg(vport, false); + idpf_send_map_unmap_queue_vector_msg(adapter, rsrc, vport_id, false); rxq_deinit: - idpf_xdp_rxq_info_deinit_all(vport); + idpf_xdp_rxq_info_deinit_all(rsrc); intr_deinit: - idpf_vport_intr_deinit(vport); + idpf_vport_intr_deinit(vport, rsrc); queues_rel: - idpf_vport_queues_rel(vport); + idpf_vport_queues_rel(vport, rsrc); intr_rel: - idpf_vport_intr_rel(vport); + idpf_vport_intr_rel(rsrc); err_rtnl_unlock: if (rtnl) @@ -1667,10 +1682,6 @@ void idpf_init_task(struct work_struct *work) goto unwind_vports; } - err = idpf_send_get_rx_ptype_msg(vport); - if (err) - goto unwind_vports; - index = vport->idx; vport_config = adapter->vport_config[index]; @@ -1996,9 +2007,13 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, { struct idpf_netdev_priv *np = netdev_priv(vport->netdev); bool vport_is_up = test_bit(IDPF_VPORT_UP, np->state); + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; struct idpf_adapter *adapter = vport->adapter; + struct idpf_vport_config *vport_config; + struct idpf_q_vec_rsrc *new_rsrc; + u32 vport_id = vport->vport_id; struct idpf_vport *new_vport; - int err; + int err, tmp_err = 0; /* If the system is low on memory, we can end up in bad state if we * free all the memory for queue resources and try to allocate them @@ -2023,16 +2038,18 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, */ memcpy(new_vport, vport, offsetof(struct idpf_vport, link_up)); + new_rsrc = &new_vport->dflt_qv_rsrc; + /* Adjust resource parameters prior to reallocating resources */ switch (reset_cause) { case IDPF_SR_Q_CHANGE: - err = idpf_vport_adjust_qs(new_vport); + err = idpf_vport_adjust_qs(new_vport, new_rsrc); if (err) 
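The reworked idpf_vport_open() keeps the usual kernel error-unwind idiom: each setup step gains a label that undoes everything set up after that point, and the labels run in reverse setup order. A generic sketch of the shape, with hypothetical step names:

        static int open_sketch(void)
        {
                int err;

                err = setup_a();
                if (err)
                        return err;

                err = setup_b();
                if (err)
                        goto undo_a;

                err = setup_c();
                if (err)
                        goto undo_b;

                return 0;

        undo_b:                 /* labels run in reverse setup order */
                teardown_b();
        undo_a:
                teardown_a();
                return err;
        }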
goto free_vport; break; case IDPF_SR_Q_DESC_CHANGE: /* Update queue parameters before allocating resources */ - idpf_vport_calc_num_q_desc(new_vport); + idpf_vport_calc_num_q_desc(new_vport, new_rsrc); break; case IDPF_SR_MTU_CHANGE: idpf_idc_vdev_mtu_event(vport->vdev_info, @@ -2046,41 +2063,40 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, goto free_vport; } + vport_config = adapter->vport_config[vport->idx]; + if (!vport_is_up) { - idpf_send_delete_queues_msg(vport); + idpf_send_delete_queues_msg(adapter, &vport_config->qid_reg_info, + vport_id); } else { set_bit(IDPF_VPORT_DEL_QUEUES, vport->flags); idpf_vport_stop(vport, false); } - /* We're passing in vport here because we need its wait_queue - * to send a message and it should be getting all the vport - * config data out of the adapter but we need to be careful not - * to add code to add_queues to change the vport config within - * vport itself as it will be wiped with a memcpy later. - */ - err = idpf_send_add_queues_msg(vport, new_vport->num_txq, - new_vport->num_complq, - new_vport->num_rxq, - new_vport->num_bufq); + err = idpf_send_add_queues_msg(adapter, vport_config, new_rsrc, + vport_id); if (err) goto err_reset; - /* Same comment as above regarding avoiding copying the wait_queues and - * mutexes applies here. We do not want to mess with those if possible. + /* Avoid copying the wait_queues and mutexes. We do not want to mess + * with those if possible. */ memcpy(vport, new_vport, offsetof(struct idpf_vport, link_up)); if (reset_cause == IDPF_SR_Q_CHANGE) - idpf_vport_alloc_vec_indexes(vport); + idpf_vport_alloc_vec_indexes(vport, &vport->dflt_qv_rsrc); err = idpf_set_real_num_queues(vport); if (err) goto err_open; if (reset_cause == IDPF_SR_Q_CHANGE && - !netif_is_rxfh_configured(vport->netdev)) - idpf_fill_dflt_rss_lut(vport); + !netif_is_rxfh_configured(vport->netdev)) { + struct idpf_rss_data *rss_data; + + rss_data = &vport_config->user_config.rss_data; + idpf_fill_dflt_rss_lut(vport, rss_data); + } if (vport_is_up) err = idpf_vport_open(vport, false); @@ -2088,11 +2104,11 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, goto free_vport; err_reset: - idpf_send_add_queues_msg(vport, vport->num_txq, vport->num_complq, - vport->num_rxq, vport->num_bufq); + tmp_err = idpf_send_add_queues_msg(adapter, vport_config, rsrc, + vport_id); err_open: - if (vport_is_up) + if (!tmp_err && vport_is_up) idpf_vport_open(vport, false); free_vport: @@ -2258,7 +2274,12 @@ static int idpf_set_features(struct net_device *netdev, * the HW when the interface is brought up. 
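The soft-reset error path gains a guard: restoring the pre-reset queue layout can itself fail, and reopening the vport on a half-restored configuration would be worse than leaving it down. Condensed from the hunk above:

        err_reset:
                /* try to re-add the original queues */
                tmp_err = idpf_send_add_queues_msg(adapter, vport_config, rsrc,
                                                   vport_id);
        err_open:
                /* reopen only if the vport was up and the restore succeeded */
                if (!tmp_err && vport_is_up)
                        idpf_vport_open(vport, false);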
*/ if (test_bit(IDPF_VPORT_UP, np->state)) { - err = idpf_config_rss(vport); + struct idpf_vport_config *vport_config; + struct idpf_rss_data *rss_data; + + vport_config = adapter->vport_config[vport->idx]; + rss_data = &vport_config->user_config.rss_data; + err = idpf_config_rss(vport, rss_data); if (err) goto unlock_mutex; } @@ -2272,8 +2293,13 @@ static int idpf_set_features(struct net_device *netdev, } if (changed & NETIF_F_LOOPBACK) { + bool loopback_ena; + netdev->features ^= NETIF_F_LOOPBACK; - err = idpf_send_ena_dis_loopback_msg(vport); + loopback_ena = idpf_is_feature_ena(vport, NETIF_F_LOOPBACK); + + err = idpf_send_ena_dis_loopback_msg(adapter, vport->vport_id, + loopback_ena); } unlock_mutex: diff --git a/drivers/net/ethernet/intel/idpf/idpf_ptp.c b/drivers/net/ethernet/intel/idpf/idpf_ptp.c index 0a8b50350b86..4a805a9541f0 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_ptp.c +++ b/drivers/net/ethernet/intel/idpf/idpf_ptp.c @@ -384,15 +384,17 @@ static int idpf_ptp_update_cached_phctime(struct idpf_adapter *adapter) WRITE_ONCE(adapter->ptp->cached_phc_jiffies, jiffies); idpf_for_each_vport(adapter, vport) { + struct idpf_q_vec_rsrc *rsrc; bool split; - if (!vport || !vport->rxq_grps) + if (!vport || !vport->dflt_qv_rsrc.rxq_grps) continue; - split = idpf_is_queue_model_split(vport->rxq_model); + rsrc = &vport->dflt_qv_rsrc; + split = idpf_is_queue_model_split(rsrc->rxq_model); - for (u16 i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *grp = &vport->rxq_grps[i]; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *grp = &rsrc->rxq_grps[i]; idpf_ptp_update_phctime_rxq_grp(grp, split, systime); } @@ -681,9 +683,10 @@ int idpf_ptp_request_ts(struct idpf_tx_queue *tx_q, struct sk_buff *skb, */ static void idpf_ptp_set_rx_tstamp(struct idpf_vport *vport, int rx_filter) { + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; bool enable = true, splitq; - splitq = idpf_is_queue_model_split(vport->rxq_model); + splitq = idpf_is_queue_model_split(rsrc->rxq_model); if (rx_filter == HWTSTAMP_FILTER_NONE) { enable = false; @@ -692,8 +695,8 @@ static void idpf_ptp_set_rx_tstamp(struct idpf_vport *vport, int rx_filter) vport->tstamp_config.rx_filter = HWTSTAMP_FILTER_ALL; } - for (u16 i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *grp = &vport->rxq_grps[i]; + for (u16 i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *grp = &rsrc->rxq_grps[i]; struct idpf_rx_queue *rx_queue; u16 j, num_rxq; diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index f58f616d87fc..376050308b06 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -19,6 +19,8 @@ LIBETH_SQE_CHECK_PRIV(u32); * Make sure we don't exceed maximum scatter gather buffers for a single * packet. * TSO case has been handled earlier from idpf_features_check(). + * + * Return: %true if skb exceeds max descriptors per packet, %false otherwise. 
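For the NETIF_F_LOOPBACK toggle, the requested state is now computed in the caller and passed explicitly, rather than having the virtchnl helper read it back out of the vport. Assuming idpf_is_feature_ena() reduces to a feature-bit test on the netdev, the idiom is:

        netdev->features ^= NETIF_F_LOOPBACK;  /* flip to the new state */
        loopback_ena = !!(netdev->features & NETIF_F_LOOPBACK);
        /* tell the control plane the explicit on/off state */
        err = idpf_send_ena_dis_loopback_msg(adapter, vport->vport_id,
                                             loopback_ena);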
*/ static bool idpf_chk_linearize(const struct sk_buff *skb, unsigned int max_bufs, @@ -146,24 +148,22 @@ static void idpf_compl_desc_rel(struct idpf_compl_queue *complq) /** * idpf_tx_desc_rel_all - Free Tx Resources for All Queues - * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources * * Free all transmit software resources */ -static void idpf_tx_desc_rel_all(struct idpf_vport *vport) +static void idpf_tx_desc_rel_all(struct idpf_q_vec_rsrc *rsrc) { - int i, j; - - if (!vport->txq_grps) + if (!rsrc->txq_grps) return; - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *txq_grp = &vport->txq_grps[i]; + for (unsigned int i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i]; - for (j = 0; j < txq_grp->num_txq; j++) + for (unsigned int j = 0; j < txq_grp->num_txq; j++) idpf_tx_desc_rel(txq_grp->txqs[j]); - if (idpf_is_queue_model_split(vport->txq_model)) + if (idpf_is_queue_model_split(rsrc->txq_model)) idpf_compl_desc_rel(txq_grp->complq); } } @@ -172,7 +172,7 @@ static void idpf_tx_desc_rel_all(struct idpf_vport *vport) * idpf_tx_buf_alloc_all - Allocate memory for all buffer resources * @tx_q: queue for which the buffers are allocated * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ static int idpf_tx_buf_alloc_all(struct idpf_tx_queue *tx_q) { @@ -196,7 +196,7 @@ static int idpf_tx_buf_alloc_all(struct idpf_tx_queue *tx_q) * @vport: vport to allocate resources for * @tx_q: the tx ring to set up * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ static int idpf_tx_desc_alloc(const struct idpf_vport *vport, struct idpf_tx_queue *tx_q) @@ -263,7 +263,7 @@ err_alloc: /** * idpf_compl_desc_alloc - allocate completion descriptors - * @vport: vport to allocate resources for + * @vport: virtual port private structure * @complq: completion queue to set up * * Return: 0 on success, -errno on failure. @@ -296,20 +296,21 @@ static int idpf_compl_desc_alloc(const struct idpf_vport *vport, /** * idpf_tx_desc_alloc_all - allocate all queues Tx resources * @vport: virtual port private structure + * @rsrc: pointer to queue and vector resources * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ -static int idpf_tx_desc_alloc_all(struct idpf_vport *vport) +static int idpf_tx_desc_alloc_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { int err = 0; - int i, j; /* Setup buffer queues. 
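A large share of the idpf_txrx.c hunks are comment-only, converting free-form "Returns ..." text into the kernel-doc "Return:" section that scripts/kernel-doc recognizes; %true and %false render as constants in the generated documentation. The target shape, as a standalone illustrative example:

        /**
         * example_alloc - allocate an example object (illustrative only)
         * @count: number of elements
         *
         * Return: 0 on success, negative errno on failure.
         */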
In single queue model buffer queues and * completion queues will be same */ - for (i = 0; i < vport->num_txq_grp; i++) { - for (j = 0; j < vport->txq_grps[i].num_txq; j++) { - struct idpf_tx_queue *txq = vport->txq_grps[i].txqs[j]; + for (unsigned int i = 0; i < rsrc->num_txq_grp; i++) { + for (unsigned int j = 0; j < rsrc->txq_grps[i].num_txq; j++) { + struct idpf_tx_queue *txq = rsrc->txq_grps[i].txqs[j]; err = idpf_tx_desc_alloc(vport, txq); if (err) { @@ -320,11 +321,11 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport) } } - if (!idpf_is_queue_model_split(vport->txq_model)) + if (!idpf_is_queue_model_split(rsrc->txq_model)) continue; /* Setup completion queues */ - err = idpf_compl_desc_alloc(vport, vport->txq_grps[i].complq); + err = idpf_compl_desc_alloc(vport, rsrc->txq_grps[i].complq); if (err) { pci_err(vport->adapter->pdev, "Allocation for Tx Completion Queue %u failed\n", @@ -335,7 +336,7 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport) err_out: if (err) - idpf_tx_desc_rel_all(vport); + idpf_tx_desc_rel_all(rsrc); return err; } @@ -488,38 +489,38 @@ static void idpf_rx_desc_rel_bufq(struct idpf_buf_queue *bufq, /** * idpf_rx_desc_rel_all - Free Rx Resources for All Queues * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources * * Free all rx queues resources */ -static void idpf_rx_desc_rel_all(struct idpf_vport *vport) +static void idpf_rx_desc_rel_all(struct idpf_q_vec_rsrc *rsrc) { - struct device *dev = &vport->adapter->pdev->dev; + struct device *dev = rsrc->dev; struct idpf_rxq_group *rx_qgrp; u16 num_rxq; - int i, j; - if (!vport->rxq_grps) + if (!rsrc->rxq_grps) return; - for (i = 0; i < vport->num_rxq_grp; i++) { - rx_qgrp = &vport->rxq_grps[i]; + for (unsigned int i = 0; i < rsrc->num_rxq_grp; i++) { + rx_qgrp = &rsrc->rxq_grps[i]; - if (!idpf_is_queue_model_split(vport->rxq_model)) { - for (j = 0; j < rx_qgrp->singleq.num_rxq; j++) + if (!idpf_is_queue_model_split(rsrc->rxq_model)) { + for (unsigned int j = 0; j < rx_qgrp->singleq.num_rxq; j++) idpf_rx_desc_rel(rx_qgrp->singleq.rxqs[j], dev, VIRTCHNL2_QUEUE_MODEL_SINGLE); continue; } num_rxq = rx_qgrp->splitq.num_rxq_sets; - for (j = 0; j < num_rxq; j++) + for (unsigned int j = 0; j < num_rxq; j++) idpf_rx_desc_rel(&rx_qgrp->splitq.rxq_sets[j]->rxq, dev, VIRTCHNL2_QUEUE_MODEL_SPLIT); if (!rx_qgrp->splitq.bufq_sets) continue; - for (j = 0; j < vport->num_bufqs_per_qgrp; j++) { + for (unsigned int j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { struct idpf_bufq_set *bufq_set = &rx_qgrp->splitq.bufq_sets[j]; @@ -548,7 +549,7 @@ static void idpf_rx_buf_hw_update(struct idpf_buf_queue *bufq, u32 val) * idpf_rx_hdr_buf_alloc_all - Allocate memory for header buffers * @bufq: ring to use * - * Returns 0 on success, negative on failure. + * Return: 0 on success, negative on failure. */ static int idpf_rx_hdr_buf_alloc_all(struct idpf_buf_queue *bufq) { @@ -600,7 +601,7 @@ static void idpf_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id) * @bufq: buffer queue to post to * @buf_id: buffer id to post * - * Returns false if buffer could not be allocated, true otherwise. + * Return: %false if buffer could not be allocated, %true otherwise. 
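The other mechanical cleanup throughout is dropping the pre-declared `int i, j;` counters in favor of loop-scoped `unsigned int` indexes, which the kernel permits since its move to the gnu11 dialect and which matches the unsigned counts they are compared against:

        /* before */
        int i;

        for (i = 0; i < rsrc->num_txq_grp; i++)
                do_something(i);

        /* after: index scoped to the loop, unsigned like num_txq_grp */
        for (unsigned int i = 0; i < rsrc->num_txq_grp; i++)
                do_something(i);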
*/ static bool idpf_rx_post_buf_desc(struct idpf_buf_queue *bufq, u16 buf_id) { @@ -649,7 +650,7 @@ static bool idpf_rx_post_buf_desc(struct idpf_buf_queue *bufq, u16 buf_id) * @bufq: buffer queue to post working set to * @working_set: number of buffers to put in working set * - * Returns true if @working_set bufs were posted successfully, false otherwise. + * Return: %true if @working_set bufs were posted successfully, %false otherwise. */ static bool idpf_rx_post_init_bufs(struct idpf_buf_queue *bufq, u16 working_set) @@ -718,7 +719,7 @@ static int idpf_rx_bufs_init_singleq(struct idpf_rx_queue *rxq) * idpf_rx_buf_alloc_all - Allocate memory for all buffer resources * @rxbufq: queue for which the buffers are allocated * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ static int idpf_rx_buf_alloc_all(struct idpf_buf_queue *rxbufq) { @@ -746,7 +747,7 @@ rx_buf_alloc_all_out: * @bufq: buffer queue to create page pool for * @type: type of Rx buffers to allocate * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq, enum libeth_fqe_type type) @@ -779,26 +780,28 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq, /** * idpf_rx_bufs_init_all - Initialize all RX bufs - * @vport: virtual port struct + * @vport: pointer to vport struct + * @rsrc: pointer to queue and vector resources * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ -int idpf_rx_bufs_init_all(struct idpf_vport *vport) +int idpf_rx_bufs_init_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { - bool split = idpf_is_queue_model_split(vport->rxq_model); - int i, j, err; + bool split = idpf_is_queue_model_split(rsrc->rxq_model); + int err; - idpf_xdp_copy_prog_to_rqs(vport, vport->xdp_prog); + idpf_xdp_copy_prog_to_rqs(rsrc, vport->xdp_prog); - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (unsigned int i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u32 truesize = 0; /* Allocate bufs for the rxq itself in singleq */ if (!split) { int num_rxq = rx_qgrp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++) { + for (unsigned int j = 0; j < num_rxq; j++) { struct idpf_rx_queue *q; q = rx_qgrp->singleq.rxqs[j]; @@ -811,7 +814,7 @@ int idpf_rx_bufs_init_all(struct idpf_vport *vport) } /* Otherwise, allocate bufs for the buffer queues */ - for (j = 0; j < vport->num_bufqs_per_qgrp; j++) { + for (unsigned int j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { enum libeth_fqe_type type; struct idpf_buf_queue *q; @@ -836,7 +839,7 @@ int idpf_rx_bufs_init_all(struct idpf_vport *vport) * @vport: vport to allocate resources for * @rxq: Rx queue for which the resources are setup * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ static int idpf_rx_desc_alloc(const struct idpf_vport *vport, struct idpf_rx_queue *rxq) @@ -897,26 +900,28 @@ static int idpf_bufq_desc_alloc(const struct idpf_vport *vport, /** * idpf_rx_desc_alloc_all - allocate all RX queues resources * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ -static int idpf_rx_desc_alloc_all(struct idpf_vport *vport) +static int idpf_rx_desc_alloc_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct 
idpf_rxq_group *rx_qgrp; - int i, j, err; u16 num_rxq; + int err; - for (i = 0; i < vport->num_rxq_grp; i++) { - rx_qgrp = &vport->rxq_grps[i]; - if (idpf_is_queue_model_split(vport->rxq_model)) + for (unsigned int i = 0; i < rsrc->num_rxq_grp; i++) { + rx_qgrp = &rsrc->rxq_grps[i]; + if (idpf_is_queue_model_split(rsrc->rxq_model)) num_rxq = rx_qgrp->splitq.num_rxq_sets; else num_rxq = rx_qgrp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++) { + for (unsigned int j = 0; j < num_rxq; j++) { struct idpf_rx_queue *q; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) q = &rx_qgrp->splitq.rxq_sets[j]->rxq; else q = rx_qgrp->singleq.rxqs[j]; @@ -930,10 +935,10 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport) } } - if (!idpf_is_queue_model_split(vport->rxq_model)) + if (!idpf_is_queue_model_split(rsrc->rxq_model)) continue; - for (j = 0; j < vport->num_bufqs_per_qgrp; j++) { + for (unsigned int j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { struct idpf_buf_queue *q; q = &rx_qgrp->splitq.bufq_sets[j].bufq; @@ -951,18 +956,18 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport) return 0; err_out: - idpf_rx_desc_rel_all(vport); + idpf_rx_desc_rel_all(rsrc); return err; } -static int idpf_init_queue_set(const struct idpf_queue_set *qs) +static int idpf_init_queue_set(const struct idpf_vport *vport, + const struct idpf_queue_set *qs) { - const struct idpf_vport *vport = qs->vport; bool splitq; int err; - splitq = idpf_is_queue_model_split(vport->rxq_model); + splitq = idpf_is_queue_model_split(qs->qv_rsrc->rxq_model); for (u32 i = 0; i < qs->num; i++) { const struct idpf_queue_ptr *q = &qs->qs[i]; @@ -1032,19 +1037,18 @@ static int idpf_init_queue_set(const struct idpf_queue_set *qs) static void idpf_clean_queue_set(const struct idpf_queue_set *qs) { - const struct idpf_vport *vport = qs->vport; - struct device *dev = vport->netdev->dev.parent; + const struct idpf_q_vec_rsrc *rsrc = qs->qv_rsrc; for (u32 i = 0; i < qs->num; i++) { const struct idpf_queue_ptr *q = &qs->qs[i]; switch (q->type) { case VIRTCHNL2_QUEUE_TYPE_RX: - idpf_xdp_rxq_info_deinit(q->rxq, vport->rxq_model); - idpf_rx_desc_rel(q->rxq, dev, vport->rxq_model); + idpf_xdp_rxq_info_deinit(q->rxq, rsrc->rxq_model); + idpf_rx_desc_rel(q->rxq, rsrc->dev, rsrc->rxq_model); break; case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: - idpf_rx_desc_rel_bufq(q->bufq, dev); + idpf_rx_desc_rel_bufq(q->bufq, rsrc->dev); break; case VIRTCHNL2_QUEUE_TYPE_TX: idpf_tx_desc_rel(q->txq); @@ -1111,7 +1115,8 @@ static void idpf_qvec_ena_irq(struct idpf_q_vector *qv) static struct idpf_queue_set * idpf_vector_to_queue_set(struct idpf_q_vector *qv) { - bool xdp = qv->vport->xdp_txq_offset && !qv->num_xsksq; + u32 xdp_txq_offset = qv->vport->dflt_qv_rsrc.xdp_txq_offset; + bool xdp = xdp_txq_offset && !qv->num_xsksq; struct idpf_vport *vport = qv->vport; struct idpf_queue_set *qs; u32 num; @@ -1121,7 +1126,8 @@ idpf_vector_to_queue_set(struct idpf_q_vector *qv) if (!num) return NULL; - qs = idpf_alloc_queue_set(vport, num); + qs = idpf_alloc_queue_set(vport->adapter, &vport->dflt_qv_rsrc, + vport->vport_id, num); if (!qs) return NULL; @@ -1147,12 +1153,12 @@ idpf_vector_to_queue_set(struct idpf_q_vector *qv) qs->qs[num++].complq = qv->complq[i]; } - if (!vport->xdp_txq_offset) + if (!xdp_txq_offset) goto finalize; if (xdp) { for (u32 i = 0; i < qv->num_rxq; i++) { - u32 idx = vport->xdp_txq_offset + qv->rx[i]->idx; + u32 idx = xdp_txq_offset + qv->rx[i]->idx; qs->qs[num].type = VIRTCHNL2_QUEUE_TYPE_TX; 
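idpf_rx_desc_rel_all() and idpf_clean_queue_set() now take the DMA device from the resource struct instead of chasing vport->adapter->pdev->dev (or netdev->dev.parent), so the release paths need nothing but the rsrc. Assuming rsrc->dev is populated once when the resources are created, the pattern is:

        /* at resource init (assumed location), cache the device once */
        rsrc->dev = &adapter->pdev->dev;

        /* on release, no vport or adapter dereference is needed */
        idpf_rx_desc_rel(q, rsrc->dev, rsrc->rxq_model);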
qs->qs[num++].txq = vport->txqs[idx]; @@ -1179,26 +1185,27 @@ finalize: return qs; } -static int idpf_qp_enable(const struct idpf_queue_set *qs, u32 qid) +static int idpf_qp_enable(const struct idpf_vport *vport, + const struct idpf_queue_set *qs, u32 qid) { - struct idpf_vport *vport = qs->vport; + const struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; struct idpf_q_vector *q_vector; int err; q_vector = idpf_find_rxq_vec(vport, qid); - err = idpf_init_queue_set(qs); + err = idpf_init_queue_set(vport, qs); if (err) { netdev_err(vport->netdev, "Could not initialize queues in pair %u: %pe\n", qid, ERR_PTR(err)); return err; } - if (!vport->xdp_txq_offset) + if (!rsrc->xdp_txq_offset) goto config; - q_vector->xsksq = kcalloc(DIV_ROUND_UP(vport->num_rxq_grp, - vport->num_q_vectors), + q_vector->xsksq = kcalloc(DIV_ROUND_UP(rsrc->num_rxq_grp, + rsrc->num_q_vectors), sizeof(*q_vector->xsksq), GFP_KERNEL); if (!q_vector->xsksq) return -ENOMEM; @@ -1241,9 +1248,9 @@ config: return 0; } -static int idpf_qp_disable(const struct idpf_queue_set *qs, u32 qid) +static int idpf_qp_disable(const struct idpf_vport *vport, + const struct idpf_queue_set *qs, u32 qid) { - struct idpf_vport *vport = qs->vport; struct idpf_q_vector *q_vector; int err; @@ -1288,30 +1295,28 @@ int idpf_qp_switch(struct idpf_vport *vport, u32 qid, bool en) if (!qs) return -ENOMEM; - return en ? idpf_qp_enable(qs, qid) : idpf_qp_disable(qs, qid); + return en ? idpf_qp_enable(vport, qs, qid) : + idpf_qp_disable(vport, qs, qid); } /** * idpf_txq_group_rel - Release all resources for txq groups - * @vport: vport to release txq groups on + * @rsrc: pointer to queue and vector resources */ -static void idpf_txq_group_rel(struct idpf_vport *vport) +static void idpf_txq_group_rel(struct idpf_q_vec_rsrc *rsrc) { - bool split, flow_sch_en; - int i, j; + bool split; - if (!vport->txq_grps) + if (!rsrc->txq_grps) return; - split = idpf_is_queue_model_split(vport->txq_model); - flow_sch_en = !idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS, - VIRTCHNL2_CAP_SPLITQ_QSCHED); + split = idpf_is_queue_model_split(rsrc->txq_model); - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *txq_grp = &vport->txq_grps[i]; + for (unsigned int i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i]; - for (j = 0; j < txq_grp->num_txq; j++) { - if (flow_sch_en) { + for (unsigned int j = 0; j < txq_grp->num_txq; j++) { + if (idpf_queue_has(FLOW_SCH_EN, txq_grp->txqs[j])) { kfree(txq_grp->txqs[j]->refillq); txq_grp->txqs[j]->refillq = NULL; } @@ -1326,8 +1331,8 @@ static void idpf_txq_group_rel(struct idpf_vport *vport) kfree(txq_grp->complq); txq_grp->complq = NULL; } - kfree(vport->txq_grps); - vport->txq_grps = NULL; + kfree(rsrc->txq_grps); + rsrc->txq_grps = NULL; } /** @@ -1336,12 +1341,10 @@ static void idpf_txq_group_rel(struct idpf_vport *vport) */ static void idpf_rxq_sw_queue_rel(struct idpf_rxq_group *rx_qgrp) { - int i, j; - - for (i = 0; i < rx_qgrp->vport->num_bufqs_per_qgrp; i++) { + for (unsigned int i = 0; i < rx_qgrp->splitq.num_bufq_sets; i++) { struct idpf_bufq_set *bufq_set = &rx_qgrp->splitq.bufq_sets[i]; - for (j = 0; j < bufq_set->num_refillqs; j++) { + for (unsigned int j = 0; j < bufq_set->num_refillqs; j++) { kfree(bufq_set->refillqs[j].ring); bufq_set->refillqs[j].ring = NULL; } @@ -1352,23 +1355,20 @@ static void idpf_rxq_sw_queue_rel(struct idpf_rxq_group *rx_qgrp) /** * idpf_rxq_group_rel - Release all resources for rxq groups - * @vport: vport to release rxq groups on + * 
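Note the idpf_txq_group_rel() change above: instead of recomputing flow_sch_en from adapter capabilities at release time, it tests the per-queue FLOW_SCH_EN flag that was set at allocation, so the free path mirrors exactly what was allocated:

        for (unsigned int j = 0; j < txq_grp->num_txq; j++) {
                /* a refillq exists only for flow-scheduled queues */
                if (idpf_queue_has(FLOW_SCH_EN, txq_grp->txqs[j])) {
                        kfree(txq_grp->txqs[j]->refillq);
                        txq_grp->txqs[j]->refillq = NULL;
                }
        }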
@rsrc: pointer to queue and vector resources */ -static void idpf_rxq_group_rel(struct idpf_vport *vport) +static void idpf_rxq_group_rel(struct idpf_q_vec_rsrc *rsrc) { - int i; - - if (!vport->rxq_grps) + if (!rsrc->rxq_grps) return; - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (unsigned int i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u16 num_rxq; - int j; - if (idpf_is_queue_model_split(vport->rxq_model)) { + if (idpf_is_queue_model_split(rsrc->rxq_model)) { num_rxq = rx_qgrp->splitq.num_rxq_sets; - for (j = 0; j < num_rxq; j++) { + for (unsigned int j = 0; j < num_rxq; j++) { kfree(rx_qgrp->splitq.rxq_sets[j]); rx_qgrp->splitq.rxq_sets[j] = NULL; } @@ -1378,41 +1378,44 @@ static void idpf_rxq_group_rel(struct idpf_vport *vport) rx_qgrp->splitq.bufq_sets = NULL; } else { num_rxq = rx_qgrp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++) { + for (unsigned int j = 0; j < num_rxq; j++) { kfree(rx_qgrp->singleq.rxqs[j]); rx_qgrp->singleq.rxqs[j] = NULL; } } } - kfree(vport->rxq_grps); - vport->rxq_grps = NULL; + kfree(rsrc->rxq_grps); + rsrc->rxq_grps = NULL; } /** * idpf_vport_queue_grp_rel_all - Release all queue groups * @vport: vport to release queue groups for + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_queue_grp_rel_all(struct idpf_vport *vport) +static void idpf_vport_queue_grp_rel_all(struct idpf_q_vec_rsrc *rsrc) { - idpf_txq_group_rel(vport); - idpf_rxq_group_rel(vport); + idpf_txq_group_rel(rsrc); + idpf_rxq_group_rel(rsrc); } /** * idpf_vport_queues_rel - Free memory for all queues * @vport: virtual port + * @rsrc: pointer to queue and vector resources * * Free the memory allocated for queues associated to a vport */ -void idpf_vport_queues_rel(struct idpf_vport *vport) +void idpf_vport_queues_rel(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { - idpf_xdp_copy_prog_to_rqs(vport, NULL); + idpf_xdp_copy_prog_to_rqs(rsrc, NULL); - idpf_tx_desc_rel_all(vport); - idpf_rx_desc_rel_all(vport); + idpf_tx_desc_rel_all(rsrc); + idpf_rx_desc_rel_all(rsrc); idpf_xdpsqs_put(vport); - idpf_vport_queue_grp_rel_all(vport); + idpf_vport_queue_grp_rel_all(rsrc); kfree(vport->txqs); vport->txqs = NULL; @@ -1421,29 +1424,31 @@ void idpf_vport_queues_rel(struct idpf_vport *vport) /** * idpf_vport_init_fast_path_txqs - Initialize fast path txq array * @vport: vport to init txqs on + * @rsrc: pointer to queue and vector resources * * We get a queue index from skb->queue_mapping and we need a fast way to * dereference the queue from queue groups. This allows us to quickly pull a * txq based on a queue index. 
* - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ -static int idpf_vport_init_fast_path_txqs(struct idpf_vport *vport) +static int idpf_vport_init_fast_path_txqs(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_ptp_vport_tx_tstamp_caps *caps = vport->tx_tstamp_caps; struct work_struct *tstamp_task = &vport->tstamp_task; - int i, j, k = 0; + int k = 0; - vport->txqs = kcalloc(vport->num_txq, sizeof(*vport->txqs), + vport->txqs = kcalloc(rsrc->num_txq, sizeof(*vport->txqs), GFP_KERNEL); - if (!vport->txqs) return -ENOMEM; - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_grp = &vport->txq_grps[i]; + vport->num_txq = rsrc->num_txq; + for (unsigned int i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *tx_grp = &rsrc->txq_grps[i]; - for (j = 0; j < tx_grp->num_txq; j++, k++) { + for (unsigned int j = 0; j < tx_grp->num_txq; j++, k++) { vport->txqs[k] = tx_grp->txqs[j]; vport->txqs[k]->idx = k; @@ -1462,16 +1467,18 @@ static int idpf_vport_init_fast_path_txqs(struct idpf_vport *vport) * idpf_vport_init_num_qs - Initialize number of queues * @vport: vport to initialize queues * @vport_msg: data to be filled into vport + * @rsrc: pointer to queue and vector resources */ void idpf_vport_init_num_qs(struct idpf_vport *vport, - struct virtchnl2_create_vport *vport_msg) + struct virtchnl2_create_vport *vport_msg, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_vport_user_config_data *config_data; u16 idx = vport->idx; config_data = &vport->adapter->vport_config[idx]->user_config; - vport->num_txq = le16_to_cpu(vport_msg->num_tx_q); - vport->num_rxq = le16_to_cpu(vport_msg->num_rx_q); + rsrc->num_txq = le16_to_cpu(vport_msg->num_tx_q); + rsrc->num_rxq = le16_to_cpu(vport_msg->num_rx_q); /* number of txqs and rxqs in config data will be zeros only in the * driver load path and we dont update them there after */ @@ -1480,74 +1487,75 @@ void idpf_vport_init_num_qs(struct idpf_vport *vport, config_data->num_req_rx_qs = le16_to_cpu(vport_msg->num_rx_q); } - if (idpf_is_queue_model_split(vport->txq_model)) - vport->num_complq = le16_to_cpu(vport_msg->num_tx_complq); - if (idpf_is_queue_model_split(vport->rxq_model)) - vport->num_bufq = le16_to_cpu(vport_msg->num_rx_bufq); + if (idpf_is_queue_model_split(rsrc->txq_model)) + rsrc->num_complq = le16_to_cpu(vport_msg->num_tx_complq); + if (idpf_is_queue_model_split(rsrc->rxq_model)) + rsrc->num_bufq = le16_to_cpu(vport_msg->num_rx_bufq); vport->xdp_prog = config_data->xdp_prog; if (idpf_xdp_enabled(vport)) { - vport->xdp_txq_offset = config_data->num_req_tx_qs; + rsrc->xdp_txq_offset = config_data->num_req_tx_qs; vport->num_xdp_txq = le16_to_cpu(vport_msg->num_tx_q) - - vport->xdp_txq_offset; + rsrc->xdp_txq_offset; vport->xdpsq_share = libeth_xdpsq_shared(vport->num_xdp_txq); } else { - vport->xdp_txq_offset = 0; + rsrc->xdp_txq_offset = 0; vport->num_xdp_txq = 0; vport->xdpsq_share = false; } /* Adjust number of buffer queues per Rx queue group. 
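idpf_vport_init_fast_path_txqs() flattens the two-level txq-group structure into the vport->txqs array precisely so the transmit hot path can map skb->queue_mapping to a queue with a single indexed load. Roughly:

        /* slow path: build the flat lookup table once */
        vport->txqs = kcalloc(rsrc->num_txq, sizeof(*vport->txqs),
                              GFP_KERNEL);

        /* hot path (sketch): one load instead of walking the groups */
        struct idpf_tx_queue *txq =
                vport->txqs[skb_get_queue_mapping(skb)];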
*/ - if (!idpf_is_queue_model_split(vport->rxq_model)) { - vport->num_bufqs_per_qgrp = 0; + if (!idpf_is_queue_model_split(rsrc->rxq_model)) { + rsrc->num_bufqs_per_qgrp = 0; return; } - vport->num_bufqs_per_qgrp = IDPF_MAX_BUFQS_PER_RXQ_GRP; + rsrc->num_bufqs_per_qgrp = IDPF_MAX_BUFQS_PER_RXQ_GRP; } /** * idpf_vport_calc_num_q_desc - Calculate number of queue groups * @vport: vport to calculate q groups for + * @rsrc: pointer to queue and vector resources */ -void idpf_vport_calc_num_q_desc(struct idpf_vport *vport) +void idpf_vport_calc_num_q_desc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_vport_user_config_data *config_data; - int num_bufqs = vport->num_bufqs_per_qgrp; + u8 num_bufqs = rsrc->num_bufqs_per_qgrp; u32 num_req_txq_desc, num_req_rxq_desc; u16 idx = vport->idx; - int i; config_data = &vport->adapter->vport_config[idx]->user_config; num_req_txq_desc = config_data->num_req_txq_desc; num_req_rxq_desc = config_data->num_req_rxq_desc; - vport->complq_desc_count = 0; + rsrc->complq_desc_count = 0; if (num_req_txq_desc) { - vport->txq_desc_count = num_req_txq_desc; - if (idpf_is_queue_model_split(vport->txq_model)) { - vport->complq_desc_count = num_req_txq_desc; - if (vport->complq_desc_count < IDPF_MIN_TXQ_COMPLQ_DESC) - vport->complq_desc_count = + rsrc->txq_desc_count = num_req_txq_desc; + if (idpf_is_queue_model_split(rsrc->txq_model)) { + rsrc->complq_desc_count = num_req_txq_desc; + if (rsrc->complq_desc_count < IDPF_MIN_TXQ_COMPLQ_DESC) + rsrc->complq_desc_count = IDPF_MIN_TXQ_COMPLQ_DESC; } } else { - vport->txq_desc_count = IDPF_DFLT_TX_Q_DESC_COUNT; - if (idpf_is_queue_model_split(vport->txq_model)) - vport->complq_desc_count = + rsrc->txq_desc_count = IDPF_DFLT_TX_Q_DESC_COUNT; + if (idpf_is_queue_model_split(rsrc->txq_model)) + rsrc->complq_desc_count = IDPF_DFLT_TX_COMPLQ_DESC_COUNT; } if (num_req_rxq_desc) - vport->rxq_desc_count = num_req_rxq_desc; + rsrc->rxq_desc_count = num_req_rxq_desc; else - vport->rxq_desc_count = IDPF_DFLT_RX_Q_DESC_COUNT; + rsrc->rxq_desc_count = IDPF_DFLT_RX_Q_DESC_COUNT; - for (i = 0; i < num_bufqs; i++) { - if (!vport->bufq_desc_count[i]) - vport->bufq_desc_count[i] = - IDPF_RX_BUFQ_DESC_COUNT(vport->rxq_desc_count, + for (unsigned int i = 0; i < num_bufqs; i++) { + if (!rsrc->bufq_desc_count[i]) + rsrc->bufq_desc_count[i] = + IDPF_RX_BUFQ_DESC_COUNT(rsrc->rxq_desc_count, num_bufqs); } } @@ -1559,7 +1567,7 @@ void idpf_vport_calc_num_q_desc(struct idpf_vport *vport) * @vport_msg: message to fill with data * @max_q: vport max queue info * - * Return 0 on success, error value on failure. + * Return: 0 on success, error value on failure. 
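The completion-queue minimum in idpf_vport_calc_num_q_desc() remains an open-coded clamp; the same logic could arguably be written with the kernel's max() helper (or max_t(), depending on the constant's type), e.g.:

        if (idpf_is_queue_model_split(rsrc->txq_model))
                /* sketch of an equivalent clamp, not what the patch does */
                rsrc->complq_desc_count = max(num_req_txq_desc,
                                              IDPF_MIN_TXQ_COMPLQ_DESC);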
*/ int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_idx, struct virtchnl2_create_vport *vport_msg, @@ -1636,54 +1644,54 @@ int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_idx, /** * idpf_vport_calc_num_q_groups - Calculate number of queue groups - * @vport: vport to calculate q groups for + * @rsrc: pointer to queue and vector resources */ -void idpf_vport_calc_num_q_groups(struct idpf_vport *vport) +void idpf_vport_calc_num_q_groups(struct idpf_q_vec_rsrc *rsrc) { - if (idpf_is_queue_model_split(vport->txq_model)) - vport->num_txq_grp = vport->num_txq; + if (idpf_is_queue_model_split(rsrc->txq_model)) + rsrc->num_txq_grp = rsrc->num_txq; else - vport->num_txq_grp = IDPF_DFLT_SINGLEQ_TX_Q_GROUPS; + rsrc->num_txq_grp = IDPF_DFLT_SINGLEQ_TX_Q_GROUPS; - if (idpf_is_queue_model_split(vport->rxq_model)) - vport->num_rxq_grp = vport->num_rxq; + if (idpf_is_queue_model_split(rsrc->rxq_model)) + rsrc->num_rxq_grp = rsrc->num_rxq; else - vport->num_rxq_grp = IDPF_DFLT_SINGLEQ_RX_Q_GROUPS; + rsrc->num_rxq_grp = IDPF_DFLT_SINGLEQ_RX_Q_GROUPS; } /** * idpf_vport_calc_numq_per_grp - Calculate number of queues per group - * @vport: vport to calculate queues for + * @rsrc: pointer to queue and vector resources * @num_txq: return parameter for number of TX queues * @num_rxq: return parameter for number of RX queues */ -static void idpf_vport_calc_numq_per_grp(struct idpf_vport *vport, +static void idpf_vport_calc_numq_per_grp(struct idpf_q_vec_rsrc *rsrc, u16 *num_txq, u16 *num_rxq) { - if (idpf_is_queue_model_split(vport->txq_model)) + if (idpf_is_queue_model_split(rsrc->txq_model)) *num_txq = IDPF_DFLT_SPLITQ_TXQ_PER_GROUP; else - *num_txq = vport->num_txq; + *num_txq = rsrc->num_txq; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) *num_rxq = IDPF_DFLT_SPLITQ_RXQ_PER_GROUP; else - *num_rxq = vport->num_rxq; + *num_rxq = rsrc->num_rxq; } /** * idpf_rxq_set_descids - set the descids supported by this queue - * @vport: virtual port data structure + * @rsrc: pointer to queue and vector resources * @q: rx queue for which descids are set * */ -static void idpf_rxq_set_descids(const struct idpf_vport *vport, +static void idpf_rxq_set_descids(struct idpf_q_vec_rsrc *rsrc, struct idpf_rx_queue *q) { - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) return; - if (vport->base_rxd) + if (rsrc->base_rxd) q->rxdids = VIRTCHNL2_RXDID_1_32B_BASE_M; else q->rxdids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M; @@ -1692,44 +1700,45 @@ static void idpf_rxq_set_descids(const struct idpf_vport *vport, /** * idpf_txq_group_alloc - Allocate all txq group resources * @vport: vport to allocate txq groups for + * @rsrc: pointer to queue and vector resources * @num_txq: number of txqs to allocate for each group * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ -static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) +static int idpf_txq_group_alloc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, + u16 num_txq) { bool split, flow_sch_en; - int i; - vport->txq_grps = kcalloc(vport->num_txq_grp, - sizeof(*vport->txq_grps), GFP_KERNEL); - if (!vport->txq_grps) + rsrc->txq_grps = kcalloc(rsrc->num_txq_grp, + sizeof(*rsrc->txq_grps), GFP_KERNEL); + if (!rsrc->txq_grps) return -ENOMEM; - split = idpf_is_queue_model_split(vport->txq_model); + split = idpf_is_queue_model_split(rsrc->txq_model); flow_sch_en = 
!idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_SPLITQ_QSCHED); - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (unsigned int i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; struct idpf_adapter *adapter = vport->adapter; - int j; tx_qgrp->vport = vport; tx_qgrp->num_txq = num_txq; - for (j = 0; j < tx_qgrp->num_txq; j++) { + for (unsigned int j = 0; j < tx_qgrp->num_txq; j++) { tx_qgrp->txqs[j] = kzalloc(sizeof(*tx_qgrp->txqs[j]), GFP_KERNEL); if (!tx_qgrp->txqs[j]) goto err_alloc; } - for (j = 0; j < tx_qgrp->num_txq; j++) { + for (unsigned int j = 0; j < tx_qgrp->num_txq; j++) { struct idpf_tx_queue *q = tx_qgrp->txqs[j]; q->dev = &adapter->pdev->dev; - q->desc_count = vport->txq_desc_count; + q->desc_count = rsrc->txq_desc_count; q->tx_max_bufs = idpf_get_max_tx_bufs(adapter); q->tx_min_pkt_len = idpf_get_min_tx_pkt_len(adapter); q->netdev = vport->netdev; @@ -1764,7 +1773,7 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) if (!tx_qgrp->complq) goto err_alloc; - tx_qgrp->complq->desc_count = vport->complq_desc_count; + tx_qgrp->complq->desc_count = rsrc->complq_desc_count; tx_qgrp->complq->txq_grp = tx_qgrp; tx_qgrp->complq->netdev = vport->netdev; tx_qgrp->complq->clean_budget = vport->compln_clean_budget; @@ -1776,7 +1785,7 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) return 0; err_alloc: - idpf_txq_group_rel(vport); + idpf_txq_group_rel(rsrc); return -ENOMEM; } @@ -1784,30 +1793,34 @@ err_alloc: /** * idpf_rxq_group_alloc - Allocate all rxq group resources * @vport: vport to allocate rxq groups for + * @rsrc: pointer to queue and vector resources * @num_rxq: number of rxqs to allocate for each group * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ -static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) +static int idpf_rxq_group_alloc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, + u16 num_rxq) { - int i, k, err = 0; - bool hs; + struct idpf_adapter *adapter = vport->adapter; + bool hs, rsc; + int err = 0; - vport->rxq_grps = kcalloc(vport->num_rxq_grp, - sizeof(struct idpf_rxq_group), GFP_KERNEL); - if (!vport->rxq_grps) + rsrc->rxq_grps = kcalloc(rsrc->num_rxq_grp, + sizeof(struct idpf_rxq_group), GFP_KERNEL); + if (!rsrc->rxq_grps) return -ENOMEM; hs = idpf_vport_get_hsplit(vport) == ETHTOOL_TCP_DATA_SPLIT_ENABLED; + rsc = idpf_is_feature_ena(vport, NETIF_F_GRO_HW); - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; - int j; + for (unsigned int i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; rx_qgrp->vport = vport; - if (!idpf_is_queue_model_split(vport->rxq_model)) { + if (!idpf_is_queue_model_split(rsrc->rxq_model)) { rx_qgrp->singleq.num_rxq = num_rxq; - for (j = 0; j < num_rxq; j++) { + for (unsigned int j = 0; j < num_rxq; j++) { rx_qgrp->singleq.rxqs[j] = kzalloc(sizeof(*rx_qgrp->singleq.rxqs[j]), GFP_KERNEL); @@ -1820,7 +1833,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) } rx_qgrp->splitq.num_rxq_sets = num_rxq; - for (j = 0; j < num_rxq; j++) { + for (unsigned int j = 0; j < num_rxq; j++) { rx_qgrp->splitq.rxq_sets[j] = kzalloc(sizeof(struct idpf_rxq_set), GFP_KERNEL); @@ -1830,25 +1843,27 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) } } - rx_qgrp->splitq.bufq_sets = 
kcalloc(vport->num_bufqs_per_qgrp, + rx_qgrp->splitq.bufq_sets = kcalloc(rsrc->num_bufqs_per_qgrp, sizeof(struct idpf_bufq_set), GFP_KERNEL); if (!rx_qgrp->splitq.bufq_sets) { err = -ENOMEM; goto err_alloc; } + rx_qgrp->splitq.num_bufq_sets = rsrc->num_bufqs_per_qgrp; - for (j = 0; j < vport->num_bufqs_per_qgrp; j++) { + for (unsigned int j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { struct idpf_bufq_set *bufq_set = &rx_qgrp->splitq.bufq_sets[j]; int swq_size = sizeof(struct idpf_sw_queue); struct idpf_buf_queue *q; q = &rx_qgrp->splitq.bufq_sets[j].bufq; - q->desc_count = vport->bufq_desc_count[j]; + q->desc_count = rsrc->bufq_desc_count[j]; q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK; idpf_queue_assign(HSPLIT_EN, q, hs); + idpf_queue_assign(RSC_EN, q, rsc); bufq_set->num_refillqs = num_rxq; bufq_set->refillqs = kcalloc(num_rxq, swq_size, @@ -1857,12 +1872,12 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) err = -ENOMEM; goto err_alloc; } - for (k = 0; k < bufq_set->num_refillqs; k++) { + for (unsigned int k = 0; k < bufq_set->num_refillqs; k++) { struct idpf_sw_queue *refillq = &bufq_set->refillqs[k]; refillq->desc_count = - vport->bufq_desc_count[j]; + rsrc->bufq_desc_count[j]; idpf_queue_set(GEN_CHK, refillq); idpf_queue_set(RFL_GEN_CHK, refillq); refillq->ring = kcalloc(refillq->desc_count, @@ -1876,37 +1891,39 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) } skip_splitq_rx_init: - for (j = 0; j < num_rxq; j++) { + for (unsigned int j = 0; j < num_rxq; j++) { struct idpf_rx_queue *q; - if (!idpf_is_queue_model_split(vport->rxq_model)) { + if (!idpf_is_queue_model_split(rsrc->rxq_model)) { q = rx_qgrp->singleq.rxqs[j]; + q->rx_ptype_lkup = adapter->singleq_pt_lkup; goto setup_rxq; } q = &rx_qgrp->splitq.rxq_sets[j]->rxq; rx_qgrp->splitq.rxq_sets[j]->refillq[0] = &rx_qgrp->splitq.bufq_sets[0].refillqs[j]; - if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) + if (rsrc->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) rx_qgrp->splitq.rxq_sets[j]->refillq[1] = &rx_qgrp->splitq.bufq_sets[1].refillqs[j]; idpf_queue_assign(HSPLIT_EN, q, hs); + idpf_queue_assign(RSC_EN, q, rsc); + q->rx_ptype_lkup = adapter->splitq_pt_lkup; setup_rxq: - q->desc_count = vport->rxq_desc_count; - q->rx_ptype_lkup = vport->rx_ptype_lkup; + q->desc_count = rsrc->rxq_desc_count; q->bufq_sets = rx_qgrp->splitq.bufq_sets; q->idx = (i * num_rxq) + j; q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK; q->rx_max_pkt_size = vport->netdev->mtu + LIBETH_RX_LL_LEN; - idpf_rxq_set_descids(vport, q); + idpf_rxq_set_descids(rsrc, q); } } err_alloc: if (err) - idpf_rxq_group_rel(vport); + idpf_rxq_group_rel(rsrc); return err; } @@ -1914,28 +1931,30 @@ err_alloc: /** * idpf_vport_queue_grp_alloc_all - Allocate all queue groups/resources * @vport: vport with qgrps to allocate + * @rsrc: pointer to queue and vector resources * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ -static int idpf_vport_queue_grp_alloc_all(struct idpf_vport *vport) +static int idpf_vport_queue_grp_alloc_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { u16 num_txq, num_rxq; int err; - idpf_vport_calc_numq_per_grp(vport, &num_txq, &num_rxq); + idpf_vport_calc_numq_per_grp(rsrc, &num_txq, &num_rxq); - err = idpf_txq_group_alloc(vport, num_txq); + err = idpf_txq_group_alloc(vport, rsrc, num_txq); if (err) goto err_out; - err = idpf_rxq_group_alloc(vport, num_rxq); + err = idpf_rxq_group_alloc(vport, rsrc, num_rxq); if (err) goto 
err_out; return 0; err_out: - idpf_vport_queue_grp_rel_all(vport); + idpf_vport_queue_grp_rel_all(rsrc); return err; } @@ -1943,19 +1962,22 @@ err_out: /** * idpf_vport_queues_alloc - Allocate memory for all queues * @vport: virtual port + * @rsrc: pointer to queue and vector resources + * + * Allocate memory for queues associated with a vport. * - * Allocate memory for queues associated with a vport. Returns 0 on success, - * negative on failure. + * Return: 0 on success, negative on failure. */ -int idpf_vport_queues_alloc(struct idpf_vport *vport) +int idpf_vport_queues_alloc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { int err; - err = idpf_vport_queue_grp_alloc_all(vport); + err = idpf_vport_queue_grp_alloc_all(vport, rsrc); if (err) goto err_out; - err = idpf_vport_init_fast_path_txqs(vport); + err = idpf_vport_init_fast_path_txqs(vport, rsrc); if (err) goto err_out; @@ -1963,18 +1985,18 @@ int idpf_vport_queues_alloc(struct idpf_vport *vport) if (err) goto err_out; - err = idpf_tx_desc_alloc_all(vport); + err = idpf_tx_desc_alloc_all(vport, rsrc); if (err) goto err_out; - err = idpf_rx_desc_alloc_all(vport); + err = idpf_rx_desc_alloc_all(vport, rsrc); if (err) goto err_out; return 0; err_out: - idpf_vport_queues_rel(vport); + idpf_vport_queues_rel(vport, rsrc); return err; } @@ -2172,7 +2194,7 @@ static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq, * @budget: Used to determine if we are in netpoll * @cleaned: returns number of packets cleaned * - * Returns true if there's any budget left (e.g. the clean is finished) + * Return: %true if there's any budget left (e.g. the clean is finished) */ static bool idpf_tx_clean_complq(struct idpf_compl_queue *complq, int budget, int *cleaned) @@ -2398,7 +2420,7 @@ void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc, } /** - * idpf_tx_splitq_has_room - check if enough Tx splitq resources are available + * idpf_txq_has_room - check if enough Tx splitq resources are available * @tx_q: the queue to be checked * @descs_needed: number of descriptors required for this packet * @bufs_needed: number of Tx buffers required for this packet @@ -2529,6 +2551,8 @@ unsigned int idpf_tx_res_count_required(struct idpf_tx_queue *txq, * idpf_tx_splitq_bump_ntu - adjust NTU and generation * @txq: the tx ring to wrap * @ntu: ring index to bump + * + * Return: the next ring index hopping to 0 when wraps around */ static unsigned int idpf_tx_splitq_bump_ntu(struct idpf_tx_queue *txq, u16 ntu) { @@ -2797,7 +2821,7 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q, * @skb: pointer to skb * @off: pointer to struct that holds offload parameters * - * Returns error (negative) if TSO was requested but cannot be applied to the + * Return: error (negative) if TSO was requested but cannot be applied to the * given skb, 0 if TSO does not apply to the given skb, or 1 otherwise. 
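idpf_vport_queues_alloc() funnels every failure into a single err_out that runs idpf_vport_queues_rel() on the partially built state; that only works because each release helper above (idpf_txq_group_rel(), idpf_rx_desc_rel_all(), and friends) starts with a NULL check and re-NULLs what it frees. The skeleton, with hypothetical stage names:

        err = alloc_stage_one(vport);
        if (err)
                goto err_out;

        err = alloc_stage_two(vport);
        if (err)
                goto err_out;

        return 0;

        err_out:
        /* safe on partial state: release helpers NULL-check and re-NULL */
        release_everything(vport);
        return err;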
*/ int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off) @@ -2875,6 +2899,8 @@ int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off) * * Since the TX buffer rings mimics the descriptor ring, update the tx buffer * ring entry to reflect that this index is a context descriptor + * + * Return: pointer to the next descriptor */ static union idpf_flex_tx_ctx_desc * idpf_tx_splitq_get_ctx_desc(struct idpf_tx_queue *txq) @@ -2893,6 +2919,8 @@ idpf_tx_splitq_get_ctx_desc(struct idpf_tx_queue *txq) * idpf_tx_drop_skb - free the SKB and bump tail if necessary * @tx_q: queue to send buffer on * @skb: pointer to skb + * + * Return: always NETDEV_TX_OK */ netdev_tx_t idpf_tx_drop_skb(struct idpf_tx_queue *tx_q, struct sk_buff *skb) { @@ -2994,7 +3022,7 @@ static bool idpf_tx_splitq_need_re(struct idpf_tx_queue *tx_q) * @skb: send buffer * @tx_q: queue to send buffer on * - * Returns NETDEV_TX_OK if sent, else an error code + * Return: NETDEV_TX_OK if sent, else an error code */ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb, struct idpf_tx_queue *tx_q) @@ -3120,7 +3148,7 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb, * @skb: send buffer * @netdev: network interface device structure * - * Returns NETDEV_TX_OK if sent, else an error code + * Return: NETDEV_TX_OK if sent, else an error code */ netdev_tx_t idpf_tx_start(struct sk_buff *skb, struct net_device *netdev) { @@ -3145,7 +3173,7 @@ netdev_tx_t idpf_tx_start(struct sk_buff *skb, struct net_device *netdev) return NETDEV_TX_OK; } - if (idpf_is_queue_model_split(vport->txq_model)) + if (idpf_is_queue_model_split(vport->dflt_qv_rsrc.txq_model)) return idpf_tx_splitq_frame(skb, tx_q); else return idpf_tx_singleq_frame(skb, tx_q); @@ -3270,10 +3298,10 @@ idpf_rx_splitq_extract_csum_bits(const struct virtchnl2_rx_flex_desc_adv_nic_3 * * @rx_desc: Receive descriptor * @decoded: Decoded Rx packet type related fields * - * Return 0 on success and error code on failure - * * Populate the skb fields with the total number of RSC segments, RSC payload * length and packet type. + * + * Return: 0 on success and error code on failure */ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb, const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc, @@ -3371,6 +3399,8 @@ idpf_rx_hwtstamp(const struct idpf_rx_queue *rxq, * This function checks the ring, descriptor, and packet information in * order to populate the hash, checksum, protocol, and * other fields within the skb. + * + * Return: 0 on success and error code on failure */ static int __idpf_rx_process_skb_fields(struct idpf_rx_queue *rxq, struct sk_buff *skb, @@ -3465,6 +3495,7 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr, * @stat_err_field: field from descriptor to test bits in * @stat_err_bits: value to mask * + * Return: %true if any of given @stat_err_bits are set, %false otherwise. */ static bool idpf_rx_splitq_test_staterr(const u8 stat_err_field, const u8 stat_err_bits) @@ -3476,8 +3507,8 @@ static bool idpf_rx_splitq_test_staterr(const u8 stat_err_field, * idpf_rx_splitq_is_eop - process handling of EOP buffers * @rx_desc: Rx descriptor for current buffer * - * If the buffer is an EOP buffer, this function exits returning true, - * otherwise return false indicating that this is in fact a non-EOP buffer. + * Return: %true if the buffer is an EOP buffer, %false otherwise, indicating + * that this is in fact a non-EOP buffer. 
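idpf_tx_start() now reads the queue model from the default resource group when picking the frame path; the dispatch is simply:

        if (idpf_is_queue_model_split(vport->dflt_qv_rsrc.txq_model))
                return idpf_tx_splitq_frame(skb, tx_q);
        else
                return idpf_tx_singleq_frame(skb, tx_q);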
*/ static bool idpf_rx_splitq_is_eop(struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc) { @@ -3496,7 +3527,7 @@ static bool idpf_rx_splitq_is_eop(struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_de * expensive overhead for IOMMU access this provides a means of avoiding * it by maintaining the mapping of the page to the system. * - * Returns amount of work completed + * Return: amount of work completed */ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget) { @@ -3626,7 +3657,7 @@ payload: * @buf_id: buffer ID * @buf_desc: Buffer queue descriptor * - * Return 0 on success and negative on failure. + * Return: 0 on success and negative on failure. */ static int idpf_rx_update_bufq_desc(struct idpf_buf_queue *bufq, u32 buf_id, struct virtchnl2_splitq_rx_buf_desc *buf_desc) @@ -3753,6 +3784,7 @@ static void idpf_rx_clean_refillq_all(struct idpf_buf_queue *bufq, int nid) * @irq: interrupt number * @data: pointer to a q_vector * + * Return: always IRQ_HANDLED */ static irqreturn_t idpf_vport_intr_clean_queues(int __always_unused irq, void *data) @@ -3767,39 +3799,34 @@ static irqreturn_t idpf_vport_intr_clean_queues(int __always_unused irq, /** * idpf_vport_intr_napi_del_all - Unregister napi for all q_vectors in vport - * @vport: virtual port structure - * + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_napi_del_all(struct idpf_vport *vport) +static void idpf_vport_intr_napi_del_all(struct idpf_q_vec_rsrc *rsrc) { - u16 v_idx; - - for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) - netif_napi_del(&vport->q_vectors[v_idx].napi); + for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) + netif_napi_del(&rsrc->q_vectors[v_idx].napi); } /** * idpf_vport_intr_napi_dis_all - Disable NAPI for all q_vectors in the vport - * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_napi_dis_all(struct idpf_vport *vport) +static void idpf_vport_intr_napi_dis_all(struct idpf_q_vec_rsrc *rsrc) { - int v_idx; - - for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) - napi_disable(&vport->q_vectors[v_idx].napi); + for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) + napi_disable(&rsrc->q_vectors[v_idx].napi); } /** * idpf_vport_intr_rel - Free memory allocated for interrupt vectors - * @vport: virtual port + * @rsrc: pointer to queue and vector resources * * Free the memory allocated for interrupt vectors associated to a vport */ -void idpf_vport_intr_rel(struct idpf_vport *vport) +void idpf_vport_intr_rel(struct idpf_q_vec_rsrc *rsrc) { - for (u32 v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx]; + for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) { + struct idpf_q_vector *q_vector = &rsrc->q_vectors[v_idx]; kfree(q_vector->xsksq); q_vector->xsksq = NULL; @@ -3813,8 +3840,8 @@ void idpf_vport_intr_rel(struct idpf_vport *vport) q_vector->rx = NULL; } - kfree(vport->q_vectors); - vport->q_vectors = NULL; + kfree(rsrc->q_vectors); + rsrc->q_vectors = NULL; } static void idpf_q_vector_set_napi(struct idpf_q_vector *q_vector, bool link) @@ -3834,21 +3861,22 @@ static void idpf_q_vector_set_napi(struct idpf_q_vector *q_vector, bool link) /** * idpf_vport_intr_rel_irq - Free the IRQ association with the OS * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_rel_irq(struct idpf_vport *vport) +static void idpf_vport_intr_rel_irq(struct idpf_vport *vport, + struct 
idpf_q_vec_rsrc *rsrc) { struct idpf_adapter *adapter = vport->adapter; - int vector; - for (vector = 0; vector < vport->num_q_vectors; vector++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[vector]; + for (u16 vector = 0; vector < rsrc->num_q_vectors; vector++) { + struct idpf_q_vector *q_vector = &rsrc->q_vectors[vector]; int irq_num, vidx; /* free only the irqs that were actually requested */ if (!q_vector) continue; - vidx = vport->q_vector_idxs[vector]; + vidx = rsrc->q_vector_idxs[vector]; irq_num = adapter->msix_entries[vidx].vector; idpf_q_vector_set_napi(q_vector, false); @@ -3858,22 +3886,23 @@ static void idpf_vport_intr_rel_irq(struct idpf_vport *vport) /** * idpf_vport_intr_dis_irq_all - Disable all interrupt - * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_dis_irq_all(struct idpf_vport *vport) +static void idpf_vport_intr_dis_irq_all(struct idpf_q_vec_rsrc *rsrc) { - struct idpf_q_vector *q_vector = vport->q_vectors; - int q_idx; + struct idpf_q_vector *q_vector = rsrc->q_vectors; - writel(0, vport->noirq_dyn_ctl); + writel(0, rsrc->noirq_dyn_ctl); - for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) + for (u16 q_idx = 0; q_idx < rsrc->num_q_vectors; q_idx++) writel(0, q_vector[q_idx].intr_reg.dyn_ctl); } /** * idpf_vport_intr_buildreg_itr - Enable default interrupt generation settings * @q_vector: pointer to q_vector + * + * Return: value to be written back to HW to enable interrupt generation */ static u32 idpf_vport_intr_buildreg_itr(struct idpf_q_vector *q_vector) { @@ -4011,8 +4040,12 @@ void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector) /** * idpf_vport_intr_req_irq - get MSI-X vectors from the OS for the vport * @vport: main vport structure + * @rsrc: pointer to queue and vector resources + * + * Return: 0 on success, negative on failure */ -static int idpf_vport_intr_req_irq(struct idpf_vport *vport) +static int idpf_vport_intr_req_irq(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_adapter *adapter = vport->adapter; const char *drv_name, *if_name, *vec_name; @@ -4021,11 +4054,11 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport) drv_name = dev_driver_string(&adapter->pdev->dev); if_name = netdev_name(vport->netdev); - for (vector = 0; vector < vport->num_q_vectors; vector++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[vector]; + for (vector = 0; vector < rsrc->num_q_vectors; vector++) { + struct idpf_q_vector *q_vector = &rsrc->q_vectors[vector]; char *name; - vidx = vport->q_vector_idxs[vector]; + vidx = rsrc->q_vector_idxs[vector]; irq_num = adapter->msix_entries[vidx].vector; if (q_vector->num_rxq && q_vector->num_txq) @@ -4055,9 +4088,9 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport) free_q_irqs: while (--vector >= 0) { - vidx = vport->q_vector_idxs[vector]; + vidx = rsrc->q_vector_idxs[vector]; irq_num = adapter->msix_entries[vidx].vector; - kfree(free_irq(irq_num, &vport->q_vectors[vector])); + kfree(free_irq(irq_num, &rsrc->q_vectors[vector])); } return err; @@ -4086,15 +4119,16 @@ void idpf_vport_intr_write_itr(struct idpf_q_vector *q_vector, u16 itr, bool tx) /** * idpf_vport_intr_ena_irq_all - Enable IRQ for the given vport * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_ena_irq_all(struct idpf_vport *vport) +static void idpf_vport_intr_ena_irq_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { bool dynamic; - int q_idx; 
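The free_q_irqs unwind in idpf_vport_intr_req_irq() walks the vector index back down so only the IRQs that were actually requested get freed, and it relies on free_irq() returning the devname string that was allocated for request_irq():

        free_q_irqs:
                while (--vector >= 0) {
                        vidx = rsrc->q_vector_idxs[vector];
                        irq_num = adapter->msix_entries[vidx].vector;
                        /* free_irq() returns the devname passed to
                         * request_irq(), allocated per vector, so free it
                         */
                        kfree(free_irq(irq_num, &rsrc->q_vectors[vector]));
                }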
u16 itr; - for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) { - struct idpf_q_vector *qv = &vport->q_vectors[q_idx]; + for (u16 q_idx = 0; q_idx < rsrc->num_q_vectors; q_idx++) { + struct idpf_q_vector *qv = &rsrc->q_vectors[q_idx]; /* Set the initial ITR values */ if (qv->num_txq) { @@ -4117,19 +4151,21 @@ static void idpf_vport_intr_ena_irq_all(struct idpf_vport *vport) idpf_vport_intr_update_itr_ena_irq(qv); } - writel(vport->noirq_dyn_ctl_ena, vport->noirq_dyn_ctl); + writel(rsrc->noirq_dyn_ctl_ena, rsrc->noirq_dyn_ctl); } /** * idpf_vport_intr_deinit - Release all vector associations for the vport * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -void idpf_vport_intr_deinit(struct idpf_vport *vport) +void idpf_vport_intr_deinit(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { - idpf_vport_intr_dis_irq_all(vport); - idpf_vport_intr_napi_dis_all(vport); - idpf_vport_intr_napi_del_all(vport); - idpf_vport_intr_rel_irq(vport); + idpf_vport_intr_dis_irq_all(rsrc); + idpf_vport_intr_napi_dis_all(rsrc); + idpf_vport_intr_napi_del_all(rsrc); + idpf_vport_intr_rel_irq(vport, rsrc); } /** @@ -4201,14 +4237,12 @@ static void idpf_init_dim(struct idpf_q_vector *qv) /** * idpf_vport_intr_napi_ena_all - Enable NAPI for all q_vectors in the vport - * @vport: main vport structure + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_napi_ena_all(struct idpf_vport *vport) +static void idpf_vport_intr_napi_ena_all(struct idpf_q_vec_rsrc *rsrc) { - int q_idx; - - for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[q_idx]; + for (u16 q_idx = 0; q_idx < rsrc->num_q_vectors; q_idx++) { + struct idpf_q_vector *q_vector = &rsrc->q_vectors[q_idx]; idpf_init_dim(q_vector); napi_enable(&q_vector->napi); @@ -4221,7 +4255,7 @@ static void idpf_vport_intr_napi_ena_all(struct idpf_vport *vport) * @budget: Used to determine if we are in netpoll * @cleaned: returns number of packets cleaned * - * Returns false if clean is not complete else returns true + * Return: %false if clean is not complete else returns %true */ static bool idpf_tx_splitq_clean_all(struct idpf_q_vector *q_vec, int budget, int *cleaned) @@ -4248,7 +4282,7 @@ static bool idpf_tx_splitq_clean_all(struct idpf_q_vector *q_vec, * @budget: Used to determine if we are in netpoll * @cleaned: returns number of packets cleaned * - * Returns false if clean is not complete else returns true + * Return: %false if clean is not complete else returns %true */ static bool idpf_rx_splitq_clean_all(struct idpf_q_vector *q_vec, int budget, int *cleaned) @@ -4291,6 +4325,8 @@ static bool idpf_rx_splitq_clean_all(struct idpf_q_vector *q_vec, int budget, * idpf_vport_splitq_napi_poll - NAPI handler * @napi: struct from which you get q_vector * @budget: budget provided by stack + * + * Return: how many packets were cleaned */ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget) { @@ -4336,24 +4372,26 @@ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget) /** * idpf_vport_intr_map_vector_to_qs - Map vectors to queues * @vport: virtual port + * @rsrc: pointer to queue and vector resources * * Mapping for vectors to queues */ -static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) +static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { - u16 num_txq_grp = vport->num_txq_grp - vport->num_xdp_txq; - bool split = 
idpf_is_queue_model_split(vport->rxq_model); + u16 num_txq_grp = rsrc->num_txq_grp - vport->num_xdp_txq; + bool split = idpf_is_queue_model_split(rsrc->rxq_model); struct idpf_rxq_group *rx_qgrp; struct idpf_txq_group *tx_qgrp; - u32 i, qv_idx, q_index; + u32 q_index; - for (i = 0, qv_idx = 0; i < vport->num_rxq_grp; i++) { + for (unsigned int i = 0, qv_idx = 0; i < rsrc->num_rxq_grp; i++) { u16 num_rxq; - if (qv_idx >= vport->num_q_vectors) + if (qv_idx >= rsrc->num_q_vectors) qv_idx = 0; - rx_qgrp = &vport->rxq_grps[i]; + rx_qgrp = &rsrc->rxq_grps[i]; if (split) num_rxq = rx_qgrp->splitq.num_rxq_sets; else @@ -4366,7 +4404,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) q = &rx_qgrp->splitq.rxq_sets[j]->rxq; else q = rx_qgrp->singleq.rxqs[j]; - q->q_vector = &vport->q_vectors[qv_idx]; + q->q_vector = &rsrc->q_vectors[qv_idx]; q_index = q->q_vector->num_rxq; q->q_vector->rx[q_index] = q; q->q_vector->num_rxq++; @@ -4376,11 +4414,11 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) } if (split) { - for (u32 j = 0; j < vport->num_bufqs_per_qgrp; j++) { + for (u32 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { struct idpf_buf_queue *bufq; bufq = &rx_qgrp->splitq.bufq_sets[j].bufq; - bufq->q_vector = &vport->q_vectors[qv_idx]; + bufq->q_vector = &rsrc->q_vectors[qv_idx]; q_index = bufq->q_vector->num_bufq; bufq->q_vector->bufq[q_index] = bufq; bufq->q_vector->num_bufq++; @@ -4390,40 +4428,40 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) qv_idx++; } - split = idpf_is_queue_model_split(vport->txq_model); + split = idpf_is_queue_model_split(rsrc->txq_model); - for (i = 0, qv_idx = 0; i < num_txq_grp; i++) { + for (unsigned int i = 0, qv_idx = 0; i < num_txq_grp; i++) { u16 num_txq; - if (qv_idx >= vport->num_q_vectors) + if (qv_idx >= rsrc->num_q_vectors) qv_idx = 0; - tx_qgrp = &vport->txq_grps[i]; + tx_qgrp = &rsrc->txq_grps[i]; num_txq = tx_qgrp->num_txq; for (u32 j = 0; j < num_txq; j++) { struct idpf_tx_queue *q; q = tx_qgrp->txqs[j]; - q->q_vector = &vport->q_vectors[qv_idx]; + q->q_vector = &rsrc->q_vectors[qv_idx]; q->q_vector->tx[q->q_vector->num_txq++] = q; } if (split) { struct idpf_compl_queue *q = tx_qgrp->complq; - q->q_vector = &vport->q_vectors[qv_idx]; + q->q_vector = &rsrc->q_vectors[qv_idx]; q->q_vector->complq[q->q_vector->num_complq++] = q; } qv_idx++; } - for (i = 0; i < vport->num_xdp_txq; i++) { + for (unsigned int i = 0; i < vport->num_xdp_txq; i++) { struct idpf_tx_queue *xdpsq; struct idpf_q_vector *qv; - xdpsq = vport->txqs[vport->xdp_txq_offset + i]; + xdpsq = vport->txqs[rsrc->xdp_txq_offset + i]; if (!idpf_queue_has(XSK, xdpsq)) continue; @@ -4438,10 +4476,14 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) /** * idpf_vport_intr_init_vec_idx - Initialize the vector indexes * @vport: virtual port + * @rsrc: pointer to queue and vector resources * - * Initialize vector indexes with values returened over mailbox + * Initialize vector indexes with values returned over mailbox. 
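For reference, struct idpf_q_vec_rsrc itself is not visible in these hunks. A minimal sketch of the container, with the field set inferred purely from how rsrc is dereferenced throughout this patch (the authoritative definition lives in the driver headers and may differ in types, ordering and extra members):

	/* Sketch only: fields deduced from usage in the surrounding hunks */
	struct idpf_q_vec_rsrc {
		struct idpf_q_vector *q_vectors;	/* vector array */
		u16 *q_vector_idxs;		/* MSI-X index per vector */
		u16 num_q_vectors;
		u16 noirq_v_idx;		/* vector backing NOIRQ queues */
		void __iomem *noirq_dyn_ctl;
		u32 noirq_dyn_ctl_ena;
		struct idpf_txq_group *txq_grps;
		struct idpf_rxq_group *rxq_grps;
		u16 num_txq_grp;
		u16 num_rxq_grp;
		u32 txq_model;			/* split vs. single queue */
		u32 rxq_model;
		u16 num_txq, num_complq, num_rxq, num_bufq;
		u8 num_bufqs_per_qgrp;
		u16 xdp_txq_offset;
		bool base_rxd;
	};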
+ * + * Return: 0 on success, negative on failure */ -static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport) +static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_adapter *adapter = vport->adapter; struct virtchnl2_alloc_vectors *ac; @@ -4450,10 +4492,10 @@ static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport) ac = adapter->req_vec_chunks; if (!ac) { - for (i = 0; i < vport->num_q_vectors; i++) - vport->q_vectors[i].v_idx = vport->q_vector_idxs[i]; + for (i = 0; i < rsrc->num_q_vectors; i++) + rsrc->q_vectors[i].v_idx = rsrc->q_vector_idxs[i]; - vport->noirq_v_idx = vport->q_vector_idxs[i]; + rsrc->noirq_v_idx = rsrc->q_vector_idxs[i]; return 0; } @@ -4465,10 +4507,10 @@ static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport) idpf_get_vec_ids(adapter, vecids, total_vecs, &ac->vchunks); - for (i = 0; i < vport->num_q_vectors; i++) - vport->q_vectors[i].v_idx = vecids[vport->q_vector_idxs[i]]; + for (i = 0; i < rsrc->num_q_vectors; i++) + rsrc->q_vectors[i].v_idx = vecids[rsrc->q_vector_idxs[i]]; - vport->noirq_v_idx = vecids[vport->q_vector_idxs[i]]; + rsrc->noirq_v_idx = vecids[rsrc->q_vector_idxs[i]]; kfree(vecids); @@ -4478,21 +4520,24 @@ static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport) /** * idpf_vport_intr_napi_add_all- Register napi handler for all qvectors * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources */ -static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport) +static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { int (*napi_poll)(struct napi_struct *napi, int budget); - u16 v_idx, qv_idx; int irq_num; + u16 qv_idx; - if (idpf_is_queue_model_split(vport->txq_model)) + if (idpf_is_queue_model_split(rsrc->txq_model)) napi_poll = idpf_vport_splitq_napi_poll; else napi_poll = idpf_vport_singleq_napi_poll; - for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx]; - qv_idx = vport->q_vector_idxs[v_idx]; + for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) { + struct idpf_q_vector *q_vector = &rsrc->q_vectors[v_idx]; + + qv_idx = rsrc->q_vector_idxs[v_idx]; irq_num = vport->adapter->msix_entries[qv_idx].vector; netif_napi_add_config(vport->netdev, &q_vector->napi, @@ -4504,37 +4549,41 @@ /** * idpf_vport_intr_alloc - Allocate memory for interrupt vectors * @vport: virtual port + * @rsrc: pointer to queue and vector resources + * + * Allocate one q_vector per queue interrupt. * - * We allocate one q_vector per queue interrupt. If allocation fails we - * return -ENOMEM. + * Return: 0 on success, -ENOMEM on allocation failure.
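idpf_vport_intr_alloc(), whose body follows, sizes each vector's queue arrays with DIV_ROUND_UP() so that the later round-robin queue-to-vector mapping can never overrun them. A worked example with hypothetical counts (not taken from this patch):

	/* Hypothetical: 10 Tx queue groups spread across 4 vectors.
	 * DIV_ROUND_UP(10, 4) == 3, so every q_vector gets room for
	 * 3 Tx queues even though two of the vectors only carry 2.
	 */
	txqs_per_vector = DIV_ROUND_UP(rsrc->num_txq_grp,	/* 10 */
				       rsrc->num_q_vectors);	/*  4 */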
*/ -int idpf_vport_intr_alloc(struct idpf_vport *vport) +int idpf_vport_intr_alloc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { u16 txqs_per_vector, rxqs_per_vector, bufqs_per_vector; struct idpf_vport_user_config_data *user_config; struct idpf_q_vector *q_vector; struct idpf_q_coalesce *q_coal; - u32 complqs_per_vector, v_idx; + u32 complqs_per_vector; u16 idx = vport->idx; user_config = &vport->adapter->vport_config[idx]->user_config; - vport->q_vectors = kcalloc(vport->num_q_vectors, - sizeof(struct idpf_q_vector), GFP_KERNEL); - if (!vport->q_vectors) + + rsrc->q_vectors = kcalloc(rsrc->num_q_vectors, + sizeof(struct idpf_q_vector), GFP_KERNEL); + if (!rsrc->q_vectors) return -ENOMEM; - txqs_per_vector = DIV_ROUND_UP(vport->num_txq_grp, - vport->num_q_vectors); - rxqs_per_vector = DIV_ROUND_UP(vport->num_rxq_grp, - vport->num_q_vectors); - bufqs_per_vector = vport->num_bufqs_per_qgrp * - DIV_ROUND_UP(vport->num_rxq_grp, - vport->num_q_vectors); - complqs_per_vector = DIV_ROUND_UP(vport->num_txq_grp, - vport->num_q_vectors); - - for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) { - q_vector = &vport->q_vectors[v_idx]; + txqs_per_vector = DIV_ROUND_UP(rsrc->num_txq_grp, + rsrc->num_q_vectors); + rxqs_per_vector = DIV_ROUND_UP(rsrc->num_rxq_grp, + rsrc->num_q_vectors); + bufqs_per_vector = rsrc->num_bufqs_per_qgrp * + DIV_ROUND_UP(rsrc->num_rxq_grp, + rsrc->num_q_vectors); + complqs_per_vector = DIV_ROUND_UP(rsrc->num_txq_grp, + rsrc->num_q_vectors); + + for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) { + q_vector = &rsrc->q_vectors[v_idx]; q_coal = &user_config->q_coalesce[v_idx]; q_vector->vport = vport; @@ -4556,7 +4605,7 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport) if (!q_vector->rx) goto error; - if (!idpf_is_queue_model_split(vport->rxq_model)) + if (!idpf_is_queue_model_split(rsrc->rxq_model)) continue; q_vector->bufq = kcalloc(bufqs_per_vector, @@ -4571,7 +4620,7 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport) if (!q_vector->complq) goto error; - if (!vport->xdp_txq_offset) + if (!rsrc->xdp_txq_offset) continue; q_vector->xsksq = kcalloc(rxqs_per_vector, @@ -4584,7 +4633,7 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport) return 0; error: - idpf_vport_intr_rel(vport); + idpf_vport_intr_rel(rsrc); return -ENOMEM; } @@ -4592,72 +4641,74 @@ error: /** * idpf_vport_intr_init - Setup all vectors for the given vport * @vport: virtual port + * @rsrc: pointer to queue and vector resources * - * Returns 0 on success or negative on failure + * Return: 0 on success or negative on failure */ -int idpf_vport_intr_init(struct idpf_vport *vport) +int idpf_vport_intr_init(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc) { int err; - err = idpf_vport_intr_init_vec_idx(vport); + err = idpf_vport_intr_init_vec_idx(vport, rsrc); if (err) return err; - idpf_vport_intr_map_vector_to_qs(vport); - idpf_vport_intr_napi_add_all(vport); + idpf_vport_intr_map_vector_to_qs(vport, rsrc); + idpf_vport_intr_napi_add_all(vport, rsrc); - err = vport->adapter->dev_ops.reg_ops.intr_reg_init(vport); + err = vport->adapter->dev_ops.reg_ops.intr_reg_init(vport, rsrc); if (err) goto unroll_vectors_alloc; - err = idpf_vport_intr_req_irq(vport); + err = idpf_vport_intr_req_irq(vport, rsrc); if (err) goto unroll_vectors_alloc; return 0; unroll_vectors_alloc: - idpf_vport_intr_napi_del_all(vport); + idpf_vport_intr_napi_del_all(rsrc); return err; } -void idpf_vport_intr_ena(struct idpf_vport *vport) +void idpf_vport_intr_ena(struct idpf_vport *vport, struct 
idpf_q_vec_rsrc *rsrc) { - idpf_vport_intr_napi_ena_all(vport); - idpf_vport_intr_ena_irq_all(vport); + idpf_vport_intr_napi_ena_all(rsrc); + idpf_vport_intr_ena_irq_all(vport, rsrc); } /** * idpf_config_rss - Send virtchnl messages to configure RSS * @vport: virtual port + * @rss_data: pointer to RSS key and lut info * - * Return 0 on success, negative on failure + * Return: 0 on success, negative on failure */ -int idpf_config_rss(struct idpf_vport *vport) +int idpf_config_rss(struct idpf_vport *vport, struct idpf_rss_data *rss_data) { + struct idpf_adapter *adapter = vport->adapter; + u32 vport_id = vport->vport_id; int err; - err = idpf_send_get_set_rss_key_msg(vport, false); + err = idpf_send_get_set_rss_key_msg(adapter, rss_data, vport_id, false); if (err) return err; - return idpf_send_get_set_rss_lut_msg(vport, false); + return idpf_send_get_set_rss_lut_msg(adapter, rss_data, vport_id, false); } /** * idpf_fill_dflt_rss_lut - Fill the indirection table with the default values * @vport: virtual port structure + * @rss_data: pointer to RSS key and lut info */ -void idpf_fill_dflt_rss_lut(struct idpf_vport *vport) +void idpf_fill_dflt_rss_lut(struct idpf_vport *vport, + struct idpf_rss_data *rss_data) { - struct idpf_adapter *adapter = vport->adapter; - u16 num_active_rxq = vport->num_rxq; - struct idpf_rss_data *rss_data; + u16 num_active_rxq = vport->dflt_qv_rsrc.num_rxq; int i; - rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data; - for (i = 0; i < rss_data->rss_lut_size; i++) rss_data->rss_lut[i] = i % num_active_rxq; } @@ -4665,15 +4716,12 @@ void idpf_fill_dflt_rss_lut(struct idpf_vport *vport) /** * idpf_init_rss_lut - Allocate and initialize RSS LUT * @vport: virtual port + * @rss_data: pointer to RSS key and lut info * * Return: 0 on success, negative on failure */ -int idpf_init_rss_lut(struct idpf_vport *vport) +int idpf_init_rss_lut(struct idpf_vport *vport, struct idpf_rss_data *rss_data) { - struct idpf_adapter *adapter = vport->adapter; - struct idpf_rss_data *rss_data; - - rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data; if (!rss_data->rss_lut) { u32 lut_size; @@ -4684,21 +4732,17 @@ int idpf_init_rss_lut(struct idpf_vport *vport) } /* Fill the default RSS lut values */ - idpf_fill_dflt_rss_lut(vport); + idpf_fill_dflt_rss_lut(vport, rss_data); return 0; } /** * idpf_deinit_rss_lut - Release RSS LUT - * @vport: virtual port + * @rss_data: pointer to RSS key and lut info */ -void idpf_deinit_rss_lut(struct idpf_vport *vport) +void idpf_deinit_rss_lut(struct idpf_rss_data *rss_data) { - struct idpf_adapter *adapter = vport->adapter; - struct idpf_rss_data *rss_data; - - rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data; kfree(rss_data->rss_lut); rss_data->rss_lut = NULL; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 423cc9486dce..4be5b3b6d3ed 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -283,6 +283,7 @@ struct idpf_ptype_state { * @__IDPF_Q_FLOW_SCH_EN: Enable flow scheduling * @__IDPF_Q_SW_MARKER: Used to indicate TX queue marker completions * @__IDPF_Q_CRC_EN: enable CRC offload in singleq mode + * @__IDPF_Q_RSC_EN: enable Receive Side Coalescing on Rx (splitq) * @__IDPF_Q_HSPLIT_EN: enable header split on Rx (splitq) * @__IDPF_Q_PTP: indicates whether the Rx timestamping is enabled for the * queue @@ -297,6 +298,7 @@ enum idpf_queue_flags_t { __IDPF_Q_FLOW_SCH_EN, __IDPF_Q_SW_MARKER, 
__IDPF_Q_CRC_EN, + __IDPF_Q_RSC_EN, __IDPF_Q_HSPLIT_EN, __IDPF_Q_PTP, __IDPF_Q_NOIRQ, @@ -925,6 +927,7 @@ struct idpf_bufq_set { * @singleq.rxqs: Array of RX queue pointers * @splitq: Struct with split queue related members * @splitq.num_rxq_sets: Number of RX queue sets + * @splitq.num_bufq_sets: Number of buffer queue sets * @splitq.rxq_sets: Array of RX queue sets * @splitq.bufq_sets: Buffer queue set pointer * @@ -942,6 +945,7 @@ struct idpf_rxq_group { } singleq; struct { u16 num_rxq_sets; + u16 num_bufq_sets; struct idpf_rxq_set *rxq_sets[IDPF_LARGE_MAX_Q]; struct idpf_bufq_set *bufq_sets; } splitq; @@ -1072,25 +1076,35 @@ static inline u32 idpf_tx_splitq_get_free_bufs(struct idpf_sw_queue *refillq) int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget); void idpf_vport_init_num_qs(struct idpf_vport *vport, - struct virtchnl2_create_vport *vport_msg); -void idpf_vport_calc_num_q_desc(struct idpf_vport *vport); + struct virtchnl2_create_vport *vport_msg, + struct idpf_q_vec_rsrc *rsrc); +void idpf_vport_calc_num_q_desc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_index, struct virtchnl2_create_vport *vport_msg, struct idpf_vport_max_q *max_q); -void idpf_vport_calc_num_q_groups(struct idpf_vport *vport); -int idpf_vport_queues_alloc(struct idpf_vport *vport); -void idpf_vport_queues_rel(struct idpf_vport *vport); -void idpf_vport_intr_rel(struct idpf_vport *vport); -int idpf_vport_intr_alloc(struct idpf_vport *vport); +void idpf_vport_calc_num_q_groups(struct idpf_q_vec_rsrc *rsrc); +int idpf_vport_queues_alloc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); +void idpf_vport_queues_rel(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); +void idpf_vport_intr_rel(struct idpf_q_vec_rsrc *rsrc); +int idpf_vport_intr_alloc(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector); -void idpf_vport_intr_deinit(struct idpf_vport *vport); -int idpf_vport_intr_init(struct idpf_vport *vport); -void idpf_vport_intr_ena(struct idpf_vport *vport); -void idpf_fill_dflt_rss_lut(struct idpf_vport *vport); -int idpf_config_rss(struct idpf_vport *vport); -int idpf_init_rss_lut(struct idpf_vport *vport); -void idpf_deinit_rss_lut(struct idpf_vport *vport); -int idpf_rx_bufs_init_all(struct idpf_vport *vport); +void idpf_vport_intr_deinit(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); +int idpf_vport_intr_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); +void idpf_vport_intr_ena(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); +void idpf_fill_dflt_rss_lut(struct idpf_vport *vport, + struct idpf_rss_data *rss_data); +int idpf_config_rss(struct idpf_vport *vport, struct idpf_rss_data *rss_data); +int idpf_init_rss_lut(struct idpf_vport *vport, struct idpf_rss_data *rss_data); +void idpf_deinit_rss_lut(struct idpf_rss_data *rss_data); +int idpf_rx_bufs_init_all(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); struct idpf_q_vector *idpf_find_rxq_vec(const struct idpf_vport *vport, u32 q_num); diff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c index 4cc58c83688c..7527b967e2e7 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c +++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c @@ -69,11 +69,13 @@ static void idpf_vf_mb_intr_reg_init(struct idpf_adapter *adapter) /** * idpf_vf_intr_reg_init - Initialize interrupt
registers * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources */ -static int idpf_vf_intr_reg_init(struct idpf_vport *vport) +static int idpf_vf_intr_reg_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_adapter *adapter = vport->adapter; - int num_vecs = vport->num_q_vectors; + u16 num_vecs = rsrc->num_q_vectors; struct idpf_vec_regs *reg_vals; int num_regs, i, err = 0; u32 rx_itr, tx_itr, val; @@ -85,15 +87,15 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport) if (!reg_vals) return -ENOMEM; - num_regs = idpf_get_reg_intr_vecs(vport, reg_vals); + num_regs = idpf_get_reg_intr_vecs(adapter, reg_vals); if (num_regs < num_vecs) { err = -EINVAL; goto free_reg_vals; } for (i = 0; i < num_vecs; i++) { - struct idpf_q_vector *q_vector = &vport->q_vectors[i]; - u16 vec_id = vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC; + struct idpf_q_vector *q_vector = &rsrc->q_vectors[i]; + u16 vec_id = rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC; struct idpf_intr_reg *intr = &q_vector->intr_reg; u32 spacing; @@ -122,12 +124,12 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport) /* Data vector for NOIRQ queues */ - val = reg_vals[vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC].dyn_ctl_reg; - vport->noirq_dyn_ctl = idpf_get_reg_addr(adapter, val); + val = reg_vals[rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC].dyn_ctl_reg; + rsrc->noirq_dyn_ctl = idpf_get_reg_addr(adapter, val); val = VF_INT_DYN_CTLN_WB_ON_ITR_M | VF_INT_DYN_CTLN_INTENA_MSK_M | FIELD_PREP(VF_INT_DYN_CTLN_ITR_INDX_M, IDPF_NO_ITR_UPDATE_IDX); - vport->noirq_dyn_ctl_ena = val; + rsrc->noirq_dyn_ctl_ena = val; free_reg_vals: kfree(reg_vals); @@ -156,7 +158,8 @@ static void idpf_vf_trigger_reset(struct idpf_adapter *adapter, /* Do not send VIRTCHNL2_OP_RESET_VF message on driver unload */ if (trig_cause == IDPF_HR_FUNC_RESET && !test_bit(IDPF_REMOVE_IN_PROG, adapter->flags)) - idpf_send_mb_msg(adapter, VIRTCHNL2_OP_RESET_VF, 0, NULL, 0); + idpf_send_mb_msg(adapter, adapter->hw.asq, + VIRTCHNL2_OP_RESET_VF, 0, NULL, 0); } /** diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index cb702eac86c8..d46affaf7185 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -117,13 +117,15 @@ static void idpf_recv_event_msg(struct idpf_adapter *adapter, /** * idpf_mb_clean - Reclaim the send mailbox queue entries - * @adapter: Driver specific private structure + * @adapter: driver specific private structure + * @asq: send control queue info * * Reclaim the send mailbox queue entries to be used to send further messages * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ -static int idpf_mb_clean(struct idpf_adapter *adapter) +static int idpf_mb_clean(struct idpf_adapter *adapter, + struct idpf_ctlq_info *asq) { u16 i, num_q_msg = IDPF_DFLT_MBX_Q_LEN; struct idpf_ctlq_msg **q_msg; @@ -134,7 +136,7 @@ static int idpf_mb_clean(struct idpf_adapter *adapter) if (!q_msg) return -ENOMEM; - err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg); + err = idpf_ctlq_clean_sq(asq, &num_q_msg, q_msg); if (err) goto err_kfree; @@ -206,7 +208,8 @@ static void idpf_prepare_ptp_mb_msg(struct idpf_adapter *adapter, u32 op, /** * idpf_send_mb_msg - Send message over mailbox - * @adapter: Driver specific private structure + * @adapter: driver specific private structure + * @asq: control queue to send message to * @op: virtchnl opcode * @msg_size: size of 
the payload * @msg: pointer to buffer holding the payload @@ -214,10 +217,10 @@ static void idpf_prepare_ptp_mb_msg(struct idpf_adapter *adapter, u32 op, * * Will prepare the control queue message and initiates the send api * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ -int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op, - u16 msg_size, u8 *msg, u16 cookie) +int idpf_send_mb_msg(struct idpf_adapter *adapter, struct idpf_ctlq_info *asq, + u32 op, u16 msg_size, u8 *msg, u16 cookie) { struct idpf_ctlq_msg *ctlq_msg; struct idpf_dma_mem *dma_mem; @@ -231,7 +234,7 @@ int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op, if (idpf_is_reset_detected(adapter)) return 0; - err = idpf_mb_clean(adapter); + err = idpf_mb_clean(adapter, asq); if (err) return err; @@ -267,7 +270,7 @@ int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op, ctlq_msg->ctx.indirect.payload = dma_mem; ctlq_msg->ctx.sw_cookie.data = cookie; - err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg); + err = idpf_ctlq_send(&adapter->hw, asq, 1, ctlq_msg); if (err) goto send_error; @@ -463,7 +466,7 @@ ssize_t idpf_vc_xn_exec(struct idpf_adapter *adapter, cookie = FIELD_PREP(IDPF_VC_XN_SALT_M, xn->salt) | FIELD_PREP(IDPF_VC_XN_IDX_M, xn->idx); - retval = idpf_send_mb_msg(adapter, params->vc_op, + retval = idpf_send_mb_msg(adapter, adapter->hw.asq, params->vc_op, send_buf->iov_len, send_buf->iov_base, cookie); if (retval) { @@ -662,12 +665,14 @@ out_unlock: /** * idpf_recv_mb_msg - Receive message over mailbox - * @adapter: Driver specific private structure + * @adapter: driver specific private structure + * @arq: control queue to receive message from + * + * Will receive control queue message and posts the receive buffer. * - * Will receive control queue message and posts the receive buffer. Returns 0 - * on success and negative on failure. + * Return: 0 on success and negative on failure. */ -int idpf_recv_mb_msg(struct idpf_adapter *adapter) +int idpf_recv_mb_msg(struct idpf_adapter *adapter, struct idpf_ctlq_info *arq) { struct idpf_ctlq_msg ctlq_msg; struct idpf_dma_mem *dma_mem; @@ -679,7 +684,7 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter) * actually received on num_recv. 
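With the control queue now an explicit parameter, a caller names the queue pair instead of idpf_send_mb_msg()/idpf_recv_mb_msg() reaching into adapter->hw internally. A minimal usage sketch against the default mailbox pair, following the converted call sites above:

	/* Sketch: the default mailbox pair; any other idpf_ctlq_info
	 * could be passed instead, which is the point of the change.
	 */
	err = idpf_send_mb_msg(adapter, adapter->hw.asq,
			       VIRTCHNL2_OP_RESET_VF, 0, NULL, 0);

	/* ... and on the receive side, e.g. from the mailbox service path: */
	err = idpf_recv_mb_msg(adapter, adapter->hw.arq);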
*/ num_recv = 1; - err = idpf_ctlq_recv(adapter->hw.arq, &num_recv, &ctlq_msg); + err = idpf_ctlq_recv(arq, &num_recv, &ctlq_msg); if (err || !num_recv) break; @@ -695,8 +700,7 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter) else err = idpf_vc_xn_forward_reply(adapter, &ctlq_msg); - post_err = idpf_ctlq_post_rx_buffs(&adapter->hw, - adapter->hw.arq, + post_err = idpf_ctlq_post_rx_buffs(&adapter->hw, arq, &num_recv, &dma_mem); /* If post failed clear the only buffer we supplied */ @@ -717,9 +721,8 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter) } struct idpf_chunked_msg_params { - u32 (*prepare_msg)(const struct idpf_vport *vport, - void *buf, const void *pos, - u32 num); + u32 (*prepare_msg)(u32 vport_id, void *buf, + const void *pos, u32 num); const void *chunks; u32 num_chunks; @@ -728,9 +731,12 @@ struct idpf_chunked_msg_params { u32 config_sz; u32 vc_op; + u32 vport_id; }; -struct idpf_queue_set *idpf_alloc_queue_set(struct idpf_vport *vport, u32 num) +struct idpf_queue_set *idpf_alloc_queue_set(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *qv_rsrc, + u32 vport_id, u32 num) { struct idpf_queue_set *qp; @@ -738,7 +744,9 @@ struct idpf_queue_set *idpf_alloc_queue_set(struct idpf_vport *vport, u32 num) if (!qp) return NULL; - qp->vport = vport; + qp->adapter = adapter; + qp->qv_rsrc = qv_rsrc; + qp->vport_id = vport_id; qp->num = num; return qp; @@ -746,7 +754,7 @@ struct idpf_queue_set *idpf_alloc_queue_set(struct idpf_vport *vport, u32 num) /** * idpf_send_chunked_msg - send VC message consisting of chunks - * @vport: virtual port data structure + * @adapter: Driver specific private structure * @params: message params * * Helper function for preparing a message describing queues to be enabled @@ -754,7 +762,7 @@ struct idpf_queue_set *idpf_alloc_queue_set(struct idpf_vport *vport, u32 num) * * Return: the total size of the prepared message. 
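idpf_send_chunked_msg(), defined next, slices the chunk array across as many mailbox messages as the buffer budget allows. Assuming IDPF_NUM_CHUNKS_PER_MSG() divides the payload space left after the fixed config header by the size of one chunk (the macro body is outside this excerpt), the arithmetic works out as:

	/* Assumed macro shape, for illustration only:
	 * IDPF_NUM_CHUNKS_PER_MSG(cfg, chk) ~= (max_buf - (cfg)) / (chk)
	 *
	 * e.g. a 4096-byte mailbox buffer, an 8-byte header and 56-byte
	 * chunks give (4096 - 8) / 56 = 73 chunks per message, so 100
	 * queues need DIV_ROUND_UP(100, 73) = 2 messages.
	 */
	num_chunks = min(IDPF_NUM_CHUNKS_PER_MSG(params->config_sz,
						 params->chunk_sz),
			 totqs);
	num_msgs = DIV_ROUND_UP(totqs, num_chunks);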
*/ -static int idpf_send_chunked_msg(struct idpf_vport *vport, +static int idpf_send_chunked_msg(struct idpf_adapter *adapter, const struct idpf_chunked_msg_params *params) { struct idpf_vc_xn_params xn_params = { @@ -765,6 +773,7 @@ static int idpf_send_chunked_msg(struct idpf_vport *vport, u32 num_chunks, num_msgs, buf_sz; void *buf __free(kfree) = NULL; u32 totqs = params->num_chunks; + u32 vid = params->vport_id; num_chunks = min(IDPF_NUM_CHUNKS_PER_MSG(params->config_sz, params->chunk_sz), totqs); @@ -783,10 +792,10 @@ static int idpf_send_chunked_msg(struct idpf_vport *vport, memset(buf, 0, buf_sz); xn_params.send_buf.iov_len = buf_sz; - if (params->prepare_msg(vport, buf, pos, num_chunks) != buf_sz) + if (params->prepare_msg(vid, buf, pos, num_chunks) != buf_sz) return -EINVAL; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); if (reply_sz < 0) return reply_sz; @@ -809,6 +818,7 @@ static int idpf_send_chunked_msg(struct idpf_vport *vport, */ static int idpf_wait_for_marker_event_set(const struct idpf_queue_set *qs) { + struct net_device *netdev; struct idpf_tx_queue *txq; bool markers_rcvd = true; @@ -817,6 +827,8 @@ static int idpf_wait_for_marker_event_set(const struct idpf_queue_set *qs) case VIRTCHNL2_QUEUE_TYPE_TX: txq = qs->qs[i].txq; + netdev = txq->netdev; + idpf_queue_set(SW_MARKER, txq); idpf_wait_for_sw_marker_completion(txq); markers_rcvd &= !idpf_queue_has(SW_MARKER, txq); @@ -827,7 +839,7 @@ static int idpf_wait_for_marker_event_set(const struct idpf_queue_set *qs) } if (!markers_rcvd) { - netdev_warn(qs->vport->netdev, + netdev_warn(netdev, "Failed to receive marker packets\n"); return -ETIMEDOUT; } @@ -845,7 +857,8 @@ static int idpf_wait_for_marker_event(struct idpf_vport *vport) { struct idpf_queue_set *qs __free(kfree) = NULL; - qs = idpf_alloc_queue_set(vport, vport->num_txq); + qs = idpf_alloc_queue_set(vport->adapter, &vport->dflt_qv_rsrc, + vport->vport_id, vport->num_txq); if (!qs) return -ENOMEM; @@ -1263,13 +1276,52 @@ static void idpf_init_avail_queues(struct idpf_adapter *adapter) } /** + * idpf_vport_init_queue_reg_chunks - initialize queue register chunks + * @vport_config: persistent vport structure to store the queue register info + * @schunks: source chunks to copy data from + * + * Return: 0 on success, negative on failure. 
+ */ +static int +idpf_vport_init_queue_reg_chunks(struct idpf_vport_config *vport_config, + struct virtchnl2_queue_reg_chunks *schunks) +{ + struct idpf_queue_id_reg_info *q_info = &vport_config->qid_reg_info; + u16 num_chunks = le16_to_cpu(schunks->num_chunks); + + kfree(q_info->queue_chunks); + + q_info->queue_chunks = kcalloc(num_chunks, sizeof(*q_info->queue_chunks), + GFP_KERNEL); + if (!q_info->queue_chunks) { + q_info->num_chunks = 0; + return -ENOMEM; + } + + q_info->num_chunks = num_chunks; + + for (u16 i = 0; i < num_chunks; i++) { + struct idpf_queue_id_reg_chunk *dchunk = &q_info->queue_chunks[i]; + struct virtchnl2_queue_reg_chunk *schunk = &schunks->chunks[i]; + + dchunk->qtail_reg_start = le64_to_cpu(schunk->qtail_reg_start); + dchunk->qtail_reg_spacing = le32_to_cpu(schunk->qtail_reg_spacing); + dchunk->type = le32_to_cpu(schunk->type); + dchunk->start_queue_id = le32_to_cpu(schunk->start_queue_id); + dchunk->num_queues = le32_to_cpu(schunk->num_queues); + } + + return 0; +} + +/** * idpf_get_reg_intr_vecs - Get vector queue register offset - * @vport: virtual port structure + * @adapter: adapter structure to get the vector chunks * @reg_vals: Register offsets to store in * - * Returns number of registers that got populated + * Return: number of registers that got populated */ -int idpf_get_reg_intr_vecs(struct idpf_vport *vport, +int idpf_get_reg_intr_vecs(struct idpf_adapter *adapter, struct idpf_vec_regs *reg_vals) { struct virtchnl2_vector_chunks *chunks; @@ -1277,7 +1329,7 @@ int idpf_get_reg_intr_vecs(struct idpf_vport *vport, u16 num_vchunks, num_vec; int num_regs = 0, i, j; - chunks = &vport->adapter->req_vec_chunks->vchunks; + chunks = &adapter->req_vec_chunks->vchunks; num_vchunks = le16_to_cpu(chunks->num_vchunks); for (j = 0; j < num_vchunks; j++) { @@ -1322,25 +1374,25 @@ int idpf_get_reg_intr_vecs(struct idpf_vport *vport, * are filled. 
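Each cached chunk describes an arithmetic progression of tail registers, so a single chunk can cover a whole run of queues. With made-up values plugged into the loop shown below in idpf_vport_get_q_reg():

	/* Made-up chunk: start_queue_id = 32, num_queues = 4,
	 * qtail_reg_start = 0x2000, qtail_reg_spacing = 0x1000.
	 * The loop emits tail offsets 0x2000, 0x3000, 0x4000 and
	 * 0x5000 for queues 32..35.
	 */
	for (i = 0; i < num_q && reg_filled < num_regs; i++) {
		reg_vals[reg_filled++] = reg_val;
		reg_val += chunk->qtail_reg_spacing;
	}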
*/ static int idpf_vport_get_q_reg(u32 *reg_vals, int num_regs, u32 q_type, - struct virtchnl2_queue_reg_chunks *chunks) + struct idpf_queue_id_reg_info *chunks) { - u16 num_chunks = le16_to_cpu(chunks->num_chunks); + u16 num_chunks = chunks->num_chunks; int reg_filled = 0, i; u32 reg_val; while (num_chunks--) { - struct virtchnl2_queue_reg_chunk *chunk; + struct idpf_queue_id_reg_chunk *chunk; u16 num_q; - chunk = &chunks->chunks[num_chunks]; - if (le32_to_cpu(chunk->type) != q_type) + chunk = &chunks->queue_chunks[num_chunks]; + if (chunk->type != q_type) continue; - num_q = le32_to_cpu(chunk->num_queues); - reg_val = le64_to_cpu(chunk->qtail_reg_start); + num_q = chunk->num_queues; + reg_val = chunk->qtail_reg_start; for (i = 0; i < num_q && reg_filled < num_regs ; i++) { reg_vals[reg_filled++] = reg_val; - reg_val += le32_to_cpu(chunk->qtail_reg_spacing); + reg_val += chunk->qtail_reg_spacing; } } @@ -1350,13 +1402,15 @@ static int idpf_vport_get_q_reg(u32 *reg_vals, int num_regs, u32 q_type, /** * __idpf_queue_reg_init - initialize queue registers * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources * @reg_vals: registers we are initializing * @num_regs: how many registers there are in total * @q_type: queue model * * Return number of queues that are initialized */ -static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals, +static int __idpf_queue_reg_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, u32 *reg_vals, int num_regs, u32 q_type) { struct idpf_adapter *adapter = vport->adapter; @@ -1364,8 +1418,8 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals, switch (q_type) { case VIRTCHNL2_QUEUE_TYPE_TX: - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; for (j = 0; j < tx_qgrp->num_txq && k < num_regs; j++, k++) tx_qgrp->txqs[j]->tail = @@ -1373,8 +1427,8 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals, } break; case VIRTCHNL2_QUEUE_TYPE_RX: - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u16 num_rxq = rx_qgrp->singleq.num_rxq; for (j = 0; j < num_rxq && k < num_regs; j++, k++) { @@ -1387,9 +1441,9 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals, } break; case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; - u8 num_bufqs = vport->num_bufqs_per_qgrp; + for (i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; + u8 num_bufqs = rsrc->num_bufqs_per_qgrp; for (j = 0; j < num_bufqs && k < num_regs; j++, k++) { struct idpf_buf_queue *q; @@ -1410,15 +1464,15 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals, /** * idpf_queue_reg_init - initialize queue registers * @vport: virtual port structure + * @rsrc: pointer to queue and vector resources + * @chunks: queue registers received over mailbox * - * Return 0 on success, negative on failure + * Return: 0 on success, negative on failure */ -int idpf_queue_reg_init(struct idpf_vport *vport) +int idpf_queue_reg_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, + struct idpf_queue_id_reg_info *chunks) { - struct virtchnl2_create_vport *vport_params; - 
struct virtchnl2_queue_reg_chunks *chunks; - struct idpf_vport_config *vport_config; - u16 vport_idx = vport->idx; int num_regs, ret = 0; u32 *reg_vals; @@ -1427,28 +1481,18 @@ int idpf_queue_reg_init(struct idpf_vport *vport) if (!reg_vals) return -ENOMEM; - vport_config = vport->adapter->vport_config[vport_idx]; - if (vport_config->req_qs_chunks) { - struct virtchnl2_add_queues *vc_aq = - (struct virtchnl2_add_queues *)vport_config->req_qs_chunks; - chunks = &vc_aq->chunks; - } else { - vport_params = vport->adapter->vport_params_recvd[vport_idx]; - chunks = &vport_params->chunks; - } - /* Initialize Tx queue tail register address */ num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q, VIRTCHNL2_QUEUE_TYPE_TX, chunks); - if (num_regs < vport->num_txq) { + if (num_regs < rsrc->num_txq) { ret = -EINVAL; goto free_reg_vals; } - num_regs = __idpf_queue_reg_init(vport, reg_vals, num_regs, + num_regs = __idpf_queue_reg_init(vport, rsrc, reg_vals, num_regs, VIRTCHNL2_QUEUE_TYPE_TX); - if (num_regs < vport->num_txq) { + if (num_regs < rsrc->num_txq) { ret = -EINVAL; goto free_reg_vals; } @@ -1456,18 +1500,18 @@ int idpf_queue_reg_init(struct idpf_vport *vport) /* Initialize Rx/buffer queue tail register address based on Rx queue * model */ - if (idpf_is_queue_model_split(vport->rxq_model)) { + if (idpf_is_queue_model_split(rsrc->rxq_model)) { num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q, VIRTCHNL2_QUEUE_TYPE_RX_BUFFER, chunks); - if (num_regs < vport->num_bufq) { + if (num_regs < rsrc->num_bufq) { ret = -EINVAL; goto free_reg_vals; } - num_regs = __idpf_queue_reg_init(vport, reg_vals, num_regs, + num_regs = __idpf_queue_reg_init(vport, rsrc, reg_vals, num_regs, VIRTCHNL2_QUEUE_TYPE_RX_BUFFER); - if (num_regs < vport->num_bufq) { + if (num_regs < rsrc->num_bufq) { ret = -EINVAL; goto free_reg_vals; } @@ -1475,14 +1519,14 @@ int idpf_queue_reg_init(struct idpf_vport *vport) num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q, VIRTCHNL2_QUEUE_TYPE_RX, chunks); - if (num_regs < vport->num_rxq) { + if (num_regs < rsrc->num_rxq) { ret = -EINVAL; goto free_reg_vals; } - num_regs = __idpf_queue_reg_init(vport, reg_vals, num_regs, + num_regs = __idpf_queue_reg_init(vport, rsrc, reg_vals, num_regs, VIRTCHNL2_QUEUE_TYPE_RX); - if (num_regs < vport->num_rxq) { + if (num_regs < rsrc->num_rxq) { ret = -EINVAL; goto free_reg_vals; } @@ -1581,6 +1625,7 @@ free_vport_params: */ int idpf_check_supported_desc_ids(struct idpf_vport *vport) { + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; struct idpf_adapter *adapter = vport->adapter; struct virtchnl2_create_vport *vport_msg; u64 rx_desc_ids, tx_desc_ids; @@ -1597,17 +1642,17 @@ int idpf_check_supported_desc_ids(struct idpf_vport *vport) rx_desc_ids = le64_to_cpu(vport_msg->rx_desc_ids); tx_desc_ids = le64_to_cpu(vport_msg->tx_desc_ids); - if (idpf_is_queue_model_split(vport->rxq_model)) { + if (idpf_is_queue_model_split(rsrc->rxq_model)) { if (!(rx_desc_ids & VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M)) { dev_info(&adapter->pdev->dev, "Minimum RX descriptor support not provided, using the default\n"); vport_msg->rx_desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M); } } else { if (!(rx_desc_ids & VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M)) - vport->base_rxd = true; + rsrc->base_rxd = true; } - if (!idpf_is_queue_model_split(vport->txq_model)) + if (!idpf_is_queue_model_split(rsrc->txq_model)) return 0; if ((tx_desc_ids & MIN_SUPPORT_TXDID) != MIN_SUPPORT_TXDID) { @@ -1620,96 +1665,96 @@ int idpf_check_supported_desc_ids(struct idpf_vport *vport) 
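A caller now hands the cached chunk info straight to idpf_queue_reg_init() instead of the function digging it out of the vport config. A plausible call site, sketched (the actual caller sits outside this excerpt):

	/* Sketch: qid_reg_info was populated once from the virtchnl
	 * reply by idpf_vport_init_queue_reg_chunks(), so queue register
	 * init no longer touches the mailbox buffers.
	 */
	err = idpf_queue_reg_init(vport, &vport->dflt_qv_rsrc,
				  &vport_config->qid_reg_info);
	if (err)
		return err;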
/** * idpf_send_destroy_vport_msg - Send virtchnl destroy vport message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message + * @vport_id: vport identifier used while preparing the virtchnl message * - * Send virtchnl destroy vport message. Returns 0 on success, negative on - * failure. + * Return: 0 on success, negative on failure. */ -int idpf_send_destroy_vport_msg(struct idpf_vport *vport) +int idpf_send_destroy_vport_msg(struct idpf_adapter *adapter, u32 vport_id) { struct idpf_vc_xn_params xn_params = {}; struct virtchnl2_vport v_id; ssize_t reply_sz; - v_id.vport_id = cpu_to_le32(vport->vport_id); + v_id.vport_id = cpu_to_le32(vport_id); xn_params.vc_op = VIRTCHNL2_OP_DESTROY_VPORT; xn_params.send_buf.iov_base = &v_id; xn_params.send_buf.iov_len = sizeof(v_id); xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); return reply_sz < 0 ? reply_sz : 0; } /** * idpf_send_enable_vport_msg - Send virtchnl enable vport message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message + * @vport_id: vport identifier used while preparing the virtchnl message * - * Send enable vport virtchnl message. Returns 0 on success, negative on - * failure. + * Return: 0 on success, negative on failure. */ -int idpf_send_enable_vport_msg(struct idpf_vport *vport) +int idpf_send_enable_vport_msg(struct idpf_adapter *adapter, u32 vport_id) { struct idpf_vc_xn_params xn_params = {}; struct virtchnl2_vport v_id; ssize_t reply_sz; - v_id.vport_id = cpu_to_le32(vport->vport_id); + v_id.vport_id = cpu_to_le32(vport_id); xn_params.vc_op = VIRTCHNL2_OP_ENABLE_VPORT; xn_params.send_buf.iov_base = &v_id; xn_params.send_buf.iov_len = sizeof(v_id); xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); return reply_sz < 0 ? reply_sz : 0; } /** * idpf_send_disable_vport_msg - Send virtchnl disable vport message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message + * @vport_id: vport identifier used while preparing the virtchnl message * - * Send disable vport virtchnl message. Returns 0 on success, negative on - * failure. + * Return: 0 on success, negative on failure. */ -int idpf_send_disable_vport_msg(struct idpf_vport *vport) +int idpf_send_disable_vport_msg(struct idpf_adapter *adapter, u32 vport_id) { struct idpf_vc_xn_params xn_params = {}; struct virtchnl2_vport v_id; ssize_t reply_sz; - v_id.vport_id = cpu_to_le32(vport->vport_id); + v_id.vport_id = cpu_to_le32(vport_id); xn_params.vc_op = VIRTCHNL2_OP_DISABLE_VPORT; xn_params.send_buf.iov_base = &v_id; xn_params.send_buf.iov_len = sizeof(v_id); xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); return reply_sz < 0 ? 
reply_sz : 0; } /** * idpf_fill_txq_config_chunk - fill chunk describing the Tx queue - * @vport: virtual port data structure + * @rsrc: pointer to queue and vector resources * @q: Tx queue to be inserted into VC chunk * @qi: pointer to the buffer containing the VC chunk */ -static void idpf_fill_txq_config_chunk(const struct idpf_vport *vport, +static void idpf_fill_txq_config_chunk(const struct idpf_q_vec_rsrc *rsrc, const struct idpf_tx_queue *q, struct virtchnl2_txq_info *qi) { u32 val; qi->queue_id = cpu_to_le32(q->q_id); - qi->model = cpu_to_le16(vport->txq_model); + qi->model = cpu_to_le16(rsrc->txq_model); qi->type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX); qi->ring_len = cpu_to_le16(q->desc_count); qi->dma_ring_addr = cpu_to_le64(q->dma); qi->relative_queue_id = cpu_to_le16(q->rel_q_id); - if (!idpf_is_queue_model_split(vport->txq_model)) { + if (!idpf_is_queue_model_split(rsrc->txq_model)) { qi->sched_mode = cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_QUEUE); return; } @@ -1731,18 +1776,18 @@ static void idpf_fill_txq_config_chunk(const struct idpf_vport *vport, /** * idpf_fill_complq_config_chunk - fill chunk describing the completion queue - * @vport: virtual port data structure + * @rsrc: pointer to queue and vector resources * @q: completion queue to be inserted into VC chunk * @qi: pointer to the buffer containing the VC chunk */ -static void idpf_fill_complq_config_chunk(const struct idpf_vport *vport, +static void idpf_fill_complq_config_chunk(const struct idpf_q_vec_rsrc *rsrc, const struct idpf_compl_queue *q, struct virtchnl2_txq_info *qi) { u32 val; qi->queue_id = cpu_to_le32(q->q_id); - qi->model = cpu_to_le16(vport->txq_model); + qi->model = cpu_to_le16(rsrc->txq_model); qi->type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION); qi->ring_len = cpu_to_le16(q->desc_count); qi->dma_ring_addr = cpu_to_le64(q->dma); @@ -1757,7 +1802,7 @@ static void idpf_fill_complq_config_chunk(const struct idpf_vport *vport, /** * idpf_prepare_cfg_txqs_msg - prepare message to configure selected Tx queues - * @vport: virtual port data structure + * @vport_id: ID of virtual port queues are associated with * @buf: buffer containing the message * @pos: pointer to the first chunk describing the tx queue * @num_chunks: number of chunks in the message @@ -1767,13 +1812,12 @@ static void idpf_fill_complq_config_chunk(const struct idpf_vport *vport, * * Return: the total size of the prepared message. 
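Every prepare_msg callback follows the same contract: stamp the fixed header, copy the pre-built chunk array in, and report the resulting message size. Completing the Tx variant from the fragments below, under the usual struct_size() idiom (the return statement is not visible in this hunk, so treat it as an assumption):

	static u32 idpf_prepare_cfg_txqs_msg(u32 vport_id, void *buf,
					     const void *pos, u32 num_chunks)
	{
		struct virtchnl2_config_tx_queues *ctq = buf;

		ctq->vport_id = cpu_to_le32(vport_id);
		ctq->num_qinfo = cpu_to_le16(num_chunks);
		memcpy(ctq->qinfo, pos, num_chunks * sizeof(*ctq->qinfo));

		/* assumed: header plus num_chunks flexible-array entries */
		return struct_size(ctq, qinfo, num_chunks);
	}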
*/ -static u32 idpf_prepare_cfg_txqs_msg(const struct idpf_vport *vport, - void *buf, const void *pos, +static u32 idpf_prepare_cfg_txqs_msg(u32 vport_id, void *buf, const void *pos, u32 num_chunks) { struct virtchnl2_config_tx_queues *ctq = buf; - ctq->vport_id = cpu_to_le32(vport->vport_id); + ctq->vport_id = cpu_to_le32(vport_id); ctq->num_qinfo = cpu_to_le16(num_chunks); memcpy(ctq->qinfo, pos, num_chunks * sizeof(*ctq->qinfo)); @@ -1794,6 +1838,7 @@ static int idpf_send_config_tx_queue_set_msg(const struct idpf_queue_set *qs) { struct virtchnl2_txq_info *qi __free(kfree) = NULL; struct idpf_chunked_msg_params params = { + .vport_id = qs->vport_id, .vc_op = VIRTCHNL2_OP_CONFIG_TX_QUEUES, .prepare_msg = idpf_prepare_cfg_txqs_msg, .config_sz = sizeof(struct virtchnl2_config_tx_queues), @@ -1808,43 +1853,47 @@ static int idpf_send_config_tx_queue_set_msg(const struct idpf_queue_set *qs) for (u32 i = 0; i < qs->num; i++) { if (qs->qs[i].type == VIRTCHNL2_QUEUE_TYPE_TX) - idpf_fill_txq_config_chunk(qs->vport, qs->qs[i].txq, + idpf_fill_txq_config_chunk(qs->qv_rsrc, qs->qs[i].txq, &qi[params.num_chunks++]); else if (qs->qs[i].type == VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION) - idpf_fill_complq_config_chunk(qs->vport, + idpf_fill_complq_config_chunk(qs->qv_rsrc, qs->qs[i].complq, &qi[params.num_chunks++]); } - return idpf_send_chunked_msg(qs->vport, ¶ms); + return idpf_send_chunked_msg(qs->adapter, ¶ms); } /** * idpf_send_config_tx_queues_msg - send virtchnl config Tx queues message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message + * @rsrc: pointer to queue and vector resources + * @vport_id: vport identifier used while preparing the virtchnl message * * Return: 0 on success, -errno on failure. */ -static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) +static int idpf_send_config_tx_queues_msg(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id) { struct idpf_queue_set *qs __free(kfree) = NULL; - u32 totqs = vport->num_txq + vport->num_complq; + u32 totqs = rsrc->num_txq + rsrc->num_complq; u32 k = 0; - qs = idpf_alloc_queue_set(vport, totqs); + qs = idpf_alloc_queue_set(adapter, rsrc, vport_id, totqs); if (!qs) return -ENOMEM; /* Populate the queue info buffer with all queue context info */ - for (u32 i = 0; i < vport->num_txq_grp; i++) { - const struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (u32 i = 0; i < rsrc->num_txq_grp; i++) { + const struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; for (u32 j = 0; j < tx_qgrp->num_txq; j++) { qs->qs[k].type = VIRTCHNL2_QUEUE_TYPE_TX; qs->qs[k++].txq = tx_qgrp->txqs[j]; } - if (idpf_is_queue_model_split(vport->txq_model)) { + if (idpf_is_queue_model_split(rsrc->txq_model)) { qs->qs[k].type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; qs->qs[k++].complq = tx_qgrp->complq; } @@ -1859,28 +1908,28 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) /** * idpf_fill_rxq_config_chunk - fill chunk describing the Rx queue - * @vport: virtual port data structure + * @rsrc: pointer to queue and vector resources * @q: Rx queue to be inserted into VC chunk * @qi: pointer to the buffer containing the VC chunk */ -static void idpf_fill_rxq_config_chunk(const struct idpf_vport *vport, +static void idpf_fill_rxq_config_chunk(const struct idpf_q_vec_rsrc *rsrc, struct idpf_rx_queue *q, struct virtchnl2_rxq_info *qi) { const struct idpf_bufq_set *sets; qi->queue_id = cpu_to_le32(q->q_id); - qi->model = cpu_to_le16(vport->rxq_model); + qi->model = 
cpu_to_le16(rsrc->rxq_model); qi->type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX); qi->ring_len = cpu_to_le16(q->desc_count); qi->dma_ring_addr = cpu_to_le64(q->dma); qi->max_pkt_size = cpu_to_le32(q->rx_max_pkt_size); qi->rx_buffer_low_watermark = cpu_to_le16(q->rx_buffer_low_watermark); qi->qflags = cpu_to_le16(VIRTCHNL2_RX_DESC_SIZE_32BYTE); - if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW)) + if (idpf_queue_has(RSC_EN, q)) qi->qflags |= cpu_to_le16(VIRTCHNL2_RXQ_RSC); - if (!idpf_is_queue_model_split(vport->rxq_model)) { + if (!idpf_is_queue_model_split(rsrc->rxq_model)) { qi->data_buffer_size = cpu_to_le32(q->rx_buf_size); qi->desc_ids = cpu_to_le64(q->rxdids); @@ -1897,7 +1946,7 @@ static void idpf_fill_rxq_config_chunk(const struct idpf_vport *vport, qi->data_buffer_size = cpu_to_le32(q->rx_buf_size); qi->rx_bufq1_id = cpu_to_le16(sets[0].bufq.q_id); - if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) { + if (rsrc->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) { qi->bufq2_ena = IDPF_BUFQ2_ENA; qi->rx_bufq2_id = cpu_to_le16(sets[1].bufq.q_id); } @@ -1914,16 +1963,16 @@ static void idpf_fill_rxq_config_chunk(const struct idpf_vport *vport, /** * idpf_fill_bufq_config_chunk - fill chunk describing the buffer queue - * @vport: virtual port data structure + * @rsrc: pointer to queue and vector resources * @q: buffer queue to be inserted into VC chunk * @qi: pointer to the buffer containing the VC chunk */ -static void idpf_fill_bufq_config_chunk(const struct idpf_vport *vport, +static void idpf_fill_bufq_config_chunk(const struct idpf_q_vec_rsrc *rsrc, const struct idpf_buf_queue *q, struct virtchnl2_rxq_info *qi) { qi->queue_id = cpu_to_le32(q->q_id); - qi->model = cpu_to_le16(vport->rxq_model); + qi->model = cpu_to_le16(rsrc->rxq_model); qi->type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX_BUFFER); qi->ring_len = cpu_to_le16(q->desc_count); qi->dma_ring_addr = cpu_to_le64(q->dma); @@ -1931,7 +1980,7 @@ static void idpf_fill_bufq_config_chunk(const struct idpf_vport *vport, qi->rx_buffer_low_watermark = cpu_to_le16(q->rx_buffer_low_watermark); qi->desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M); qi->buffer_notif_stride = IDPF_RX_BUF_STRIDE; - if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW)) + if (idpf_queue_has(RSC_EN, q)) qi->qflags = cpu_to_le16(VIRTCHNL2_RXQ_RSC); if (idpf_queue_has(HSPLIT_EN, q)) { @@ -1942,7 +1991,7 @@ static void idpf_fill_bufq_config_chunk(const struct idpf_vport *vport, /** * idpf_prepare_cfg_rxqs_msg - prepare message to configure selected Rx queues - * @vport: virtual port data structure + * @vport_id: ID of virtual port queues are associated with * @buf: buffer containing the message * @pos: pointer to the first chunk describing the rx queue * @num_chunks: number of chunks in the message @@ -1952,13 +2001,12 @@ static void idpf_fill_bufq_config_chunk(const struct idpf_vport *vport, * * Return: the total size of the prepared message. 
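Replacing the vport-wide NETIF_F_GRO_HW test with the per-queue RSC_EN flag, as above, lets RSC be granted or withheld for individual Rx and buffer queues. The flag would presumably be seeded from the negotiated netdev feature when queues are configured, roughly as follows (the site that actually sets it is not part of this excerpt):

	/* Sketch: propagate the feature bit into the per-queue flag */
	if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW))
		idpf_queue_set(RSC_EN, q);
	else
		idpf_queue_clear(RSC_EN, q);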
*/ -static u32 idpf_prepare_cfg_rxqs_msg(const struct idpf_vport *vport, - void *buf, const void *pos, +static u32 idpf_prepare_cfg_rxqs_msg(u32 vport_id, void *buf, const void *pos, u32 num_chunks) { struct virtchnl2_config_rx_queues *crq = buf; - crq->vport_id = cpu_to_le32(vport->vport_id); + crq->vport_id = cpu_to_le32(vport_id); crq->num_qinfo = cpu_to_le16(num_chunks); memcpy(crq->qinfo, pos, num_chunks * sizeof(*crq->qinfo)); @@ -1979,6 +2027,7 @@ static int idpf_send_config_rx_queue_set_msg(const struct idpf_queue_set *qs) { struct virtchnl2_rxq_info *qi __free(kfree) = NULL; struct idpf_chunked_msg_params params = { + .vport_id = qs->vport_id, .vc_op = VIRTCHNL2_OP_CONFIG_RX_QUEUES, .prepare_msg = idpf_prepare_cfg_rxqs_msg, .config_sz = sizeof(struct virtchnl2_config_rx_queues), @@ -1993,36 +2042,40 @@ static int idpf_send_config_rx_queue_set_msg(const struct idpf_queue_set *qs) for (u32 i = 0; i < qs->num; i++) { if (qs->qs[i].type == VIRTCHNL2_QUEUE_TYPE_RX) - idpf_fill_rxq_config_chunk(qs->vport, qs->qs[i].rxq, + idpf_fill_rxq_config_chunk(qs->qv_rsrc, qs->qs[i].rxq, &qi[params.num_chunks++]); else if (qs->qs[i].type == VIRTCHNL2_QUEUE_TYPE_RX_BUFFER) - idpf_fill_bufq_config_chunk(qs->vport, qs->qs[i].bufq, + idpf_fill_bufq_config_chunk(qs->qv_rsrc, qs->qs[i].bufq, &qi[params.num_chunks++]); } - return idpf_send_chunked_msg(qs->vport, ¶ms); + return idpf_send_chunked_msg(qs->adapter, ¶ms); } /** * idpf_send_config_rx_queues_msg - send virtchnl config Rx queues message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message + * @rsrc: pointer to queue and vector resources + * @vport_id: vport identifier used while preparing the virtchnl message * * Return: 0 on success, -errno on failure. */ -static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) +static int idpf_send_config_rx_queues_msg(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id) { - bool splitq = idpf_is_queue_model_split(vport->rxq_model); + bool splitq = idpf_is_queue_model_split(rsrc->rxq_model); struct idpf_queue_set *qs __free(kfree) = NULL; - u32 totqs = vport->num_rxq + vport->num_bufq; + u32 totqs = rsrc->num_rxq + rsrc->num_bufq; u32 k = 0; - qs = idpf_alloc_queue_set(vport, totqs); + qs = idpf_alloc_queue_set(adapter, rsrc, vport_id, totqs); if (!qs) return -ENOMEM; /* Populate the queue info buffer with all queue context info */ - for (u32 i = 0; i < vport->num_rxq_grp; i++) { - const struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (u32 i = 0; i < rsrc->num_rxq_grp; i++) { + const struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u32 num_rxq; if (!splitq) { @@ -2030,7 +2083,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) goto rxq; } - for (u32 j = 0; j < vport->num_bufqs_per_qgrp; j++) { + for (u32 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { qs->qs[k].type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; qs->qs[k++].bufq = &rx_qgrp->splitq.bufq_sets[j].bufq; } @@ -2059,7 +2112,7 @@ rxq: /** * idpf_prepare_ena_dis_qs_msg - prepare message to enable/disable selected * queues - * @vport: virtual port data structure + * @vport_id: ID of virtual port queues are associated with * @buf: buffer containing the message * @pos: pointer to the first chunk describing the queue * @num_chunks: number of chunks in the message @@ -2069,13 +2122,12 @@ rxq: * * Return: the total size of the prepared message. 
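The queue-set machinery scales down to a single queue as well; disabling one Rx queue through the per-set helper defined below could look like this (illustrative caller built only from helpers introduced in this patch):

	struct idpf_queue_set *qs __free(kfree) = NULL;

	/* Illustrative: a one-element set carrying a single Rx queue */
	qs = idpf_alloc_queue_set(vport->adapter, &vport->dflt_qv_rsrc,
				  vport->vport_id, 1);
	if (!qs)
		return -ENOMEM;

	qs->qs[0].type = VIRTCHNL2_QUEUE_TYPE_RX;
	qs->qs[0].rxq = rxq;

	return idpf_send_ena_dis_queue_set_msg(qs, false);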
*/ -static u32 idpf_prepare_ena_dis_qs_msg(const struct idpf_vport *vport, - void *buf, const void *pos, +static u32 idpf_prepare_ena_dis_qs_msg(u32 vport_id, void *buf, const void *pos, u32 num_chunks) { struct virtchnl2_del_ena_dis_queues *eq = buf; - eq->vport_id = cpu_to_le32(vport->vport_id); + eq->vport_id = cpu_to_le32(vport_id); eq->chunks.num_chunks = cpu_to_le16(num_chunks); memcpy(eq->chunks.chunks, pos, num_chunks * sizeof(*eq->chunks.chunks)); @@ -2100,6 +2152,7 @@ static int idpf_send_ena_dis_queue_set_msg(const struct idpf_queue_set *qs, { struct virtchnl2_queue_chunk *qc __free(kfree) = NULL; struct idpf_chunked_msg_params params = { + .vport_id = qs->vport_id, .vc_op = en ? VIRTCHNL2_OP_ENABLE_QUEUES : VIRTCHNL2_OP_DISABLE_QUEUES, .prepare_msg = idpf_prepare_ena_dis_qs_msg, @@ -2141,34 +2194,38 @@ static int idpf_send_ena_dis_queue_set_msg(const struct idpf_queue_set *qs, qc[i].start_queue_id = cpu_to_le32(qid); } - return idpf_send_chunked_msg(qs->vport, ¶ms); + return idpf_send_chunked_msg(qs->adapter, ¶ms); } /** * idpf_send_ena_dis_queues_msg - send virtchnl enable or disable queues * message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message + * @rsrc: pointer to queue and vector resources + * @vport_id: vport identifier used while preparing the virtchnl message * @en: whether to enable or disable queues * * Return: 0 on success, -errno on failure. */ -static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool en) +static int idpf_send_ena_dis_queues_msg(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id, bool en) { struct idpf_queue_set *qs __free(kfree) = NULL; u32 num_txq, num_q, k = 0; bool split; - num_txq = vport->num_txq + vport->num_complq; - num_q = num_txq + vport->num_rxq + vport->num_bufq; + num_txq = rsrc->num_txq + rsrc->num_complq; + num_q = num_txq + rsrc->num_rxq + rsrc->num_bufq; - qs = idpf_alloc_queue_set(vport, num_q); + qs = idpf_alloc_queue_set(adapter, rsrc, vport_id, num_q); if (!qs) return -ENOMEM; - split = idpf_is_queue_model_split(vport->txq_model); + split = idpf_is_queue_model_split(rsrc->txq_model); - for (u32 i = 0; i < vport->num_txq_grp; i++) { - const struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (u32 i = 0; i < rsrc->num_txq_grp; i++) { + const struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; for (u32 j = 0; j < tx_qgrp->num_txq; j++) { qs->qs[k].type = VIRTCHNL2_QUEUE_TYPE_TX; @@ -2185,10 +2242,10 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool en) if (k != num_txq) return -EINVAL; - split = idpf_is_queue_model_split(vport->rxq_model); + split = idpf_is_queue_model_split(rsrc->rxq_model); - for (u32 i = 0; i < vport->num_rxq_grp; i++) { - const struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (u32 i = 0; i < rsrc->num_rxq_grp; i++) { + const struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u32 num_rxq; if (split) @@ -2209,7 +2266,7 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool en) if (!split) continue; - for (u32 j = 0; j < vport->num_bufqs_per_qgrp; j++) { + for (u32 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) { qs->qs[k].type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; qs->qs[k++].bufq = &rx_qgrp->splitq.bufq_sets[j].bufq; } @@ -2224,7 +2281,7 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool en) /** * idpf_prep_map_unmap_queue_set_vector_msg - prepare message to map or unmap * queue set to the interrupt vector - * @vport: virtual port data 
structure + * @vport_id: ID of virtual port queues are associated with * @buf: buffer containing the message * @pos: pointer to the first chunk describing the vector mapping * @num_chunks: number of chunks in the message @@ -2235,13 +2292,12 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool en) * Return: the total size of the prepared message. */ static u32 -idpf_prep_map_unmap_queue_set_vector_msg(const struct idpf_vport *vport, - void *buf, const void *pos, - u32 num_chunks) +idpf_prep_map_unmap_queue_set_vector_msg(u32 vport_id, void *buf, + const void *pos, u32 num_chunks) { struct virtchnl2_queue_vector_maps *vqvm = buf; - vqvm->vport_id = cpu_to_le32(vport->vport_id); + vqvm->vport_id = cpu_to_le32(vport_id); vqvm->num_qv_maps = cpu_to_le16(num_chunks); memcpy(vqvm->qv_maps, pos, num_chunks * sizeof(*vqvm->qv_maps)); @@ -2262,6 +2318,7 @@ idpf_send_map_unmap_queue_set_vector_msg(const struct idpf_queue_set *qs, { struct virtchnl2_queue_vector *vqv __free(kfree) = NULL; struct idpf_chunked_msg_params params = { + .vport_id = qs->vport_id, .vc_op = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR : VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR, .prepare_msg = idpf_prep_map_unmap_queue_set_vector_msg, @@ -2277,7 +2334,7 @@ idpf_send_map_unmap_queue_set_vector_msg(const struct idpf_queue_set *qs, params.chunks = vqv; - split = idpf_is_queue_model_split(qs->vport->txq_model); + split = idpf_is_queue_model_split(qs->qv_rsrc->txq_model); for (u32 i = 0; i < qs->num; i++) { const struct idpf_queue_ptr *q = &qs->qs[i]; @@ -2299,7 +2356,7 @@ idpf_send_map_unmap_queue_set_vector_msg(const struct idpf_queue_set *qs, v_idx = vec->v_idx; itr_idx = vec->rx_itr_idx; } else { - v_idx = qs->vport->noirq_v_idx; + v_idx = qs->qv_rsrc->noirq_v_idx; itr_idx = VIRTCHNL2_ITR_IDX_0; } break; @@ -2319,7 +2376,7 @@ idpf_send_map_unmap_queue_set_vector_msg(const struct idpf_queue_set *qs, v_idx = vec->v_idx; itr_idx = vec->tx_itr_idx; } else { - v_idx = qs->vport->noirq_v_idx; + v_idx = qs->qv_rsrc->noirq_v_idx; itr_idx = VIRTCHNL2_ITR_IDX_1; } break; @@ -2332,29 +2389,33 @@ idpf_send_map_unmap_queue_set_vector_msg(const struct idpf_queue_set *qs, vqv[i].itr_idx = cpu_to_le32(itr_idx); } - return idpf_send_chunked_msg(qs->vport, ¶ms); + return idpf_send_chunked_msg(qs->adapter, ¶ms); } /** * idpf_send_map_unmap_queue_vector_msg - send virtchnl map or unmap queue * vector message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message + * @rsrc: pointer to queue and vector resources + * @vport_id: vport identifier used while preparing the virtchnl message * @map: true for map and false for unmap * * Return: 0 on success, -errno on failure. 
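The map/unmap hunk above picks the interrupt vector per queue: queues serviced by an IRQ use their own q_vector, while polled queues fall back to the shared no-IRQ vector. A small self-contained sketch of that two-way choice, with illustrative types and a 0 standing in for VIRTCHNL2_ITR_IDX_0:

#include <stdint.h>
#include <stdio.h>

struct q_vector { uint16_t v_idx; uint32_t rx_itr_idx; };

static void pick_vector(const struct q_vector *vec, uint16_t noirq_v_idx,
                        uint16_t *v_idx, uint32_t *itr_idx)
{
        if (vec) {                      /* queue serviced by an interrupt */
                *v_idx = vec->v_idx;
                *itr_idx = vec->rx_itr_idx;
        } else {                        /* polled queue: shared vector */
                *v_idx = noirq_v_idx;
                *itr_idx = 0;
        }
}

int main(void)
{
        struct q_vector vec = { .v_idx = 3, .rx_itr_idx = 1 };
        uint16_t v;
        uint32_t itr;

        pick_vector(&vec, 7, &v, &itr);
        printf("with vector: %u/%u\n", v, itr);
        pick_vector(NULL, 7, &v, &itr);
        printf("no-IRQ fallback: %u/%u\n", v, itr);
        return 0;
}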
*/ -int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) +int idpf_send_map_unmap_queue_vector_msg(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id, bool map) { struct idpf_queue_set *qs __free(kfree) = NULL; - u32 num_q = vport->num_txq + vport->num_rxq; + u32 num_q = rsrc->num_txq + rsrc->num_rxq; u32 k = 0; - qs = idpf_alloc_queue_set(vport, num_q); + qs = idpf_alloc_queue_set(adapter, rsrc, vport_id, num_q); if (!qs) return -ENOMEM; - for (u32 i = 0; i < vport->num_txq_grp; i++) { - const struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (u32 i = 0; i < rsrc->num_txq_grp; i++) { + const struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; for (u32 j = 0; j < tx_qgrp->num_txq; j++) { qs->qs[k].type = VIRTCHNL2_QUEUE_TYPE_TX; @@ -2362,14 +2423,14 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) } } - if (k != vport->num_txq) + if (k != rsrc->num_txq) return -EINVAL; - for (u32 i = 0; i < vport->num_rxq_grp; i++) { - const struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (u32 i = 0; i < rsrc->num_rxq_grp; i++) { + const struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u32 num_rxq; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) num_rxq = rx_qgrp->splitq.num_rxq_sets; else num_rxq = rx_qgrp->singleq.num_rxq; @@ -2377,7 +2438,7 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) for (u32 j = 0; j < num_rxq; j++) { qs->qs[k].type = VIRTCHNL2_QUEUE_TYPE_RX; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) qs->qs[k++].rxq = &rx_qgrp->splitq.rxq_sets[j]->rxq; else @@ -2453,7 +2514,9 @@ int idpf_send_config_queue_set_msg(const struct idpf_queue_set *qs) */ int idpf_send_enable_queues_msg(struct idpf_vport *vport) { - return idpf_send_ena_dis_queues_msg(vport, true); + return idpf_send_ena_dis_queues_msg(vport->adapter, + &vport->dflt_qv_rsrc, + vport->vport_id, true); } /** @@ -2467,7 +2530,9 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport) { int err; - err = idpf_send_ena_dis_queues_msg(vport, false); + err = idpf_send_ena_dis_queues_msg(vport->adapter, + &vport->dflt_qv_rsrc, + vport->vport_id, false); if (err) return err; @@ -2482,104 +2547,96 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport) * @num_chunks: number of chunks to copy */ static void idpf_convert_reg_to_queue_chunks(struct virtchnl2_queue_chunk *dchunks, - struct virtchnl2_queue_reg_chunk *schunks, + struct idpf_queue_id_reg_chunk *schunks, u16 num_chunks) { u16 i; for (i = 0; i < num_chunks; i++) { - dchunks[i].type = schunks[i].type; - dchunks[i].start_queue_id = schunks[i].start_queue_id; - dchunks[i].num_queues = schunks[i].num_queues; + dchunks[i].type = cpu_to_le32(schunks[i].type); + dchunks[i].start_queue_id = cpu_to_le32(schunks[i].start_queue_id); + dchunks[i].num_queues = cpu_to_le32(schunks[i].num_queues); } } /** * idpf_send_delete_queues_msg - send delete queues virtchnl message - * @vport: Virtual port private data structure + * @adapter: adapter pointer used to send virtchnl message + * @chunks: queue ids received over mailbox + * @vport_id: vport identifier used while preparing the virtchnl message * - * Will send delete queues virtchnl message. Return 0 on success, negative on - * failure. + * Return: 0 on success, negative on failure. 
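idpf_convert_reg_to_queue_chunks() gains cpu_to_le32() calls above because the driver-side idpf_queue_id_reg_chunk copy is now kept in CPU order and converted only at the wire boundary. A userspace analogue, where glibc's htole32() plays the role of cpu_to_le32() (an assumption of this sketch, not code from the patch):

#include <endian.h>
#include <stdint.h>
#include <stdio.h>

struct soft_chunk { uint32_t type, start_qid, num_queues; };  /* CPU order */
struct wire_chunk { uint32_t type, start_qid, num_queues; };  /* LE wire */

int main(void)
{
        struct soft_chunk s = { 1, 64, 16 };
        struct wire_chunk w = {
                .type       = htole32(s.type),
                .start_qid  = htole32(s.start_qid),
                .num_queues = htole32(s.num_queues),
        };

        printf("wire start_qid: 0x%08x\n", (unsigned int)w.start_qid);
        return 0;
}

Keeping the soft copy in CPU order lets every consumer except the final message builder drop its le32_to_cpu() calls, which is exactly the simplification visible in idpf_vport_get_queue_ids() further down.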
*/ -int idpf_send_delete_queues_msg(struct idpf_vport *vport) +int idpf_send_delete_queues_msg(struct idpf_adapter *adapter, + struct idpf_queue_id_reg_info *chunks, + u32 vport_id) { struct virtchnl2_del_ena_dis_queues *eq __free(kfree) = NULL; - struct virtchnl2_create_vport *vport_params; - struct virtchnl2_queue_reg_chunks *chunks; struct idpf_vc_xn_params xn_params = {}; - struct idpf_vport_config *vport_config; - u16 vport_idx = vport->idx; ssize_t reply_sz; u16 num_chunks; int buf_size; - vport_config = vport->adapter->vport_config[vport_idx]; - if (vport_config->req_qs_chunks) { - chunks = &vport_config->req_qs_chunks->chunks; - } else { - vport_params = vport->adapter->vport_params_recvd[vport_idx]; - chunks = &vport_params->chunks; - } - - num_chunks = le16_to_cpu(chunks->num_chunks); + num_chunks = chunks->num_chunks; buf_size = struct_size(eq, chunks.chunks, num_chunks); eq = kzalloc(buf_size, GFP_KERNEL); if (!eq) return -ENOMEM; - eq->vport_id = cpu_to_le32(vport->vport_id); + eq->vport_id = cpu_to_le32(vport_id); eq->chunks.num_chunks = cpu_to_le16(num_chunks); - idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->chunks, + idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->queue_chunks, num_chunks); xn_params.vc_op = VIRTCHNL2_OP_DEL_QUEUES; xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; xn_params.send_buf.iov_base = eq; xn_params.send_buf.iov_len = buf_size; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); return reply_sz < 0 ? reply_sz : 0; } /** * idpf_send_config_queues_msg - Send config queues virtchnl message - * @vport: Virtual port private data structure + * @adapter: adapter pointer used to send virtchnl message + * @rsrc: pointer to queue and vector resources + * @vport_id: vport identifier used while preparing the virtchnl message * - * Will send config queues virtchnl message. Returns 0 on success, negative on - * failure. + * Return: 0 on success, negative on failure. */ -int idpf_send_config_queues_msg(struct idpf_vport *vport) +int idpf_send_config_queues_msg(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id) { int err; - err = idpf_send_config_tx_queues_msg(vport); + err = idpf_send_config_tx_queues_msg(adapter, rsrc, vport_id); if (err) return err; - return idpf_send_config_rx_queues_msg(vport); + return idpf_send_config_rx_queues_msg(adapter, rsrc, vport_id); } /** * idpf_send_add_queues_msg - Send virtchnl add queues message - * @vport: Virtual port private data structure - * @num_tx_q: number of transmit queues - * @num_complq: number of transmit completion queues - * @num_rx_q: number of receive queues - * @num_rx_bufq: number of receive buffer queues + * @adapter: adapter pointer used to send virtchnl message + * @vport_config: vport persistent structure to store the queue chunk info + * @rsrc: pointer to queue and vector resources + * @vport_id: vport identifier used while preparing the virtchnl message * - * Returns 0 on success, negative on failure. vport _MUST_ be const here as - * we should not change any fields within vport itself in this function. + * Return: 0 on success, negative on failure. 
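The delete-queues message is a fixed header followed by a flexible array of chunks, sized with struct_size(). A minimal userspace model of that allocation pattern; note the kernel helper additionally saturates on arithmetic overflow, which the plain expression below does not:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct chunk { uint32_t type, start_qid, num_queues; };
struct del_msg {
        uint32_t vport_id;
        uint16_t num_chunks;
        struct chunk chunks[];          /* flexible array member */
};

int main(void)
{
        uint16_t n = 8;
        size_t buf_size = sizeof(struct del_msg) + n * sizeof(struct chunk);
        struct del_msg *eq = calloc(1, buf_size);

        if (!eq)
                return 1;
        eq->num_chunks = n;
        printf("message buffer: %zu bytes\n", buf_size);
        free(eq);
        return 0;
}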
*/ -int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, - u16 num_complq, u16 num_rx_q, u16 num_rx_bufq) +int idpf_send_add_queues_msg(struct idpf_adapter *adapter, + struct idpf_vport_config *vport_config, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id) { struct virtchnl2_add_queues *vc_msg __free(kfree) = NULL; struct idpf_vc_xn_params xn_params = {}; - struct idpf_vport_config *vport_config; struct virtchnl2_add_queues aq = {}; - u16 vport_idx = vport->idx; ssize_t reply_sz; int size; @@ -2587,15 +2644,11 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, if (!vc_msg) return -ENOMEM; - vport_config = vport->adapter->vport_config[vport_idx]; - kfree(vport_config->req_qs_chunks); - vport_config->req_qs_chunks = NULL; - - aq.vport_id = cpu_to_le32(vport->vport_id); - aq.num_tx_q = cpu_to_le16(num_tx_q); - aq.num_tx_complq = cpu_to_le16(num_complq); - aq.num_rx_q = cpu_to_le16(num_rx_q); - aq.num_rx_bufq = cpu_to_le16(num_rx_bufq); + aq.vport_id = cpu_to_le32(vport_id); + aq.num_tx_q = cpu_to_le16(rsrc->num_txq); + aq.num_tx_complq = cpu_to_le16(rsrc->num_complq); + aq.num_rx_q = cpu_to_le16(rsrc->num_rxq); + aq.num_rx_bufq = cpu_to_le16(rsrc->num_bufq); xn_params.vc_op = VIRTCHNL2_OP_ADD_QUEUES; xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; @@ -2603,15 +2656,15 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, xn_params.send_buf.iov_len = sizeof(aq); xn_params.recv_buf.iov_base = vc_msg; xn_params.recv_buf.iov_len = IDPF_CTLQ_MAX_BUF_LEN; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); if (reply_sz < 0) return reply_sz; /* compare vc_msg num queues with vport num queues */ - if (le16_to_cpu(vc_msg->num_tx_q) != num_tx_q || - le16_to_cpu(vc_msg->num_rx_q) != num_rx_q || - le16_to_cpu(vc_msg->num_tx_complq) != num_complq || - le16_to_cpu(vc_msg->num_rx_bufq) != num_rx_bufq) + if (le16_to_cpu(vc_msg->num_tx_q) != rsrc->num_txq || + le16_to_cpu(vc_msg->num_rx_q) != rsrc->num_rxq || + le16_to_cpu(vc_msg->num_tx_complq) != rsrc->num_complq || + le16_to_cpu(vc_msg->num_rx_bufq) != rsrc->num_bufq) return -EINVAL; size = struct_size(vc_msg, chunks.chunks, @@ -2619,11 +2672,7 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, if (reply_sz < size) return -EIO; - vport_config->req_qs_chunks = kmemdup(vc_msg, size, GFP_KERNEL); - if (!vport_config->req_qs_chunks) - return -ENOMEM; - - return 0; + return idpf_vport_init_queue_reg_chunks(vport_config, &vc_msg->chunks); } /** @@ -2746,13 +2795,14 @@ int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs) /** * idpf_send_get_stats_msg - Send virtchnl get statistics message - * @vport: vport to get stats for + * @np: netdev private structure + * @port_stats: structure to store the vport statistics * - * Returns 0 on success, negative on failure. + * Return: 0 on success, negative on failure. 
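After VIRTCHNL2_OP_ADD_QUEUES the driver trusts nothing until the reply passes two checks: the echoed queue counts must match what was requested, and reply_sz must cover the reply's own flexible array before the chunks are read. A compact model of those checks, with a made-up 12-byte chunk size and errno values inlined for illustration:

#include <stdint.h>
#include <stdio.h>

struct add_reply { uint16_t num_tx_q, num_rx_q, num_chunks; };

static int validate(const struct add_reply *r, long reply_sz,
                    uint16_t want_tx, uint16_t want_rx)
{
        size_t need = sizeof(*r) + r->num_chunks * 12u;

        if (r->num_tx_q != want_tx || r->num_rx_q != want_rx)
                return -22;             /* -EINVAL: firmware disagreed */
        if (reply_sz < 0 || (size_t)reply_sz < need)
                return -5;              /* -EIO: truncated reply */
        return 0;
}

int main(void)
{
        struct add_reply r = { 4, 4, 2 };

        printf("full reply:  %d\n", validate(&r, sizeof(r) + 24, 4, 4));
        printf("short reply: %d\n", validate(&r, sizeof(r), 4, 4));
        return 0;
}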
*/ -int idpf_send_get_stats_msg(struct idpf_vport *vport) +int idpf_send_get_stats_msg(struct idpf_netdev_priv *np, + struct idpf_port_stats *port_stats) { - struct idpf_netdev_priv *np = netdev_priv(vport->netdev); struct rtnl_link_stats64 *netstats = &np->netstats; struct virtchnl2_vport_stats stats_msg = {}; struct idpf_vc_xn_params xn_params = {}; @@ -2763,7 +2813,7 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport) if (!test_bit(IDPF_VPORT_UP, np->state)) return 0; - stats_msg.vport_id = cpu_to_le32(vport->vport_id); + stats_msg.vport_id = cpu_to_le32(np->vport_id); xn_params.vc_op = VIRTCHNL2_OP_GET_STATS; xn_params.send_buf.iov_base = &stats_msg; @@ -2771,7 +2821,7 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport) xn_params.recv_buf = xn_params.send_buf; xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(np->adapter, &xn_params); if (reply_sz < 0) return reply_sz; if (reply_sz < sizeof(stats_msg)) @@ -2792,7 +2842,7 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport) netstats->rx_dropped = le64_to_cpu(stats_msg.rx_discards); netstats->tx_dropped = le64_to_cpu(stats_msg.tx_discards); - vport->port_stats.vport_stats = stats_msg; + port_stats->vport_stats = stats_msg; spin_unlock_bh(&np->stats_lock); @@ -2800,36 +2850,43 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport) } /** - * idpf_send_get_set_rss_lut_msg - Send virtchnl get or set rss lut message - * @vport: virtual port data structure - * @get: flag to set or get rss look up table + * idpf_send_get_set_rss_lut_msg - Send virtchnl get or set RSS lut message + * @adapter: adapter pointer used to send virtchnl message + * @rss_data: pointer to RSS key and lut info + * @vport_id: vport identifier used while preparing the virtchnl message + * @get: flag to set or get RSS look up table * * When rxhash is disabled, RSS LUT will be configured with zeros. If rxhash * is enabled, the LUT values stored in driver's soft copy will be used to setup * the HW. * - * Returns 0 on success, negative on failure. + * Return: 0 on success, negative on failure. 
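The statistics hunk above copies little-endian wire counters into host-order rtnl counters while holding np->stats_lock. In userspace terms, with le64toh() standing in for le64_to_cpu() and the field list trimmed down to two counters:

#include <endian.h>
#include <stdint.h>
#include <stdio.h>

struct wire_stats { uint64_t rx_bytes, rx_discards; };  /* LE on the wire */
struct host_stats { uint64_t rx_bytes, rx_dropped; };

int main(void)
{
        struct wire_stats w = { htole64(123456), htole64(7) };
        struct host_stats h = {
                .rx_bytes   = le64toh(w.rx_bytes),
                .rx_dropped = le64toh(w.rx_discards),
        };

        printf("rx_bytes=%llu rx_dropped=%llu\n",
               (unsigned long long)h.rx_bytes,
               (unsigned long long)h.rx_dropped);
        return 0;
}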
*/ -int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get) +int idpf_send_get_set_rss_lut_msg(struct idpf_adapter *adapter, + struct idpf_rss_data *rss_data, + u32 vport_id, bool get) { struct virtchnl2_rss_lut *recv_rl __free(kfree) = NULL; struct virtchnl2_rss_lut *rl __free(kfree) = NULL; struct idpf_vc_xn_params xn_params = {}; - struct idpf_rss_data *rss_data; int buf_size, lut_buf_size; + struct idpf_vport *vport; ssize_t reply_sz; bool rxhash_ena; int i; - rss_data = - &vport->adapter->vport_config[vport->idx]->user_config.rss_data; + vport = idpf_vid_to_vport(adapter, vport_id); + if (!vport) + return -EINVAL; + rxhash_ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH); + buf_size = struct_size(rl, lut, rss_data->rss_lut_size); rl = kzalloc(buf_size, GFP_KERNEL); if (!rl) return -ENOMEM; - rl->vport_id = cpu_to_le32(vport->vport_id); + rl->vport_id = cpu_to_le32(vport_id); xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; xn_params.send_buf.iov_base = rl; @@ -2850,7 +2907,7 @@ int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get) xn_params.vc_op = VIRTCHNL2_OP_SET_RSS_LUT; } - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); if (reply_sz < 0) return reply_sz; if (!get) @@ -2882,30 +2939,31 @@ do_memcpy: } /** - * idpf_send_get_set_rss_key_msg - Send virtchnl get or set rss key message - * @vport: virtual port data structure - * @get: flag to set or get rss look up table + * idpf_send_get_set_rss_key_msg - Send virtchnl get or set RSS key message + * @adapter: adapter pointer used to send virtchnl message + * @rss_data: pointer to RSS key and lut info + * @vport_id: vport identifier used while preparing the virtchnl message + * @get: flag to set or get RSS look up table * - * Returns 0 on success, negative on failure + * Return: 0 on success, negative on failure */ -int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get) +int idpf_send_get_set_rss_key_msg(struct idpf_adapter *adapter, + struct idpf_rss_data *rss_data, + u32 vport_id, bool get) { struct virtchnl2_rss_key *recv_rk __free(kfree) = NULL; struct virtchnl2_rss_key *rk __free(kfree) = NULL; struct idpf_vc_xn_params xn_params = {}; - struct idpf_rss_data *rss_data; ssize_t reply_sz; int i, buf_size; u16 key_size; - rss_data = - &vport->adapter->vport_config[vport->idx]->user_config.rss_data; buf_size = struct_size(rk, key_flex, rss_data->rss_key_size); rk = kzalloc(buf_size, GFP_KERNEL); if (!rk) return -ENOMEM; - rk->vport_id = cpu_to_le32(vport->vport_id); + rk->vport_id = cpu_to_le32(vport_id); xn_params.send_buf.iov_base = rk; xn_params.send_buf.iov_len = buf_size; xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; @@ -2925,7 +2983,7 @@ int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get) xn_params.vc_op = VIRTCHNL2_OP_SET_RSS_KEY; } - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); if (reply_sz < 0) return reply_sz; if (!get) @@ -3011,33 +3069,142 @@ static void idpf_finalize_ptype_lookup(struct libeth_rx_pt *ptype) } /** + * idpf_parse_protocol_ids - parse protocol IDs for a given packet type + * @ptype: packet type to parse + * @rx_pt: store the parsed packet type info into + */ +static void idpf_parse_protocol_ids(struct virtchnl2_ptype *ptype, + struct libeth_rx_pt *rx_pt) +{ + struct idpf_ptype_state pstate = {}; + + for (u32 j = 0; j < ptype->proto_id_count; j++) { + u16 id = le16_to_cpu(ptype->proto_id[j]); + + switch (id) { + 
case VIRTCHNL2_PROTO_HDR_GRE: + if (pstate.tunnel_state == IDPF_PTYPE_TUNNEL_IP) { + rx_pt->tunnel_type = + LIBETH_RX_PT_TUNNEL_IP_GRENAT; + pstate.tunnel_state |= + IDPF_PTYPE_TUNNEL_IP_GRENAT; + } + break; + case VIRTCHNL2_PROTO_HDR_MAC: + rx_pt->outer_ip = LIBETH_RX_PT_OUTER_L2; + if (pstate.tunnel_state == IDPF_TUN_IP_GRE) { + rx_pt->tunnel_type = + LIBETH_RX_PT_TUNNEL_IP_GRENAT_MAC; + pstate.tunnel_state |= + IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC; + } + break; + case VIRTCHNL2_PROTO_HDR_IPV4: + idpf_fill_ptype_lookup(rx_pt, &pstate, true, false); + break; + case VIRTCHNL2_PROTO_HDR_IPV6: + idpf_fill_ptype_lookup(rx_pt, &pstate, false, false); + break; + case VIRTCHNL2_PROTO_HDR_IPV4_FRAG: + idpf_fill_ptype_lookup(rx_pt, &pstate, true, true); + break; + case VIRTCHNL2_PROTO_HDR_IPV6_FRAG: + idpf_fill_ptype_lookup(rx_pt, &pstate, false, true); + break; + case VIRTCHNL2_PROTO_HDR_UDP: + rx_pt->inner_prot = LIBETH_RX_PT_INNER_UDP; + break; + case VIRTCHNL2_PROTO_HDR_TCP: + rx_pt->inner_prot = LIBETH_RX_PT_INNER_TCP; + break; + case VIRTCHNL2_PROTO_HDR_SCTP: + rx_pt->inner_prot = LIBETH_RX_PT_INNER_SCTP; + break; + case VIRTCHNL2_PROTO_HDR_ICMP: + rx_pt->inner_prot = LIBETH_RX_PT_INNER_ICMP; + break; + case VIRTCHNL2_PROTO_HDR_PAY: + rx_pt->payload_layer = LIBETH_RX_PT_PAYLOAD_L2; + break; + case VIRTCHNL2_PROTO_HDR_ICMPV6: + case VIRTCHNL2_PROTO_HDR_IPV6_EH: + case VIRTCHNL2_PROTO_HDR_PRE_MAC: + case VIRTCHNL2_PROTO_HDR_POST_MAC: + case VIRTCHNL2_PROTO_HDR_ETHERTYPE: + case VIRTCHNL2_PROTO_HDR_SVLAN: + case VIRTCHNL2_PROTO_HDR_CVLAN: + case VIRTCHNL2_PROTO_HDR_MPLS: + case VIRTCHNL2_PROTO_HDR_MMPLS: + case VIRTCHNL2_PROTO_HDR_PTP: + case VIRTCHNL2_PROTO_HDR_CTRL: + case VIRTCHNL2_PROTO_HDR_LLDP: + case VIRTCHNL2_PROTO_HDR_ARP: + case VIRTCHNL2_PROTO_HDR_ECP: + case VIRTCHNL2_PROTO_HDR_EAPOL: + case VIRTCHNL2_PROTO_HDR_PPPOD: + case VIRTCHNL2_PROTO_HDR_PPPOE: + case VIRTCHNL2_PROTO_HDR_IGMP: + case VIRTCHNL2_PROTO_HDR_AH: + case VIRTCHNL2_PROTO_HDR_ESP: + case VIRTCHNL2_PROTO_HDR_IKE: + case VIRTCHNL2_PROTO_HDR_NATT_KEEP: + case VIRTCHNL2_PROTO_HDR_L2TPV2: + case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL: + case VIRTCHNL2_PROTO_HDR_L2TPV3: + case VIRTCHNL2_PROTO_HDR_GTP: + case VIRTCHNL2_PROTO_HDR_GTP_EH: + case VIRTCHNL2_PROTO_HDR_GTPCV2: + case VIRTCHNL2_PROTO_HDR_GTPC_TEID: + case VIRTCHNL2_PROTO_HDR_GTPU: + case VIRTCHNL2_PROTO_HDR_GTPU_UL: + case VIRTCHNL2_PROTO_HDR_GTPU_DL: + case VIRTCHNL2_PROTO_HDR_ECPRI: + case VIRTCHNL2_PROTO_HDR_VRRP: + case VIRTCHNL2_PROTO_HDR_OSPF: + case VIRTCHNL2_PROTO_HDR_TUN: + case VIRTCHNL2_PROTO_HDR_NVGRE: + case VIRTCHNL2_PROTO_HDR_VXLAN: + case VIRTCHNL2_PROTO_HDR_VXLAN_GPE: + case VIRTCHNL2_PROTO_HDR_GENEVE: + case VIRTCHNL2_PROTO_HDR_NSH: + case VIRTCHNL2_PROTO_HDR_QUIC: + case VIRTCHNL2_PROTO_HDR_PFCP: + case VIRTCHNL2_PROTO_HDR_PFCP_NODE: + case VIRTCHNL2_PROTO_HDR_PFCP_SESSION: + case VIRTCHNL2_PROTO_HDR_RTP: + case VIRTCHNL2_PROTO_HDR_NO_PROTO: + break; + default: + break; + } + } +} + +/** * idpf_send_get_rx_ptype_msg - Send virtchnl for ptype info - * @vport: virtual port data structure + * @adapter: driver specific private structure * - * Returns 0 on success, negative on failure. + * Return: 0 on success, negative on failure. 
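The protocol-ID walk above is effectively a small state machine: an outer IPv4/IPv6 header arms the tunnel state, a following GRE upgrades it to GRENAT, and an inner MAC upgrades it once more. The toy below mirrors only that progression; the header matching, enum values and single-character dispatch are inventions of this sketch, not the driver's logic:

#include <stdio.h>

enum tun_state { TUN_NONE, TUN_IP, TUN_IP_GRENAT, TUN_IP_GRENAT_MAC };

static enum tun_state advance(enum tun_state s, const char *hdr)
{
        if (s == TUN_NONE && hdr[0] == 'I')             /* IPv4/IPv6 */
                return TUN_IP;
        if (s == TUN_IP && hdr[0] == 'G')               /* GRE */
                return TUN_IP_GRENAT;
        if (s == TUN_IP_GRENAT && hdr[0] == 'M')        /* inner MAC */
                return TUN_IP_GRENAT_MAC;
        return s;
}

int main(void)
{
        const char *stack[] = { "IPv4", "GRE", "MAC", "IPv4", "TCP" };
        enum tun_state s = TUN_NONE;

        for (unsigned int i = 0; i < 5; i++)
                s = advance(s, stack[i]);
        printf("final tunnel state: %d\n", s);
        return 0;
}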
*/ -int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) +static int idpf_send_get_rx_ptype_msg(struct idpf_adapter *adapter) { struct virtchnl2_get_ptype_info *get_ptype_info __free(kfree) = NULL; struct virtchnl2_get_ptype_info *ptype_info __free(kfree) = NULL; - struct libeth_rx_pt *ptype_lkup __free(kfree) = NULL; - int max_ptype, ptypes_recvd = 0, ptype_offset; - struct idpf_adapter *adapter = vport->adapter; + struct libeth_rx_pt *singleq_pt_lkup __free(kfree) = NULL; + struct libeth_rx_pt *splitq_pt_lkup __free(kfree) = NULL; struct idpf_vc_xn_params xn_params = {}; + int ptypes_recvd = 0, ptype_offset; + u32 max_ptype = IDPF_RX_MAX_PTYPE; u16 next_ptype_id = 0; ssize_t reply_sz; - int i, j, k; - if (vport->rx_ptype_lkup) - return 0; - - if (idpf_is_queue_model_split(vport->rxq_model)) - max_ptype = IDPF_RX_MAX_PTYPE; - else - max_ptype = IDPF_RX_MAX_BASE_PTYPE; + singleq_pt_lkup = kcalloc(IDPF_RX_MAX_BASE_PTYPE, + sizeof(*singleq_pt_lkup), GFP_KERNEL); + if (!singleq_pt_lkup) + return -ENOMEM; - ptype_lkup = kcalloc(max_ptype, sizeof(*ptype_lkup), GFP_KERNEL); - if (!ptype_lkup) + splitq_pt_lkup = kcalloc(max_ptype, sizeof(*splitq_pt_lkup), GFP_KERNEL); + if (!splitq_pt_lkup) return -ENOMEM; get_ptype_info = kzalloc(sizeof(*get_ptype_info), GFP_KERNEL); @@ -3078,175 +3245,85 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) ptype_offset = IDPF_RX_PTYPE_HDR_SZ; - for (i = 0; i < le16_to_cpu(ptype_info->num_ptypes); i++) { - struct idpf_ptype_state pstate = { }; + for (u16 i = 0; i < le16_to_cpu(ptype_info->num_ptypes); i++) { + struct libeth_rx_pt rx_pt = {}; struct virtchnl2_ptype *ptype; - u16 id; + u16 pt_10, pt_8; ptype = (struct virtchnl2_ptype *) ((u8 *)ptype_info + ptype_offset); + pt_10 = le16_to_cpu(ptype->ptype_id_10); + pt_8 = ptype->ptype_id_8; + ptype_offset += IDPF_GET_PTYPE_SIZE(ptype); if (ptype_offset > IDPF_CTLQ_MAX_BUF_LEN) return -EINVAL; /* 0xFFFF indicates end of ptypes */ - if (le16_to_cpu(ptype->ptype_id_10) == - IDPF_INVALID_PTYPE_ID) + if (pt_10 == IDPF_INVALID_PTYPE_ID) goto out; + if (pt_10 >= max_ptype) + return -EINVAL; - if (idpf_is_queue_model_split(vport->rxq_model)) - k = le16_to_cpu(ptype->ptype_id_10); - else - k = ptype->ptype_id_8; - - for (j = 0; j < ptype->proto_id_count; j++) { - id = le16_to_cpu(ptype->proto_id[j]); - switch (id) { - case VIRTCHNL2_PROTO_HDR_GRE: - if (pstate.tunnel_state == - IDPF_PTYPE_TUNNEL_IP) { - ptype_lkup[k].tunnel_type = - LIBETH_RX_PT_TUNNEL_IP_GRENAT; - pstate.tunnel_state |= - IDPF_PTYPE_TUNNEL_IP_GRENAT; - } - break; - case VIRTCHNL2_PROTO_HDR_MAC: - ptype_lkup[k].outer_ip = - LIBETH_RX_PT_OUTER_L2; - if (pstate.tunnel_state == - IDPF_TUN_IP_GRE) { - ptype_lkup[k].tunnel_type = - LIBETH_RX_PT_TUNNEL_IP_GRENAT_MAC; - pstate.tunnel_state |= - IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC; - } - break; - case VIRTCHNL2_PROTO_HDR_IPV4: - idpf_fill_ptype_lookup(&ptype_lkup[k], - &pstate, true, - false); - break; - case VIRTCHNL2_PROTO_HDR_IPV6: - idpf_fill_ptype_lookup(&ptype_lkup[k], - &pstate, false, - false); - break; - case VIRTCHNL2_PROTO_HDR_IPV4_FRAG: - idpf_fill_ptype_lookup(&ptype_lkup[k], - &pstate, true, - true); - break; - case VIRTCHNL2_PROTO_HDR_IPV6_FRAG: - idpf_fill_ptype_lookup(&ptype_lkup[k], - &pstate, false, - true); - break; - case VIRTCHNL2_PROTO_HDR_UDP: - ptype_lkup[k].inner_prot = - LIBETH_RX_PT_INNER_UDP; - break; - case VIRTCHNL2_PROTO_HDR_TCP: - ptype_lkup[k].inner_prot = - LIBETH_RX_PT_INNER_TCP; - break; - case VIRTCHNL2_PROTO_HDR_SCTP: - ptype_lkup[k].inner_prot = - 
LIBETH_RX_PT_INNER_SCTP; - break; - case VIRTCHNL2_PROTO_HDR_ICMP: - ptype_lkup[k].inner_prot = - LIBETH_RX_PT_INNER_ICMP; - break; - case VIRTCHNL2_PROTO_HDR_PAY: - ptype_lkup[k].payload_layer = - LIBETH_RX_PT_PAYLOAD_L2; - break; - case VIRTCHNL2_PROTO_HDR_ICMPV6: - case VIRTCHNL2_PROTO_HDR_IPV6_EH: - case VIRTCHNL2_PROTO_HDR_PRE_MAC: - case VIRTCHNL2_PROTO_HDR_POST_MAC: - case VIRTCHNL2_PROTO_HDR_ETHERTYPE: - case VIRTCHNL2_PROTO_HDR_SVLAN: - case VIRTCHNL2_PROTO_HDR_CVLAN: - case VIRTCHNL2_PROTO_HDR_MPLS: - case VIRTCHNL2_PROTO_HDR_MMPLS: - case VIRTCHNL2_PROTO_HDR_PTP: - case VIRTCHNL2_PROTO_HDR_CTRL: - case VIRTCHNL2_PROTO_HDR_LLDP: - case VIRTCHNL2_PROTO_HDR_ARP: - case VIRTCHNL2_PROTO_HDR_ECP: - case VIRTCHNL2_PROTO_HDR_EAPOL: - case VIRTCHNL2_PROTO_HDR_PPPOD: - case VIRTCHNL2_PROTO_HDR_PPPOE: - case VIRTCHNL2_PROTO_HDR_IGMP: - case VIRTCHNL2_PROTO_HDR_AH: - case VIRTCHNL2_PROTO_HDR_ESP: - case VIRTCHNL2_PROTO_HDR_IKE: - case VIRTCHNL2_PROTO_HDR_NATT_KEEP: - case VIRTCHNL2_PROTO_HDR_L2TPV2: - case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL: - case VIRTCHNL2_PROTO_HDR_L2TPV3: - case VIRTCHNL2_PROTO_HDR_GTP: - case VIRTCHNL2_PROTO_HDR_GTP_EH: - case VIRTCHNL2_PROTO_HDR_GTPCV2: - case VIRTCHNL2_PROTO_HDR_GTPC_TEID: - case VIRTCHNL2_PROTO_HDR_GTPU: - case VIRTCHNL2_PROTO_HDR_GTPU_UL: - case VIRTCHNL2_PROTO_HDR_GTPU_DL: - case VIRTCHNL2_PROTO_HDR_ECPRI: - case VIRTCHNL2_PROTO_HDR_VRRP: - case VIRTCHNL2_PROTO_HDR_OSPF: - case VIRTCHNL2_PROTO_HDR_TUN: - case VIRTCHNL2_PROTO_HDR_NVGRE: - case VIRTCHNL2_PROTO_HDR_VXLAN: - case VIRTCHNL2_PROTO_HDR_VXLAN_GPE: - case VIRTCHNL2_PROTO_HDR_GENEVE: - case VIRTCHNL2_PROTO_HDR_NSH: - case VIRTCHNL2_PROTO_HDR_QUIC: - case VIRTCHNL2_PROTO_HDR_PFCP: - case VIRTCHNL2_PROTO_HDR_PFCP_NODE: - case VIRTCHNL2_PROTO_HDR_PFCP_SESSION: - case VIRTCHNL2_PROTO_HDR_RTP: - case VIRTCHNL2_PROTO_HDR_NO_PROTO: - break; - default: - break; - } - } - - idpf_finalize_ptype_lookup(&ptype_lkup[k]); + idpf_parse_protocol_ids(ptype, &rx_pt); + idpf_finalize_ptype_lookup(&rx_pt); + + /* For a given protocol ID stack, the ptype value might + * vary between ptype_id_10 and ptype_id_8. So store + * them separately for splitq and singleq. Also skip + * the repeated ptypes in case of singleq. + */ + splitq_pt_lkup[pt_10] = rx_pt; + if (!singleq_pt_lkup[pt_8].outer_ip) + singleq_pt_lkup[pt_8] = rx_pt; } } out: - vport->rx_ptype_lkup = no_free_ptr(ptype_lkup); + adapter->splitq_pt_lkup = no_free_ptr(splitq_pt_lkup); + adapter->singleq_pt_lkup = no_free_ptr(singleq_pt_lkup); return 0; } /** + * idpf_rel_rx_pt_lkup - release RX ptype lookup table + * @adapter: adapter pointer to get the lookup table + */ +static void idpf_rel_rx_pt_lkup(struct idpf_adapter *adapter) +{ + kfree(adapter->splitq_pt_lkup); + adapter->splitq_pt_lkup = NULL; + + kfree(adapter->singleq_pt_lkup); + adapter->singleq_pt_lkup = NULL; +} + +/** * idpf_send_ena_dis_loopback_msg - Send virtchnl enable/disable loopback * message - * @vport: virtual port data structure + * @adapter: adapter pointer used to send virtchnl message + * @vport_id: vport identifier used while preparing the virtchnl message + * @loopback_ena: flag to enable or disable loopback * - * Returns 0 on success, negative on failure. + * Return: 0 on success, negative on failure. 
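Because the same protocol stack can map to different 10-bit and 8-bit ptype values, the refactored code fills two parallel tables in one pass and lets the first descriptor seen for an 8-bit id win. A stripped-down model of that dual bookkeeping, with stand-in field names for the libeth_rx_pt ones:

#include <stdint.h>
#include <stdio.h>

struct rx_pt { uint8_t outer_ip; uint8_t inner_prot; };

int main(void)
{
        static struct rx_pt splitq[1024];       /* keyed by ptype_id_10 */
        static struct rx_pt singleq[256];       /* keyed by ptype_id_8 */
        struct rx_pt pt = { .outer_ip = 1, .inner_prot = 2 };
        uint16_t pt_10 = 300, pt_8 = 44;

        splitq[pt_10] = pt;                     /* 10-bit id: always stored */
        if (!singleq[pt_8].outer_ip)            /* 8-bit id: first hit wins */
                singleq[pt_8] = pt;

        printf("singleq[%u].inner_prot=%u\n", pt_8, singleq[pt_8].inner_prot);
        return 0;
}

Hoisting both tables into the adapter also lets the ptypes be fetched once at core-init time instead of per vport, which is why idpf_send_get_rx_ptype_msg() becomes static and loses its vport argument.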
*/ -int idpf_send_ena_dis_loopback_msg(struct idpf_vport *vport) +int idpf_send_ena_dis_loopback_msg(struct idpf_adapter *adapter, u32 vport_id, + bool loopback_ena) { struct idpf_vc_xn_params xn_params = {}; struct virtchnl2_loopback loopback; ssize_t reply_sz; - loopback.vport_id = cpu_to_le32(vport->vport_id); - loopback.enable = idpf_is_feature_ena(vport, NETIF_F_LOOPBACK); + loopback.vport_id = cpu_to_le32(vport_id); + loopback.enable = loopback_ena; xn_params.vc_op = VIRTCHNL2_OP_LOOPBACK; xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; xn_params.send_buf.iov_base = &loopback; xn_params.send_buf.iov_len = sizeof(loopback); - reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params); + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); return reply_sz < 0 ? reply_sz : 0; } @@ -3325,7 +3402,7 @@ int idpf_init_dflt_mbx(struct idpf_adapter *adapter) void idpf_deinit_dflt_mbx(struct idpf_adapter *adapter) { if (adapter->hw.arq && adapter->hw.asq) { - idpf_mb_clean(adapter); + idpf_mb_clean(adapter, adapter->hw.asq); idpf_ctlq_deinit(&adapter->hw); } adapter->hw.arq = NULL; @@ -3520,6 +3597,13 @@ restart: goto err_intr_req; } + err = idpf_send_get_rx_ptype_msg(adapter); + if (err) { + dev_err(&adapter->pdev->dev, "failed to get RX ptypes: %d\n", + err); + goto intr_rel; + } + err = idpf_ptp_init(adapter); if (err) pci_err(adapter->pdev, "PTP init failed, err=%pe\n", @@ -3537,6 +3621,8 @@ restart: return 0; +intr_rel: + idpf_intr_rel(adapter); err_intr_req: cancel_delayed_work_sync(&adapter->serv_task); cancel_delayed_work_sync(&adapter->mbx_task); @@ -3591,6 +3677,7 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter) idpf_ptp_release(adapter); idpf_deinit_task(adapter); idpf_idc_deinit_core_aux_device(adapter->cdev_info); + idpf_rel_rx_pt_lkup(adapter); idpf_intr_rel(adapter); if (remove_in_prog) @@ -3613,25 +3700,27 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter) /** * idpf_vport_alloc_vec_indexes - Get relative vector indexes * @vport: virtual port data struct + * @rsrc: pointer to queue and vector resources * * This function requests the vector information required for the vport and * stores the vector indexes received from the 'global vector distribution' * in the vport's queue vectors array. 
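The new intr_rel label in idpf_vc_core_init() follows the usual kernel unwind discipline: each label releases exactly what was acquired after the previous label, so a failure at step N rolls back steps N-1..1 in reverse order. A skeletal illustration, where both helpers are placeholders rather than driver functions:

#include <stdio.h>

static int step(const char *what, int fail)
{
        printf("acquire %s\n", what);
        return fail;
}

static void undo(const char *what)
{
        printf("release %s\n", what);
}

int main(void)
{
        if (step("vectors", 0))
                goto err_vec;
        if (step("ptypes", 1))          /* simulate the new failure point */
                goto err_ptype;
        return 0;

err_ptype:
        undo("vectors");                /* mirrors the added intr_rel label */
err_vec:
        return 1;
}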
* - * Return 0 on success, error on failure + * Return: 0 on success, error on failure */ -int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport) +int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc) { struct idpf_vector_info vec_info; int num_alloc_vecs; u32 req; - vec_info.num_curr_vecs = vport->num_q_vectors; + vec_info.num_curr_vecs = rsrc->num_q_vectors; if (vec_info.num_curr_vecs) vec_info.num_curr_vecs += IDPF_RESERVED_VECS; /* XDPSQs are all bound to the NOIRQ vector from IDPF_RESERVED_VECS */ - req = max(vport->num_txq - vport->num_xdp_txq, vport->num_rxq) + + req = max(rsrc->num_txq - vport->num_xdp_txq, rsrc->num_rxq) + IDPF_RESERVED_VECS; vec_info.num_req_vecs = req; @@ -3639,7 +3728,7 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport) vec_info.index = vport->idx; num_alloc_vecs = idpf_req_rel_vector_indexes(vport->adapter, - vport->q_vector_idxs, + rsrc->q_vector_idxs, &vec_info); if (num_alloc_vecs <= 0) { dev_err(&vport->adapter->pdev->dev, "Vector distribution failed: %d\n", @@ -3647,7 +3736,7 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport) return -EINVAL; } - vport->num_q_vectors = num_alloc_vecs - IDPF_RESERVED_VECS; + rsrc->num_q_vectors = num_alloc_vecs - IDPF_RESERVED_VECS; return 0; } @@ -3658,9 +3747,12 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport) * @max_q: vport max queue info * * Will initialize vport with the info received through MB earlier + * + * Return: 0 on success, negative on failure. */ -void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) +int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) { + struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc; struct idpf_adapter *adapter = vport->adapter; struct virtchnl2_create_vport *vport_msg; struct idpf_vport_config *vport_config; @@ -3674,13 +3766,18 @@ void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) rss_data = &vport_config->user_config.rss_data; vport_msg = adapter->vport_params_recvd[idx]; + err = idpf_vport_init_queue_reg_chunks(vport_config, + &vport_msg->chunks); + if (err) + return err; + vport_config->max_q.max_txq = max_q->max_txq; vport_config->max_q.max_rxq = max_q->max_rxq; vport_config->max_q.max_complq = max_q->max_complq; vport_config->max_q.max_bufq = max_q->max_bufq; - vport->txq_model = le16_to_cpu(vport_msg->txq_model); - vport->rxq_model = le16_to_cpu(vport_msg->rxq_model); + rsrc->txq_model = le16_to_cpu(vport_msg->txq_model); + rsrc->rxq_model = le16_to_cpu(vport_msg->rxq_model); vport->vport_type = le16_to_cpu(vport_msg->vport_type); vport->vport_id = le32_to_cpu(vport_msg->vport_id); @@ -3697,24 +3794,27 @@ void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) idpf_vport_set_hsplit(vport, ETHTOOL_TCP_DATA_SPLIT_ENABLED); - idpf_vport_init_num_qs(vport, vport_msg); - idpf_vport_calc_num_q_desc(vport); - idpf_vport_calc_num_q_groups(vport); - idpf_vport_alloc_vec_indexes(vport); + idpf_vport_init_num_qs(vport, vport_msg, rsrc); + idpf_vport_calc_num_q_desc(vport, rsrc); + idpf_vport_calc_num_q_groups(rsrc); + idpf_vport_alloc_vec_indexes(vport, rsrc); vport->crc_enable = adapter->crc_enable; if (!(vport_msg->vport_flags & cpu_to_le16(VIRTCHNL2_VPORT_UPLINK_PORT))) - return; + return 0; err = idpf_ptp_get_vport_tstamps_caps(vport); if (err) { + /* Do not error on timestamp failure */ pci_dbg(vport->adapter->pdev, "Tx timestamping not supported\n"); - return; + return 0; } INIT_WORK(&vport->tstamp_task, 
idpf_tstamp_task); + + return 0; } /** @@ -3773,21 +3873,21 @@ int idpf_get_vec_ids(struct idpf_adapter *adapter, * Returns number of ids filled */ static int idpf_vport_get_queue_ids(u32 *qids, int num_qids, u16 q_type, - struct virtchnl2_queue_reg_chunks *chunks) + struct idpf_queue_id_reg_info *chunks) { - u16 num_chunks = le16_to_cpu(chunks->num_chunks); + u16 num_chunks = chunks->num_chunks; u32 num_q_id_filled = 0, i; u32 start_q_id, num_q; while (num_chunks--) { - struct virtchnl2_queue_reg_chunk *chunk; + struct idpf_queue_id_reg_chunk *chunk; - chunk = &chunks->chunks[num_chunks]; - if (le32_to_cpu(chunk->type) != q_type) + chunk = &chunks->queue_chunks[num_chunks]; + if (chunk->type != q_type) continue; - num_q = le32_to_cpu(chunk->num_queues); - start_q_id = le32_to_cpu(chunk->start_queue_id); + num_q = chunk->num_queues; + start_q_id = chunk->start_queue_id; for (i = 0; i < num_q; i++) { if ((num_q_id_filled + i) < num_qids) { @@ -3806,6 +3906,7 @@ static int idpf_vport_get_queue_ids(u32 *qids, int num_qids, u16 q_type, /** * __idpf_vport_queue_ids_init - Initialize queue ids from Mailbox parameters * @vport: virtual port for which the queues ids are initialized + * @rsrc: pointer to queue and vector resources * @qids: queue ids * @num_qids: number of queue ids * @q_type: type of queue @@ -3814,6 +3915,7 @@ static int idpf_vport_get_queue_ids(u32 *qids, int num_qids, u16 q_type, * parameters. Returns number of queue ids initialized. */ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, const u32 *qids, int num_qids, u32 q_type) @@ -3822,19 +3924,19 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport, switch (q_type) { case VIRTCHNL2_QUEUE_TYPE_TX: - for (i = 0; i < vport->num_txq_grp; i++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (i = 0; i < rsrc->num_txq_grp; i++) { + struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; for (j = 0; j < tx_qgrp->num_txq && k < num_qids; j++, k++) tx_qgrp->txqs[j]->q_id = qids[k]; } break; case VIRTCHNL2_QUEUE_TYPE_RX: - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u16 num_rxq; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) num_rxq = rx_qgrp->splitq.num_rxq_sets; else num_rxq = rx_qgrp->singleq.num_rxq; @@ -3842,7 +3944,7 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport, for (j = 0; j < num_rxq && k < num_qids; j++, k++) { struct idpf_rx_queue *q; - if (idpf_is_queue_model_split(vport->rxq_model)) + if (idpf_is_queue_model_split(rsrc->rxq_model)) q = &rx_qgrp->splitq.rxq_sets[j]->rxq; else q = rx_qgrp->singleq.rxqs[j]; @@ -3851,16 +3953,16 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport, } break; case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: - for (i = 0; i < vport->num_txq_grp && k < num_qids; i++, k++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (i = 0; i < rsrc->num_txq_grp && k < num_qids; i++, k++) { + struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i]; tx_qgrp->complq->q_id = qids[k]; } break; case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: - for (i = 0; i < vport->num_rxq_grp; i++) { - struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; - u8 num_bufqs = vport->num_bufqs_per_qgrp; + for (i = 0; i < rsrc->num_rxq_grp; i++) { + struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; + u8 num_bufqs = 
rsrc->num_bufqs_per_qgrp; for (j = 0; j < num_bufqs && k < num_qids; j++, k++) { struct idpf_buf_queue *q; @@ -3880,30 +3982,21 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport, /** * idpf_vport_queue_ids_init - Initialize queue ids from Mailbox parameters * @vport: virtual port for which the queues ids are initialized + * @rsrc: pointer to queue and vector resources + * @chunks: queue ids received over mailbox * * Will initialize all queue ids with ids received as mailbox parameters. - * Returns 0 on success, negative if all the queues are not initialized. + * + * Return: 0 on success, negative if all the queues are not initialized. */ -int idpf_vport_queue_ids_init(struct idpf_vport *vport) +int idpf_vport_queue_ids_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, + struct idpf_queue_id_reg_info *chunks) { - struct virtchnl2_create_vport *vport_params; - struct virtchnl2_queue_reg_chunks *chunks; - struct idpf_vport_config *vport_config; - u16 vport_idx = vport->idx; int num_ids, err = 0; u16 q_type; u32 *qids; - vport_config = vport->adapter->vport_config[vport_idx]; - if (vport_config->req_qs_chunks) { - struct virtchnl2_add_queues *vc_aq = - (struct virtchnl2_add_queues *)vport_config->req_qs_chunks; - chunks = &vc_aq->chunks; - } else { - vport_params = vport->adapter->vport_params_recvd[vport_idx]; - chunks = &vport_params->chunks; - } - qids = kcalloc(IDPF_MAX_QIDS, sizeof(u32), GFP_KERNEL); if (!qids) return -ENOMEM; @@ -3911,13 +4004,13 @@ int idpf_vport_queue_ids_init(struct idpf_vport *vport) num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS, VIRTCHNL2_QUEUE_TYPE_TX, chunks); - if (num_ids < vport->num_txq) { + if (num_ids < rsrc->num_txq) { err = -EINVAL; goto mem_rel; } - num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids, + num_ids = __idpf_vport_queue_ids_init(vport, rsrc, qids, num_ids, VIRTCHNL2_QUEUE_TYPE_TX); - if (num_ids < vport->num_txq) { + if (num_ids < rsrc->num_txq) { err = -EINVAL; goto mem_rel; } @@ -3925,44 +4018,46 @@ int idpf_vport_queue_ids_init(struct idpf_vport *vport) num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS, VIRTCHNL2_QUEUE_TYPE_RX, chunks); - if (num_ids < vport->num_rxq) { + if (num_ids < rsrc->num_rxq) { err = -EINVAL; goto mem_rel; } - num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids, + num_ids = __idpf_vport_queue_ids_init(vport, rsrc, qids, num_ids, VIRTCHNL2_QUEUE_TYPE_RX); - if (num_ids < vport->num_rxq) { + if (num_ids < rsrc->num_rxq) { err = -EINVAL; goto mem_rel; } - if (!idpf_is_queue_model_split(vport->txq_model)) + if (!idpf_is_queue_model_split(rsrc->txq_model)) goto check_rxq; q_type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS, q_type, chunks); - if (num_ids < vport->num_complq) { + if (num_ids < rsrc->num_complq) { err = -EINVAL; goto mem_rel; } - num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids, q_type); - if (num_ids < vport->num_complq) { + num_ids = __idpf_vport_queue_ids_init(vport, rsrc, qids, + num_ids, q_type); + if (num_ids < rsrc->num_complq) { err = -EINVAL; goto mem_rel; } check_rxq: - if (!idpf_is_queue_model_split(vport->rxq_model)) + if (!idpf_is_queue_model_split(rsrc->rxq_model)) goto mem_rel; q_type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS, q_type, chunks); - if (num_ids < vport->num_bufq) { + if (num_ids < rsrc->num_bufq) { err = -EINVAL; goto mem_rel; } - num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids, q_type); - if (num_ids < 
vport->num_bufq) + num_ids = __idpf_vport_queue_ids_init(vport, rsrc, qids, + num_ids, q_type); + if (num_ids < rsrc->num_bufq) err = -EINVAL; mem_rel: @@ -3974,23 +4069,24 @@ mem_rel: /** * idpf_vport_adjust_qs - Adjust to new requested queues * @vport: virtual port data struct + * @rsrc: pointer to queue and vector resources * * Renegotiate queues. Returns 0 on success, negative on failure. */ -int idpf_vport_adjust_qs(struct idpf_vport *vport) +int idpf_vport_adjust_qs(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc) { struct virtchnl2_create_vport vport_msg; int err; - vport_msg.txq_model = cpu_to_le16(vport->txq_model); - vport_msg.rxq_model = cpu_to_le16(vport->rxq_model); + vport_msg.txq_model = cpu_to_le16(rsrc->txq_model); + vport_msg.rxq_model = cpu_to_le16(rsrc->rxq_model); err = idpf_vport_calc_total_qs(vport->adapter, vport->idx, &vport_msg, NULL); if (err) return err; - idpf_vport_init_num_qs(vport, &vport_msg); - idpf_vport_calc_num_q_groups(vport); + idpf_vport_init_num_qs(vport, &vport_msg, rsrc); + idpf_vport_calc_num_q_groups(rsrc); return 0; } @@ -4112,12 +4208,12 @@ u32 idpf_get_vport_id(struct idpf_vport *vport) return le32_to_cpu(vport_msg->vport_id); } -static void idpf_set_mac_type(struct idpf_vport *vport, +static void idpf_set_mac_type(const u8 *default_mac_addr, struct virtchnl2_mac_addr *mac_addr) { bool is_primary; - is_primary = ether_addr_equal(vport->default_mac_addr, mac_addr->addr); + is_primary = ether_addr_equal(default_mac_addr, mac_addr->addr); mac_addr->type = is_primary ? VIRTCHNL2_MAC_ADDR_PRIMARY : VIRTCHNL2_MAC_ADDR_EXTRA; } @@ -4193,22 +4289,23 @@ invalid_payload: /** * idpf_add_del_mac_filters - Add/del mac filters - * @vport: Virtual port data structure - * @np: Netdev private structure + * @adapter: adapter pointer used to send virtchnl message + * @vport_config: persistent vport structure to get the MAC filter list + * @default_mac_addr: default MAC address to compare with + * @vport_id: vport identifier used while preparing the virtchnl message * @add: Add or delete flag * @async: Don't wait for return message * - * Returns 0 on success, error on failure. + * Return: 0 on success, error on failure. 
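Queue-id initialization above consumes one flat array handed back by the control plane, walking it once per queue type and verifying that enough ids were filled before moving on. Reduced to its shape, with invented sizes:

#include <stdint.h>
#include <stdio.h>

#define GROUPS  2
#define TXQ     3

int main(void)
{
        uint32_t qids[] = { 10, 11, 12, 13, 14, 15 };
        uint32_t txq_id[GROUPS][TXQ];
        unsigned int k = 0, num_qids = 6;

        for (unsigned int i = 0; i < GROUPS; i++)
                for (unsigned int j = 0; j < TXQ && k < num_qids; j++, k++)
                        txq_id[i][j] = qids[k];

        if (k < GROUPS * TXQ)           /* mirrors the -EINVAL bail-out */
                return 1;
        printf("assigned %u ids, last = %u\n", k, txq_id[1][2]);
        return 0;
}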
**/ -int idpf_add_del_mac_filters(struct idpf_vport *vport, - struct idpf_netdev_priv *np, +int idpf_add_del_mac_filters(struct idpf_adapter *adapter, + struct idpf_vport_config *vport_config, + const u8 *default_mac_addr, u32 vport_id, bool add, bool async) { struct virtchnl2_mac_addr_list *ma_list __free(kfree) = NULL; struct virtchnl2_mac_addr *mac_addr __free(kfree) = NULL; - struct idpf_adapter *adapter = np->adapter; struct idpf_vc_xn_params xn_params = {}; - struct idpf_vport_config *vport_config; u32 num_msgs, total_filters = 0; struct idpf_mac_filter *f; ssize_t reply_sz; @@ -4220,7 +4317,6 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport, xn_params.async = async; xn_params.async_handler = idpf_mac_filter_async_handler; - vport_config = adapter->vport_config[np->vport_idx]; spin_lock_bh(&vport_config->mac_filter_list_lock); /* Find the number of newly added filters */ @@ -4251,7 +4347,7 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport, list) { if (add && f->add) { ether_addr_copy(mac_addr[i].addr, f->macaddr); - idpf_set_mac_type(vport, &mac_addr[i]); + idpf_set_mac_type(default_mac_addr, &mac_addr[i]); i++; f->add = false; if (i == total_filters) @@ -4259,7 +4355,7 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport, } if (!add && f->remove) { ether_addr_copy(mac_addr[i].addr, f->macaddr); - idpf_set_mac_type(vport, &mac_addr[i]); + idpf_set_mac_type(default_mac_addr, &mac_addr[i]); i++; f->remove = false; if (i == total_filters) @@ -4291,7 +4387,7 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport, memset(ma_list, 0, buf_size); } - ma_list->vport_id = cpu_to_le32(np->vport_id); + ma_list->vport_id = cpu_to_le32(vport_id); ma_list->num_mac_addr = cpu_to_le16(num_entries); memcpy(ma_list->mac_addr_list, &mac_addr[k], entries_size); diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h index eac3d15daa42..fe065911ad5a 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h @@ -92,6 +92,7 @@ struct idpf_netdev_priv; struct idpf_vec_regs; struct idpf_vport; struct idpf_vport_max_q; +struct idpf_vport_config; struct idpf_vport_user_config_data; ssize_t idpf_vc_xn_exec(struct idpf_adapter *adapter, @@ -101,10 +102,20 @@ void idpf_deinit_dflt_mbx(struct idpf_adapter *adapter); int idpf_vc_core_init(struct idpf_adapter *adapter); void idpf_vc_core_deinit(struct idpf_adapter *adapter); -int idpf_get_reg_intr_vecs(struct idpf_vport *vport, +int idpf_get_reg_intr_vecs(struct idpf_adapter *adapter, struct idpf_vec_regs *reg_vals); -int idpf_queue_reg_init(struct idpf_vport *vport); -int idpf_vport_queue_ids_init(struct idpf_vport *vport); +int idpf_queue_reg_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, + struct idpf_queue_id_reg_info *chunks); +int idpf_vport_queue_ids_init(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc, + struct idpf_queue_id_reg_info *chunks); +static inline void +idpf_vport_deinit_queue_reg_chunks(struct idpf_vport_config *vport_cfg) +{ + kfree(vport_cfg->qid_reg_info.queue_chunks); + vport_cfg->qid_reg_info.queue_chunks = NULL; +} bool idpf_vport_is_cap_ena(struct idpf_vport *vport, u16 flag); bool idpf_sideband_flow_type_ena(struct idpf_vport *vport, u32 flow_type); @@ -112,9 +123,9 @@ bool idpf_sideband_action_ena(struct idpf_vport *vport, struct ethtool_rx_flow_spec *fsp); unsigned int idpf_fsteer_max_rules(struct idpf_vport *vport); -int idpf_recv_mb_msg(struct idpf_adapter *adapter); -int 
idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op, - u16 msg_size, u8 *msg, u16 cookie); +int idpf_recv_mb_msg(struct idpf_adapter *adapter, struct idpf_ctlq_info *arq); +int idpf_send_mb_msg(struct idpf_adapter *adapter, struct idpf_ctlq_info *asq, + u32 op, u16 msg_size, u8 *msg, u16 cookie); struct idpf_queue_ptr { enum virtchnl2_queue_type type; @@ -127,60 +138,81 @@ struct idpf_queue_ptr { }; struct idpf_queue_set { - struct idpf_vport *vport; + struct idpf_adapter *adapter; + struct idpf_q_vec_rsrc *qv_rsrc; + u32 vport_id; u32 num; struct idpf_queue_ptr qs[] __counted_by(num); }; -struct idpf_queue_set *idpf_alloc_queue_set(struct idpf_vport *vport, u32 num); +struct idpf_queue_set *idpf_alloc_queue_set(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id, u32 num); int idpf_send_enable_queue_set_msg(const struct idpf_queue_set *qs); int idpf_send_disable_queue_set_msg(const struct idpf_queue_set *qs); int idpf_send_config_queue_set_msg(const struct idpf_queue_set *qs); int idpf_send_disable_queues_msg(struct idpf_vport *vport); -int idpf_send_config_queues_msg(struct idpf_vport *vport); int idpf_send_enable_queues_msg(struct idpf_vport *vport); +int idpf_send_config_queues_msg(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id); -void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q); +int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q); u32 idpf_get_vport_id(struct idpf_vport *vport); int idpf_send_create_vport_msg(struct idpf_adapter *adapter, struct idpf_vport_max_q *max_q); -int idpf_send_destroy_vport_msg(struct idpf_vport *vport); -int idpf_send_enable_vport_msg(struct idpf_vport *vport); -int idpf_send_disable_vport_msg(struct idpf_vport *vport); +int idpf_send_destroy_vport_msg(struct idpf_adapter *adapter, u32 vport_id); +int idpf_send_enable_vport_msg(struct idpf_adapter *adapter, u32 vport_id); +int idpf_send_disable_vport_msg(struct idpf_adapter *adapter, u32 vport_id); -int idpf_vport_adjust_qs(struct idpf_vport *vport); +int idpf_vport_adjust_qs(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); int idpf_vport_alloc_max_qs(struct idpf_adapter *adapter, struct idpf_vport_max_q *max_q); void idpf_vport_dealloc_max_qs(struct idpf_adapter *adapter, struct idpf_vport_max_q *max_q); -int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q, - u16 num_complq, u16 num_rx_q, u16 num_rx_bufq); -int idpf_send_delete_queues_msg(struct idpf_vport *vport); - -int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport); +int idpf_send_add_queues_msg(struct idpf_adapter *adapter, + struct idpf_vport_config *vport_config, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id); +int idpf_send_delete_queues_msg(struct idpf_adapter *adapter, + struct idpf_queue_id_reg_info *chunks, + u32 vport_id); + +int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport, + struct idpf_q_vec_rsrc *rsrc); int idpf_get_vec_ids(struct idpf_adapter *adapter, u16 *vecids, int num_vecids, struct virtchnl2_vector_chunks *chunks); int idpf_send_alloc_vectors_msg(struct idpf_adapter *adapter, u16 num_vectors); int idpf_send_dealloc_vectors_msg(struct idpf_adapter *adapter); -int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map); - -int idpf_add_del_mac_filters(struct idpf_vport *vport, - struct idpf_netdev_priv *np, +int idpf_send_map_unmap_queue_vector_msg(struct idpf_adapter *adapter, + struct idpf_q_vec_rsrc *rsrc, + u32 vport_id, + bool map); + +int 
idpf_add_del_mac_filters(struct idpf_adapter *adapter, + struct idpf_vport_config *vport_config, + const u8 *default_mac_addr, u32 vport_id, bool add, bool async); int idpf_set_promiscuous(struct idpf_adapter *adapter, struct idpf_vport_user_config_data *config_data, u32 vport_id); int idpf_check_supported_desc_ids(struct idpf_vport *vport); -int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport); -int idpf_send_ena_dis_loopback_msg(struct idpf_vport *vport); -int idpf_send_get_stats_msg(struct idpf_vport *vport); +int idpf_send_ena_dis_loopback_msg(struct idpf_adapter *adapter, u32 vport_id, + bool loopback_ena); +int idpf_send_get_stats_msg(struct idpf_netdev_priv *np, + struct idpf_port_stats *port_stats); int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs); -int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get); -int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get); +int idpf_send_get_set_rss_key_msg(struct idpf_adapter *adapter, + struct idpf_rss_data *rss_data, + u32 vport_id, bool get); +int idpf_send_get_set_rss_lut_msg(struct idpf_adapter *adapter, + struct idpf_rss_data *rss_data, + u32 vport_id, bool get); void idpf_vc_xn_shutdown(struct idpf_vc_xn_manager *vcxn_mngr); int idpf_idc_rdma_vc_send_sync(struct iidc_rdma_core_dev_info *cdev_info, u8 *send_msg, u16 msg_size, diff --git a/drivers/net/ethernet/intel/idpf/xdp.c b/drivers/net/ethernet/intel/idpf/xdp.c index 958d16f87424..2b60f2a78684 100644 --- a/drivers/net/ethernet/intel/idpf/xdp.c +++ b/drivers/net/ethernet/intel/idpf/xdp.c @@ -2,21 +2,22 @@ /* Copyright (C) 2025 Intel Corporation */ #include "idpf.h" +#include "idpf_ptp.h" #include "idpf_virtchnl.h" #include "xdp.h" #include "xsk.h" -static int idpf_rxq_for_each(const struct idpf_vport *vport, +static int idpf_rxq_for_each(const struct idpf_q_vec_rsrc *rsrc, int (*fn)(struct idpf_rx_queue *rxq, void *arg), void *arg) { - bool splitq = idpf_is_queue_model_split(vport->rxq_model); + bool splitq = idpf_is_queue_model_split(rsrc->rxq_model); - if (!vport->rxq_grps) + if (!rsrc->rxq_grps) return -ENETDOWN; - for (u32 i = 0; i < vport->num_rxq_grp; i++) { - const struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + for (u32 i = 0; i < rsrc->num_rxq_grp; i++) { + const struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i]; u32 num_rxq; if (splitq) @@ -45,7 +46,8 @@ static int idpf_rxq_for_each(const struct idpf_vport *vport, static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq, void *arg) { const struct idpf_vport *vport = rxq->q_vector->vport; - bool split = idpf_is_queue_model_split(vport->rxq_model); + const struct idpf_q_vec_rsrc *rsrc; + bool split; int err; err = __xdp_rxq_info_reg(&rxq->xdp_rxq, vport->netdev, rxq->idx, @@ -54,6 +56,9 @@ static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq, void *arg) if (err) return err; + rsrc = &vport->dflt_qv_rsrc; + split = idpf_is_queue_model_split(rsrc->rxq_model); + if (idpf_queue_has(XSK, rxq)) { err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq, MEM_TYPE_XSK_BUFF_POOL, @@ -70,7 +75,7 @@ static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq, void *arg) if (!split) return 0; - rxq->xdpsqs = &vport->txqs[vport->xdp_txq_offset]; + rxq->xdpsqs = &vport->txqs[rsrc->xdp_txq_offset]; rxq->num_xdp_txq = vport->num_xdp_txq; return 0; @@ -86,9 +91,9 @@ int idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq) return __idpf_xdp_rxq_info_init(rxq, NULL); } -int idpf_xdp_rxq_info_init_all(const struct idpf_vport *vport) +int idpf_xdp_rxq_info_init_all(const 
struct idpf_q_vec_rsrc *rsrc) { - return idpf_rxq_for_each(vport, __idpf_xdp_rxq_info_init, NULL); + return idpf_rxq_for_each(rsrc, __idpf_xdp_rxq_info_init, NULL); } static int __idpf_xdp_rxq_info_deinit(struct idpf_rx_queue *rxq, void *arg) @@ -111,10 +116,10 @@ void idpf_xdp_rxq_info_deinit(struct idpf_rx_queue *rxq, u32 model) __idpf_xdp_rxq_info_deinit(rxq, (void *)(size_t)model); } -void idpf_xdp_rxq_info_deinit_all(const struct idpf_vport *vport) +void idpf_xdp_rxq_info_deinit_all(const struct idpf_q_vec_rsrc *rsrc) { - idpf_rxq_for_each(vport, __idpf_xdp_rxq_info_deinit, - (void *)(size_t)vport->rxq_model); + idpf_rxq_for_each(rsrc, __idpf_xdp_rxq_info_deinit, + (void *)(size_t)rsrc->rxq_model); } static int idpf_xdp_rxq_assign_prog(struct idpf_rx_queue *rxq, void *arg) @@ -132,10 +137,10 @@ static int idpf_xdp_rxq_assign_prog(struct idpf_rx_queue *rxq, void *arg) return 0; } -void idpf_xdp_copy_prog_to_rqs(const struct idpf_vport *vport, +void idpf_xdp_copy_prog_to_rqs(const struct idpf_q_vec_rsrc *rsrc, struct bpf_prog *xdp_prog) { - idpf_rxq_for_each(vport, idpf_xdp_rxq_assign_prog, xdp_prog); + idpf_rxq_for_each(rsrc, idpf_xdp_rxq_assign_prog, xdp_prog); } static void idpf_xdp_tx_timer(struct work_struct *work); @@ -165,7 +170,7 @@ int idpf_xdpsqs_get(const struct idpf_vport *vport) } dev = vport->netdev; - sqs = vport->xdp_txq_offset; + sqs = vport->dflt_qv_rsrc.xdp_txq_offset; for (u32 i = sqs; i < vport->num_txq; i++) { struct idpf_tx_queue *xdpsq = vport->txqs[i]; @@ -202,7 +207,7 @@ void idpf_xdpsqs_put(const struct idpf_vport *vport) return; dev = vport->netdev; - sqs = vport->xdp_txq_offset; + sqs = vport->dflt_qv_rsrc.xdp_txq_offset; for (u32 i = sqs; i < vport->num_txq; i++) { struct idpf_tx_queue *xdpsq = vport->txqs[i]; @@ -358,12 +363,15 @@ int idpf_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, { const struct idpf_netdev_priv *np = netdev_priv(dev); const struct idpf_vport *vport = np->vport; + u32 xdp_txq_offset; if (unlikely(!netif_carrier_ok(dev) || !vport->link_up)) return -ENETDOWN; + xdp_txq_offset = vport->dflt_qv_rsrc.xdp_txq_offset; + return libeth_xdp_xmit_do_bulk(dev, n, frames, flags, - &vport->txqs[vport->xdp_txq_offset], + &vport->txqs[xdp_txq_offset], vport->num_xdp_txq, idpf_xdp_xmit_flush_bulk, idpf_xdp_tx_finalize); @@ -391,13 +399,43 @@ static int idpf_xdpmo_rx_hash(const struct xdp_md *ctx, u32 *hash, pt); } +static int idpf_xdpmo_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp) +{ + const struct libeth_xdp_buff *xdp = (typeof(xdp))ctx; + struct idpf_xdp_rx_desc desc __uninitialized; + const struct idpf_rx_queue *rxq; + u64 cached_time, ts_ns; + u32 ts_high; + + rxq = libeth_xdp_buff_to_rq(xdp, typeof(*rxq), xdp_rxq); + + if (!idpf_queue_has(PTP, rxq)) + return -ENODATA; + + idpf_xdp_get_qw1(&desc, xdp->desc); + + if (!(idpf_xdp_rx_ts_low(&desc) & VIRTCHNL2_RX_FLEX_TSTAMP_VALID)) + return -ENODATA; + + cached_time = READ_ONCE(rxq->cached_phc_time); + + idpf_xdp_get_qw3(&desc, xdp->desc); + + ts_high = idpf_xdp_rx_ts_high(&desc); + ts_ns = idpf_ptp_tstamp_extend_32b_to_64b(cached_time, ts_high); + + *timestamp = ts_ns; + return 0; +} + static const struct xdp_metadata_ops idpf_xdpmo = { .xmo_rx_hash = idpf_xdpmo_rx_hash, + .xmo_rx_timestamp = idpf_xdpmo_rx_timestamp, }; void idpf_xdp_set_features(const struct idpf_vport *vport) { - if (!idpf_is_queue_model_split(vport->rxq_model)) + if (!idpf_is_queue_model_split(vport->dflt_qv_rsrc.rxq_model)) return; libeth_xdp_set_features_noredir(vport->netdev, &idpf_xdpmo, @@ -409,6 
@@ -409,6 +447,7 @@ static int idpf_xdp_setup_prog(struct idpf_vport *vport,
 			       const struct netdev_bpf *xdp)
 {
 	const struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+	const struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
 	struct bpf_prog *old, *prog = xdp->prog;
 	struct idpf_vport_config *cfg;
 	int ret;
@@ -419,7 +458,7 @@ static int idpf_xdp_setup_prog(struct idpf_vport *vport,
 	    !test_bit(IDPF_VPORT_REG_NETDEV, cfg->flags) ||
 	    !!vport->xdp_prog == !!prog) {
 		if (test_bit(IDPF_VPORT_UP, np->state))
-			idpf_xdp_copy_prog_to_rqs(vport, prog);
+			idpf_xdp_copy_prog_to_rqs(rsrc, prog);
 
 		old = xchg(&vport->xdp_prog, prog);
 		if (old)
@@ -464,7 +503,7 @@ int idpf_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	idpf_vport_ctrl_lock(dev);
 	vport = idpf_netdev_to_vport(dev);
 
-	if (!idpf_is_queue_model_split(vport->txq_model))
+	if (!idpf_is_queue_model_split(vport->dflt_qv_rsrc.txq_model))
 		goto notsupp;
 
 	switch (xdp->command) {
diff --git a/drivers/net/ethernet/intel/idpf/xdp.h b/drivers/net/ethernet/intel/idpf/xdp.h
index 479f5ef3c604..63e56f7d43e0 100644
--- a/drivers/net/ethernet/intel/idpf/xdp.h
+++ b/drivers/net/ethernet/intel/idpf/xdp.h
@@ -9,10 +9,10 @@
 #include "idpf_txrx.h"
 
 int idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq);
-int idpf_xdp_rxq_info_init_all(const struct idpf_vport *vport);
+int idpf_xdp_rxq_info_init_all(const struct idpf_q_vec_rsrc *rsrc);
 void idpf_xdp_rxq_info_deinit(struct idpf_rx_queue *rxq, u32 model);
-void idpf_xdp_rxq_info_deinit_all(const struct idpf_vport *vport);
-void idpf_xdp_copy_prog_to_rqs(const struct idpf_vport *vport,
+void idpf_xdp_rxq_info_deinit_all(const struct idpf_q_vec_rsrc *rsrc);
+void idpf_xdp_copy_prog_to_rqs(const struct idpf_q_vec_rsrc *rsrc,
 			       struct bpf_prog *xdp_prog);
 
 int idpf_xdpsqs_get(const struct idpf_vport *vport);
@@ -112,11 +112,13 @@ struct idpf_xdp_rx_desc {
 	aligned_u64		qw1;
 #define IDPF_XDP_RX_BUF		GENMASK_ULL(47, 32)
 #define IDPF_XDP_RX_EOP		BIT_ULL(1)
+#define IDPF_XDP_RX_TS_LOW	GENMASK_ULL(31, 24)
 
 	aligned_u64		qw2;
 #define IDPF_XDP_RX_HASH	GENMASK_ULL(31, 0)
 
 	aligned_u64		qw3;
+#define IDPF_XDP_RX_TS_HIGH	GENMASK_ULL(63, 32)
 } __aligned(4 * sizeof(u64));
 static_assert(sizeof(struct idpf_xdp_rx_desc) ==
 	      sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3));
@@ -128,6 +130,8 @@ static_assert(sizeof(struct idpf_xdp_rx_desc) ==
 #define idpf_xdp_rx_buf(desc)	FIELD_GET(IDPF_XDP_RX_BUF, (desc)->qw1)
 #define idpf_xdp_rx_eop(desc)	!!((desc)->qw1 & IDPF_XDP_RX_EOP)
 #define idpf_xdp_rx_hash(desc)	FIELD_GET(IDPF_XDP_RX_HASH, (desc)->qw2)
+#define idpf_xdp_rx_ts_low(desc)	FIELD_GET(IDPF_XDP_RX_TS_LOW, (desc)->qw1)
+#define idpf_xdp_rx_ts_high(desc)	FIELD_GET(IDPF_XDP_RX_TS_HIGH, (desc)->qw3)
 
 static inline void
 idpf_xdp_get_qw0(struct idpf_xdp_rx_desc *desc,
@@ -149,6 +153,9 @@ idpf_xdp_get_qw1(struct idpf_xdp_rx_desc *desc,
 	desc->qw1 = ((const typeof(desc))rxd)->qw1;
 #else
 	desc->qw1 = ((u64)le16_to_cpu(rxd->buf_id) << 32) |
+		    ((u64)rxd->ts_low << 24) |
+		    ((u64)rxd->fflags1 << 16) |
+		    ((u64)rxd->status_err1 << 8) |
 		    rxd->status_err0_qw1;
 #endif
 }
@@ -166,6 +173,19 @@ idpf_xdp_get_qw2(struct idpf_xdp_rx_desc *desc,
 #endif
 }
 
+static inline void
+idpf_xdp_get_qw3(struct idpf_xdp_rx_desc *desc,
+		 const struct virtchnl2_rx_flex_desc_adv_nic_3 *rxd)
+{
+#ifdef __LIBETH_WORD_ACCESS
+	desc->qw3 = ((const typeof(desc))rxd)->qw3;
+#else
+	desc->qw3 = ((u64)le32_to_cpu(rxd->ts_high) << 32) |
+		    ((u64)le16_to_cpu(rxd->fmd6) << 16) |
+		    le16_to_cpu(rxd->l2tag1);
+#endif
+}
+
 void idpf_xdp_set_features(const struct idpf_vport *vport);
 int idpf_xdp(struct net_device *dev, struct netdev_bpf *xdp);
diff --git a/drivers/net/ethernet/intel/idpf/xsk.c b/drivers/net/ethernet/intel/idpf/xsk.c
index fd2cc43ab43c..676cbd80774d 100644
--- a/drivers/net/ethernet/intel/idpf/xsk.c
+++ b/drivers/net/ethernet/intel/idpf/xsk.c
@@ -26,13 +26,14 @@ static void idpf_xsk_setup_rxq(const struct idpf_vport *vport,
 static void idpf_xsk_setup_bufq(const struct idpf_vport *vport,
 				struct idpf_buf_queue *bufq)
 {
+	const struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
 	struct xsk_buff_pool *pool;
 	u32 qid = U32_MAX;
 
-	for (u32 i = 0; i < vport->num_rxq_grp; i++) {
-		const struct idpf_rxq_group *grp = &vport->rxq_grps[i];
+	for (u32 i = 0; i < rsrc->num_rxq_grp; i++) {
+		const struct idpf_rxq_group *grp = &rsrc->rxq_grps[i];
 
-		for (u32 j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+		for (u32 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) {
 			if (&grp->splitq.bufq_sets[j].bufq == bufq) {
 				qid = grp->splitq.rxq_sets[0]->rxq.idx;
 				goto setup;
@@ -61,7 +62,7 @@ static void idpf_xsk_setup_txq(const struct idpf_vport *vport,
 	if (!idpf_queue_has(XDP, txq))
 		return;
 
-	qid = txq->idx - vport->xdp_txq_offset;
+	qid = txq->idx - vport->dflt_qv_rsrc.xdp_txq_offset;
 
 	pool = xsk_get_pool_from_qid(vport->netdev, qid);
 	if (!pool || !pool->dev)
@@ -86,7 +87,8 @@ static void idpf_xsk_setup_complq(const struct idpf_vport *vport,
 	if (!idpf_queue_has(XDP, complq))
 		return;
 
-	qid = complq->txq_grp->txqs[0]->idx - vport->xdp_txq_offset;
+	qid = complq->txq_grp->txqs[0]->idx -
+	      vport->dflt_qv_rsrc.xdp_txq_offset;
 
 	pool = xsk_get_pool_from_qid(vport->netdev, qid);
 	if (!pool || !pool->dev)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
index 3069b583fd81..89c7fed7b8fc 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
@@ -342,6 +342,13 @@ static int ixgbe_get_link_capabilities_82599(struct ixgbe_hw *hw,
 		return 0;
 	}
 
+	if (hw->phy.sfp_type == ixgbe_sfp_type_10g_bx_core0 ||
+	    hw->phy.sfp_type == ixgbe_sfp_type_10g_bx_core1) {
+		*speed = IXGBE_LINK_SPEED_10GB_FULL;
+		*autoneg = false;
+		return 0;
+	}
+
 	/*
 	 * Determine link capabilities based on the stored value of AUTOC,
 	 * which represents EEPROM defaults.  If AUTOC value has not been
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index 2ad81f687a84..bb4b53fee234 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -351,6 +351,8 @@ static int ixgbe_get_link_ksettings(struct net_device *netdev,
 	case ixgbe_sfp_type_1g_lx_core1:
 	case ixgbe_sfp_type_1g_bx_core0:
 	case ixgbe_sfp_type_1g_bx_core1:
+	case ixgbe_sfp_type_10g_bx_core0:
+	case ixgbe_sfp_type_10g_bx_core1:
 		ethtool_link_ksettings_add_link_mode(cmd, supported,
 						     FIBRE);
 		ethtool_link_ksettings_add_link_mode(cmd, advertising,
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
index 2449e4cf2679..ab733e73927d 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
@@ -1534,8 +1534,10 @@ int ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
 	struct ixgbe_adapter *adapter = hw->back;
 	u8 oui_bytes[3] = {0, 0, 0};
 	u8 bitrate_nominal = 0;
+	u8 sm_length_100m = 0;
 	u8 comp_codes_10g = 0;
 	u8 comp_codes_1g = 0;
+	u8 sm_length_km = 0;
 	u16 enforce_sfp = 0;
 	u32 vendor_oui = 0;
 	u8 identifier = 0;
@@ -1678,6 +1680,33 @@ int ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
 			else
 				hw->phy.sfp_type = ixgbe_sfp_type_1g_bx_core1;
 
+		/* Support Ethernet 10G-BX, checking the Bit Rate
+		 * Nominal Value as per SFF-8472 to be 10.3 Gb/s (67h) and
+		 * Single Mode fibre with at least 1km link length
+		 */
+		} else if ((!comp_codes_10g) && (bitrate_nominal == 0x67) &&
+			   (!(cable_tech & IXGBE_SFF_DA_PASSIVE_CABLE)) &&
+			   (!(cable_tech & IXGBE_SFF_DA_ACTIVE_CABLE))) {
+			status = hw->phy.ops.read_i2c_eeprom(hw,
+						IXGBE_SFF_SM_LENGTH_KM,
+						&sm_length_km);
+			if (status != 0)
+				goto err_read_i2c_eeprom;
+			status = hw->phy.ops.read_i2c_eeprom(hw,
+						IXGBE_SFF_SM_LENGTH_100M,
+						&sm_length_100m);
+			if (status != 0)
+				goto err_read_i2c_eeprom;
+			if (sm_length_km > 0 || sm_length_100m >= 10) {
+				if (hw->bus.lan_id == 0)
+					hw->phy.sfp_type =
+						ixgbe_sfp_type_10g_bx_core0;
+				else
+					hw->phy.sfp_type =
+						ixgbe_sfp_type_10g_bx_core1;
+			} else {
+				hw->phy.sfp_type = ixgbe_sfp_type_unknown;
+			}
 		} else {
 			hw->phy.sfp_type = ixgbe_sfp_type_unknown;
 		}
@@ -1768,7 +1797,9 @@ int ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
 		    hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 ||
 		    hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1 ||
 		    hw->phy.sfp_type == ixgbe_sfp_type_1g_bx_core0 ||
-		    hw->phy.sfp_type == ixgbe_sfp_type_1g_bx_core1)) {
+		    hw->phy.sfp_type == ixgbe_sfp_type_1g_bx_core1 ||
+		    hw->phy.sfp_type == ixgbe_sfp_type_10g_bx_core0 ||
+		    hw->phy.sfp_type == ixgbe_sfp_type_10g_bx_core1)) {
 			hw->phy.type = ixgbe_phy_sfp_unsupported;
 			return -EOPNOTSUPP;
 		}
@@ -1786,7 +1817,9 @@ int ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
 		    hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 ||
 		    hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1 ||
 		    hw->phy.sfp_type == ixgbe_sfp_type_1g_bx_core0 ||
-		    hw->phy.sfp_type == ixgbe_sfp_type_1g_bx_core1)) {
+		    hw->phy.sfp_type == ixgbe_sfp_type_1g_bx_core1 ||
+		    hw->phy.sfp_type == ixgbe_sfp_type_10g_bx_core0 ||
+		    hw->phy.sfp_type == ixgbe_sfp_type_10g_bx_core1)) {
 			/* Make sure we're a supported PHY type */
 			if (hw->phy.type == ixgbe_phy_sfp_intel)
 				return 0;
@@ -2016,20 +2049,22 @@ int ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
 		return -EOPNOTSUPP;
 
 	/*
-	 * Limiting active cables and 1G Phys must be initialized as
+	 * Limiting active cables, 10G BX and 1G Phys must be initialized as
 	 * SR modules
 	 */
 	if (sfp_type == ixgbe_sfp_type_da_act_lmt_core0 ||
 	    sfp_type == ixgbe_sfp_type_1g_lx_core0 ||
 	    sfp_type == ixgbe_sfp_type_1g_cu_core0 ||
 	    sfp_type == ixgbe_sfp_type_1g_sx_core0 ||
-	    sfp_type == ixgbe_sfp_type_1g_bx_core0)
+	    sfp_type == ixgbe_sfp_type_1g_bx_core0 ||
+	    sfp_type == ixgbe_sfp_type_10g_bx_core0)
 		sfp_type = ixgbe_sfp_type_srlr_core0;
 	else if (sfp_type == ixgbe_sfp_type_da_act_lmt_core1 ||
 		 sfp_type == ixgbe_sfp_type_1g_lx_core1 ||
 		 sfp_type == ixgbe_sfp_type_1g_cu_core1 ||
 		 sfp_type == ixgbe_sfp_type_1g_sx_core1 ||
-		 sfp_type == ixgbe_sfp_type_1g_bx_core1)
+		 sfp_type == ixgbe_sfp_type_1g_bx_core1 ||
+		 sfp_type == ixgbe_sfp_type_10g_bx_core1)
 		sfp_type = ixgbe_sfp_type_srlr_core1;
 
 	/* Read offset to PHY init contents */
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
index 81179c60af4e..039ba4b6c120 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
@@ -32,6 +32,8 @@
 #define IXGBE_SFF_QSFP_1GBE_COMP	0x86
 #define IXGBE_SFF_QSFP_CABLE_LENGTH	0x92
 #define IXGBE_SFF_QSFP_DEVICE_TECH	0x93
+#define IXGBE_SFF_SM_LENGTH_KM		0xE
+#define IXGBE_SFF_SM_LENGTH_100M	0xF
 
 /* Bitmasks */
 #define IXGBE_SFF_DA_PASSIVE_CABLE	0x4
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
index b1bfeb21537a..61f2ef67defd 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
@@ -3286,6 +3286,8 @@ enum ixgbe_sfp_type {
 	ixgbe_sfp_type_1g_lx_core1 = 14,
 	ixgbe_sfp_type_1g_bx_core0 = 15,
 	ixgbe_sfp_type_1g_bx_core1 = 16,
+	ixgbe_sfp_type_10g_bx_core0 = 17,
+	ixgbe_sfp_type_10g_bx_core1 = 18,
 	ixgbe_sfp_type_not_present = 0xFFFE,
 	ixgbe_sfp_type_unknown = 0xFFFF
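
Note: the idpf XDP metadata callback above only pulls a 32-bit timestamp word (ts_high) out of the RX descriptor; idpf_ptp_tstamp_extend_32b_to_64b(), whose body is not part of this diff, widens it against the queue's cached PHC time. The following is a minimal standalone sketch of how such an extension typically works in Intel drivers, assuming the usual delta-based scheme; the function name and types are illustrative, and correctness depends on the cached PHC time being refreshed within half the ~4.3 s wrap period of the 32-bit counter.

#include <stdint.h>

static uint64_t tstamp_extend_32b_to_64b(uint64_t cached_phc_time,
					 uint32_t in_tstamp)
{
	/* low 32 bits of the cached PHC snapshot */
	uint32_t phc_lo = (uint32_t)cached_phc_time;
	/* mod-2^32 distance from the snapshot to the descriptor stamp */
	uint32_t delta = in_tstamp - phc_lo;

	/*
	 * A distance above 2^31 means the stamp was latched before the
	 * snapshot was taken: correct backwards instead of forwards so a
	 * 32-bit counter wrap cannot skew the result by 2^32 ns.
	 */
	if (delta > UINT32_MAX / 2)
		return cached_phc_time - (uint64_t)(phc_lo - in_tstamp);

	return cached_phc_time + delta;
}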
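The new IDPF_XDP_RX_TS_LOW/IDPF_XDP_RX_TS_HIGH masks and their idpf_xdp_rx_ts_*() accessors follow the kernel's GENMASK_ULL()/FIELD_GET() idiom, where the mask alone encodes both a field's position and its width. A self-contained model of that idiom, with the two macros re-derived here instead of taken from <linux/bits.h> and the *_model() helper names invented for illustration:

#include <stdint.h>

/* GENMASK_ULL(h, l): bits h..l set, as in <linux/bits.h> */
#define GENMASK_ULL(h, l) \
	(((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))
/* FIELD_GET: mask out the field, then shift it down to bit 0 */
#define FIELD_GET_ULL(mask, reg) \
	(((reg) & (mask)) >> __builtin_ctzll(mask))

#define IDPF_XDP_RX_TS_LOW	GENMASK_ULL(31, 24)	/* qw1: ts_low flags */
#define IDPF_XDP_RX_TS_HIGH	GENMASK_ULL(63, 32)	/* qw3: 32-bit stamp */

/* extract the 32-bit timestamp word the NIC latched into qw3 */
static inline uint32_t idpf_xdp_rx_ts_high_model(uint64_t qw3)
{
	return (uint32_t)FIELD_GET_ULL(IDPF_XDP_RX_TS_HIGH, qw3);
}

/* extract the ts_low byte (bits 31:24 of qw1) holding the valid flag */
static inline uint8_t idpf_xdp_rx_ts_low_model(uint64_t qw1)
{
	return (uint8_t)FIELD_GET_ULL(IDPF_XDP_RX_TS_LOW, qw1);
}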
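The ixgbe_identify_sfp_module_generic() hunk reduces to a small decision rule over three SFF-8472 A0h EEPROM bytes: the nominal signalling rate (byte 12, units of 100 MBd, so 67h = 103, roughly the 10.3 GBd 10GBASE-R line rate), the single-mode fibre length in km (byte 14) and in units of 100 m (byte 15). Restated outside the driver for clarity; the function name and the flattened cable-technology parameter are illustrative, not driver API:

#include <stdbool.h>
#include <stdint.h>

/* SFF-8472 A0h page offsets (the patch adds the two length defines) */
#define SFF_BITRATE_NOMINAL	12	/* units of 100 MBd */
#define SFF_SM_LENGTH_KM	14	/* 0x0E, single-mode length, km */
#define SFF_SM_LENGTH_100M	15	/* 0x0F, single-mode length, 100 m */

/*
 * A module advertising no 10G compliance codes is treated as 10G-BX when
 * it reports a 67h nominal rate, is not a direct-attach cable, and claims
 * at least 1 km of single-mode reach (>= 1 km or >= 10 x 100 m).
 */
static bool sfp_is_10g_bx(const uint8_t *a0, uint8_t comp_codes_10g,
			  bool is_da_cable)
{
	if (comp_codes_10g || is_da_cable ||
	    a0[SFF_BITRATE_NOMINAL] != 0x67)
		return false;

	return a0[SFF_SM_LENGTH_KM] > 0 || a0[SFF_SM_LENGTH_100M] >= 10;
}

Modules that pass the rate check but report shorter reach fall through to ixgbe_sfp_type_unknown, exactly as in the hunk above.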
