| author | David S. Miller <davem@davemloft.net> | 2022-05-16 11:33:59 +0100 |
|---|---|---|
| committer | David S. Miller <davem@davemloft.net> | 2022-05-16 11:33:59 +0100 |
| commit | ee3398c78767b1fe9f5cdac04295abb96496d3e4 (patch) | |
| tree | d5be1a42e32e55e228aeee48bc252b5fa2db4d8b /include/linux | |
| parent | 3daebfbeb4555cb0c113aeb88aa469192ee41d89 (diff) | |
| parent | 909876500251b3b48480a840bbf9053588254eee (diff) | |
Merge branch 'net-skb-defer-freeing-polish'
Eric Dumazet says:
====================
net: polish skb defer freeing
While testing this recently added feature on a variety
of platforms/configurations, I found the following issues:
1) A race leading to concurrent calls to smp_call_function_single_async()
2) Missed opportunity to use napi_consume_skb()
3) Need to limit the max length of the per-cpu lists.
4) Process the per-cpu list more frequently, for the
   (unusual) case where net_rx_action() has multiple
   napi_poll() invocations to process per round.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/netdevice.h | 1 |
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index d57ce248004c..cbaf312e365b 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3136,6 +3136,7 @@ struct softnet_data {
 	/* Another possibly contended cache line */
 	spinlock_t		defer_lock ____cacheline_aligned_in_smp;
 	int			defer_count;
+	int			defer_ipi_scheduled;
 	struct sk_buff		*defer_list;
 	call_single_data_t	defer_csd;
 };
