| | | |
|---|---|---|
| author | Herbert Xu <herbert@gondor.apana.org.au> | 2005-02-05 03:23:27 -0800 |
| committer | Thomas Graf <tgraf@suug.ch> | 2005-02-05 03:23:27 -0800 |
| commit | 09d3e84de438f217510b604a980befd07b0c8262 (patch) | |
| tree | 057007ba49db63cd1a5fcc6e5b9c0a149da6688d /include/linux/skbuff.h | |
| parent | 97d52752736afedddab09c0db190ccceff9570b9 (diff) | |
[NET]: Add missing memory barrier to kfree_skb().
Also kill kfree_skb_fast(), a relic of fast switching, which was
killed off years ago.
The bug is that, in the case where we take the atomic_read()
optimization, we need to make sure that later reads of skb state
in __kfree_skb() processing (particularly the skb->list BUG check)
are not reordered by the CPU to occur before the read of the
reference counter.
Thanks to Olaf Kirch and Anton Blanchard for discovering
and helping fix this bug.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
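To make the ordering hazard described above concrete, here is a minimal userspace C11 analog (illustration only, not part of the patch): `struct obj`, `obj_put()` and `free_fn` are hypothetical stand-ins for `struct sk_buff`, `kfree_skb()` and `__kfree_skb()`, and the acquire fence plays the role of `smp_rmb()`.

```c
#include <stdatomic.h>

/* Hypothetical refcounted object standing in for struct sk_buff. */
struct obj {
	atomic_int users;
	struct obj *list;	/* stands in for skb->list, read during free */
};

/* Analog of the fixed kfree_skb(): on the single-user fast path we skip
 * the atomic decrement, but the refcount read must still be ordered
 * before the later reads of object state in free_fn() -- otherwise the
 * CPU may load ->list before it has observed users == 1. */
static void obj_put(struct obj *o, void (*free_fn)(struct obj *))
{
	if (atomic_load_explicit(&o->users, memory_order_relaxed) == 1)
		atomic_thread_fence(memory_order_acquire);	/* smp_rmb() analog */
	else if (atomic_fetch_sub_explicit(&o->users, 1,
					   memory_order_acq_rel) != 1)
		return;		/* other references remain */
	free_fn(o);		/* may now safely inspect o->list, etc. */
}
```

In the kernel code the equivalent is the plain atomic_read() followed by smp_rmb(), as the hunk below shows.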
Diffstat (limited to 'include/linux/skbuff.h')
| | | |
|---|---|---|
| -rw-r--r-- | include/linux/skbuff.h | 14 |

1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index a6b744bccdc8..23e0b48b79a4 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -353,15 +353,11 @@ static inline struct sk_buff *skb_get(struct sk_buff *skb)
  */
 static inline void kfree_skb(struct sk_buff *skb)
 {
-	if (atomic_read(&skb->users) == 1 || atomic_dec_and_test(&skb->users))
-		__kfree_skb(skb);
-}
-
-/* Use this if you didn't touch the skb state [for fast switching] */
-static inline void kfree_skb_fast(struct sk_buff *skb)
-{
-	if (atomic_read(&skb->users) == 1 || atomic_dec_and_test(&skb->users))
-		kfree_skbmem(skb);
+	if (likely(atomic_read(&skb->users) == 1))
+		smp_rmb();
+	else if (likely(!atomic_dec_and_test(&skb->users)))
+		return;
+	__kfree_skb(skb);
 }
 
 /**
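For readability, this is the helper as it stands after the patch, reproduced from the hunk above; the comments are editorial additions, not part of the source.

```c
/* include/linux/skbuff.h after this patch (comments added editorially). */
static inline void kfree_skb(struct sk_buff *skb)
{
	if (likely(atomic_read(&skb->users) == 1))
		/* Sole owner: skip the atomic op, but order this counter
		 * read before the skb state reads done in __kfree_skb()
		 * (e.g. the skb->list BUG check). */
		smp_rmb();
	else if (likely(!atomic_dec_and_test(&skb->users)))
		return;		/* other users still hold references */
	__kfree_skb(skb);
}
```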
