path: root/include/asm-alpha/cache.h
author    Andrew Morton <akpm@zip.com.au>  2002-08-27 21:04:02 -0700
committer Linus Torvalds <torvalds@penguin.transmeta.com>  2002-08-27 21:04:02 -0700
commit    f9da78fb663680455bd763c6b8fbc5af34beb1f2 (patch)
tree      65fcaeb83c51fdef43e02857d1dd14ad1947e28f /include/asm-alpha/cache.h
parent    a8382cf1153689a1caac0e707e951e7869bb92e1 (diff)
[PATCH] add L1_CACHE_SHIFT_MAX
zone->lock and zone->lru_lock are two of the hottest locks in the kernel.  Their usage patterns are quite independent, and they have just been put into the same structure.  It is essential that they not fall into the same cacheline.

That could be fixed by padding with L1_CACHE_BYTES.  But the problem with this is that a kernel which was configured for (say) a PIII will perform poorly on an SMP PIV.  This will cause problems for kernel vendors.  For example, RH currently ships PII and Athlon binaries; to get best SMP performance they would end up needing to ship a lot of differently configured kernels.

To solve this we need to know, at compile time, the maximum L1 cacheline size on which this kernel will ever run.  This patch adds L1_CACHE_SHIFT_MAX to every architecture's cache.h.  Of course it'll break when newer chips come out with increased cacheline sizes.  Better suggestions are welcome.
Diffstat (limited to 'include/asm-alpha/cache.h')
-rw-r--r--  include/asm-alpha/cache.h  1
1 files changed, 1 insertions, 0 deletions
diff --git a/include/asm-alpha/cache.h b/include/asm-alpha/cache.h
index e6d4d1695e25..f74d7ece132e 100644
--- a/include/asm-alpha/cache.h
+++ b/include/asm-alpha/cache.h
@@ -20,5 +20,6 @@
 #define L1_CACHE_ALIGN(x) (((x)+(L1_CACHE_BYTES-1))&~(L1_CACHE_BYTES-1))
 #define SMP_CACHE_BYTES L1_CACHE_BYTES
+#define L1_CACHE_SHIFT_MAX 6	/* largest L1 which this arch supports */
 #endif