From a3a31a5e74d962d6c65a6c5960970b4678fa2ad1 Mon Sep 17 00:00:00 2001
From: Andrew Morton
Date: Sun, 5 Jan 2003 03:51:09 -0800
Subject: [PATCH] infrastructure for handling pte_chain_alloc() failures

The VM allocates pte_chains with GFP_ATOMIC, under deep locking.  If that
allocation fails, we oops.

My approach to solving this is to require that the caller of
page_add_rmap() pass in a pte_chain structure for page_add_rmap() to use.
Callers can then arrange to allocate that structure outside locks, with
GFP_KERNEL.

This patch provides the base infrastructure.

A common case is that page_add_rmap() will in fact not consume the
pte_chain, because an empty slot was found within one of the page's
existing pte_chain structures.  So this patch provides a special one-deep
per-cpu pte_chain cache to optimise this case of taking just one
pte_chain and then immediately putting it back.

We end up adding maybe 20-30 instructions to the pagefault path to handle
the possibility of pte_chain allocation failures.

Lots of other design ideas were considered.  This is the best I could
come up with.
---
 include/linux/rmap-locking.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/linux/rmap-locking.h b/include/linux/rmap-locking.h
index 302a58f54ca3..51f6697f3794 100644
--- a/include/linux/rmap-locking.h
+++ b/include/linux/rmap-locking.h
@@ -5,6 +5,11 @@
  * pte chain.
  */
 
+#include <linux/slab.h>
+
+struct pte_chain;
+extern kmem_cache_t *pte_chain_cache;
+
 static inline void pte_chain_lock(struct page *page)
 {
 	/*
@@ -31,3 +36,12 @@ static inline void pte_chain_unlock(struct page *page)
 #endif
 	preempt_enable();
 }
+
+struct pte_chain *pte_chain_alloc(int gfp_flags);
+void __pte_chain_free(struct pte_chain *pte_chain);
+
+static inline void pte_chain_free(struct pte_chain *pte_chain)
+{
+	if (pte_chain)
+		__pte_chain_free(pte_chain);
+}
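
Illustration (not part of the patch): a sketch of the caller pattern the
changelog describes.  This patch adds only the allocation helpers, so the
fault-handler shape below is hypothetical, and the page_add_rmap()
signature shown (taking the preallocated pte_chain and returning it if
unconsumed) is an assumption about how callers would be converted.

	#include <linux/mm.h>
	#include <linux/rmap-locking.h>

	static int example_fault_path(struct mm_struct *mm, struct page *page,
					pte_t *page_table)
	{
		struct pte_chain *pte_chain;

		/* No locks held yet: GFP_KERNEL may sleep, and can fail cleanly */
		pte_chain = pte_chain_alloc(GFP_KERNEL);
		if (pte_chain == NULL)
			return VM_FAULT_OOM;	/* fail the fault instead of oopsing */

		spin_lock(&mm->page_table_lock);
		/* ... install the pte ... */
		/* assumed signature: returns the chain if an existing slot was used */
		pte_chain = page_add_rmap(page, page_table, pte_chain);
		spin_unlock(&mm->page_table_lock);

		/*
		 * NULL-safe: a no-op if page_add_rmap() consumed the chain,
		 * otherwise the structure is handed back to __pte_chain_free(),
		 * cheaply via the one-deep per-cpu cache on this fast path.
		 */
		pte_chain_free(pte_chain);
		return VM_FAULT_MINOR;
	}

The point of the pattern is that the only operation which can fail happens
before any spinlocks are taken; once inside the page_table_lock, the rmap
insertion is guaranteed to succeed.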