| field | value | date |
|---|---|---|
| author | Hugh Dickins <hugh@veritas.com> | 2004-08-23 21:24:11 -0700 |
| committer | Linus Torvalds <torvalds@ppc970.osdl.org> | 2004-08-23 21:24:11 -0700 |
| commit | edcc56dc6a7c758c4862321fc2c3a9d5a1f4dc5e | |
| tree | 6284b8043331afe6eac52a6049d5a88238f8bb7a /include/linux/mm.h | |
| parent | 6f055bc1a7c5e20dc145faff534f98cfc841b02d | |
[PATCH] rmaplock: kill page_map_lock
The pte_chains rmap used pte_chain_lock (bit_spin_lock on PG_chainlock) to
lock its pte_chains. We kept this (as page_map_lock: bit_spin_lock on
PG_maplock) when we moved to objrmap. But the file objrmap locks its vma tree
with mapping->i_mmap_lock, and the anon objrmap locks its vma list with
anon_vma->lock: so isn't the page_map_lock superfluous?
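For reference, page_map_lock packed a spinlock into a single bit of page->flags. Below is a minimal userspace model of such a bit spinlock using C11 atomics; the bit number, struct layout, and helper names are illustrative, not the kernel's.

```c
/* Userspace model of a bit spinlock such as the old page_map_lock:
 * the lock is one bit of page->flags, taken with an atomic fetch-or
 * and released with an atomic fetch-and.  The bit number and struct
 * layout here are hypothetical, chosen only for illustration.
 */
#include <stdatomic.h>

#define PG_maplock 20			/* hypothetical flag bit */

struct page {
	atomic_ulong flags;
	unsigned int mapcount;		/* was guarded by the bit lock */
};

static void page_map_lock(struct page *page)
{
	/* spin until the fetched old value shows the bit was clear */
	while (atomic_fetch_or_explicit(&page->flags, 1UL << PG_maplock,
					memory_order_acquire) & (1UL << PG_maplock))
		;			/* busy-wait: someone else holds it */
}

static void page_map_unlock(struct page *page)
{
	atomic_fetch_and_explicit(&page->flags, ~(1UL << PG_maplock),
				  memory_order_release);
}
```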
Pretty much, yes. The mapcount was protected by it, and needs to become an
atomic: starting at -1 like page->_count, so nr_mapped can be tracked precisely
up and down. The last page_remove_rmap can't clear an anon page's mapping any
more, because of races with page_add_rmap; some BUG_ONs must also go for
the same reason, but they've served their purpose.
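The -1 base makes both boundary transitions observable without a lock. A rough userspace C11 sketch of the idea, with atomic_fetch_add/sub standing in for the kernel's atomic_inc_and_test and atomic_add_negative(-1, ...), and a deliberately simplified nr_mapped counter:

```c
/* Sketch of the -1 convention: _mapcount's -1 <-> 0 transitions mark
 * the first page_add_rmap and the last page_remove_rmap, so nr_mapped
 * stays exact with no page-level lock.  Names mirror the patch; the
 * bookkeeping is simplified for illustration.
 */
#include <stdatomic.h>

static atomic_int nr_mapped;		/* pages mapped by at least one pte */

struct page {
	atomic_int _mapcount;		/* starts at -1, like _count */
};

static void page_add_rmap(struct page *page)
{
	/* old value -1 means this is the page's first mapping */
	if (atomic_fetch_add(&page->_mapcount, 1) == -1)
		atomic_fetch_add(&nr_mapped, 1);
}

static void page_remove_rmap(struct page *page)
{
	/* old value 0 means the last mapping has just gone */
	if (atomic_fetch_sub(&page->_mapcount, 1) == 0)
		atomic_fetch_sub(&nr_mapped, 1);
}
```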
vmscan decisions are naturally racy; there is little change there beyond removing
page_map_lock/unlock. But to stabilize the file-backed page->mapping against
truncation while acquiring i_mmap_lock, page_referenced_file now needs the page
lock to be held even for refill_inactive_zone. There's a similar issue in
acquiring anon_vma->lock, where the page lock doesn't help: this patch
pretends to handle it, but it actually needs the next patch.
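To make the truncation interaction concrete: whoever clears a file page's ->mapping also holds the page lock, so a reader holding the page lock can safely follow ->mapping to i_mmap_lock. A pthreads model under simplified assumptions (the types, names, and locking below are not the kernel's):

```c
/* Userspace model of the ordering page_referenced_file now depends on:
 * truncation clears page->mapping only with the page lock held, so a
 * caller holding the page lock can dereference ->mapping and then take
 * i_mmap_lock without racing truncation.
 */
#include <pthread.h>
#include <stddef.h>

struct address_space {
	pthread_mutex_t i_mmap_lock;	/* guards the vma tree */
};

struct page {
	pthread_mutex_t lock;		/* models PG_locked */
	struct address_space *mapping;	/* truncation sets this NULL */
};

/* Caller must hold page->lock. */
static int page_referenced_file(struct page *page)
{
	int referenced = 0;
	struct address_space *mapping = page->mapping;

	/* page lock held: truncation cannot clear ->mapping under us */
	if (!mapping)
		return 0;

	pthread_mutex_lock(&mapping->i_mmap_lock);
	/* ... walk the vma tree, testing pte referenced bits ... */
	pthread_mutex_unlock(&mapping->i_mmap_lock);
	return referenced;
}
```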
Roughly 10% cut off lmbench fork numbers on my 2*HT*P4. Must confess my
testing failed to show the races even while they were knowingly exposed: would
benefit from testing on racier equipment.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'include/linux/mm.h')

| mode | path | lines changed |
|---|---|---|
| -rw-r--r-- | include/linux/mm.h | 22 |

1 file changed, 18 insertions, 4 deletions
```diff
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ff1aa78f9775..42dca234d166 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -201,10 +201,9 @@ struct page {
 	page_flags_t flags;		/* Atomic flags, some possibly
 					 * updated asynchronously */
 	atomic_t _count;		/* Usage count, see below. */
-	unsigned int mapcount;		/* Count of ptes mapped in mms,
+	atomic_t _mapcount;		/* Count of ptes mapped in mms,
 					 * to show when page is mapped
-					 * & limit reverse map searches,
-					 * protected by PG_maplock.
+					 * & limit reverse map searches.
 					 */
 	unsigned long private;		/* Mapping-private opaque data:
 					 * usually used for buffer_heads
@@ -478,11 +477,26 @@ static inline pgoff_t page_index(struct page *page)
 }
 
 /*
+ * The atomic page->_mapcount, like _count, starts from -1:
+ * so that transitions both from it and to it can be tracked,
+ * using atomic_inc_and_test and atomic_add_negative(-1).
+ */
+static inline void reset_page_mapcount(struct page *page)
+{
+	atomic_set(&(page)->_mapcount, -1);
+}
+
+static inline int page_mapcount(struct page *page)
+{
+	return atomic_read(&(page)->_mapcount) + 1;
+}
+
+/*
  * Return true if this page is mapped into pagetables.
  */
 static inline int page_mapped(struct page *page)
 {
-	return page->mapcount != 0;
+	return atomic_read(&(page)->_mapcount) >= 0;
 }
 
 /*
```
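As a quick check of the arithmetic in the new inlines, here is a small runnable userspace model (atomic_int stands in for the kernel's atomic_t; the accessor names are copied from the hunk above):

```c
#include <assert.h>
#include <stdatomic.h>

struct page { atomic_int _mapcount; };

static void reset_page_mapcount(struct page *page)
{
	atomic_store(&page->_mapcount, -1);	/* "unmapped" base value */
}

static int page_mapcount(struct page *page)
{
	return atomic_load(&page->_mapcount) + 1;
}

static int page_mapped(struct page *page)
{
	return atomic_load(&page->_mapcount) >= 0;
}

int main(void)
{
	struct page page;

	reset_page_mapcount(&page);		/* fresh page: -1 */
	assert(page_mapcount(&page) == 0 && !page_mapped(&page));

	atomic_fetch_add(&page._mapcount, 1);	/* first page_add_rmap */
	assert(page_mapcount(&page) == 1 && page_mapped(&page));
	return 0;
}
```

The off-by-one in page_mapcount() is deliberate: it converts the -1-based counter back to the natural "number of ptes mapping this page" reading.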
