| author | Hugh Dickins <hugh@veritas.com> | 2005-01-07 21:59:38 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@evo.osdl.org> | 2005-01-07 21:59:38 -0800 |
| commit | 8a1a48b7cd80de98d4d07ee1e78311a88c738335 | |
| tree | afd1b67481116da9aa290e91c34283f25bd92ae2 /include/linux | |
| parent | d5c772ed9d8de5097e3dbfd7b42bd1141084c9d0 | |
[PATCH] vmtrunc: restart_addr in truncate_count
Despite its restart_pgoff pretensions, unmap_mapping_range_vma was fatally
unable to distinguish a vma to be restarted from the case where that vma
has been freed, and its vm_area_struct reused for the top part of a
!new_below split of an isomorphic vma yet to be scanned.
The obvious answer is to note restart_vma in the struct address_space, and
cancel it when that vma is freed; but I'm reluctant to enlarge every struct
inode just for this. Another answer is to flag valid restart in the
vm_area_struct; but vm_flags is protected by down_write of mmap_sem, which
we cannot take within down_write of i_sem. If we're going to need yet
another field, better to record the restart_addr itself: restart_vma only
recorded the last restart, but a busy tree could well use more.
Actually, we don't need another field: we can neatly (though naughtily)
keep restart_addr in vm_truncate_count, provided mapping->truncate_count
leaps over those values which look like a page-aligned address. Zero
remains good for forcing a scan (though now interpreted as restart_addr 0),
and it turns out no change is needed to any of the vm_truncate_count
settings in dup_mmap, vma_link, vma_adjust, move_one_page.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/mm.h | 7 |
1 file changed, 2 insertions(+), 5 deletions(-)
```diff
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 97f8b983ef18..bef2e741f89a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -105,7 +105,7 @@ struct vm_area_struct {
 					   units, *not* PAGE_CACHE_SIZE */
 	struct file * vm_file;		/* File we map to (can be NULL). */
 	void * vm_private_data;		/* was vm_pte (shared mem) */
-	unsigned int vm_truncate_count;	/* compare mapping->truncate_count */
+	unsigned long vm_truncate_count;/* truncate_count or restart_addr */
 
 #ifndef CONFIG_MMU
 	atomic_t vm_usage;		/* refcount (VMAs shared if !MMU) */
@@ -579,11 +579,8 @@ struct zap_details {
 	pgoff_t first_index;			/* Lowest page->index to unmap */
 	pgoff_t last_index;			/* Highest page->index to unmap */
 	spinlock_t *i_mmap_lock;		/* For unmap_mapping_range: */
-	struct vm_area_struct *restart_vma;	/* Where lock was dropped */
-	pgoff_t restart_pgoff;			/* File offset for restart */
-	unsigned long restart_addr;		/* Where we should restart */
 	unsigned long break_addr;		/* Where unmap_vmas stopped */
-	unsigned int truncate_count;		/* Compare vm_truncate_count */
+	unsigned long truncate_count;		/* Compare vm_truncate_count */
 };
 
 void zap_page_range(struct vm_area_struct *vma, unsigned long address,
```
