| author | Hugh Dickins <hugh@veritas.com> | 2005-01-07 21:58:53 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@evo.osdl.org> | 2005-01-07 21:58:53 -0800 |
| commit | 3ee07371ab56b82c00847aa8a4a27e0769640a09 (patch) | |
| tree | 868c11f52d829328a6e10a02f3ac4b084f9a9626 /include/linux | |
| parent | 25f5906cfbf8bf8603599edb5bce4577de0d7085 (diff) | |
[PATCH] vmtrunc: unmap_mapping dropping i_mmap_lock
vmtruncate (or more generally, unmap_mapping_range) has been observed to be
responsible for very high latencies: the lockbreak work in unmap_vmas is good
for munmap or exit_mmap, but of no use while mapping->i_mmap_lock is held to
keep our place in the prio_tree (or list) of a file's vmas.
Extend the zap_details block with an i_mmap_lock pointer, so unmap_vmas can
detect whether that lock needs a lockbreak, and with break_addr, so it can
report where it left off.
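As an illustration only (not the actual mm/memory.c change), the inner unmap loop might use the two new fields roughly as below; sketch_unmap_chunk and zap_one_block are hypothetical stand-ins for the chunked page-table teardown, ZAP_BLOCK_SIZE is the usual chunking constant, and the real lock-contention test is reduced here to need_resched():

```c
/*
 * Hypothetical sketch: lockbreak while mapping->i_mmap_lock is held.
 * zap_one_block() stands in for the real page-table teardown.
 */
static unsigned long sketch_unmap_chunk(struct vm_area_struct *vma,
		unsigned long start, unsigned long end,
		struct zap_details *details)
{
	spinlock_t *i_mmap_lock = details ? details->i_mmap_lock : NULL;

	while (start < end) {
		unsigned long block = min(start + ZAP_BLOCK_SIZE, end);

		zap_one_block(vma, start, block, details);
		start = block;

		/*
		 * Only the truncation path passes i_mmap_lock: record how
		 * far we got in break_addr and bail out, so the walker
		 * above unmap_vmas can drop the lock and restart.
		 */
		if (i_mmap_lock && need_resched()) {
			details->break_addr = start;
			break;
		}
	}
	return start;
}
```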
Add unmap_mapping_range_vma, used from both the prio_tree and nonlinear list
handlers. It is what now calls zap_page_range (above unmap_vmas), but it
handles the lockbreak and restart issues: letting unmap_mapping_range_tree or
_list know when they need to start over because the lock was dropped.
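A hedged sketch of that contract, with the return convention and the break_addr priming assumed rather than quoted from the patch:

```c
/*
 * Hypothetical sketch of unmap_mapping_range_vma's contract: return 0
 * when the whole range was unmapped, nonzero when the lockbreak fired
 * and the tree/list walk has to start over.
 */
static int sketch_unmap_mapping_range_vma(struct vm_area_struct *vma,
		unsigned long start_addr, unsigned long end_addr,
		struct zap_details *details)
{
	/* Assume completion unless the lockbreak path overwrites this. */
	details->break_addr = end_addr;

	zap_page_range(vma, start_addr, end_addr - start_addr, details);

	if (details->break_addr >= end_addr)
		return 0;		/* finished this vma */

	/* Stopped early: the walker must drop the lock and restart. */
	return -EINTR;			/* assumed convention */
}
```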
When restarting, of course, there is a danger of never making progress. Add a
vm_truncate_count field to vm_area_struct, update it to mapping->truncate_count
once a vma has been fully scanned, and skip up-to-date vmas without a scan (and
without dropping i_mmap_lock).
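Roughly, the prio_tree walker could apply that skip as below (a sketch: the clipping of the hole to each vma is omitted, and exactly where vm_truncate_count is updated is simplified):

```c
/*
 * Hypothetical sketch of the prio_tree walker with the skip check.
 * A vma whose vm_truncate_count already equals the current
 * truncate_count was fully scanned on an earlier pass.
 */
static void sketch_unmap_mapping_range_tree(struct prio_tree_root *root,
		struct zap_details *details)
{
	struct vm_area_struct *vma;
	struct prio_tree_iter iter;

restart:
	vma_prio_tree_foreach(vma, &iter, root,
			details->first_index, details->last_index) {
		/* Skip quickly over those we have already done. */
		if (vma->vm_truncate_count == details->truncate_count)
			continue;

		if (sketch_unmap_mapping_range_vma(vma, vma->vm_start,
						   vma->vm_end, details)) {
			/* Give waiters a chance, then walk the tree again. */
			spin_unlock(details->i_mmap_lock);
			cond_resched();
			spin_lock(details->i_mmap_lock);
			goto restart;
		}

		/* Fully scanned: mark up to date for later passes. */
		vma->vm_truncate_count = details->truncate_count;
	}
}
```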
There is a further danger of never making progress if a vma is very large: when
breaking out, save restart_vma and restart_addr (and restart_pgoff to confirm,
in case the vma gets reused), to help continue where we left off.
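Filling in the earlier unmap_mapping_range_vma sketch with those fields (still an illustration of the idea, not the patch itself; who retakes the lock is left to the walker above):

```c
/*
 * Hypothetical sketch: resume inside a very large vma.  On entry,
 * fast-forward to where the previous pass stopped; on an early stop,
 * record enough state to continue there next time.
 */
static int sketch_unmap_mapping_range_vma(struct vm_area_struct *vma,
		unsigned long start_addr, unsigned long end_addr,
		struct zap_details *details)
{
	/* Resuming in this same vma?  Check pgoff in case it was reused. */
	if (details->restart_vma == vma &&
	    details->restart_pgoff == vma->vm_pgoff &&
	    details->restart_addr > start_addr &&
	    details->restart_addr < end_addr)
		start_addr = details->restart_addr;
	details->restart_vma = NULL;

	details->break_addr = end_addr;
	zap_page_range(vma, start_addr, end_addr - start_addr, details);

	if (details->break_addr >= end_addr)
		return 0;			/* this vma is done */

	/* Stopped early: remember where, so the next pass can continue. */
	details->restart_vma = vma;
	details->restart_pgoff = vma->vm_pgoff;
	details->restart_addr = details->break_addr;
	return -EINTR;				/* assumed convention */
}
```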
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/mm.h | 8 |
1 files changed, 7 insertions, 1 deletions
```diff
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3995937c714d..97f8b983ef18 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -105,6 +105,7 @@ struct vm_area_struct {
 					   units, *not* PAGE_CACHE_SIZE */
 	struct file * vm_file;		/* File we map to (can be NULL). */
 	void * vm_private_data;		/* was vm_pte (shared mem) */
+	unsigned int vm_truncate_count;	/* compare mapping->truncate_count */
 
 #ifndef CONFIG_MMU
 	atomic_t vm_usage;		/* refcount (VMAs shared if !MMU) */
@@ -577,7 +578,12 @@ struct zap_details {
 	struct address_space *check_mapping;	/* Check page->mapping if set */
 	pgoff_t	first_index;			/* Lowest page->index to unmap */
 	pgoff_t last_index;			/* Highest page->index to unmap */
-	int atomic;				/* May not schedule() */
+	spinlock_t *i_mmap_lock;		/* For unmap_mapping_range: */
+	struct vm_area_struct *restart_vma;	/* Where lock was dropped */
+	pgoff_t restart_pgoff;			/* File offset for restart */
+	unsigned long restart_addr;		/* Where we should restart */
+	unsigned long break_addr;		/* Where unmap_vmas stopped */
+	unsigned int truncate_count;		/* Compare vm_truncate_count */
 };
 
 void zap_page_range(struct vm_area_struct *vma, unsigned long address,
```
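Putting the new zap_details fields together, a minimal caller-side sketch of the truncation path; the function name sketch_unmap_mapping_pages is invented, and the byte-to-pgoff conversion and the nonlinear-list pass are omitted:

```c
/*
 * Hypothetical sketch of the truncation path setting up zap_details
 * before walking the file's vmas under mapping->i_mmap_lock.
 */
static void sketch_unmap_mapping_pages(struct address_space *mapping,
		pgoff_t first, pgoff_t last, int even_cows)
{
	struct zap_details details;

	details.check_mapping = even_cows ? NULL : mapping;
	details.first_index = first;
	details.last_index = last;
	details.i_mmap_lock = &mapping->i_mmap_lock;
	details.restart_vma = NULL;		/* nothing to resume yet */

	spin_lock(&mapping->i_mmap_lock);
	/* New pass: any vma scanned from here on is up to date. */
	details.truncate_count = ++mapping->truncate_count;

	if (!prio_tree_empty(&mapping->i_mmap))
		sketch_unmap_mapping_range_tree(&mapping->i_mmap, &details);
	spin_unlock(&mapping->i_mmap_lock);
}
```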
