| author | Andrew Morton <akpm@digeo.com> | 2002-09-15 08:50:19 -0700 |
|---|---|---|
| committer | Christoph Hellwig <hch@hera.kernel.org> | 2002-09-15 08:50:19 -0700 |
| commit | e572ef2ea320724ba32094c4b4817dfde4a4bef3 (patch) | |
| tree | 2728ff1f5305cd6242a8be3991b05edebc2a8b1c /fs/proc/array.c | |
| parent | 697f3abeacfbab361efe0191b47a2d366e04949a (diff) | |
[PATCH] low-latency zap_page_range
zap_page_range and truncate are the two main latency problems
in the VM/VFS. The radix-tree-based truncate grinds that into
the dust, but no algorithmic fixes for pagetable takedown have
presented themselves...
Patch from Robert Love.
The attached patch implements a low-latency version of zap_page_range().
Calls with even moderately large page ranges result in very long lock
hold times and consequently very long periods of non-preemptibility.
This function is in my list of the top 3 worst offenders. It is gross.
This new version reimplements zap_page_range() as a loop over
ZAP_BLOCK_SIZE chunks. After each iteration, if a reschedule is
pending, we drop page_table_lock and automagically preempt. Note we
cannot blindly drop the lock and reschedule (e.g. for the non-preempt
case) since it is possible to enter this codepath while holding other
locks.
... I am sure you are familiar with all this; it's the same deal as your
low-latency work. This patch implements the "cond_resched_lock()" we
discussed some time back. I think this solution should be acceptable to
you and Linus.
There are other misc. cleanups, too.
This new zap_page_range() yields latency too low to benchmark: <<1ms.
Diffstat (limited to 'fs/proc/array.c')
0 files changed, 0 insertions, 0 deletions
