| author | Andrew Morton <akpm@osdl.org> | 2004-01-18 18:28:26 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@home.osdl.org> | 2004-01-18 18:28:26 -0800 |
| commit | d5d4042dd2d0352990d7ff54ea422950eba62969 (patch) | |
| tree | 374bf180fcf647889317c615d1edb3d5739ac6aa | /include/linux |
| parent | 2996d8deaeddd01820691a872550dc0cfba0c37d (diff) | |
[PATCH] make try_to_free_pages walk zonelist
From: Rik van Riel <riel@surriel.com>
In 2.6.0 both __alloc_pages() and the corresponding wakeup_kswapd()s walk
all zones in the zone list, possibly spanning multiple nodes in a low numa
factor system like AMD64.
Also, if lower_zone_protection is set in /proc, then it may be possible
that kswapd never cleans out data in zones further down the zonelist and
try_to_free_pages needs to do that.
However, in 2.6.0 try_to_free_pages() only frees pages in the pgdat the
first zone in the zonelist belongs to.
This is probably the wrong behaviour, since both the page allocator and the
kswapd wakeup free things from all zones on the zonelist. The following
patch makes try_to_free_pages() consistent with the allocator, by passing
the zonelist as an argument and freeing pages from all zones in the list.
I do not have any numa systems myself, so I have only tested it on my own
little smp box. Testing on NUMA systems may be useful, though the patch
really only should have an impact in those rare cases where kswapd can't
keep up with allocations...
As a side effect, the patch shrinks the kernel by 2 lines and replaces some
subtle magic by a simpler array walk.
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/swap.h | 2 |
1 files changed, 1 insertions, 1 deletions
```diff
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 1ecc25d2fc63..b000c56803b8 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -173,7 +173,7 @@ extern int rotate_reclaimable_page(struct page *page);
 extern void swap_setup(void);
 
 /* linux/mm/vmscan.c */
-extern int try_to_free_pages(struct zone *, unsigned int, unsigned int);
+extern int try_to_free_pages(struct zone **, unsigned int, unsigned int);
 extern int shrink_all_memory(int);
 extern int vm_swappiness;
```
