| author | Andrew Morton <akpm@zip.com.au> | 2002-07-18 21:09:17 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@home.transmeta.com> | 2002-07-18 21:09:17 -0700 |
| commit | e177ea28e7eded3490174487c81e5bef8a2c4d95 | |
| tree | 3a4422d4f04b7643fd14e809e7b8385246122bd9 /mm/page_alloc.c | |
| parent | 6a2ea3382b534e937ba2153f4a0c6021e04a1ef5 | |
[PATCH] VM instrumentation
A patch from Rik which adds some operational statistics to the VM.
In /proc/meminfo:
PageTables: Amount of memory used for process pagetables
PteChainTot: Amount of memory allocated for pte_chain objects
PteChainUsed: Amount of memory currently in use for pte chains.
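For illustration only (not part of the patch): a minimal userspace sketch that pulls
the three fields above out of /proc/meminfo. It assumes they follow the usual
one-line "Label:   value kB" format of the existing meminfo entries.

```c
/* Hypothetical reader for the new meminfo fields described above.
 * Assumes each field is reported as "Name:  <number> kB", like the
 * existing /proc/meminfo entries; adjust if the real format differs. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "PageTables:", 11) ||
		    !strncmp(line, "PteChainTot:", 12) ||
		    !strncmp(line, "PteChainUsed:", 13))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}
```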
In /proc/stat:
pageallocs: Number of pages allocated in the page allocator
pagefrees: Number of pages returned to the page allocator
(These can be used to measure the allocation rate)
pageactiv: Number of pages activated (moved to the active list)
pagedeact: Number of pages deactivated (moved to the inactive list)
pagefault: Total pagefaults
majorfault: Major pagefaults
pagescan: Number of pages which shrink_cache looked at
pagesteal: Number of pages which shrink_cache freed
pageoutrun: Number of calls to try_to_free_pages()
allocstall: Number of calls to balance_classzone()
Rik will be writing a userspace app which interprets these things.
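As a rough sketch of the kind of thing such an app could do (this is not Rik's
tool): sample the allocation counters twice and report a per-second rate. The
token names below are taken from the list above; the counters in the patch are
actually called pgalloc/pgfree, so the exact names in /proc/stat may differ.

```c
/* Sketch only: measures the page allocation/free rate from /proc/stat.
 * Assumes each counter is a line of the form "<name> <value>", using
 * the names from the list above; the real token names may differ. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static unsigned long read_counter(const char *name)
{
	char line[256], key[64];
	unsigned long val;
	FILE *f = fopen("/proc/stat", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "%63s %lu", key, &val) == 2 &&
		    strcmp(key, name) == 0) {
			fclose(f);
			return val;
		}
	}
	fclose(f);
	return 0;
}

int main(void)
{
	unsigned long a0 = read_counter("pageallocs");
	unsigned long f0 = read_counter("pagefrees");

	sleep(1);	/* rate over a one-second window */
	printf("allocs/sec: %lu  frees/sec: %lu\n",
	       read_counter("pageallocs") - a0,
	       read_counter("pagefrees") - f0);
	return 0;
}
```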
The /proc/meminfo stats are efficient, but the /proc/stat accumulators
will cause undesirable cacheline bouncing. We need to break the disk
statistics out of struct kernel_stat and make everything else in there
per-cpu. If that doesn't happen in time for 2.6, we'll disable
KERNEL_STAT_INC().
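For what it's worth, a toy userspace illustration of the per-cpu direction being
suggested here (this is not the kernel_stat rework itself): give each CPU its own
cache-line-sized counter slot so increments never touch a shared line, and have
the rare reader sum the slots.

```c
/* Userspace illustration only -- not the actual kernel_stat rework.
 * Each "CPU" gets a counter aligned to its own cache line, so
 * concurrent increments don't bounce a shared line between caches;
 * a reader sums all the slots to get the global value. */
#include <stdio.h>

#define NR_CPUS		8
#define CACHE_LINE	64

struct percpu_counter {
	unsigned long count;
} __attribute__((aligned(CACHE_LINE)));	/* one counter per cache line */

static struct percpu_counter pgalloc[NR_CPUS];

static void count_alloc(int cpu, unsigned int order)
{
	pgalloc[cpu].count += 1UL << order;	/* private line: no bouncing */
}

static unsigned long total_pgalloc(void)
{
	unsigned long sum = 0;
	int i;

	for (i = 0; i < NR_CPUS; i++)
		sum += pgalloc[i].count;	/* reader pays the cost, rarely */
	return sum;
}

int main(void)
{
	count_alloc(0, 0);
	count_alloc(1, 3);	/* an order-3 allocation accounts for 8 pages */
	printf("pgalloc total: %lu\n", total_pgalloc());
	return 0;
}
```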
Diffstat (limited to 'mm/page_alloc.c')
| -rw-r--r-- | mm/page_alloc.c | 9 |
|---|---|---|

1 file changed, 9 insertions, 0 deletions
```diff
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a5b6e175632d..2acac7c0aa80 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -13,6 +13,7 @@
  */
 
 #include <linux/config.h>
+#include <linux/kernel_stat.h>
 #include <linux/mm.h>
 #include <linux/swap.h>
 #include <linux/interrupt.h>
@@ -86,6 +87,8 @@ static void __free_pages_ok (struct page *page, unsigned int order)
 	struct page *base;
 	zone_t *zone;
 
+	KERNEL_STAT_ADD(pgfree, 1<<order);
+
 	BUG_ON(PagePrivate(page));
 	BUG_ON(page->mapping != NULL);
 	BUG_ON(PageLocked(page));
@@ -324,6 +327,8 @@ struct page * __alloc_pages(unsigned int gfp_mask, unsigned int order, zonelist_
 	struct page * page;
 	int freed;
 
+	KERNEL_STAT_ADD(pgalloc, 1<<order);
+
 	zone = zonelist->zones;
 	classzone = *zone;
 	if (classzone == NULL)
@@ -393,6 +398,7 @@ nopage:
 	if (!(gfp_mask & __GFP_WAIT))
 		goto nopage;
 
+	KERNEL_STAT_INC(allocstall);
 	page = balance_classzone(classzone, gfp_mask, order, &freed);
 	if (page)
 		return page;
@@ -563,6 +569,9 @@ void get_page_state(struct page_state *ret)
 		ret->nr_pagecache += ps->nr_pagecache;
 		ret->nr_active += ps->nr_active;
 		ret->nr_inactive += ps->nr_inactive;
+		ret->nr_page_table_pages += ps->nr_page_table_pages;
+		ret->nr_pte_chain_pages += ps->nr_pte_chain_pages;
+		ret->used_pte_chains_bytes += ps->used_pte_chains_bytes;
 	}
 }
 
```
