From b2dd867477c080de713e366809b71fd500a6b300 Mon Sep 17 00:00:00 2001
From: Andrew Morton
Date: Wed, 1 Oct 2003 10:57:58 -0700
Subject: [PATCH] memory writeback/invalidation fixes

From: "David S. Miller"

This attempts to take care of 2 of the MM todos I had on my backlog:

1) Zap the stupid flush_cache_all() thing with more meaningful
   interfaces.

2) Resolve the ptrace user page access issues, first stage.

The "first stage" mentioned for #2 is simply creating the user page
accessor interfaces.  The next stage needs to be mucking with
get_user_pages() so that we can control when the flush_dcache_page()
occurs.  Then we:

1) For every platform where flush_dcache_page() is a non-nop, add a
   call to the beginning of copy_{from,to}_user_page().

2) Make access_process_vm() set the "no dcache flush" bit in its call
   to get_user_pages().

The idea also was that we'd consolidate the write etc. boolean args
passed to get_user_pages() into flag bits too.

But at least with the below we can delete that reminder FIXME comment
from kernel/ptrace.c; the platforms have the necessary tools and just
need to make use of them :)

As a bonus I noticed that VMALLOC_VMADDR() did absolutely nothing.

After all of this I only have 1 real TODO left, and that's dealing with
the SMP TLB/pte invalidation stuff, very low priority until someone
starts doing more work with sparc32/SMP in 2.6.x :)
---
 kernel/ptrace.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/kernel/ptrace.c b/kernel/ptrace.c
index cc3047d5562b..1b68379c4886 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -179,19 +179,14 @@ int access_process_vm(struct task_struct *tsk, unsigned long addr, void *buf, in
 
 		flush_cache_page(vma, addr);
 
-		/*
-		 * FIXME! We used to have flush_page_to_ram() in here, but
-		 * that was wrong. davem says we need a new per-arch primitive
-		 * to handle this correctly.
-		 */
-
 		maddr = kmap(page);
 		if (write) {
-			memcpy(maddr + offset, buf, bytes);
-			flush_icache_user_range(vma, page, addr, bytes);
+			copy_to_user_page(vma, page, addr,
+					  maddr + offset, buf, bytes);
 			set_page_dirty_lock(page);
 		} else {
-			memcpy(buf, maddr + offset, bytes);
+			copy_from_user_page(vma, page, addr,
+					    buf, maddr + offset, bytes);
 		}
 		kunmap(page);
 		page_cache_release(page);
--
cgit v1.2.3
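
A note on the two new primitives used above: both take (vma, page,
vaddr, dst, src, len), mirroring the call sites in the hunk.  On a
fully cache-coherent architecture they can collapse to a plain
memcpy(); an architecture with incoherent or virtually-indexed caches
would wrap the copy in whatever flushing it needs.  The sketch below is
illustrative only and is not part of this patch -- it simply preserves
what the old open-coded path in access_process_vm() did (memcpy plus
flush_icache_user_range() on the write side, a bare memcpy on the read
side), written as the kind of asm/cacheflush.h macros an architecture
might provide:

/*
 * Illustrative per-arch definitions (not from this patch).  The write
 * path copies into the kmap()ed page and flushes the icache over the
 * user range, matching the old open-coded behaviour; the read path is
 * a plain copy on a cache-coherent architecture.
 */
#define copy_to_user_page(vma, page, vaddr, dst, src, len)		\
	do {								\
		memcpy(dst, src, len);					\
		flush_icache_user_range(vma, page, vaddr, len);		\
	} while (0)

#define copy_from_user_page(vma, page, vaddr, dst, src, len)		\
	memcpy(dst, src, len)

With definitions along those lines the ptrace path stays
architecture-neutral: the write side becomes copy_to_user_page()
followed by set_page_dirty_lock(), the read side is a single
copy_from_user_page(), and all cache details are pushed down into the
per-arch headers.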