From 55b50278ec233024c2e5be04855d66ebdcebc35e Mon Sep 17 00:00:00 2001
From: Andrew Morton
Date: Sun, 21 Sep 2003 01:37:01 -0700
Subject: [PATCH] real-time enhanced page allocator and throttling

From: Robert Love

- Let real-time tasks dip further into the reserves than usual in
  __alloc_pages().  There are a lot of ways to special case this.  This
  patch just cuts z->pages_low in half, before doing the incremental min
  thing, for real-time tasks.

  I do not do anything in the low memory slow path.  We can be a _lot_
  more aggressive if we want.  Right now, we just give real-time tasks a
  little help.

- Never ever call balance_dirty_pages() on a real-time task.  Where and
  how exactly we handle this is up for debate.  We could, for example,
  special case real-time tasks inside balance_dirty_pages().  This would
  allow us to perform some of the work (say, waking up pdflush) but not
  other work (say, the active throttling).

  As it stands now, we do the per-processor accounting in
  balance_dirty_pages_ratelimited() but we never call
  balance_dirty_pages().  Lots of approaches work.  What we want to do
  is never engage the real-time task in forced writeback.
---
 kernel/sched.c | 1 -
 1 file changed, 1 deletion(-)

(limited to 'kernel')

diff --git a/kernel/sched.c b/kernel/sched.c
index 89f1bb28dacd..1c5802ceedae 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -179,7 +179,6 @@ static DEFINE_PER_CPU(struct runqueue, runqueues);
 #define this_rq()		(&__get_cpu_var(runqueues))
 #define task_rq(p)		cpu_rq(task_cpu(p))
 #define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
-#define rt_task(p)		((p)->prio < MAX_RT_PRIO)
 
 /*
  * Default context-switch locking:
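
Because the diff view above is limited to kernel/, only the removal of the rt_task() macro from kernel/sched.c is visible; the mm/ side described in the changelog is not shown here.  What follows is a minimal, self-contained sketch of the two policies the changelog describes, written with stand-in types (struct task_model, struct zone_model) and simplified watermark math.  It illustrates the described behavior under those assumptions; it is not the actual mm/page_alloc.c or mm/page-writeback.c hunks from this patch.

/*
 * Illustrative model only -- stand-in types, not kernel code.
 */
#include <stdbool.h>

/* Mirrors the 2.6-era priority split: prio < MAX_RT_PRIO means real-time. */
#define MAX_RT_PRIO	100

struct task_model {
	int prio;
};

/* Same test the removed kernel/sched.c macro performed. */
static bool rt_task(const struct task_model *p)
{
	return p->prio < MAX_RT_PRIO;
}

struct zone_model {
	unsigned long free_pages;
	unsigned long pages_low;
};

/*
 * Allocator side: before the incremental-min walk over the zones, halve
 * the zone's low watermark for a real-time caller, so it can keep
 * allocating where a normal task would already be refused.
 */
static bool zone_watermark_ok_model(const struct zone_model *z,
				    unsigned int order,
				    const struct task_model *p)
{
	unsigned long min = z->pages_low;

	if (rt_task(p))
		min /= 2;		/* RT tasks dip deeper into reserves */

	min += 1UL << order;		/* simplified incremental min */

	return z->free_pages >= min;
}

/*
 * Writeback side: keep the per-processor dirty accounting, but never
 * pull a real-time task into forced writeback.
 */
static void balance_dirty_pages_ratelimited_model(const struct task_model *p,
						  void (*account)(void),
						  void (*throttle)(void))
{
	account();		/* per-processor accounting still happens */

	if (rt_task(p))
		return;		/* never throttle a real-time task */

	throttle();		/* normal tasks may be engaged in writeback */
}

The point of structuring it this way is the one the changelog makes: the watermark tweak is a small, cheap boost in the fast path rather than a change to the low-memory slow path, and the writeback path keeps its accounting accurate while guaranteeing that a real-time task is never blocked doing another task's writeback.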