| author | Nick Piggin <nickpiggin@yahoo.com.au> | 2005-02-07 05:31:08 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@ppc970.osdl.org> | 2005-02-07 05:31:08 -0800 |
| commit | 01c8df0425061f81f99107ca63e4f0a981ec7f6a | |
| tree | 267e292d0c74c06381472de41bc61f092e6fbfa2 | /kernel |
| parent | 73f54a780c55e358f2b630db9f89be409426e588 | |
[PATCH] fix wait_task_inactive race
When a task is put to sleep, it is dequeued from the runqueue while it is
still running. The problem is that on some arches that have non-atomic
scheduling, the runqueue lock can be dropped and retaken in schedule() before
the task actually schedules off, and wait_task_inactive() did not account for
this.
Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
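To make the race concrete, here is a minimal userspace analogue (an illustration only, not kernel code: the flag names, the busy loop, and the pthread scaffolding are all assumptions). A waiter that checks only the "dequeued" flag can return while the target is still executing; the patched behaviour also waits for "running" to clear.

```c
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

/* Analogue of p->array != NULL (task still on the runqueue). */
static atomic_int on_runqueue = 1;
/* Analogue of task_running(rq, p) (task still executing on a CPU). */
static atomic_int running = 1;

static void *target(void *arg)
{
	(void)arg;
	/* Like schedule() on a non-atomic arch: dequeue first ... */
	atomic_store(&on_runqueue, 0);
	/* ... but keep executing for a while before switching off. */
	for (volatile int i = 0; i < 10000000; i++)
		;
	atomic_store(&running, 0);	/* only now has it "scheduled off" */
	return NULL;
}

/* Patched behaviour: wait until the target is dequeued AND not running.
 * The buggy version checked only on_runqueue, so it could return while
 * the target was still in its busy loop above. */
static void wait_task_inactive_fixed(void)
{
	while (atomic_load(&on_runqueue) || atomic_load(&running))
		sched_yield();
}

int main(void)
{
	pthread_t t;
	pthread_create(&t, NULL, target, NULL);
	wait_task_inactive_fixed();
	printf("target has fully scheduled off\n");
	pthread_join(t, NULL);
	return 0;
}
```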
Diffstat (limited to 'kernel')
| mode | file | lines |
|---|---|---|
| -rw-r--r-- | kernel/sched.c | 2 |

1 file changed, 1 insertion, 1 deletion
```diff
diff --git a/kernel/sched.c b/kernel/sched.c
index f708d10e7750..911fbdd9c151 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -867,7 +867,7 @@ void wait_task_inactive(task_t * p)
 repeat:
 	rq = task_rq_lock(p, &flags);
 	/* Must be off runqueue entirely, not preempted. */
-	if (unlikely(p->array)) {
+	if (unlikely(p->array || task_running(rq, p))) {
 		/* If it's preempted, we yield. It could be a while. */
 		preempted = !task_running(rq, p);
 		task_rq_unlock(rq, &flags);
```
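For context, the hunk above sits inside a retry loop. Here is a sketch of the full function with the patch applied; the declarations and the retry tail (cpu_relax(), yield(), goto repeat) are reconstructed from the visible lines and the in-code comments, not shown in this diff:

```c
void wait_task_inactive(task_t * p)
{
	unsigned long flags;
	runqueue_t *rq;
	int preempted;

repeat:
	rq = task_rq_lock(p, &flags);
	/* Must be off runqueue entirely, not preempted. */
	if (unlikely(p->array || task_running(rq, p))) {
		/* If it's preempted, we yield. It could be a while. */
		preempted = !task_running(rq, p);
		task_rq_unlock(rq, &flags);
		cpu_relax();		/* let the other CPU make progress */
		if (preempted)
			yield();
		goto repeat;
	}
	task_rq_unlock(rq, &flags);
}
```

The one-line change matters because, with only p->array checked, an arch with non-atomic scheduling can drop the runqueue lock in schedule() after dequeueing p but before switching away, so the waiter could see p->array == NULL and return while p was still on a CPU; adding task_running(rq, p) keeps the loop spinning until the task has genuinely scheduled off.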
