| field | value | date |
|---|---|---|
| author | Ingo Molnar <mingo@elte.hu> | 2005-01-07 21:49:19 -0800 |
| committer | Linus Torvalds <torvalds@evo.osdl.org> | 2005-01-07 21:49:19 -0800 |
| commit | 3365d1671c8f5f1ede7a07dcc632e70a385f27ad | |
| tree | 29f45f5f9e71f435215172663ced84b8049e15bd /include/linux | |
| parent | 38e387ee01e5a57cd3ed84062930997b87fa3896 | |
[PATCH] preempt cleanup
This is another piece of generic fallout from the voluntary-preempt patchset: a
cleanup of the cond_resched() infrastructure, in preparation for the latency
reduction patches. The changes:
- uninline cond_resched() - this makes the footprint smaller,
  especially once the number of cond_resched() points increases.
- add a 'was rescheduled' return value to cond_resched(). This makes it
  symmetric to cond_resched_lock(), and later latency reduction patches
  rely on being able to tell whether any preemption happened.
- make cond_resched() more robust by using the same mechanism as
  preempt_schedule(): raising PREEMPT_ACTIVE. This preserves the task's
  state - e.g. if the task is in TASK_ZOMBIE but gets preempted via
  cond_resched() just prior to scheduling off, this approach preserves
  TASK_ZOMBIE (see the first sketch after this list).
- the patch also adds need_lockbreak(), which critical sections can use
  to detect lock-break requests (see the second sketch after this list).
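
A note on the second and third points: together they describe a small
out-of-line helper that reschedules under PREEMPT_ACTIVE and reports whether
it did. The kernel/sched.c side is outside the include/linux diffstat shown
below, so what follows is only a sketch of the mechanism, with the
add_preempt_count()/sub_preempt_count() helpers assumed from kernels of
that era:

```c
/* Sketch only: the real bodies live in kernel/sched.c and are not
 * part of the include/linux diff shown on this page. */
static void __cond_resched(void)
{
	do {
		/* PREEMPT_ACTIVE marks this as an involuntary preemption,
		 * so schedule() leaves the task runnable and keeps its
		 * state (e.g. TASK_ZOMBIE) intact. */
		add_preempt_count(PREEMPT_ACTIVE);
		schedule();
		sub_preempt_count(PREEMPT_ACTIVE);
	} while (need_resched());
}

int cond_resched(void)
{
	if (need_resched()) {
		__cond_resched();
		return 1;	/* we rescheduled */
	}
	return 0;	/* no preemption happened */
}
```

A caller that caches state across the call can then act on the return value,
e.g. `if (cond_resched()) { /* revalidate cached state */ }`.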
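For need_lockbreak(), a hedged usage sketch: my_lock, my_work_pending() and
my_process_item() are hypothetical names for illustration; only
need_lockbreak(), cond_resched_lock() and cpu_relax() are real interfaces:

```c
/* Hypothetical long-running critical section. */
spin_lock(&my_lock);
while (my_work_pending()) {
	my_process_item();

	/* Another task is spinning on this lock: back off briefly. */
	if (need_lockbreak(&my_lock)) {
		spin_unlock(&my_lock);
		cpu_relax();	/* give the waiter a chance to take it */
		spin_lock(&my_lock);
	}

	/* Also drop the lock and reschedule if a reschedule is due;
	 * returns nonzero if the lock was actually dropped. */
	cond_resched_lock(&my_lock);
}
spin_unlock(&my_lock);
```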
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'include/linux')
| mode | path | lines changed |
|---|---|---|
| -rw-r--r-- | include/linux/hardirq.h | 2 |
| -rw-r--r-- | include/linux/sched.h | 23 |

2 files changed, 17 insertions, 8 deletions
```diff
diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index afb35ca096e8..833216955f4a 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -66,7 +66,7 @@
 # define preemptible()	(preempt_count() == 0 && !irqs_disabled())
 # define IRQ_EXIT_OFFSET (HARDIRQ_OFFSET-1)
 #else
-# define in_atomic()	(preempt_count() != 0)
+# define in_atomic()	((preempt_count() & ~PREEMPT_ACTIVE) != 0)
 # define preemptible()	0
 # define IRQ_EXIT_OFFSET HARDIRQ_OFFSET
 #endif
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 425ee5e7c4b1..2b2282104bb4 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1065,15 +1065,24 @@ static inline int need_resched(void)
 	return unlikely(test_thread_flag(TIF_NEED_RESCHED));
 }
 
-extern void __cond_resched(void);
-static inline void cond_resched(void)
-{
-	if (need_resched())
-		__cond_resched();
-}
-
+/*
+ * cond_resched() and cond_resched_lock(): latency reduction via
+ * explicit rescheduling in places that are safe. The return
+ * value indicates whether a reschedule was done in fact.
+ */
+extern int cond_resched(void);
 extern int cond_resched_lock(spinlock_t * lock);
 
+/*
+ * Does a critical section need to be broken due to another
+ * task waiting?:
+ */
+#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
+# define need_lockbreak(lock) ((lock)->break_lock)
+#else
+# define need_lockbreak(lock) 0
+#endif
+
 /* Reevaluate whether the task has signals pending delivery.
    This is required every time the blocked sigset_t changes.
    callers must hold sighand->siglock.  */
```
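
One note on the hardirq.h hunk: once cond_resched() raises PREEMPT_ACTIVE
even on !CONFIG_PREEMPT kernels, the old in_atomic() definition would report
atomic context exactly where cond_resched() is about to call schedule(),
tripping the "scheduling while atomic" check. Masking the bit out keeps that
check meaningful. A sketch of the sequence (helper names as above, assumed
for illustration):

```c
/* Inside __cond_resched() on a !CONFIG_PREEMPT kernel -- sketch only. */
add_preempt_count(PREEMPT_ACTIVE);
/*
 * old: in_atomic() == (preempt_count() != 0)          -> true here,
 *      which would flag this voluntary reschedule as a bug;
 * new: ((preempt_count() & ~PREEMPT_ACTIVE) != 0)     -> false here,
 *      so schedule() below is recognized as a legal sleep.
 */
schedule();
sub_preempt_count(PREEMPT_ACTIVE);
```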
