| author | Ingo Molnar <mingo@elte.hu> | 2002-07-21 02:11:12 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@home.transmeta.com> | 2002-07-21 02:11:12 -0700 |
| commit | ae86a80aed1e269d435c70f6e85deb80e8f8be98 (patch) | |
| tree | c0c5b816da7b3a3102f159c335745ae9b01883c1 /kernel/sched.c | |
| parent | 3d37e1e6171f8cbd81e442524d4dd231b8cbf5d1 (diff) | |
[PATCH] "big IRQ lock" removal, IRQ cleanups
This is a massive cleanup of the IRQ subsystem. It is loosely based on
Linus' original idea and DaveM's original implementation: fold our
various irq, softirq and bh counters into the preemption counter (a
standalone sketch of that folding follows the list below).
With this approach it was possible:
- to remove the 'big IRQ lock' on SMP - on which sti() and cli() relied.
- to streamline/simplify arch/i386/kernel/irq.c significantly.
- to simplify the softirq code.
- to remove the preemption count increase/decrease code from the low-level
  IRQ assembly code.
- to speed up schedule() a bit.
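The core idea behind the fold is that one per-task counter encodes the
preemption depth, softirq depth and hardirq depth in separate bit
fields, so "are we in interrupt context?" becomes a single mask test
instead of a lookup in per-CPU irq/bh counter arrays. Below is a
minimal standalone sketch of that packing; the field widths, constant
names and the in_interrupt() mask are illustrative assumptions, not the
exact values used by the patch.

```c
/*
 * Illustrative sketch of folding several counters into one word.
 * Field widths, names and the in_interrupt() test are assumptions
 * for demonstration only, not the patch's exact constants.
 */
#include <stdio.h>

#define PREEMPT_BITS	8
#define SOFTIRQ_BITS	8
#define HARDIRQ_BITS	8

#define PREEMPT_SHIFT	0
#define SOFTIRQ_SHIFT	(PREEMPT_SHIFT + PREEMPT_BITS)
#define HARDIRQ_SHIFT	(SOFTIRQ_SHIFT + SOFTIRQ_BITS)

#define SOFTIRQ_OFFSET	(1UL << SOFTIRQ_SHIFT)
#define HARDIRQ_OFFSET	(1UL << HARDIRQ_SHIFT)

/* mask covering the softirq and hardirq fields */
#define IRQ_MASK	(((1UL << (SOFTIRQ_BITS + HARDIRQ_BITS)) - 1) << SOFTIRQ_SHIFT)

static unsigned long preempt_count;	/* per-task in the real kernel */

/* entering/leaving a hardirq handler bumps the hardirq field */
static void irq_enter(void) { preempt_count += HARDIRQ_OFFSET; }
static void irq_exit(void)  { preempt_count -= HARDIRQ_OFFSET; }

/* "in interrupt context?" becomes a single mask test on one word */
static int in_interrupt(void)
{
	return (preempt_count & IRQ_MASK) != 0;
}

int main(void)
{
	printf("idle: in_interrupt() = %d\n", in_interrupt());	/* 0 */
	irq_enter();
	printf("irq:  in_interrupt() = %d\n", in_interrupt());	/* 1 */
	irq_exit();
	return 0;
}
```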
Global sti() and cli() are gone forever on SMP; there is no longer a
globally synchronizing irq-disabling capability. All code that relied
on sti(), cli() and restore_flags() must use other locking mechanisms
from now on (spinlocks and __cli()/__sti()).
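To make the required conversion concrete, here is a hypothetical driver
fragment; "my_dev", its field and the lock name are made up for
illustration, and only the locking pattern is the point.

```c
/*
 * Hypothetical driver fragment: converting away from global cli()/sti().
 *
 * Before (correct only because cli() was globally synchronizing):
 *
 *	unsigned long flags;
 *	save_flags(flags);
 *	cli();
 *	dev->pending++;		(state also touched by the IRQ handler)
 *	restore_flags(flags);
 *
 * After: a driver-private spinlock, taken with local interrupts
 * disabled here and taken again inside the interrupt handler.
 */
#include <linux/spinlock.h>

struct my_dev {
	int pending;		/* state shared with the IRQ handler */
};

static spinlock_t my_dev_lock = SPIN_LOCK_UNLOCKED;	/* initializer of that era */

static void my_dev_queue_work(struct my_dev *dev)
{
	unsigned long flags;

	spin_lock_irqsave(&my_dev_lock, flags);	/* disables IRQs on this CPU only */
	dev->pending++;
	spin_unlock_irqrestore(&my_dev_lock, flags);
}
```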
Obviously this patch breaks massive amounts of code, so only limited
.configs work at the moment (UP is expected to be unaffected, but SMP
will require various driver updates).
The patch was developed and tested on SMP systems, and while the code is
still a bit rough in places, the base IRQ code appears to be pretty
robust and clean.
While it already boots, so the worst is over, there is lots of work
left: e.g. fixing the serial layer to not use cli()/sti() and bhs ...
Diffstat (limited to 'kernel/sched.c')
| -rw-r--r-- | kernel/sched.c | 9 |
1 file changed, 6 insertions(+), 3 deletions(-)
```diff
diff --git a/kernel/sched.c b/kernel/sched.c
index c8a11b29794e..3d275a38109e 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -727,7 +727,8 @@ void scheduler_tick(int user_tick, int system)
 	task_t *p = current;
 
 	if (p == rq->idle) {
-		if (local_bh_count(cpu) || local_irq_count(cpu) > 1)
+		/* note: this timer irq context must be accounted for as well */
+		if (preempt_count() >= 2*IRQ_OFFSET)
 			kstat.per_cpu_system[cpu] += system;
 #if CONFIG_SMP
 		idle_tick();
@@ -816,7 +817,7 @@ need_resched:
 	prev = current;
 	rq = this_rq();
 
-	release_kernel_lock(prev, smp_processor_id());
+	release_kernel_lock(prev);
 	prepare_arch_schedule(prev);
 	prev->sleep_timestamp = jiffies;
 	spin_lock_irq(&rq->lock);
@@ -825,7 +826,7 @@ need_resched:
 	 * if entering off of a kernel preemption go straight
 	 * to picking the next task.
 	 */
-	if (unlikely(preempt_get_count() & PREEMPT_ACTIVE))
+	if (unlikely(preempt_count() & PREEMPT_ACTIVE))
 		goto pick_next_task;
 
 	switch (prev->state) {
@@ -1694,7 +1695,9 @@ void __init init_idle(task_t *idle, int cpu)
 	__restore_flags(flags);
 
 	/* Set the preempt count _outside_ the spinlocks! */
+#if CONFIG_PREEMPT
 	idle->thread_info->preempt_count = (idle->lock_depth >= 0);
+#endif
 }
 
 extern void init_timervecs(void);
```
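The scheduler_tick() hunk shows the fold in action: instead of
consulting the separate per-CPU local_bh_count()/local_irq_count()
arrays, the idle-time accounting reads the single preempt count. Since
scheduler_tick() itself runs from the timer interrupt, one IRQ_OFFSET
is already present, so a value of at least 2*IRQ_OFFSET means the timer
interrupted other interrupt or softirq work and the tick is charged as
system time. A rough standalone sketch of that reasoning follows; the
IRQ_OFFSET value and the assumption that softirq entry also bumps the
count by IRQ_OFFSET in this version of the patch are illustrative, not
taken verbatim from the kernel headers.

```c
/*
 * Sketch of the new idle-tick accounting test.  The IRQ_OFFSET value
 * and the softirq-accounting assumption are illustrative only.
 */
#define IRQ_OFFSET	0x100UL

/*
 * scheduler_tick() runs inside the timer interrupt, so one IRQ_OFFSET
 * is always present in the count.  Anything beyond that means the
 * timer interrupted other interrupt/softirq work, which is what the
 * old "local_bh_count(cpu) || local_irq_count(cpu) > 1" test caught.
 */
static int idle_tick_is_system_time(unsigned long count)
{
	return count >= 2 * IRQ_OFFSET;
}
```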
