| author | Ingo Molnar <mingo@elte.hu> | 2002-06-18 21:25:17 +0200 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2002-06-18 21:25:17 +0200 |
| commit | 18cb13a652a762c0bc97b2b63d369173cd7ac0f6 (patch) | |
| tree | 216590c85fac61119af45cbf2ee313582a9a7c72 | |
| parent | 6986d71db4b2613876c4cc66d574b42440703e56 (diff) | |
sched_yield() is misbehaving.
The current implementation does the following to 'give up' the CPU:
- it decreases its priority by 1 until it reaches the lowest level
- it queues the task to the end of the priority queue
This scheme works fine in most cases, but if sched_yield()-active tasks
are mixed with CPU-using processes then it's quite likely that the
CPU-using process is in the expired array. In that case the yield()-ing
process only requeues itself in the active array - a true context switch
to the expired process will only occur once the timeslice of the
yield()-ing process has expired: in ~150 msecs. This leads to the
yield()-ing and the CPU-using process using up roughly the same amount
of CPU time, which is arguably deficient.
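To make the symptom concrete, here is a hypothetical userspace reproducer (not part of this patch; the file name and the 10-second run length are arbitrary). One child burns CPU, the other burns CPU but calls sched_yield() on every iteration; run it pinned to a single CPU (e.g. under taskset -c 0) and compare the two printed CPU times.

/* yield-test.c - hypothetical reproducer, not part of the patch */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t stop;

static void on_alarm(int sig)
{
	(void)sig;
	stop = 1;
}

/* Burn CPU for ten seconds; optionally call sched_yield() each iteration. */
static void run_child(int yielding)
{
	struct rusage ru;

	signal(SIGALRM, on_alarm);
	alarm(10);
	while (!stop)
		if (yielding)
			sched_yield();
	getrusage(RUSAGE_SELF, &ru);
	printf("%s child: %ld.%06ld s of CPU time\n",
	       yielding ? "yield()-ing" : "CPU-using",
	       (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
	exit(0);
}

int main(void)
{
	/* run both children on one CPU (e.g. "taskset -c 0 ./yield-test")
	 * so they actually compete for the same runqueue */
	if (fork() == 0)
		run_child(0);	/* plain CPU hog */
	if (fork() == 0)
		run_child(1);	/* sched_yield() in a tight loop */
	wait(NULL);
	wait(NULL);
	return 0;
}

With the old behaviour both children end up with roughly the same CPU time; after the change below the yield()-ing child should get far less.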
I've fixed this problem by extending sched_yield() in the following way:
+ * There are three levels of how a yielding task will give up
+ * the current CPU:
+ *
+ * #1 - it decreases its priority by one. This priority loss is
+ * temporary, it's recovered once the current timeslice
+ * expires.
+ *
+ * #2 - once it has reached the lowest priority level,
+ * it will give up timeslices one by one. (We do not
+ * want to give them up all at once, it's gradual,
+ * to protect the casual yield()er.)
+ *
+ * #3 - once all timeslices are gone we put the process into
+ * the expired array.
+ *
+ * (special rule: RT tasks do not lose any priority, they just
+ * roundrobin on their current priority level.)
+ */
| -rw-r--r-- | kernel/sched.c | 40 |
1 file changed, 25 insertions, 15 deletions
diff --git a/kernel/sched.c b/kernel/sched.c
index 29086b7da218..8558049a4be4 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1398,25 +1398,35 @@ out_unlock:
 
 asmlinkage long sys_sched_yield(void)
 {
-	runqueue_t *rq;
-	prio_array_t *array;
-
-	rq = rq_lock(rq);
+	runqueue_t *rq = rq_lock(rq);
+	prio_array_t *array = current->array;
 
 	/*
-	 * Decrease the yielding task's priority by one, to avoid
-	 * livelocks. This priority loss is temporary, it's recovered
-	 * once the current timeslice expires.
+	 * There are three levels of how a yielding task will give up
+	 * the current CPU:
 	 *
-	 * If priority is already MAX_PRIO-1 then we still
-	 * roundrobin the task within the runlist.
-	 */
-	array = current->array;
-	/*
-	 * If the task has reached maximum priority (or is a RT task)
-	 * then just requeue the task to the end of the runqueue:
+	 * #1 - it decreases its priority by one. This priority loss is
+	 *      temporary, it's recovered once the current timeslice
+	 *      expires.
+	 *
+	 * #2 - once it has reached the lowest priority level,
+	 *      it will give up timeslices one by one. (We do not
+	 *      want to give them up all at once, it's gradual,
+	 *      to protect the casual yield()er.)
+	 *
+	 * #3 - once all timeslices are gone we put the process into
+	 *      the expired array.
+	 *
+	 * (special rule: RT tasks do not lose any priority, they just
+	 *  roundrobin on their current priority level.)
 	 */
-	if (likely(current->prio == MAX_PRIO-1 || rt_task(current))) {
+	if (likely(current->prio == MAX_PRIO-1)) {
+		if (current->time_slice <= 1) {
+			dequeue_task(current, rq->active);
+			enqueue_task(current, rq->expired);
+		} else
+			current->time_slice--;
+	} else if (unlikely(rt_task(current))) {
 		list_del(&current->run_list);
 		list_add_tail(&current->run_list, array->queue + current->prio);
 	} else {
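As a usage-level illustration of the special RT rule in the comment above, here is a hedged sketch (not from this patch; the SCHED_FIFO priority of 10 is arbitrary, and the program needs CAP_SYS_NICE or root). For an RT task, sched_yield() only round-robins among runnable tasks at its own priority level; it never drops the task below that level.

/* rt-yield.c - hypothetical illustration, not part of the patch */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 10 };

	/* Switch to SCHED_FIFO: per the special rule, sched_yield() will
	 * not cost this task any priority, it only moves to the tail of
	 * the queue for its own priority level. */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
		perror("sched_setscheduler");
		return 1;
	}

	/* Each call lets other runnable RT tasks at the same priority run
	 * first; lower-priority RT tasks and SCHED_OTHER tasks still do
	 * not get the CPU. */
	for (int i = 0; i < 5; i++)
		sched_yield();

	puts("done yielding");
	return 0;
}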
