The workqueue code currently has a notion of a per-cpu queue being "busy".
flush_scheduled_work()'s responsibility is to wait for a queue to be not busy.
The problem is that flush_scheduled_work() can easily hang:
- The workqueue is deemed "busy" when there are pending delayed
(timer-based) works. But if someone repeatedly schedules new delayed work
in the callback, the queue will never fall idle, and flush_scheduled_work()
will not terminate.
- If someone reschedules work (not delayed work) in the work function, that
too will cause the queue to never go idle, and flush_scheduled_work() will
not terminate.
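For example, a work callback along these lines (the names are made up; assume the
work_struct was set up elsewhere with the three-argument INIT_WORK() of this era)
keeps the queue permanently busy, so the old flush_scheduled_work() never returns:

    static struct work_struct poll_work;    /* INIT_WORK(&poll_work, poll_hardware, NULL) at init time */

    static void poll_hardware(void *data)
    {
            /* ... talk to the device ... */

            /* re-arm ourselves: the queue never falls idle */
            schedule_delayed_work(&poll_work, HZ);
    }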
So what this patch does is:
- Create a new "cancel_delayed_work()" which will try to kill off any
timer-based delayed works.
- Change flush_scheduled_work() so that it is immune to people re-adding
work in the work callout handler.
We can do this by recognising that the caller does *not* want to wait
until the workqueue is "empty". The caller merely wants to wait until all
works which were pending at the time flush_scheduled_work() was called have
completed.
The patch uses a couple of sequence numbers for that.
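A minimal sketch of the idea, with field and function names that are illustrative
rather than lifted from the patch: each per-CPU queue keeps an insert sequence and
a remove sequence, and the flush waits only for the works that were queued before
it sampled the insert count.

    struct cpu_workqueue {
            spinlock_t lock;
            long insert_sequence;           /* bumped when a work is queued */
            long remove_sequence;           /* bumped when a work completes */
            wait_queue_head_t work_done;
    };

    static void flush_cpu_workqueue(struct cpu_workqueue *cwq)
    {
            long sequence_needed;

            spin_lock_irq(&cwq->lock);
            sequence_needed = cwq->insert_sequence; /* snapshot "now" */
            spin_unlock_irq(&cwq->lock);

            /* works queued after the snapshot are deliberately ignored */
            wait_event(cwq->work_done,
                       cwq->remove_sequence - sequence_needed >= 0);
    }

In this sketch the worker thread would bump remove_sequence and wake work_done after
each work function returns, so a callback that keeps re-queueing itself can no longer
extend the wait.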
So now, if someone wants to reliably remove delayed work they should do:
    /*
     * Make sure that my work-callback will no longer schedule new work
     */
    my_driver_is_shutting_down = 1;

    /*
     * Kill off any pending delayed work
     */
    cancel_delayed_work(&my_work);

    /*
     * OK, there will be no new works scheduled.  But there may be one
     * currently queued or in progress.  So wait for that to complete.
     */
    flush_scheduled_work();
The patch also changes the flush_workqueue() sleep to be uninterruptible.
We cannot legally bail out if a signal is delivered anyway.
|
|
Add a name argument to daemonize() (printf-style, variadic) to avoid
all the kernel threads having to duplicate the name setting over and
over again.
Make daemonize() disable all signals by default, and add an
"allow_signal()" function to let daemons say they explicitly
want to support a signal.
Make flush_signals() take the signal lock, so that callers do
not need to.
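As a rough illustration of the resulting idiom for a kernel thread (the thread
name and signal choice below are made up):

    static int mydriverd(void *unused)
    {
            daemonize("mydriverd");         /* sets the name; all signals blocked */
            allow_signal(SIGTERM);          /* explicitly opt back in to SIGTERM */

            while (!signal_pending(current)) {
                    /* ... do the driver's periodic work ... */
            }
            return 0;
    }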
|
|
This is required to make the old LinuxThreads semantics work
together with the fixed-for-POSIX full signal sharing. A traditional
CLONE_SIGHAND thread (LinuxThreads) will not see any other shared
signal state, while a new-style CLONE_THREAD thread will share all
of it.
This way the two methods don't confuse each other.
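In clone() flag terms (purely illustrative, not part of the change itself), the
two styles look like this:

    /* old-style LinuxThreads thread: shares signal handlers only */
    clone(fn, stack, CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, arg);

    /* new-style POSIX thread: CLONE_THREAD shares the full signal state */
    clone(fn, stack, CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | CLONE_THREAD, arg);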
|
|
The patch teaches a queue to unplug itself:
a) if it has four requests OR
b) if it has had plugged requests for 3 milliseconds.
These numbers may need to be tuned, although doing so doesn't seem to
make much difference. 10 msecs works OK, so HZ=100 machines will be
fine.
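Roughly, the queue-side logic amounts to the sketch below; the constants and the
field/helper names are illustrative, only the four-request/3-millisecond behaviour
is what the patch actually implements:

    #define UNPLUG_NR_REQUESTS      4                       /* a) unplug at four requests */
    #define UNPLUG_TIMEOUT          ((3 * HZ + 999) / 1000) /* b) ~3ms, rounded up to >= 1 jiffy */

    static void blk_plug_device(request_queue_t *q)
    {
            if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags))
                    mod_timer(&q->unplug_timer, jiffies + UNPLUG_TIMEOUT);
    }

    /* called after a request has been queued */
    static void blk_check_plug(request_queue_t *q)
    {
            if (q->nr_plugged_requests >= UNPLUG_NR_REQUESTS)
                    generic_unplug_device(q);       /* kick only this queue */
    }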
Instrumentation shows that about 5-10% of requests were started due to
the three millisecond timeout (during a kernel compile). That's
somewhat significant. It means that the kernel is leaving stuff in the
queue, plugged, for too long. This testing was with a uniprocessor
preemptible kernel, which is particularly vulnerable to unplug latency
(submit some IO, get preempted before the unplug).
This patch permits the removal of a lot of rather lame unplugging in
page reclaim and in the writeback code, which kicks the queues
(globally!) every four megabytes to get writeback underway.
This patch doesn't use blk_run_queues(). It is able to kick just the
particular queue.
The patch is not really expected to make much difference, except for
AIO. AIO needs a blk_run_queues() in its io_submit() call - for each
request. This means that AIO has to disable plugging altogether,
unless something like this patch does it for it, and it means that AIO
will unplug *all* queues in the machine for every io_submit(). Even
against a socket!
This patch was tested by disabling blk_run_queues() completely. The
system ran OK.
The 3 milliseconds may be too long. It's OK for the heavy writeback
code, but AIO may want less. Or maybe AIO really wants zero (ie:
disable plugging). If that is so, we need new code paths by which AIO
can communicate the "immediate unplug" information - a global unplug is
not good.
To minimise unplug latency due to user CPU load, this patch gives keventd
`nice -10'. This is of course completely arbitrary. Really, I think keventd
should be SCHED_RR/MAX_RT_PRIO-1, as it has been in -aa kernels for ages.
|
|
This does a number of timer subsystem enhancements:
- simplified timer initialization, now it's the cheapest possible thing:
    static inline void init_timer(struct timer_list *timer)
    {
            timer->base = NULL;
    }
since the timer functions already did a !timer->base check, this did not
have any effect on their fastpath.
- the rule from now on is that timer->base is set upon activation of the
timer, and cleared upon deactivation. This also made it possible to:
- reorganize all the timer handling code to not assume anything about
timer->entry.next and timer->entry.prev - this also removed lots of
unnecessary cleaning of these fields. Removed lots of unnecessary list
operations from the fastpath.
- simplified del_timer_sync(): it now uses del_timer() plus some simple
synchronization code (see the sketch after this list). Note that this also
fixes a bug: if mod_timer() (or add_timer()) moved a currently executing
timer to another CPU's timer vector, del_timer_sync() did not synchronize
with the handler properly.
- bugfix: moved run_local_timers() from scheduler_tick() into
update_process_times() - scheduler_tick() might be called from the fork
code, where running the timers would not quite have the intended effect.
- removed the APIC-timer-IRQ shifting done on SMP, Dipankar Sarma's
testing shows no negative effects.
- cleaned up include/linux/timer.h:
- removed the timer_t typedef, and fixed up kernel/workqueue.c to use
the 'struct timer_list' name instead.
- removed unnecessary includes
- renamed the 'list' field to 'entry' (it's an entry not a list head)
- exchanged the 'function' and 'data' fields. This, besides being
more logical, also unearthed the last few remaining places that
initialized timers by assuming some given field ordering; the patch
also fixes these places. (fs/xfs/pagebuf/page_buf.c,
net/core/profile.c and net/ipv4/inetpeer.c)
- removed the defunct sync_timers(), timer_enter() and timer_exit()
prototypes.
- added docbook-style comments.
- other kernel/timer.c changes:
- base->running_timer does not have to be volatile ...
- added consistent comments to all the important functions.
- made the sync-waiting in del_timer_sync preempt- and lowpower-
friendly.
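The del_timer_sync() simplification mentioned above boils down to roughly the
following; timer_running_elsewhere() is a stand-in name for the real "is the
handler still executing on some CPU's timer base?" check:

    int del_timer_sync(struct timer_list *timer)
    {
            int ret = del_timer(timer);     /* detach it if still pending */

            /* wait for a handler that is already running on another CPU */
            while (timer_running_elsewhere(timer))
                    cpu_relax();            /* preempt- and lowpower-friendly */

            return ret;
    }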
I've compiled, booted & tested the patched kernel on x86 UP and SMP. I
have tried moderately high networking load as well, to make sure the timer
changes are correct - they appear to be.
|
|
This is the next iteration of the workqueue abstraction.
The framework includes:
- per-CPU queueing support.
on SMP there is a per-CPU worker thread (bound to its CPU) and per-CPU
work queues - this feature is completely transparent to workqueue-users.
keventd automatically uses this feature. XFS can now switch to workqueues
and get the same per-CPU performance it had with its own per-CPU worker
threads.
- delayed work submission
there's a new queue_delayed_work(wq, work, delay) function and a new
schedule_delayed_work(work, delay) function (a usage sketch follows this
list). The latter one is used to correctly fix former tq_timer users. I've
reverted those changes in 2.5.40 that changed tq_timer uses to
schedule_work() - eg. in the case of random.c or the tty flip queue it
was definitely the wrong thing to do.
delayed work means a timer embedded in struct work_struct. I considered
using split struct work_struct and delayed_work_struct types, but lots
of code actively uses task-queues in both delayed and non-delayed mode,
so I went for the more generic approach that allows both methods of work
submission. The embedded timer adds no overhead to the normal,
non-delayed submission path.
- multithreaded run_workqueue() implementation
the run_workqueue() function can now be called from multiple contexts, and
a worker thread will only use up a single entry - this property is used
by the flushing code, and can potentially be used in the future to extend
the number of per-CPU worker threads.
- more reliable flushing
there's now a 'pending work' counter, which is used to accurately detect
when the last work-function has finished execution. It's also used to
correctly flush against timed requests. I'm not convinced the old
keventd implementation got this detail right.
- I switched the arguments of the queueing function(s) per Jeff's
suggestion; it's more straightforward this way.
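As a usage sketch of the delayed submission - a made-up example in the spirit
of the old tq_timer users mentioned above, assuming the three-argument
INIT_WORK() form for setup:

    static struct work_struct flip_work;    /* INIT_WORK(&flip_work, flush_flip_buffer, port) at setup */

    static void flush_flip_buffer(void *port)
    {
            /* ... push the buffered characters up to the line discipline ... */
    }

    static void kick_flip(void)
    {
            /* run flush_flip_buffer() about one tick from now, as tq_timer used to */
            schedule_delayed_work(&flip_work, 1);
    }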
Driver fixes:
I have converted almost every affected driver to the new framework. This
cleaned up tons of code. I also fixed a number of drivers that were still
using BHs (these drivers did not compile in 2.5.40).
While this means lots of changes, it might ease the QA decision whether to
put this patch into 2.5.
The patch converts roughly 80% of all tqueue-using code to workqueues - and
all the places that are not converted to workqueues yet are places that do
not compile in vanilla 2.5.40 anyway, due to unrelated changes. I've
converted a fair number of drivers that do not compile in 2.5.40, and I
think I've managed to convert every driver that compiles under 2.5.40.
|