path: root/include/linux/workqueue.h
2005-04-16  [PATCH] re-export cancel_rearming_delayed_workqueue  (James Bottomley)
This was unexported by Arjan because we had no current users. However, during a conversion of the parisc LED functions from tasklets to workqueues, we ran across a case where this was needed. In particular, the open-coded equivalent of cancel_rearming_delayed_workqueue was implemented incorrectly, which is, I think, all the evidence necessary that this is a useful API.

Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
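For illustration, a minimal sketch of how a driver with a self-rearming delayed work item might use the re-exported function; led_wq, led_work and led_work_fn are hypothetical names, not from the patch:

	static struct workqueue_struct *led_wq;
	static struct work_struct led_work;

	static void led_work_fn(void *unused)
	{
		/* ... update the LEDs ... */
		queue_delayed_work(led_wq, &led_work, HZ / 10);	/* rearm */
	}

	static void led_shutdown(void)
	{
		/*
		 * Spins until neither the timer nor a queued instance of
		 * the work remains; open-coding this is easy to get wrong,
		 * which is why the export is useful.
		 */
		cancel_rearming_delayed_workqueue(led_wq, &led_work);
	}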
2005-02-14  [WORKQUEUE]: Add cancel_rearming_delayed_work()  (Andrew Morton)
From: Arjan van de Ven <arjan@infradead.org>

cancel_rearming_delayed_workqueue() is only used inside workqueue.c; make this function static (the more useful wrapper around it later in that file remains non-static and exported).

Signed-off-by: Arjan van de Ven <arjan@infradead.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
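As a sketch of the shape the exported wrapper takes (assuming, as in workqueue.c of that era, a file-local keventd_wq pointing at the shared keventd queue):

	/*
	 * The now-static function keeps doing the real work; the exported
	 * wrapper merely fixes the queue argument to keventd's.
	 */
	void cancel_rearming_delayed_work(struct work_struct *work)
	{
		cancel_rearming_delayed_workqueue(keventd_wq, work);
	}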
2004-08-22  [PATCH] Move cache_reap out of timer context  (Dimitri Sivanich)
I'm submitting two patches associated with moving cache_reap functionality out of timer context. Note that these patches do not make any further optimizations to cache_reap at this time.

The first patch adds a function similar to schedule_delayed_work to allow work to be scheduled on another cpu. The second patch makes use of schedule_delayed_work_on to schedule cache_reap to run from keventd.

Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
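A hedged sketch of how the new primitive keeps per-CPU work CPU-local; reap_work and cache_reap_fn are illustrative names, not the patch's exact code:

	static DEFINE_PER_CPU(struct work_struct, reap_work);

	static void cache_reap_fn(void *unused)
	{
		/* ... shrink this CPU's slab caches ... */

		/*
		 * Rearm on the same CPU so the work stays CPU-local and
		 * runs from keventd rather than timer context.
		 */
		schedule_delayed_work_on(smp_processor_id(),
					 &__get_cpu_var(reap_work), HZ);
	}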
2004-05-03  [PATCH] cancel_delayed_work() fix  (Andrew Morton)
cancel_delayed_work() forgets to clear the workqueue's pending flag. This makes the workqueue appear to be permanently busy, so any subsequent attempts to use it will fail.
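A simplified sketch of the shape of the fix; the pending flag is modelled here as bit 0 of work->pending, as in kernels of that era, so treat this as illustration rather than the exact patch:

	static inline int cancel_delayed_work(struct work_struct *work)
	{
		int ret;

		ret = del_timer_sync(&work->timer);
		if (ret)
			/*
			 * The timer was killed before it could queue the
			 * work, so the work never ran: clear the pending
			 * bit by hand or the entry stays "busy" forever.
			 */
			clear_bit(0, &work->pending);
		return ret;
	}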
2004-04-26  [PATCH] create singlethread_workqueue()  (Andrew Morton)
From: Rusty Russell <rusty@rustcorp.com.au>

Workqueues are a great primitive for running things from user context in a completely clean environment. Unfortunately, they currently insist on creating one thread per CPU, which is overkill for many situations, so the more generic keventd workqueue is used for these. Recently deadlocks using keventd were demonstrated, showing that it is not suitable for all uses. A usage sketch of the new single-threaded variant follows this list.

1) Clean up CPU iterators. Always a nice touch.
2) Add __create_workqueue() and create_singlethread_workqueue(), keeping source compatibility.
3) Put workqueues in the workqueue list even if !CONFIG_HOTPLUG_CPU (means we need a lock to protect that list). Now we can tell if a wq is single-threaded using list_empty(&wq->list).
4) For single-threaded workqueues, override the CPU in queue_work, delayed_work_timer_fn and flush_workqueue to be 0. flush_workqueue now does redundant passes for single-threaded workqueues, but the code remains simple.
5) Make create_workqueue_thread return the thread, so we can easily kthread_bind for multi-threaded workqueues.

akpm fixes:
- Fix up is_single_threaded() handling.
- The single-threaded wq thread does not have "/0" appended.
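A minimal usage sketch, assuming the three-argument INIT_WORK() of that era; my_wq, my_work and my_work_fn are illustrative names:

	static struct workqueue_struct *my_wq;
	static struct work_struct my_work;

	static void my_work_fn(void *data)
	{
		/* ... runs on the single worker thread ... */
	}

	static int __init my_init(void)
	{
		/* One thread total, not one per CPU. */
		my_wq = create_singlethread_workqueue("mydrv");
		if (!my_wq)
			return -ENOMEM;

		INIT_WORK(&my_work, my_work_fn, NULL);
		queue_work(my_wq, &my_work);
		return 0;
	}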
2004-02-18  [PATCH] kthread primitive  (Andrew Morton)
From: Rusty Russell <rusty@rustcorp.com.au>

These two patches provide the framework for stopping kernel threads to allow hotplug CPU. This one just adds kthread.c and kthread.h; the next one uses it. Most importantly, adds a Monty Python quote to the kernel.

Details: The hotplug CPU code introduces two major problems:

1) Threads which previously never stopped (migration thread, ksoftirqd, keventd) have to be stopped cleanly as CPUs go offline.
2) Threads which previously never had to be created now have to be created when a CPU goes online.

Unfortunately, stopping a thread is fairly baroque, involving memory barriers, a completion and spinning until the task is actually dead (for example, complete_and_exit() must be used if inside a module).

There are also three problems in starting a thread:

1) Doing it from a random process context risks environment contamination: better to do it from keventd to guarantee a clean environment, a la call_usermodehelper.
2) Getting the task struct without races is hard: see kernel/sched.c migration_call(), kernel/workqueue.c create_workqueue_thread().
3) There are races in starting a thread for a CPU which is not yet online: the migration thread does a complex dance at the moment for a similar reason (there may be no migration thread to migrate us).

Place all this logic in some primitives to make life easier: kthread_create() and kthread_stop(). These primitives require no extra data structures in the caller: they operate on normal "struct task_struct"s. A usage sketch follows this list.

Other changes:

- Expose keventd_up(), as keventd and migration threads will use kthread to launch, and kthread normally uses workqueues and must recognize this case.
- Kthreads created at boot before "keventd" are spawned directly. However, this means that they don't have all signals blocked, and hence can be killed. The simplest solution is to always explicitly block all signals in the kthread.
- Change over the migration threads, the workqueue threads and the ksoftirqd threads to use kthread.
- module.c currently spawns threads directly to stop the machine, so a module can be atomically tested for removal. Unfortunately, this means that the current task is manipulated (which races with set_cpus_allowed, for example), and it can't set its priority artificially high. Using a kernel thread can solve this cleanly, and with kthread_run, it's simple.
- kthreads use keventd, so they inherit its cpus_allowed mask. Unset it. All current users set it explicitly anyway, but it's nice to fix.
- call_usermodehelper uses keventd, so the process created inherits its cpus_allowed mask. Unset it.
- Prevent errors in boot when cpus_possible() contains a cpu which is not online (i.e. a cpu didn't come up). This doesn't happen on x86, since a boot failure makes that CPU no longer possible (hacky, but it works).
- When a cpu fails to come up, some callbacks do kthread_stop(), which doesn't work without keventd (which hasn't started yet). Call it directly, and take care that it restores signal state (note: do_sigaction does a flush on blocked signals, so we don't need to repeat it).
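A hedged sketch of the new primitives in use; my_thread_fn and the thread name are illustrative:

	static struct task_struct *my_task;

	static int my_thread_fn(void *data)
	{
		while (!kthread_should_stop()) {
			/* ... periodic work ... */
			set_current_state(TASK_INTERRUPTIBLE);
			schedule_timeout(HZ);
		}
		return 0;	/* return value is handed to kthread_stop() */
	}

	static int start_my_thread(void)
	{
		/* kthread_run = kthread_create + wake_up_process */
		my_task = kthread_run(my_thread_fn, NULL, "mythread");
		return IS_ERR(my_task) ? PTR_ERR(my_task) : 0;
	}

	static void stop_my_thread(void)
	{
		kthread_stop(my_task);	/* blocks until the thread exits */
	}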
2003-04-14  [PATCH] flush_workqueue() fixes  (Andrew Morton)
The workqueue code currently has a notion of a per-cpu queue being "busy". flush_scheduled_work()'s responsibility is to wait for a queue to be not busy. The problem is, flush_scheduled_work() can easily hang up:

- The workqueue is deemed "busy" when there are pending delayed (timer-based) works. But if someone repeatedly schedules new delayed work in the callback, the queue will never fall idle, and flush_scheduled_work() will not terminate.
- If someone reschedules work (not delayed work) in the work function, that too will cause the queue to never go idle, and flush_scheduled_work() will not terminate.

So what this patch does is:

- Create a new cancel_delayed_work() which will try to kill off any timer-based delayed works.
- Change flush_scheduled_work() so that it is immune to people re-adding work in the work callout handler. We can do this by recognising that the caller does *not* want to wait until the workqueue is "empty". The caller merely wants to wait until all works which were pending at the time flush_scheduled_work() was called have completed. The patch uses a couple of sequence numbers for that.

So now, if someone wants to reliably remove delayed work they should do:

	/*
	 * Make sure that my work-callback will no longer schedule new work
	 */
	my_driver_is_shutting_down = 1;

	/*
	 * Kill off any pending delayed work
	 */
	cancel_delayed_work(&my_work);

	/*
	 * OK, there will be no new works scheduled. But there may be one
	 * currently queued or in progress. So wait for that to complete.
	 */
	flush_scheduled_work();

The patch also changes the flush_workqueue() sleep to be uninterruptible. We cannot legally bail out if a signal is delivered anyway.
2002-11-04  [PATCH] use timer initialiser in workqueues  (Andrew Morton)
Teach DECLARE_WORK about __TIMER_INITIALIZER, so that all statically initialised work structures have valid timers. E.g.: drivers/char/random.c:batch_work.
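Illustration of what the change makes safe; batch_work is the random.c example cited above, while the function name here is assumed:

	/*
	 * The embedded timer is now statically initialised too, so
	 * delayed submission works without a runtime init_timer().
	 */
	static DECLARE_WORK(batch_work, batch_entropy_process, NULL);

	static void kick_batch(void)
	{
		schedule_delayed_work(&batch_work, HZ);
	}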
2002-10-03  Add <linux/linkage.h> include to get FASTCALL() define.  (Linus Torvalds)
2002-10-02  [PATCH] timer-2.5.40-F7  (Ingo Molnar)
This does a number of timer subsystem enhancements:

- Simplified timer initialization; now it's the cheapest possible thing:

	static inline void init_timer(struct timer_list *timer)
	{
		timer->base = NULL;
	}

  Since the timer functions already did a !timer->base check, this did not have any effect on their fastpath.

- The rule from now on is that timer->base is set upon activation of the timer, and cleared upon deactivation. This also made it possible to:

- Reorganize all the timer handling code to not assume anything about timer->entry.next and timer->entry.prev. This also removed lots of unnecessary cleaning of these fields, and removed lots of unnecessary list operations from the fastpath.

- Simplified del_timer_sync(): it now uses del_timer() plus some simple synchronization code. Note that this also fixes a bug: if mod_timer (or add_timer) moves a currently executing timer to another CPU's timer vector, then del_timer_sync() does not synchronize with the handler properly.

- Bugfix: moved run_local_timers() from scheduler_tick() into update_process_times(). scheduler_tick() might be called from the fork code, which will not quite have the intended effect.

- Removed the APIC-timer-IRQ shifting done on SMP; Dipankar Sarma's testing shows no negative effects.

- Cleaned up include/linux/timer.h:
  - removed the timer_t typedef, and fixed up kernel/workqueue.c to use the 'struct timer_list' name instead.
  - removed unnecessary includes.
  - renamed the 'list' field to 'entry' (it's an entry, not a list head).
  - exchanged the 'function' and 'data' fields. This, besides being more logical, also unearthed the last few remaining places that initialized timers by assuming some given field ordering; the patch also fixes these places (fs/xfs/pagebuf/page_buf.c, net/core/profile.c and net/ipv4/inetpeer.c).
  - removed the defunct sync_timers(), timer_enter() and timer_exit() prototypes.
  - added docbook-style comments.

- Other kernel/timer.c changes:
  - base->running_timer does not have to be volatile.
  - added consistent comments to all the important functions.
  - made the sync-waiting in del_timer_sync preempt- and lowpower-friendly.

I've compiled, booted & tested the patched kernel on x86 UP and SMP. I have tried moderately high networking load as well, to make sure the timer changes are correct - they appear to be.
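As a small illustration of why the 'function'/'data' swap unearthed bugs: code that filled timers positionally broke, while explicit field assignment is immune to field order. A sketch with illustrative names:

	static struct timer_list my_timer;

	static void my_timer_fn(unsigned long data)
	{
		/* ... */
	}

	static void arm_my_timer(unsigned long data)
	{
		init_timer(&my_timer);		 /* now just: base = NULL */
		my_timer.function = my_timer_fn; /* explicit fields do not */
		my_timer.data = data;		 /* care about field order  */
		my_timer.expires = jiffies + HZ;
		add_timer(&my_timer);
	}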
2002-09-30  [PATCH] Workqueue Abstraction  (Ingo Molnar)
This is the next iteration of the workqueue abstraction. The framework includes:

- per-CPU queueing support: on SMP there is a per-CPU worker thread (bound to its CPU) and per-CPU work queues. This feature is completely transparent to workqueue users. keventd automatically uses this feature. XFS can now update to work-queues and have the same per-CPU performance as it had with its per-CPU worker threads.

- delayed work submission: there's a new queue_delayed_work(wq, work, delay) function and a new schedule_delayed_work(work, delay) function. The latter one is used to correctly fix former tq_timer users. I've reverted those changes in 2.5.40 that changed tq_timer uses to schedule_work() - e.g. in the case of random.c or the tty flip queue it was definitely the wrong thing to do. Delayed work means a timer embedded in struct work_struct. I considered using split struct work_struct and delayed_work_struct types, but lots of code actively uses task-queues in both delayed and non-delayed mode, so I went for the more generic approach that allows both methods of work submission. Delayed timers do not cause any other overhead in the normal submission path otherwise.

- multithreaded run_workqueue() implementation: the run_workqueue() function can now be called from multiple contexts, and a worker thread will only use up a single entry. This property is used by the flushing code, and can potentially be used in the future to extend the number of per-CPU worker threads.

- more reliable flushing: there's now a 'pending work' counter, which is used to accurately detect when the last work-function has finished execution. It's also used to correctly flush against timed requests. I'm not convinced that the old keventd implementation got this detail right.

- I switched the arguments of the queueing function(s) per Jeff's suggestion; it's more straightforward this way.

Driver fixes: I have converted almost every affected driver to the new framework. This cleaned up tons of code. I also fixed a number of drivers that were still using BHs (these drivers did not compile in 2.5.40). While this means lots of changes, it might ease the QA decision whether to put this patch into 2.5. The patch converts roughly 80% of all tqueue-using code to workqueues - and all the places that are not converted to workqueues yet are places that do not compile in vanilla 2.5.40 anyway, due to unrelated changes. I've converted a fair number of drivers that do not compile in 2.5.40, and I think I've managed to convert every driver that compiles under 2.5.40.
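A sketch of the submission styles introduced here; the names are illustrative, and in real code you would pick one form (only one instance of a given work can be pending at a time):

	static struct workqueue_struct *my_wq;

	static void my_work_fn(void *data)
	{
		/* ... */
	}

	static DECLARE_WORK(my_work, my_work_fn, NULL);

	static void submit_examples(void)
	{
		schedule_work(&my_work);		 /* keventd, ASAP */
		/* or, after the embedded timer expires: */
		schedule_delayed_work(&my_work, HZ / 2); /* keventd, delayed */
		/* or on a private queue: */
		queue_delayed_work(my_wq, &my_work, HZ);
	}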