| field | value | date |
|---|---|---|
| author | Martin J. Bligh <mbligh@aracnet.com> | 2003-01-15 19:46:10 -0800 |
| committer | Justin T. Gibbs <gibbs@overdrive.btc.adaptec.com> | 2003-01-15 19:46:10 -0800 |
| commit | f01419fd6d4e5b32fef19d206bc3550cc04567a9 (patch) | |
| tree | 333edf330dfc500904580ff4cbeb0fc14d37f79c /include/linux | |
| parent | 5f24fe82613b570ef31c9e62c5921edcd09b576f (diff) | |
[PATCH] (2/3) Initial load balancing
Patch from Michael Hohnbaum
This adds a hook, sched_balance_exec(), to the exec code, so that the
exec'ed task is placed on the least loaded queue. There is less state
to move at exec time than at fork time, which makes exec the cheapest
point for cross-node migration. Experience with Dynix/PTX and testing on Linux
have confirmed that this is the cheapest time to move tasks between nodes.
It also macro-wraps changes to nr_running, to allow us to keep track of
per-node nr_running as well. Again, no impact on non-NUMA machines.
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/sched.h | 8 |
1 files changed, 8 insertions, 0 deletions
```diff
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 931cdf559eb2..15a951d2d27e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -447,6 +447,14 @@ extern void set_cpus_allowed(task_t *p, unsigned long new_mask);
 # define set_cpus_allowed(p, new_mask) do { } while (0)
 #endif
 
+#ifdef CONFIG_NUMA
+extern void sched_balance_exec(void);
+extern void node_nr_running_init(void);
+#else
+#define sched_balance_exec() {}
+#define node_nr_running_init() {}
+#endif
+
 extern void set_user_nice(task_t *p, long nice);
 extern int task_prio(task_t *p);
 extern int task_nice(task_t *p);
```
