path: root/include/linux/resource.h
2006-06-25  [PATCH] kernel/sys.c: cleanups  (Adrian Bunk)
- proper prototypes for the following functions:
  - ctrl_alt_del() (in include/linux/reboot.h)
  - getrusage() (in include/linux/resource.h)
- make the following needlessly global functions static:
  - kernel_restart_prepare()
  - kernel_kexec()

[akpm@osdl.org: compile fix]
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
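For reference, a sketch of how the added declarations might look; the exact kernel signatures live in the headers named above and may differ between versions (the parameter names and the __user annotation here are assumptions, not verbatim from the patch):

    /* include/linux/reboot.h (sketch) */
    extern void ctrl_alt_del(void);

    /* include/linux/resource.h (sketch) */
    struct task_struct;
    struct rusage;
    int getrusage(struct task_struct *p, int who, struct rusage __user *ru);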
2004-09-07  [PATCH] Remove RUSAGE_GROUP  (Roland McGrath)
After my cleanup of the rusage semantics was so quickly taken in by Andrew and Linus without comment, I wonder if I should not have tried to be so accommodating of potential objections as I was. :-)

In my original posting, I solicited comment on whether introducing RUSAGE_GROUP as distinct from RUSAGE_SELF was warranted. Note that we've now changed the behavior of the times system call when using CLONE_THREAD, so changing getrusage RUSAGE_SELF to match would be consistent. I think that changing the meaning of the old RUSAGE_SELF value is preferable to introducing the new value for the proper POSIX getrusage behavior. This patch against Linus's current tree dumps RUSAGE_GROUP and makes RUSAGE_SELF have the fixed behavior.

If there is interest in having a new explicit interface to sample a single thread's stats alone, then I think that would be better done by introducing a new value for RUSAGE_THREAD. This is trivial to implement, but I won't offer patches bloating the interface if no one is actually interested in using it.

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
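A userspace sketch (not part of the patch) of the resulting semantics: with RUSAGE_GROUP gone, a plain getrusage(RUSAGE_SELF, ...) in a threaded program reports CPU time for the whole thread group, so work done by a helper thread shows up in the result. Build with -pthread; the loop bound is arbitrary.

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/resource.h>

    static void *burn(void *arg)
    {
            volatile unsigned long i;

            for (i = 0; i < 200000000UL; i++)
                    ;                       /* consume some user CPU time */
            return NULL;
    }

    int main(void)
    {
            pthread_t t;
            struct rusage ru;

            pthread_create(&t, NULL, burn, NULL);
            pthread_join(t, NULL);

            getrusage(RUSAGE_SELF, &ru);    /* covers all threads in the group */
            printf("user CPU: %ld.%06ld s\n",
                   (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
            return 0;
    }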
2004-08-30  [PATCH] fix rusage semantics  (Roland McGrath)
This patch changes the rusage bookkeeping and the semantics of the getrusage and times calls in a couple of ways.

The first change is in the c* fields counting dead child processes. POSIX requires that children that have died be counted in these fields when they are reaped by a wait* call, and that if they are never reaped (e.g. because of ignoring SIGCHLD, or exiting yourself first) then they are never counted. These were counted in release_task for all threads. I've changed it so they are counted in wait_task_zombie, i.e. exactly when being reaped.

POSIX also specifies for RUSAGE_CHILDREN that the report include the reaped child processes of the calling process, i.e. the whole thread group in Linux, not just ones forked by the calling thread. POSIX specifies the tms_c[us]time fields in the times call the same way. I've moved the c* fields that contain this information into signal_struct, where the single set of counters accumulates data from any thread in the group that calls wait*.

Finally, POSIX specifies getrusage and times as returning cumulative totals for the whole process (aka thread group), not just the calling thread. I've added fields in signal_struct to accumulate the stats of detached threads as they die. The process stats are the sums of these records plus the stats of each remaining live/zombie thread. The times and getrusage calls, and the internal uses for filling in wait4 results and siginfo_t, now iterate over the threads in the thread group and sum up their stats along with the stats recorded for threads already dead and gone.

I added a new value RUSAGE_GROUP (-3) for the getrusage system call rather than changing the behavior of the old RUSAGE_SELF (0). POSIX specifies RUSAGE_SELF to mean all threads, so the glibc getrusage call will just translate it to RUSAGE_GROUP for new kernels. I did this thinking that someone somewhere might want the old behavior with an old glibc and a new kernel (it is only different if they are using CLONE_THREAD anyway). However, I've changed the times system call to conform to POSIX as well and did not provide any backward compatibility there. In that case there is nothing easy like a parameter value to use; it would have to be a new system call number. That seems pretty pointless. Given that, I wonder if it is worth bothering to preserve the compatible RUSAGE_SELF behavior by introducing RUSAGE_GROUP instead of just changing RUSAGE_SELF's meaning. Comments?

I've done some basic testing on x86 and x86-64, and all the numbers come out right after these fixes. (I have a test program that shows a few

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
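A userspace sketch (not part of the patch) of the RUSAGE_CHILDREN change described above: a dead child's CPU time shows up in the parent's RUSAGE_CHILDREN totals only once the parent reaps it with wait(). The sleep duration and loop bound are arbitrary.

    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
            struct rusage ru;
            pid_t pid = fork();

            if (pid == 0) {                         /* child: burn some CPU, then exit */
                    volatile unsigned long i;

                    for (i = 0; i < 200000000UL; i++)
                            ;
                    _exit(0);
            }

            sleep(5);                               /* child has likely exited by now... */
            getrusage(RUSAGE_CHILDREN, &ru);        /* ...but has not yet been reaped */
            printf("before wait: %ld s user\n", (long)ru.ru_utime.tv_sec);

            wait(NULL);                             /* reap the zombie */
            getrusage(RUSAGE_CHILDREN, &ru);        /* now its CPU time is counted */
            printf("after wait:  %ld s user\n", (long)ru.ru_utime.tv_sec);
            return 0;
    }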
2004-08-26  [PATCH] sane mlock_limit  (Andrew Morton)
As David M-T points out, the default per-user mlock limit should be at least a single page.

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2004-08-22  [PATCH] increase per-user mlock limit default to 32k  (Rik van Riel)
Since various gnupg users have indicated that gpg wants to mlock 32kB of memory, I created the patch below that increases the default mlock ulimit to 32kB.

This is no security problem because it's trivial for processes to lock way more memory than this in page tables, network buffers, etc. In fact, since this patch allows gnupg to mlock to prevent passphrase data from being swapped out, the security people will probably like it ;) This gets the new per-user mlock limit a bit more testing, too.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
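A userspace sketch (not from the patch) of the use case: querying RLIMIT_MEMLOCK and locking a 32 kB buffer, roughly what gpg does to keep passphrase material out of swap. The buffer size and page alignment here are illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    int main(void)
    {
            struct rlimit rl;
            size_t len = 32 * 1024;
            void *buf;

            getrlimit(RLIMIT_MEMLOCK, &rl);
            printf("RLIMIT_MEMLOCK soft limit: %lu bytes\n",
                   (unsigned long)rl.rlim_cur);

            if (posix_memalign(&buf, 4096, len) != 0)
                    return 1;
            if (mlock(buf, len) != 0) {             /* pin the pages so they never hit swap */
                    perror("mlock");
                    return 1;
            }
            memset(buf, 0, len);                    /* secrets would live here */
            munlock(buf, len);
            free(buf);
            return 0;
    }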
2002-11-07  [PATCH] move _STK_LIM to <linux/resource.h>  (Tim Schmielau)
I don't see any connection between the stack limit and scheduling, so I think _STK_LIM is better defined in <linux/resource.h> than in <linux/sched.h>. The only place _STK_LIM is used is in <asm/resource.h>, which only gets included by <linux/resource.h>, so no change in #includes is necessary.
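A userspace sketch (not part of the patch) showing the limit in question: _STK_LIM only supplies the kernel's default for RLIMIT_STACK (commonly 8 MiB, though that value is an assumption here), which a process can inspect with getrlimit().

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
            struct rlimit rl;

            if (getrlimit(RLIMIT_STACK, &rl) != 0)
                    return 1;
            printf("stack soft limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);
            printf("stack hard limit: %llu bytes\n", (unsigned long long)rl.rlim_max);
            return 0;
    }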
2002-02-04  Import changeset  (Linus Torvalds)