into tp1.ruhr-uni-bochum.de:/home/kai/kernel/v2.5/linux-2.5.make
|
|
this is only for the module-related warning introduced by my
__deprecated patch.
|
|
Add a const declaration to __module_param_call so the __param section
gets more correct attributes.
|
|
Rather than have the module loader allocate the module structure and
resolve the symbol __this_module to it, make __this_module a real
structure inside the module, using the linkonce trick we used for
module names.
This saves us an allocation (saving a page per module on
archs which need the module structure close by), and means we don't
have to fill in a few module fields.
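A rough sketch of the idea (hypothetical: the section name and the initializer
below just follow the description above, not the actual 2.5 header):

  /* Hypothetical sketch only.  Each module defines its own struct module in a
   * .gnu.linkonce section, so the linker keeps a single copy and THIS_MODULE
   * can point at it instead of loader-allocated memory. */
  #include <linux/module.h>

  struct module __this_module
  __attribute__((section(".gnu.linkonce.this_module"))) = {
          .name = KBUILD_MODNAME, /* assumes KBUILD_MODNAME expands to a string */
  };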
|
|
A second start at removing them from kernel/*.c and fs/*.c.
Note that module_put is fine for a NULL argument.
|
|
Rename the deprecated attribute to __deprecated to make it obvious
this is something special and to avoid namespace clashes.
Mark old module interfaces deprecated.
|
|
This corrects the misspellings of "deprecated" in a few places.
|
|
This moves ramfs_getattr() to fs/libfs.c as simple_getattr()
|
|
This marks check_region "deprecated".
This gives nice warning messages for programs that still use
check_region, for example:
drivers/parport/parport_pc.c:2215: warning: `__check_region' is deprecated (declared at include/linux/ioport.h:111)
|
|
This patch adds support for using the "deprecated" attribute and
is backward-compatible. Usage is:
int deprecated foo(void)
etc..
If we mark a function as deprecated, then each use of the function emits
a warning like:
foo.c:12: warning: `baz' is deprecated (declared at bar.c:60)
This is very informative, giving both the location of each usage and
where the little bastard is declared.
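A minimal sketch of how such an attribute is typically wired up and used (the
version guard, macro name, and the functions below are illustrative, not
necessarily the kernel's exact definitions):

  /* Only gcc 3.1+ knows the attribute; older compilers get an empty macro,
   * which is what keeps this backward-compatible. */
  #if defined(__GNUC__) && (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 1))
  #define __deprecated    __attribute__((deprecated))
  #else
  #define __deprecated
  #endif

  /* Hypothetical old interface marked deprecated: */
  int __deprecated old_register_thing(int id);
  int old_register_thing(int id) { return id; }

  int use_it(void)
  {
          return old_register_thing(1);   /* warning: `old_register_thing' is deprecated */
  }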
|
|
into home.transmeta.com:/home/torvalds/v2.5/linux
|
|
Mac/m68k Nubus updates (from Ray Knight in 2.4.x)
- Add missing Nubus devices.
|
|
into home.transmeta.com:/home/torvalds/v2.5/linux
|
|
usb_device
|
|
This was done to make the next reference count patch easier,
and because almost everyone was already calling usb_put_dev() anyway...
|
|
into kroah.com:/home/linux/linux/BK/gregkh-2.5
|
|
shouldn't be used anymore
Also added usb_get_intfdata() and usb_set_intfdata() functions to make it
easier to get and set the struct usb_interface private pointer.
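For illustration, a driver might use the new helpers roughly like this (the
state structure and function names here are hypothetical):

  #include <linux/usb.h>
  #include <linux/slab.h>
  #include <linux/errno.h>

  /* Hypothetical per-interface driver state. */
  struct my_usb_state {
          int opens;
  };

  static int my_bind_interface(struct usb_interface *intf)
  {
          struct my_usb_state *st = kmalloc(sizeof(*st), GFP_KERNEL);

          if (!st)
                  return -ENOMEM;
          st->opens = 0;
          usb_set_intfdata(intf, st);     /* stash the private pointer */
          return 0;
  }

  static void my_unbind_interface(struct usb_interface *intf)
  {
          struct my_usb_state *st = usb_get_intfdata(intf);

          usb_set_intfdata(intf, NULL);
          kfree(st);
  }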
|
|
Attached is a patch leveraging some of the new generic dma stuff:
- Replaces dma mapping calls in usbcore with generic equivalents.
This is a minor code shrink (which we'd hoped could happen).
- Pass dma mask along, so net drivers can notice it'd be good to
set NETIF_F_HIGHDMA; or scsi ones can set highmem_io. (Some
Intel EHCI setups are able to support this kind of DMA.)
- Updates one net driver (usbnet) to set NETIF_F_HIGHDMA when
appropriate, mostly as an example (since I can't test this).
- Provides Documentation/usb/dma.txt, describing current APIs.
(Unchanged by this patch, except dma mask visibility.)
- Converted another info() call to dev_info(), and likewise a couple of
dbg() calls to dev_dbg() in the modified routine.
The number of FIXMEs was conserved: the generic API doesn't yet
fix the error reporting bugs in the PCI-specific mapping API.
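To illustrate the dma-mask point in the usbnet item above, a network driver
could do something along these lines (hypothetical sketch; the test usbnet
actually uses is not shown in this log):

  #include <linux/netdevice.h>
  #include <linux/device.h>

  /* Hypothetical heuristic: only advertise highmem DMA when the controller's
   * mask says the hardware can reach any page we might hand it. */
  static void maybe_enable_highdma(struct net_device *net, struct device *ctrl)
  {
          if (ctrl->dma_mask && *ctrl->dma_mask == ~0ULL)
                  net->features |= NETIF_F_HIGHDMA;
  }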
|
|
into tp1.ruhr-uni-bochum.de:/home/kai/kernel/v2.5/linux-2.5.make
|
|
Add support for amd756 and amd8111 sensors
|
|
This patch implements simple stem compression for the kallsyms symbol
table. Each symbol's first byte is a count of how many characters are
identical to the previous symbol. This compresses the common
repetitive prefixes (like subsys_) fairly effectively.
On a fairly full featured monolithic i386 kernel this saves about 60k in
the kallsyms symbol table.
The changes are very simple, so the 60k are not shabby.
One visible change is that the caller of kallsyms_lookup has to pass in
a buffer now, because the name has to be expanded into it. I added an
arbitrary 127-character limit to it.
Still >210k left in the symbol table, unfortunately. Another idea would be to
delta-encode the addresses in 16 bits (functions are all likely to be smaller
than 64K). This would especially help on 64-bit hosts. Not done yet, however.
No, before someone asks, I don't want to use zlib for that. It is far too
fragile during an oops, overkill as well, and it would require linking it into
all kernels.
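The scheme is easy to picture in plain C (illustrative sketch only; the names
and table layout below are hypothetical, not the actual kallsyms format):

  #include <stdio.h>
  #include <string.h>

  /* Store one byte saying how many leading characters the symbol shares with
   * the previous one, followed by only the differing tail. */
  static size_t compress_symbol(const char *prev, const char *cur,
                                unsigned char *out)
  {
          size_t common = 0;

          while (common < 255 && prev[common] && prev[common] == cur[common])
                  common++;

          out[0] = (unsigned char)common;
          strcpy((char *)out + 1, cur + common);
          return 1 + strlen(cur + common) + 1;    /* count byte + tail + NUL */
  }

  /* Rebuild the full name: shared prefix from the previous symbol plus the
   * stored tail.  This is why kallsyms_lookup() now needs a caller-supplied
   * buffer -- the name has to be reassembled somewhere. */
  static void expand_symbol(const unsigned char *entry, const char *prev,
                            char *buf, size_t buflen)
  {
          size_t common = entry[0];

          memcpy(buf, prev, common);
          strncpy(buf + common, (const char *)entry + 1, buflen - common - 1);
          buf[buflen - 1] = '\0';
  }

  int main(void)
  {
          unsigned char entry[300];
          char buf[128];

          compress_symbol("subsys_initcall", "subsys_remove", entry);
          expand_symbol(entry, "subsys_initcall", buf, sizeof(buf));
          printf("%d common, expanded to \"%s\"\n", entry[0], buf);
          return 0;
  }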
|
|
into nuts.ninka.net:/home/davem/src/BK/net-2.5
|
|
into nuts.ninka.net:/home/davem/src/BK/net-2.5
|
|
into nuts.ninka.net:/home/davem/src/BK/sctp-2.5
|
|
into home.transmeta.com:/home/torvalds/v2.5/linux
|
|
use a #include mechanism for generic implementations of the pci_
API in terms of the dma_ one
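Roughly, such a compatibility header looks like this (sketch only; the real
header, its includes, and the exact set of wrappers may differ):

  #include <linux/pci.h>
  #include <linux/dma-mapping.h>

  /* The pci_* mapping calls become thin inline wrappers around the dma_*
   * ones; an architecture just #includes this instead of carrying its own
   * pci_map_* implementation. */
  static inline dma_addr_t
  pci_map_single(struct pci_dev *hwdev, void *ptr, size_t size, int direction)
  {
          return dma_map_single(hwdev ? &hwdev->dev : NULL, ptr, size,
                                (enum dma_data_direction)direction);
  }

  static inline void
  pci_unmap_single(struct pci_dev *hwdev, dma_addr_t addr, size_t size,
                   int direction)
  {
          dma_unmap_single(hwdev ? &hwdev->dev : NULL, addr, size,
                           (enum dma_data_direction)direction);
  }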
|
|
into kroah.com:/home/linux/linux/BK/gregkh-2.5
|
|
add dma_ API to mirror pci_ DMA API but phrased to use struct
device instead of struct pci_dev.
See Documentation/DMA-API.txt for details
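For illustration, a driver holding any struct device can map a buffer with the
bus-neutral spelling (sketch; the helper names are hypothetical and 'dev'
would typically be something like &pdev->dev):

  #include <linux/dma-mapping.h>

  static dma_addr_t map_tx_buffer(struct device *dev, void *buf, size_t len)
  {
          return dma_map_single(dev, buf, len, DMA_TO_DEVICE);
  }

  static void unmap_tx_buffer(struct device *dev, dma_addr_t handle, size_t len)
  {
          dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
  }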
|
|
Patch from Christoph Hellwig <hch@lst.de>
remove unused macro MAP_ALIGN()
|
|
From hch. Nothing is using the memclass() predicate.
|
|
- Add some (much-needed) commentary to the ext2/ext3 block allocator
state fields.
- Remove the SEARCH_FROM_ZERO debug code. I wrote that to trigger
some race and it hasn't been used in a year.
|
|
The `low latency page reclaim' design works by preventing page
allocators from blocking on request queues (and by preventing them from
blocking against writeback of individual pages, but that is immaterial
here).
This has a problem under some situations. pdflush (or a write(2)
caller) could be saturating the queue with highmem pages. This
prevents anyone from writing back ZONE_NORMAL pages. We end up doing
enormous amounts of scanning.
A test case is to mmap(MAP_SHARED) almost all of a 4G machine's memory,
then kill the mmapping applications. The machine instantly goes from
0% of memory dirty to 95% or more. pdflush kicks in and starts writing
the least-recently-dirtied pages, which are all highmem. The queue is
congested so nobody will write back ZONE_NORMAL pages. kswapd chews
50% of the CPU scanning past dirty ZONE_NORMAL pages and page reclaim
efficiency (pages_reclaimed/pages_scanned) falls to 2%.
So this patch changes the policy for kswapd. kswapd may use all of a
request queue, and is prepared to block on request queues.
What will now happen in the above scenario is:
1: The page allocator scans some pages, fails to reclaim enough
memory and takes a nap in blk_congestion_wait().
2: kswapd() will scan the ZONE_NORMAL LRU and will start writing
back pages. (These pages will be rotated to the tail of the
inactive list at IO-completion interrupt time).
This writeback will saturate the queue with ZONE_NORMAL pages.
Conveniently, pdflush will avoid the congested queues. So we end up
writing the correct pages.
In this test, kswapd CPU utilisation falls from 50% to 2%, page reclaim
efficiency rises from 2% to 40% and things are generally a lot happier.
The downside is that kswapd may now do a lot less page reclaim,
increasing page allocation latency, causing more direct reclaim,
increasing lock contention in the VM, etc. But I have not been able to
demonstrate that in testing.
The other problem is that there is only one kswapd, and there are lots
of disks. That is a generic problem - without being able to co-opt
user processes we don't have enough threads to keep lots of disks saturated.
One fix for this would be to add an additional "really congested"
threshold in the request queues, so kswapd can still perform
nonblocking writeout. This gives kswapd priority over pdflush while
allowing kswapd to feed many disk queues. I doubt if this will be
called for.
|
|
We keep getting in a mess with the current->flags setting and
unsetting.
Remove current->flags:PF_NOWARN and create __GFP_NOWARN instead.
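For illustration, a caller that previously had to toggle PF_NOWARN around an
allocation now just passes the flag in the gfp mask (the function below is
hypothetical):

  #include <linux/gfp.h>

  /* An allocation that is allowed to fail quietly, without the usual
   * page-allocation-failure warning. */
  static struct page *alloc_pages_quietly(unsigned int order)
  {
          return alloc_pages(GFP_KERNEL | __GFP_NOWARN, order);
  }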
|
|
Description:
Everywhere the NFS client uses the req_offset() function today, it adds
req->wb_offset to the result. This patch simply makes "+req->wb_offset"
a part of the req_offset() function.
Test status:
Passes all Connectathon '02 tests with v2, v3, UDP and TCP. Passes
NFS torture tests on an x86 UP highmem system.
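A hypothetical before/after sketch of that change (the structure below is
illustrative only, not the real struct nfs_page layout):

  #include <linux/types.h>

  /* Illustrative stand-in for a request: a page-aligned start plus the offset
   * of the request within that page. */
  struct nfs_req_example {
          loff_t          page_start;
          unsigned int    wb_offset;
  };

  /* After the patch the helper returns the full file offset directly ... */
  static inline loff_t req_offset(struct nfs_req_example *req)
  {
          return req->page_start + req->wb_offset;
  }
  /* ... so call sites drop their trailing "+ req->wb_offset". */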
|
|
These are used for the new in-kernel module loader (actually not all the
relocation types are used right now, but are included for completeness).
Only the EM_CYGNUS_V850 macro, which is in a global namespace, is added
to <linux/elf.h>; the relocation types, which are private to the v850,
are added to <asm-v850/elf.h>. [Perhaps some other archs can do a
similar split, to reduce the bloat in <linux/elf.h>]
|
|
Fix task->cpus_allowed bitmask truncations on 64-bit architectures.
Originally by Bjorn Helgaas for 2.4.x.
|
|
into home.transmeta.com:/home/torvalds/v2.5/linux
|
|
major changes to actually fit.
SGI Modid: 2.5.x-xfs:slinx:132210a
|
|
pr_debug() is defined to print using KERN_DEBUG already,
so uses of it don't need to repeat KERN_DEBUG.
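For reference, when DEBUG is defined pr_debug() expands roughly to
printk(KERN_DEBUG fmt, ...), so a call needs no level string of its own (the
caller below is hypothetical):

  #include <linux/kernel.h>

  static void report_reset(int id)
  {
          pr_debug("resetting device %d\n", id);          /* sufficient */
          /* pr_debug(KERN_DEBUG "resetting device %d\n", id);  -- redundant level */
  }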
|
|
Retrieve the post-operation attribute changes for NFSv4 READ and
WRITE operations. Unlike for NFSv2 and NFSv3, we do not retrieve the
full set of file attributes. The main reason for this is that
interpreting attributes is a much heavier task on NFSv4 (requiring, for
instance, translation of file owner names into uids ...). Hence
For a READ request, we retrieve only the 'change attribute' (for cache
consistency checking) and the atime.
For a WRITE request, we retrieve the 'change attribute' and the file size.
In addition, we retrieve the value of the change attribute prior to the
write operation, in order to be able to do weak cache consistency checking.
|
|
The following patch creates a clean XDR path for the NFSv4 write requests
instead of routing through encode_compound()/decode_compound().
|