path: root/include/linux/blkdev.h
Age    Commit message    Author
2006-01-24[BLOCK] ll_rw_blk: make max_sectors and max_hw_sectors unsigned intsJens Axboe
IDE lba48 can support full 64k request size, which overflows the max_hw_sectors variable. Signed-off-by: Jens Axboe <axboe@suse.de>
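For context: lba48 allows a full 64k (65536) sectors per command, which does not fit in an unsigned short. A tiny standalone C illustration of the truncation the type change avoids:

    #include <stdio.h>

    int main(void)
    {
        /* lba48 allows a full 64k (65536) sectors per command */
        unsigned int   sectors = 65536;
        unsigned short old_max = (unsigned short)sectors;   /* truncates to 0 */
        unsigned int   new_max = sectors;                    /* fits */

        printf("old unsigned short limit: %u, new unsigned int limit: %u\n",
               old_max, new_max);
        return 0;
    }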
2006-01-09Merge branch 'blk-softirq' of git://brick.kernel.dk/data/git/linux-2.6-blockLinus Torvalds
Manual merge for trivial #include changes
2006-01-09[BLOCK] ll_rw_blk: Enable out-of-order request completions through softirqJens Axboe
Request completion can be a quite heavy process, since it needs to iterate through the entire request and complete the bio's it holds. This patch adds blk_complete_request() which moves this processing into a dedicated block softirq. Signed-off-by: Jens Axboe <axboe@suse.de>
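A rough sketch of how a driver might use the new deferred completion path; blk_queue_softirq_done() and the callback shape are taken from this series but should be treated as assumptions rather than a verified excerpt:

    #include <linux/blkdev.h>
    #include <linux/interrupt.h>

    /* runs in softirq context: do the heavy per-bio completion work here */
    static void mydrv_softirq_done(struct request *rq)
    {
        end_that_request_first(rq, 1, rq->hard_nr_sectors);
        end_that_request_last(rq, 1);
    }

    /* at init time, register the softirq completion handler for the queue */
    static void mydrv_setup_queue(request_queue_t *q)
    {
        blk_queue_softirq_done(q, mydrv_softirq_done);
    }

    /* hard irq handler: just mark the request complete and return quickly */
    static irqreturn_t mydrv_isr(int irq, void *dev_id, struct pt_regs *regs)
    {
        struct request *rq = dev_id;    /* hypothetical driver-specific lookup */

        blk_complete_request(rq);
        return IRQ_HANDLED;
    }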
2006-01-09[BLOCK] Kill blk_attempt_remerge()Jens Axboe
It's a broken interface, it's done way too late. And apparently it triggers slab problems in recent kernels as well (most likely after the generic dispatch code was merged). So kill it, ide-cd is the only user of it. Signed-off-by: Jens Axboe <axboe@suse.de>
2006-01-06[BLOCK] Correct blk_execute_rq_nowait() prototypeJens Axboe
2006-01-06[BLOCK] reimplement handling of barrier requestTejun Heo
Reimplement handling of barrier requests. * Flexible handling to deal with various capabilities of target devices. * Retry support for falling back. * Tagged queues which don't support ordered tag can do ordered. Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Jens Axboe <axboe@suse.de>
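A hedged sketch of how a driver with a write cache might opt in to the reworked ordered support; the QUEUE_ORDERED_DRAIN_FLUSH mode and the prepare-flush callback shape follow this patch's description, and the command bytes are hypothetical:

    #include <linux/blkdev.h>
    #include <linux/string.h>

    /* fill in the pre/post-flush request for a device with a write cache */
    static void mydrv_prepare_flush(request_queue_t *q, struct request *rq)
    {
        memset(rq->cmd, 0, sizeof(rq->cmd));
        rq->cmd[0]  = 0x35;             /* hypothetical: SYNCHRONIZE CACHE */
        rq->cmd_len = 10;
        rq->flags  |= REQ_BLOCK_PC;
    }

    static void mydrv_setup_ordered(request_queue_t *q)
    {
        /* drain the queue and bracket the barrier with flush requests */
        blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH, mydrv_prepare_flush);
    }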
2006-01-06[BLOCK] add @uptodate to end_that_request_last() and @error to rq_end_io_fn()Tejun Heo
Add an @uptodate argument to end_that_request_last() and @error to rq_end_io_fn(). There's no generic way to pass an error code to the request completion function, making generic error handling of non-fs requests difficult (rq->errors is driver-specific and each driver uses it differently). This patch adds @uptodate to end_that_request_last() and @error to rq_end_io_fn(). For fs requests, this doesn't really matter, so just using the same uptodate argument used in the last call to end_that_request_first() should suffice. IMHO, this can also help the generic command-carrying request Jens is working on. Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Jens Axboe <axboe@suse.de>
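A small sketch of a driver completion path after this change, carrying the same uptodate value into end_that_request_last(); assumes the caller holds the queue lock:

    #include <linux/blkdev.h>

    /* complete nr_sectors of rq; uptodate is 1 on success, 0 on failure,
     * mirroring the value passed to end_that_request_first() */
    static void mydrv_end_request(struct request *rq, int uptodate, int nr_sectors)
    {
        if (!end_that_request_first(rq, uptodate, nr_sectors)) {
            blkdev_dequeue_request(rq);
            end_that_request_last(rq, uptodate);
        }
    }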
2005-12-15[SCSI] separate max_sectors from max_hw_sectorsMike Christie
- export __blk_put_request and blk_execute_rq_nowait needed for async REQ_BLOCK_PC requests - separate max_hw_sectors and max_sectors for block/scsi_ioctl.c and SG_IO bio.c helpers per Jens's last comments. Since block/scsi_ioctl.c SG_IO was already testing against max_sectors and SCSI-ml was setting max_sectors and max_hw_sectors to the same value this does not change any scsi SG_IO behavior. It only prepares ll_rw_blk.c, scsi_ioctl.c and bio.c for when SCSI-ml begins to set a valid max_hw_sectors for all LLDs. Today if an LLD does not set it, SCSI-ml sets it to a safe default, and some LLDs set it to an artificially low value to overcome memory and feedback issues. Note: Since we now cap max_sectors to BLK_DEF_MAX_SECTORS, which is 1024, drivers that used to call blk_queue_max_sectors with a large value of max_sectors will now see the fs requests capped to BLK_DEF_MAX_SECTORS. Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
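An illustrative sketch (not a verbatim excerpt) of the split described above: the hardware limit is recorded as-is while the fs-visible limit is capped to BLK_DEF_MAX_SECTORS:

    #include <linux/blkdev.h>
    #include <linux/kernel.h>

    /* illustrative only: record what the hardware can do, cap what the fs sees */
    static void mydrv_set_limits(request_queue_t *q, unsigned int hw_max_sectors)
    {
        q->max_hw_sectors = hw_max_sectors;
        q->max_sectors    = min_t(unsigned int, hw_max_sectors,
                                  BLK_DEF_MAX_SECTORS);
    }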
2005-12-14[SCSI] add retries field to request for REQ_BLOCK_PC useMike Christie
For tape we need to control the retries. This patch adds a retries counter on the request for REQ_BLOCK_PC commands originating from scsi_execute* to use. REQ_BLOCK_PC commands coming from the block layer SG_IO path continue to use the retries set in the ULD init_command. (scsi_execute* does not set the gendisk so we do not execute the init_command in that path). Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
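A hedged sketch of the scsi_execute-style setup this enables; the helper itself is hypothetical, only the rq->retries field and the REQ_BLOCK_PC flag come from this patch:

    #include <linux/blkdev.h>
    #include <linux/gfp.h>

    static struct request *mydrv_setup_pc_request(request_queue_t *q,
                                                  int write, int retries)
    {
        struct request *rq = blk_get_request(q, write, __GFP_WAIT);

        if (rq) {
            rq->flags  |= REQ_BLOCK_PC;
            rq->retries = retries;      /* the new per-request retry count */
        }
        return rq;
    }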
2005-12-14[SCSI] export blk layer functions needed for blk_execute_rq_nowaitMike Christie
To send async requests we need these two functions exported. Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
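A minimal sketch of async submission with the newly exported helpers; note that the completion callback later grows an @error argument (see the 2006-01-06 entry above), so the exact prototype here is an assumption for this point in the series:

    #include <linux/blkdev.h>

    /* called with the queue lock held, hence the __ variant of put_request */
    static void mydrv_async_done(struct request *rq)
    {
        __blk_put_request(rq->q, rq);
    }

    static void mydrv_send_async(request_queue_t *q, struct gendisk *disk,
                                 struct request *rq)
    {
        rq->flags |= REQ_NOMERGE;
        blk_execute_rq_nowait(q, disk, rq, 1 /* at_head */, mydrv_async_done);
    }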
2005-11-12[BLOCK] Implement elv_drain_elevator for improved switch error detectionTejun Heo
This patch adds request_queue->nr_sorted, which keeps the number of requests in the iosched, and implements elv_drain_elevator, which performs forced dispatching. elv_drain_elevator checks whether the iosched actually dispatches all requests it has and prints an error message if it doesn't. As buggy forced dispatching can result in wrong barrier operations, I think this extra check is worthwhile. Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Jens Axboe <axboe@suse.de>
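A sketch of that check, assuming the generic-dispatch elevator_dispatch_fn(queue, force) interface; close to, but not guaranteed to match, the committed code:

    #include <linux/blkdev.h>

    static void elv_drain_elevator(request_queue_t *q)
    {
        static int printed;

        /* force the io scheduler to dispatch everything it is holding */
        while (q->elevator->ops->elevator_dispatch_fn(q, 1))
            ;

        /* nr_sorted should now be zero; if not, the iosched is buggy */
        if (q->nr_sorted && printed++ < 10)
            printk(KERN_ERR "%s: forced dispatching is broken "
                   "(nr_sorted=%u), please report this\n",
                   q->elevator->elevator_type->elevator_name, q->nr_sorted);
    }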
2005-10-28Merge branch 'elevator-switch' of git://brick.kernel.dk/data/git/linux-2.6-blockLinus Torvalds
Manual fixup for trivial "gfp_t" changes.
2005-10-28Merge branch 'generic-dispatch' of git://brick.kernel.dk/data/git/linux-2.6-blockLinus Torvalds
2005-10-28[PATCH] gfp_t: block layer coreAl Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-28[BLOCK] elevator switch fixes/cleanupJens Axboe
- 100msec sleep is a little excessive, lots of requests can complete in that timeframe. Use 10msec instead. - Rename QUEUE_FLAG_BYPASS to QUEUE_FLAG_ELVSWITCH to indicate what is going on. Signed-off-by: Jens Axboe <axboe@suse.de>
2005-10-28[BLOCK] Reimplement elevator switchTejun Heo
This patch reimplements the elevator switch. It assumes the generic dispatch queue patchset is applied. * Each request is tagged with the REQ_ELVPRIV flag if it has its elevator private data set. * Requests which don't have REQ_ELVPRIV set never enter the iosched; they are always inserted directly back into the dispatch queue. Of course, elevator_put_req_fn is called only for requests which have REQ_ELVPRIV set. * The request queue maintains the current number of requests which have their elevator data set (elevator_set_req_fn called) in q->rq->elvpriv. * If a request queue has QUEUE_FLAG_BYPASS set, elevator private data is not allocated for new requests. To switch to another iosched, we set QUEUE_FLAG_BYPASS and wait until elvpriv goes to zero; then we attach the new iosched and clear QUEUE_FLAG_BYPASS. The new implementation is much simpler and the main code paths are less cluttered, IMHO. Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Jens Axboe <axboe@suse.de>
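A condensed, hedged sketch of the bypass portion of the switch sequence described above; locking and the actual attach of the new iosched are elided:

    #include <linux/blkdev.h>
    #include <linux/delay.h>

    static void elv_switch_bypass(request_queue_t *q)
    {
        /* stop allocating elevator-private data for new requests */
        set_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);  /* later renamed ELVSWITCH */

        /* wait for every request carrying elevator private data to finish */
        while (q->rq.elvpriv)
            msleep(10);

        /* ... attach the new io scheduler here (elided) ... */

        clear_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);
    }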
2005-10-28[BLOCK] kill generic max_back_kb handlingTejun Heo
This patch kills max_back_kb handling from elv_dispatch_sort() and kills max_back_kb field from struct request_queue. Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Jens Axboe <axboe@suse.de>
2005-10-28[PATCH] 03/05 move last_merge handling into generic elevator codeTejun Heo
Currently, both generic elevator code and specific ioscheds participate in the management and usage of last_merge. This and the following patches move last_merge handling into generic elevator code. Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Jens Axboe <axboe@suse.de>
2005-10-28[PATCH] generic dispatch fixesJens Axboe
- Split elv_dispatch_insert() into two functions - Rename rq_last_sector() to rq_end_sector() Signed-off-by: Jens Axboe <axboe@suse.de>
2005-10-28[PATCH] 01/05 Implement generic dispatch queueTejun Heo
Implements generic dispatch queue which can replace all dispatch queues implemented by each iosched. This reduces code duplication, eases enforcing semantics over dispatch queue, and simplifies specific ioscheds. Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Jens Axboe <axboe@suse.de>
2005-09-10[PATCH] include/linux/blkdev.h: "extern inline" -> "static inline"Adrian Bunk
"extern inline" doesn't make much sense. Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-08-28fix mismerge in ll_rw_blk.cJames Bottomley
2005-08-05[PATCH] blk: fix tag shrinking (revive real_max_size)Tejun Heo
My patch in commit fa72b903f75e4f0f0b2c2feed093005167da4023 incorrectly removed blk_queue_tag->real_max_depth. The original resize implementation was incorrect in the following points. * actual allocation size of tag_index was shorter than real_max_depth, but assumed to be of the same size, possibly causing memory access beyond the allocated area. * bits in tag_map between max_depth and real_max_depth were initialized to 1's, making the tags permanently reserved. In an attempt to fix the above two bugs, I had removed the allocation optimization in init_tag_map and real_max_depth. Tag map/index were allocated and freed immediately during resize. Unfortunately, I wasn't considering that tag map/index can be resized dynamically with tags beyond new_depth active. This led to accessing a freed area after shrinking tags and led to the following bug reporting thread on linux-scsi. http://marc.theaimsgroup.com/?l=linux-scsi&m=112319898111885&w=2 To fix the problem, I've revived real_max_depth without the allocation optimization in init_tag_map, and Andrew Vasquez confirmed that the problem was fixed. As Jens is not going to be available for a week, he asked me to make sure that this patch reaches you. http://marc.theaimsgroup.com/?l=linux-scsi&m=112325778530886&w=2 Also, a comment was added to make sure that real_max_depth is needed for dynamic shrinking. Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-28[PATCH] blk: light iocontext opsNick Piggin
get_io_context needlessly turned off interrupts and checked for racing io context creations. Both of which aren't needed, because the io context can only be created while in process context of the current process. Also, split the function in 2. A light version, current_io_context does not elevate the reference count specifically, but can be used when in process context, because the process holds a reference itself. Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au> Cc: Jens Axboe <axboe@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
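A sketch of the split described above; allocation details are elided, the allocator name is hypothetical, and the exact signatures are assumptions for this era:

    #include <linux/blkdev.h>
    #include <linux/sched.h>

    /* light variant: safe in process context and takes no extra reference,
     * because the current task already holds one on its own io_context */
    static struct io_context *current_io_context(int gfp_flags)
    {
        struct task_struct *tsk = current;
        struct io_context *ret = tsk->io_context;

        if (!ret) {
            ret = alloc_io_context(gfp_flags);   /* hypothetical allocator */
            tsk->io_context = ret;
        }
        return ret;
    }

    /* heavy variant: also bumps the refcount, for use beyond process context */
    struct io_context *get_io_context(int gfp_flags)
    {
        struct io_context *ret = current_io_context(gfp_flags);

        if (ret)
            atomic_inc(&ret->refcount);
        return ret;
    }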
2005-06-27[PATCH] Update cfq io scheduler to time sliced designJens Axboe
This updates the CFQ io scheduler to the new time sliced design (cfq v3). It provides full process fairness, while giving excellent aggregate system throughput even for many competing processes. It supports io priorities, either inherited from the cpu nice value or set directly with the ioprio_get/set syscalls. The latter closely mimic set/getpriority. This import is based on my latest from -mm. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
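A small userspace sketch of the ioprio_set side mentioned above; there was no glibc wrapper at the time, so the encoding constants are defined locally (they match the kernel's ioprio encoding), and the syscall-number macro is an assumption for your headers:

    #include <stdio.h>
    #include <sys/syscall.h>     /* use __NR_ioprio_set if SYS_ioprio_set is missing */
    #include <unistd.h>

    #define IOPRIO_CLASS_SHIFT 13
    #define IOPRIO_PRIO_VALUE(class, data) (((class) << IOPRIO_CLASS_SHIFT) | (data))

    enum { IOPRIO_CLASS_NONE, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE };
    enum { IOPRIO_WHO_PROCESS = 1, IOPRIO_WHO_PGRP, IOPRIO_WHO_USER };

    int main(void)
    {
        /* best-effort class, priority level 4, for the calling process */
        if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                    IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4)) < 0) {
            perror("ioprio_set");
            return 1;
        }
        return 0;
    }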
2005-06-25[PATCH] drivers/block/ll_rw_blk.c: cleanupsAdrian Bunk
This patch contains the following cleanups: - make needlessly global code static - remove the following unused global functions: - blkdev_scsi_issue_flush_fn - __blk_attempt_remerge - remove the following unused EXPORT_SYMBOL's: - blk_phys_contig_segment - blk_hw_contig_segment - blkdev_scsi_issue_flush_fn - __blk_attempt_remerge Signed-off-by: Adrian Bunk <bunk@stusta.de> Acked-by: Jens Axboe <axboe@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23[PATCH] blk: remove BLK_TAGS_{PER_LONG|MASK}Tejun Heo
Replace BLK_TAGS_PER_LONG with BITS_PER_LONG and remove unused BLK_TAGS_MASK. Signed-off-by: Tejun Heo <htejun@gmail.com> Acked-by: Jens Axboe <axboe@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23[PATCH] blk: remove blk_queue_tag->real_max_depth optimizationTejun Heo
blk_queue_tag->real_max_depth was used to optimize out unnecessary allocations/frees on tag resize. However, the whole thing was very broken - tag_map was never allocated to real_max_depth resulting in access beyond the end of the map, bits in [max_depth..real_max_depth] were set when initializing a map and copied when resizing resulting in pre-occupied tags. As the gain of the optimization is very small, well, almost nill, remove the whole thing. Signed-off-by: Tejun Heo <htejun@gmail.com> Acked-by: Jens Axboe <axboe@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23[PATCH] NUMA aware block device control structure allocationChristoph Lameter
Patch to allocate the control structures for ide devices on the node of the device itself (for NUMA systems). The patch depends on the Slab API change patch by Manfred and me (in mm) and the pcidev_to_node patch that I posted today. Does some realignment too. Signed-off-by: Justin M. Forbes <jmforbes@linuxtx.org> Signed-off-by: Christoph Lameter <christoph@lameter.com> Signed-off-by: Pravin Shelar <pravin@calsoftinc.com> Signed-off-by: Shobhit Dayal <shobhit@calsoftinc.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-20[PATCH] update blk_execute_rq to take an at_head parameterJames Bottomley
Original From: Mike Christie <michaelc@cs.wisc.edu> Modified to split out block changes (this patch) and SCSI pieces. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
2005-06-20[PATCH] Add scatter-gather support for the block layer SG_IOJames Bottomley
Signed-off-by: Jens Axboe <axboe@suse.de>
2005-06-20[PATCH] Cleanup blk_rq_map_* interfacesJens Axboe
Change the blk_rq_map_user() and blk_rq_map_kern() interface to require a previously allocated request to be passed in. This is both more efficient for multiple iterations of mapping data to the same request, and it is also a much nicer API. Signed-off-by: Jens Axboe <axboe@suse.de>
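A hedged sketch of the new calling convention, where the caller allocates the request first and then maps the user buffer into it; error handling is trimmed and the unmap step is only noted, since its exact signature varied across these revisions:

    #include <linux/blkdev.h>
    #include <linux/errno.h>

    static int mydrv_user_io(request_queue_t *q, struct gendisk *disk,
                             void __user *ubuf, unsigned int len)
    {
        struct request *rq;
        int ret;

        rq = blk_get_request(q, READ, __GFP_WAIT);
        if (!rq)
            return -ENOMEM;

        ret = blk_rq_map_user(q, rq, ubuf, len);
        if (!ret) {
            ret = blk_execute_rq(q, disk, rq, 0 /* at_head */);
            /* unmap the mapped bio with blk_rq_unmap_user() here */
        }
        blk_put_request(rq);
        return ret;
    }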
2005-06-20[PATCH] Add blk_rq_map_kern()Mike Christie
Add blk_rq_map_kern which takes a kernel buffer and maps it into a request and bio. This can be used by the dm hw_handlers, old sg_scsi_ioctl, and one day scsi special requests, so all requests coming into scsi will have bios. All requests having bios should allow scsi to use scatter lists for all IO and allow it to use block layer functions. Signed-off-by: Jens Axboe <axboe@suse.de>
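The kernel-buffer counterpart, sketched with the same caveats; the trailing gfp argument follows this era's prototype and is an assumption:

    #include <linux/blkdev.h>
    #include <linux/errno.h>

    static int mydrv_kernel_io(request_queue_t *q, struct gendisk *disk,
                               void *kbuf, unsigned int len)
    {
        struct request *rq = blk_get_request(q, WRITE, __GFP_WAIT);
        int ret = -ENOMEM;

        if (rq) {
            ret = blk_rq_map_kern(q, rq, kbuf, len, GFP_KERNEL);
            if (!ret)
                ret = blk_execute_rq(q, disk, rq, 0);
            blk_put_request(rq);
        }
        return ret;
    }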
2005-05-20[SCSI] remove requeue feature from blk_insert_request()Tejun Heo
blk_insert_request() has an unobvious feature of requeueing a request, setting REQ_SPECIAL|REQ_SOFTBARRIER. The SCSI midlayer was the only user and, as previous patches removed that usage, remove the feature from blk_insert_request(). Only special requests should be queued with blk_insert_request(); all requeueing should go through blk_requeue_request(). Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
2005-04-16[PATCH] fix NMI lockup with CFQ scheduler
The current problem seen is that the queue lock is actually in the SCSI device structure, so when that structure is freed on device release, we go boom if the queue tries to access the lock again. The fix here is to move the lock from the scsi_device to the queue. Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
2005-03-28Mark "gfp" masks as "unsigned int" and use __nocast to find violations.Linus Torvalds
This makes it hard(er) to mix argument orders by mistake for things like kmalloc() and friends, since silent integer promotion is now caught by sparse.
2005-03-08[PATCH] barrier rework updatesJens Axboe
As promised to Andrew, here are the latest bits that fix up the block io barrier handling. - Add an io scheduler ->deactivate hook to tell the io scheduler that a request has been suspended from the block layer. cfq and as need this hook. - Locking updates - Make sure a driver doesn't reuse the flush rq before a previous one has completed - Typo in the scsi_io_completion() function, the bit shift was wrong - sd needs a proper timeout on the flush - remove silly debug leftover in ide-disk wrt "hdc" Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-03-07[PATCH] rework core barrier supportJens Axboe
This reworks the core barrier support to be a lot nicer, so that all the nasty code resides outside of drivers/ide. It requires minimal changes to support in a driver, I've added SCSI support as an example. The ide code is adapted to the new code. With this patch, we support full barriers on sata now. Bart has acked the addition to -mm, I would like for this to be submitted as soon as 2.6.12 opens. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-03-07[PATCH] Add struct request end_io callbackJens Axboe
This is needed for several things, one in-tree user which I will introduce after this patch. This adds a ->end_io callback to struct request, so it can be used with async io of any sort. Right now users have to wait for completion in a blocking manner. In the next iteration, ->waiting can be folded into ->end_io_data since it is just a special case of that use. From: Peter Osterlund <petero2@telia.com> The problem is that the add-struct-request-end_io-callback patch forgot to update pktcdvd.c. This patch fixes it. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Peter Osterlund <petero2@telia.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-01-12[PATCH] possible rq starvation on oomJens Axboe
I stumbled across this the other day. The block layer only uses a single memory pool for request allocation, so it's very possible for eg writes to have allocated them all at any point in time. If that is the case and the machine is low on memory, a reader attempting to allocate a request and failing in blk_alloc_request() can get stuck for a long time since no one is there to wake it up. The solution is either to add the extra mempool so both reads and writes have one, or attempt to handle the situation. I chose the latter, to save the extra memory required for the additional mempool with BLKDEV_MIN_RQ statically allocated requests per-queue. If a read allocation fails and we have no readers in flight for this queue, mark us rq-starved so that the next write being freed will wake up the sleeping reader(s). Same situation would happen for writes as well of course, it's just a lot more unlikely. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2004-11-10[PATCH] md: delete unplug timer before shutting down md arrayNeil Brown
As the unplug timer can potentially fire at any time, and it accesses data that is released by the md ->stop function, we need to del_timer_sync before releasing that data. (After much discussion, we created blk_sync_queue() for this) Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Contributions from Jens Axboe <axboe@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
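A minimal sketch of the resulting teardown ordering in a driver stop path; the surrounding function and its private data are hypothetical, only blk_sync_queue() comes from this change:

    #include <linux/blkdev.h>
    #include <linux/slab.h>

    static void mydrv_stop(request_queue_t *q, void *private_data)
    {
        /* make sure the unplug timer/work cannot fire again ... */
        blk_sync_queue(q);

        /* ... before the data it would touch is released */
        kfree(private_data);
    }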
2004-10-27[PATCH] issues with online scheduler switchingJens Axboe
There are two issues with online io scheduler switching that this patch addresses. The first is pretty simple - it concerns racing with scheduler removal on switch. elevator_find() does not grab a reference to the io scheduler, so before elevator_attach() is run it could go away. Add elevator_get() to solve that. The second issue is that the flushing out of requests needed before switching can be problematic with requests that aren't allocated in the block layer (such as requests on the stack of a process). The problem is that we don't know when they will have finished, and most io schedulers need to access the elevator structures on io completion. This can be fixed by adding an intermediate step that switches to noop, since it doesn't need to touch anything but the request_queue. The queue drain can then safely be split into two operations - one that waits for file system requests, and one that waits for the queue to completely empty. Requests arriving after the first drain will get stuck in a separate queue list. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2004-10-22[block] remove bio walkingBartlomiej Zolnierkiewicz
All users of this code were fixed to use scatterlists. Acked by Jens. Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
2004-10-18[PATCH] cfq-v2 I/O scheduler updateJens Axboe
Here is the next incarnation of the CFQ io scheduler, so far known locally as CFQ v2. It attempts to address some of the limitations of the original CFQ io scheduler (henceforth known as CFQ v1). Some of the problems with CFQ v1 are:

- It does accounting for the lifetime of the cfq_queue, which is set up and torn down for the time when a process has io in flight. For a fork-heavy workload (such as a kernel compile, for instance), new processes can effectively starve io of running processes. This is in part due to the fact that CFQ v1 gives preference to new processes to get better latency numbers; removing that heuristic is not an option exactly because of that.
- It makes no attempt to address inter-cfq_queue fairness.
- It makes no attempt to limit the upper latency bound of a single request.
- It only provides per-tgid grouping. You need to change the source to group on a different criterion.
- It uses a mempool for the cfq_queues. Theoretically this could deadlock if io-bound processes never exit.
- The may_queue() logic can be unfair since it fluctuates quickly, thus leaving processes sleeping while new processes are allowed to allocate a request.

CFQ v2 attempts to fix these issues. It uses the process io_context logic to maintain a cfq_queue lifetime of the duration of the process (and its io). This means we can now be a lot more clever in deciding which process is allowed to queue or dispatch io to the device. The cfq_io_context is per-process per-queue; this is an extension to what AS currently does in that we truly do have a unique per-process identifier for io grouping. Busy queues are sorted by service time used, sub-sorted by in-flight requests. Queues that have no io in flight are also preferred at dispatch time. Accounting is done on completion time of a request, or with a fixed cost for tagged command queueing. Requests are fifo'ed like with deadline, to make sure that a single request doesn't stay in the io scheduler for ages. Process grouping is selectable at runtime; I provide 4 grouping criteria: process group, thread group id, user id, and group id.

As usual, settings are sysfs tweakable in /sys/block/<dev>/queue/iosched:

    axboe@apu:[.]s/block/hda/queue/iosched $ ls
    back_seek_max      fifo_batch_expire  find_best_crq  queued
    back_seek_penalty  fifo_expire_async  key_type       show_status
    clear_elapsed      fifo_expire_sync   quantum        tagged

In order, each of these settings controls:

back_seek_max, back_seek_penalty: Useful logic stolen from AS that allows small backwards seeks in the io stream if we deem them useful. CFQ uses a strict ascending elevator otherwise. _max controls the maximum allowed backwards seek, defaulting to 16MiB. _penalty denotes how expensive we account a backwards seek compared to a forward seek; the default is 2, meaning it's twice as expensive.

clear_elapsed: Really a debug switch, will go away in the future. It clears the maximum values for completion and dispatch time, shown in show_status.

fifo_batch_expire, fifo_batch_async, fifo_batch_sync: The settings for the expiry fifo. batch_expire is how often we allow the fifo expire to control which request to select; the default is 125ms. _async is the deadline for async requests (typically writes), _sync is the deadline for sync requests (reads and sync writes). Defaults are, respectively, 5 seconds and 0.5 seconds.

key_type: The grouping key. Can be set to pgid, tgid, uid, or gid. The current value is shown bracketed:

    axboe@apu:[.]s/block/hda/queue/iosched $ cat key_type
    [pgid] tgid uid gid

Default is tgid. To set, simply echo any of the 4 words into the file.

quantum: The number of requests we select for dispatch when the driver asks for work to do and the current pending list is empty. Default is 4.

queued: The minimum number of requests a group is allowed to queue. Default is 8.

show_status: Debug output showing the current state of the queues.

tagged: Set this to 1 if the device is using tagged command queueing. This cannot be reliably detected by CFQ yet, since most drivers don't use the block layer (well, it could, by looking at the number of requests between dispatch and completion, but not completely reliably). Default is 0.

The patch is a little big, but works reliably here on my laptop. There are a number of other changes and fixes in there (like converting to hlist for hashes). The code is commented a lot better; CFQ v1 has basically no comments (reflecting that it was written in one go, not touched or tuned much since then). This is of course only done to increase the AAF, the akpm acceptance factor. Since I'm on the road, I cannot provide any really good numbers for CFQ v1 compared to v2, maybe someone will help me out there. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2004-10-18[PATCH] switchable and modular io schedulersJens Axboe
This patch makes the io schedulers completely modular. Additionally it enables online switching of io schedulers. See also http://lwn.net/Articles/102593/ . There's a scheduler file in the sysfs directory for the block device queue:

    axboe@router:/sys/block/hda/queue> ls
    iosched            max_sectors_kb  read_ahead_kb
    max_hw_sectors_kb  nr_requests     scheduler

If you list the contents of the file, it will show the available schedulers and the active one:

    axboe@router:/sys/block/hda/queue> cat scheduler
    [cfq]

Let's load a few more:

    router:/sys/block/hda/queue # modprobe deadline-iosched
    router:/sys/block/hda/queue # modprobe as-iosched
    router:/sys/block/hda/queue # cat scheduler
    [cfq] deadline anticipatory

Changing is done with:

    router:/sys/block/hda/queue # echo deadline > scheduler
    router:/sys/block/hda/queue # cat scheduler
    cfq [deadline] anticipatory

deadline is now the new active io scheduler for hda. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2004-09-13[PATCH] blk: max_sectors tunablesIngo Molnar
Introduces two new /sys/block values: /sys/block/*/queue/max_hw_sectors_kb /sys/block/*/queue/max_sectors_kb max_hw_sectors_kb is the maximum that the driver can handle and is readonly. max_sectors_kb is the current max_sectors value and can be tuned by root. PAGE_SIZE granularity is enforced. It's all locking-safe and all affected layered drivers have been updated as well. The patch has been in testing for a couple of weeks already as part of the voluntary-preempt patches and it works just fine - people use it to reduce IDE IRQ handling latencies. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2004-08-22[PATCH] disk barriers: coreJens Axboe
IDE disk barrier core. Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2004-08-12Pass down file pointer to block device ioctl'sLinus Torvalds
They'll need it for permission checking.
2004-08-02[PATCH] bio_copy_user() cleanups and fixesJens Axboe
blk_rq_map_user() is a bit of a hack currently, since it drops back to kmalloc() if bio_map_user() fails. This is unfortunate since it means we do no real segment or size checking (and the request segment counts contain crap, already found one bug in a scsi lld). It's also pretty nasty for > PAGE_SIZE requests, as we attempt to do higher order page allocations. Even worse still, ide-cd will drop back to PIO for non-sg/bio requests. All in all, very suboptimal. This patch adds bio_copy_user() which simply sets up a bio with kernel pages and copies data as needed for reads and writes. It also changes bio_map_user() to return an error pointer like bio_copy_user(), so we can return something sane to the user instead of always -ENOMEM. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2004-06-29[PATCH] Combined patch for remaining trivial sparse warnings in allnoconfig buildMika Kukkonen
Well, one of these (fs/block_dev.c) is a little non-trivial, but I felt throwing that away would be a shame (and I did add comments ;-). Also almost all of these have been submitted earlier through other channels, but have not been picked up (the only controversial one is again the fs/block_dev.c patch, where Linus felt a better job would be done with __ffs(), but I could not convince myself that it does the same thing as the original code). Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>