=====================================================
Notes Written on Jan 15, 2002:
- Jens Axboe <axboe@suse.de>
+ Jens Axboe <jens.axboe@oracle.com>
Suparna Bhattacharya <suparna@in.ibm.com>
Last Updated May 2, 2002
---------
2.5 bio rewrite:
- Jens Axboe <axboe@suse.de>
+ Jens Axboe <jens.axboe@oracle.com>
Many aspects of the generic block layer redesign were driven by and evolved
over discussions, prior patches and the collective experience of several
do not have a corresponding kernel virtual address space mapping) and
low-memory pages.
-Note: Please refer to DMA-mapping.txt for a discussion on PCI high mem DMA
-aspects and mapping of scatter gather lists, and support for 64 bit PCI.
+Note: Please refer to Documentation/PCI/PCI-DMA-mapping.txt for a discussion
+on PCI high mem DMA aspects and mapping of scatter gather lists, and support
+for 64 bit PCI.
Special handling is required only for cases where i/o needs to happen on
pages at physical memory addresses beyond what the device can support. In these
the same bi_io_vec array, but with the index and size accordingly modified)
- A linked list of bios is used as before for unrelated merges (*) - this
avoids reallocs and makes independent completions easier to handle.
-- Code that traverses the req list needs to make a distinction between
- segments of a request (bio_for_each_segment) and the distinct completion
- units/bios (rq_for_each_bio).
+- Code that traverses the req list can find all the segments of a bio
+ by using rq_for_each_segment. This handles the fact that a request
+ has multiple bios, each of which can have multiple segments.
- Drivers which can't process a large bio in one shot can use the bi_idx
field to keep track of the next bio_vec entry to process.
(e.g. a 1MB bio needs to be handled in max 128kB chunks for IDE)
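For instance, a driver limited to 128kB per transfer might resume a
partially processed bio along these lines (a rough sketch only;
issue_chunk() stands in for the driver's own transfer routine):

	/* pick up where we left off; bio_iovec_idx() points into bi_io_vec */
	struct bio_vec *bvec = bio_iovec_idx(bio, bio->bi_idx);

	issue_chunk(bvec->bv_page, bvec->bv_offset, bvec->bv_len);
	bio->bi_idx++;	/* resume from the next bio_vec on a later pass */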
3.2.1 Traversing segments and completion units in a request
-The macros bio_for_each_segment() and rq_for_each_bio() should be used for
-traversing the bios in the request list (drivers should avoid directly
-trying to do it themselves). Using these helpers should also make it easier
-to cope with block changes in the future.
+The macro rq_for_each_segment() should be used for traversing the bios
+in the request list (drivers should avoid directly trying to do it
+themselves). Using these helpers should also make it easier to cope
+with block changes in the future.
- rq_for_each_bio(bio, rq)
- bio_for_each_segment(bio_vec, bio, i)
- /* bio_vec is now current segment */
+ struct req_iterator iter;
+ rq_for_each_segment(bio_vec, rq, iter)
+ /* bio_vec is now current segment */
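For a complete walk of a request, a pio-style driver loop might look
roughly like the sketch below (do_transfer() is a made-up name for the
driver's own routine; kmap/kunmap are needed only because segment pages
may live in highmem):

	struct req_iterator iter;
	struct bio_vec *bvec;

	rq_for_each_segment(bvec, rq, iter) {
		char *buf = kmap(bvec->bv_page) + bvec->bv_offset;

		do_transfer(buf, bvec->bv_len);
		kunmap(bvec->bv_page);
	}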
I/O completion callbacks are per-bio rather than per-segment, so drivers
that traverse bio chains on completion need to keep that in mind. Drivers
queueing (typically known as tagged command queueing), i.e. manage more than
one outstanding command on a queue at any given time.
- blk_queue_init_tags(request_queue_t *q, int depth)
+ blk_queue_init_tags(struct request_queue *q, int depth)
Initialize internal command tagging structures for a maximum
depth of 'depth'.
- blk_queue_free_tags((request_queue_t *q)
+ blk_queue_free_tags(struct request_queue *q)
Teardown tag info associated with the queue. This will be done
automatically by block if blk_queue_cleanup() is called on a queue
The above are initialization and exit management; the main helpers during
normal operations are:
- blk_queue_start_tag(request_queue_t *q, struct request *rq)
+ blk_queue_start_tag(struct request_queue *q, struct request *rq)
Start tagged operation for this request. A free tag number between
0 and 'depth' is assigned to the request (rq->tag holds this number),
for this queue is already achieved (or if the tag wasn't started for
some other reason), 1 is returned. Otherwise 0 is returned.
- blk_queue_end_tag(request_queue_t *q, struct request *rq)
+ blk_queue_end_tag(struct request_queue *q, struct request *rq)
End tagged operation on this request. 'rq' is removed from the internal
bookkeeping structures.
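Taken together, a driver's request function might use these helpers
roughly as follows (a sketch only; my_request_fn() and my_issue_to_hw()
are made-up names, and error handling is omitted):

	static void my_request_fn(struct request_queue *q)
	{
		struct request *rq;

		while ((rq = elv_next_request(q)) != NULL) {
			if (blk_queue_start_tag(q, rq))
				break;	/* tag depth reached, retry later */
			/* rq->tag is now valid, and blk_queue_start_tag()
			   has removed rq from the queue */
			my_issue_to_hw(rq);
		}
	}

with a matching blk_queue_end_tag(q, rq) in the completion path before
the request is ended.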
the hardware and software block queue and enable the driver to sanely restart
all the outstanding requests. There's a third helper to do that:
- blk_queue_invalidate_tags(request_queue_t *q)
+ blk_queue_invalidate_tags(struct request_queue *q)
Clear the internal block tag queue and re-add all the pending requests
to the request queue. The driver will receive them again on the
queue and specific I/O schedulers. Unless stated otherwise, elevator is used
to refer to both parts and I/O scheduler to specific I/O schedulers.
-Block layer implements generic dispatch queue in ll_rw_blk.c and elevator.c.
+The block layer implements the generic dispatch queue in block/*.c.
The generic dispatch queue is responsible for properly ordering barrier
requests, requeueing, handling non-fs requests and all other subtleties.
change to another one dynamically.
A block layer call to the i/o scheduler follows the convention elv_xxx(). This
-calls elevator_xxx_fn in the elevator switch (drivers/block/elevator.c). Oh,
-xxx and xxx might not match exactly, but use your imagination. If an elevator
+calls elevator_xxx_fn in the elevator switch (block/elevator.c). Oh, xxx
+and xxx might not match exactly, but use your imagination. If an elevator
doesn't implement a function, the switch does nothing or some minimal
housekeeping work.
scheduler for example, to reposition the request
if its sorting order has changed.
-elevator_dispatch_fn fills the dispatch queue with ready requests.
+elevator_allow_merge_fn called whenever the block layer determines
+ that a bio can be merged into an existing
+ request safely. The io scheduler may still
+ want to stop a merge at this point if it
+ results in some sort of conflict internally;
+ this hook allows it to do that.
+
+elevator_dispatch_fn* fills the dispatch queue with ready requests.
I/O schedulers are free to postpone requests by
not filling the dispatch queue unless @force
is non-zero. Once dispatched, I/O schedulers
are not allowed to manipulate the requests -
they belong to the generic dispatch queue.
-elevator_add_req_fn called to add a new request into the scheduler
+elevator_add_req_fn* called to add a new request into the scheduler
elevator_queue_empty_fn returns true if the merge queue is empty.
Drivers shouldn't use this, but rather check
elevator_deactivate_req_fn Called when device driver decides to delay
a request by requeueing it.
-elevator_init_fn
+elevator_init_fn*
elevator_exit_fn Allocate and free any elevator specific storage
for a queue.
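Concretely, an I/O scheduler bundles its hooks in a struct elevator_type
and registers it with the elevator core. A minimal sketch, modeled
loosely on the noop scheduler (the my_* names are placeholders, not a
complete scheduler):

	static struct elevator_type my_elevator = {
		.ops = {
			.elevator_merge_req_fn	 = my_merged_requests,
			.elevator_dispatch_fn	 = my_dispatch,
			.elevator_add_req_fn	 = my_add_request,
			.elevator_queue_empty_fn = my_queue_empty,
			.elevator_init_fn	 = my_init_queue,
			.elevator_exit_fn	 = my_exit_queue,
		},
		.elevator_name = "my-iosched",
		.elevator_owner = THIS_MODULE,
	};

	static int __init my_elv_init(void)
	{
		elv_register(&my_elevator);
		return 0;
	}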
iii. Plugging the queue to batch requests in anticipation of opportunities for
merge/sort optimizations
-This is just the same as in 2.4 so far, though per-device unplugging
-support is anticipated for 2.5. Also with a priority-based i/o scheduler,
-such decisions could be based on request priorities.
-
Plugging is an approach that the current i/o scheduling algorithm resorts to so
that it collects up enough requests in the queue to be able to take
advantage of the sorting/merging logic in the elevator. If the
queue is empty when a request comes in, then it plugs the request queue
-(sort of like plugging the bottom of a vessel to get fluid to build up)
+(sort of like plugging the drain of a bath tub to get fluid to build up)
till it fills up with a few more requests, before starting to service
the requests. This provides an opportunity to merge/sort the requests before
passing them down to the device. There are various conditions when the queue is
unplugged (to open up the flow again), either through a scheduled task or
on demand. For example, wait_on_buffer sets the unplugging going
-(by running tq_disk) so the read gets satisfied soon. So in the read case,
-the queue gets explicitly unplugged as part of waiting for completion,
-in fact all queues get unplugged as a side-effect.
+through sync_buffer() running blk_run_address_space(mapping). Or the caller
+can do it explicitly through blk_unplug(bdev). So in the read case,
+the queue gets explicitly unplugged as part of waiting for completion on that
+buffer. For page driven IO, the address space ->sync_page() takes care of
+doing the blk_run_address_space().
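In code, the buffer based read path described above amounts to something
like the following (schematic only):

	submit_bh(READ, bh);
	wait_on_buffer(bh);	/* sync_buffer() runs blk_run_address_space()
				   to unplug the queue while we wait */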
Aside:
This is kind of controversial territory, as it's not clear if plugging is
multi-page bios being queued in one shot, we may not need to wait to merge
a big request from the broken up pieces coming by.
- Per-queue granularity unplugging (still a Todo) may help reduce some of the
- concerns with just a single tq_disk flush approach. Something like
- blk_kick_queue() to unplug a specific queue (right away ?)
- or optionally, all queues, is in the plan.
-
4.4 I/O contexts
I/O contexts provide a dynamically allocated per process data area. They may
be used in I/O schedulers, and in the block layer (could be used for IO stats,
io_request_lock for serialization need to be modified accordingly.
Usually it's as easy as adding a global lock:
- static spinlock_t my_driver_lock = SPIN_LOCK_UNLOCKED;
+ static DEFINE_SPINLOCK(my_driver_lock);
and passing the address to that lock to blk_init_queue().
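For example (my_request_fn being whatever the driver already registers
as its request handler):

	static DEFINE_SPINLOCK(my_driver_lock);

	q = blk_init_queue(my_request_fn, &my_driver_lock);
	if (!q)
		return -ENOMEM;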