|
The mdrecoveryd thread is responsible for initiating and cleaning
up resync threads.
This job can be done equally well by the per-array threads
for those arrays which might need it.
So the mdrecoveryd thread is gone and the core code that
it ran is now run by raid5d, raid1d or multipathd.
We add an MD_RECOVERY_NEEDED flag so those daemons don't have
to bother trying to lock the md array unless it is likely
that something needs to be done.
Also modify the names of all threads to include the number of
the md device.
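As a rough illustration (a userspace model with invented names, not the
actual md.c code), the per-array daemon only bothers taking the array
lock once something has raised the "recovery needed" flag:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

struct array_state {
    atomic_bool recovery_needed;   /* models the MD_RECOVERY_NEEDED bit */
    pthread_mutex_t lock;          /* models locking the md array */
};

/* Called by anything that notices recovery work may be pending. */
static void mark_recovery_needed(struct array_state *a)
{
    atomic_store(&a->recovery_needed, true);
}

/* One pass of the per-array daemon (raid5d/raid1d/multipathd above). */
static void daemon_pass(struct array_state *a)
{
    if (!atomic_exchange(&a->recovery_needed, false))
        return;                    /* nothing flagged: skip locking entirely */

    pthread_mutex_lock(&a->lock);
    /* ... start or clean up a resync thread here ... */
    pthread_mutex_unlock(&a->lock);
}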
|
|
This allows the thread to be easily identified and signalled.
The point of signalling will appear in the next patch.
|
|
raid1, raid5 and multipath maintain their own
'operational' flag. This is equivalent to
!rdev->faulty
and so isn't needed.
Similarly raid1 and raid5 maintain a "write_only" flag
that is equivalent to
!rdev->in_sync
so it isn't needed either.
As part of implementing this change, we introduce some extra
flag bits in raid5 that are meaningful only inside 'handle_stripe'.
Some of these replace the "action" array which recorded what
actions were required (and would be performed after the stripe
spinlock was released). This has the advantage of reducing our
dependence on MD_SB_DISKS, which personalities shouldn't need
to know about.
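A minimal sketch of that pattern, with invented names rather than the
real raid5 structures: decisions are recorded as per-device flag bits
while the stripe lock is held, and acted on only after it is dropped,
so nothing needs an array dimensioned by MD_SB_DISKS:

#include <pthread.h>

enum dev_flags {
    WANT_READ  = 1 << 0,   /* meaningful only while handling the stripe */
    WANT_WRITE = 1 << 1,
};

struct stripe_dev {
    unsigned int flags;
};

struct stripe {
    pthread_mutex_t lock;
    int ndisks;                 /* ->raid_disks, not MD_SB_DISKS */
    struct stripe_dev dev[8];   /* example size only */
};

static void handle_stripe(struct stripe *sh)
{
    int i;

    pthread_mutex_lock(&sh->lock);
    for (i = 0; i < sh->ndisks; i++) {
        /* decide what each device needs; here we simply mark a write */
        sh->dev[i].flags |= WANT_WRITE;
    }
    pthread_mutex_unlock(&sh->lock);

    /* actions are issued only after the lock has been released */
    for (i = 0; i < sh->ndisks; i++) {
        if (sh->dev[i].flags & WANT_WRITE) {
            /* submit the write for device i ... */
            sh->dev[i].flags &= ~WANT_WRITE;
        }
    }
}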
|
|
1/ Personalities only know about raid_disks devices.
Some might not be in_sync and so cannot be read from,
but must be written to (see the sketch after this list).
- change MD_SB_DISKS to ->raid_disks
- add tests for .write_only
2/ rdev->raid_disk is now -1 for spares. desc_nr is maintained
by analyse_sbs and sync_sbs.
3/ spare_inactive method is subsumed into hot_remove_disk
spare_writable is subsumed into hot_add_disk.
hot_add_disk decides which slot a new device will hold.
4/ spare_active now finds all non-in_sync devices and marks them
in_sync.
5/ faulty devices are removed by the md recovery thread as soon
as they are idle. Any spares that are available are then added.
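The sketch promised in 1/ above; the names are invented and simplified,
but they show the intended asymmetry between reads and writes for
devices that are not yet in_sync:

#include <stdbool.h>

struct rdev {
    bool faulty;
    bool in_sync;
    int  raid_disk;   /* -1 for spares, per 2/ */
};

/* A device that is not yet in_sync cannot service reads... */
static bool can_read_from(const struct rdev *rdev)
{
    return rdev && !rdev->faulty && rdev->in_sync;
}

/* ...but must still receive every write so it can catch up. */
static bool must_write_to(const struct rdev *rdev)
{
    return rdev && !rdev->faulty;
}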
|
|
This is equivalent to ->rdev != NULL, so it isn't needed.
|
|
device on an MD array
This will allow us to know, in the event of a device failure, when the
device is completely unused and so can be disconnected from the
array. Currently this isn't a problem as drives aren't normally disconnected
until after a replacement has been rebuilt, which is a LONG TIME, but that
will change shortly...
We always increment the count under a spinlock after checking that
it hasn't been disconnected already (rdev != NULL).
We disconnect under the same spinlock after checking that the
count is zero.
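A userspace model of that pattern (a plain mutex stands in for the
spinlock, and all names are invented rather than taken from md.c):

#include <pthread.h>
#include <stddef.h>

struct rdev { int dummy; };

struct slot {
    pthread_mutex_t lock;
    struct rdev *rdev;     /* NULL once disconnected */
    int nr_pending;        /* in-flight requests against this device */
};

/* Take a reference; fails if the device was already disconnected. */
static struct rdev *slot_get(struct slot *s)
{
    struct rdev *rdev;

    pthread_mutex_lock(&s->lock);
    rdev = s->rdev;
    if (rdev)
        s->nr_pending++;
    pthread_mutex_unlock(&s->lock);
    return rdev;
}

/* Drop a reference when the request completes. */
static void slot_put(struct slot *s)
{
    pthread_mutex_lock(&s->lock);
    s->nr_pending--;
    pthread_mutex_unlock(&s->lock);
}

/* Disconnect only succeeds when nothing is in flight. */
static int slot_disconnect(struct slot *s)
{
    int ok = 0;

    pthread_mutex_lock(&s->lock);
    if (s->rdev && s->nr_pending == 0) {
        s->rdev = NULL;
        ok = 1;
    }
    pthread_mutex_unlock(&s->lock);
    return ok;
}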
|
|
Holding the rdev instead of the bdev does cause an extra
de-reference, but it is conceptually cleaner and will allow
lots more tidying up.
|
|
Remove number and raid_disk from personality arrays
These are redundant: number is not needed any more, and
raid_disk never was, as it is simply the index.
|
|
nr_disks is gone from multipath/raid1
Never used.
|
|
* a bunch of callers of partition_name() are calling
bdev_partition_name(),
* the last users of raid1 and multipath ->dev are gone; so are
the fields in question.
|
|
Previously each raid personality (well, raid1 and raid5) started its
own thread to do resync, but md.c had a single common thread to do
reconstruction. Apart from being untidy, this means that you cannot
have two arrays reconstructing at the same time, though you can have
two arrays resyncing at the same time.
This patch changes the personalities so they don't start the resync,
but just leave a flag to say that it is needed.
The common thread (mdrecoveryd) now just monitors things and starts a
separate per-array thread whenever resync or recovery (or both) is
needed.
When the recovery finishes, mdrecoveryd will be woken up to re-lock
the device and activate the spares or whatever.
raid1 needs to know when resync/recovery starts and ends so it can
allocate and release resources.
It allocates resources when a resync request for stripe 0 is received.
Previously it deallocated for resync in its own thread, and
deallocated for recovery when the spare was made active or inactive
(depending on success).
As raid1 doesn't own a thread any more, this needed to change. So, to
match the "alloc on 0", md_do_resync now calls sync_request one
last time, asking to sync one block past the end. This is a signal to
release any resources.
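A simplified, hypothetical sketch of that convention (the signature and
names are invented, not the real md/raid1 interface): the one-past-the-end
call is the cue to free whatever was allocated on block 0:

#define ARRAY_BLOCKS 1024UL   /* example array size */

static int resources_held;

static long sync_request(unsigned long block_nr)
{
    if (block_nr >= ARRAY_BLOCKS) {
        /* one-past-the-end call: resync/recovery has finished */
        if (resources_held) {
            /* free the resync buffers here */
            resources_held = 0;
        }
        return 0;
    }

    if (block_nr == 0 && !resources_held) {
        /* first request: allocate the resync buffers here */
        resources_held = 1;
    }

    /* ... schedule the actual resync I/O for this block ... */
    return 1;   /* number of blocks handled in this call */
}

int main(void)
{
    unsigned long b;

    for (b = 0; b < ARRAY_BLOCKS; b += sync_request(b))
        ;
    sync_request(ARRAY_BLOCKS);   /* the extra past-the-end call */
    return 0;
}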
|
|
- md/raid1.c - bring struct block_device * into private data.
|
|
- me: revert the "kill(-1..)" change. POSIX isn't that clear on the
issue anyway, and the new behaviour breaks things.
- Jens Axboe: more bio updates
- Al Viro: rd_load cleanups. hpfs mount fix, mount cleanups
- Ingo Molnar: more raid updates
- Jakub Jelinek: fix Linux/x86 confusion about arg passing of "save_v86_state" and "do_signal"
- Trond Myklebust: fix NFS client race conditions
|
|
- Jeff Garzik: no longer support old cards in tulip driver
(see separate driver for old tulip chips)
- Pat Mochel: driverfs/device model documentation
- Ballabio Dario: update eata driver to new IO locking
- Ingo Molnar: raid resync with new bio structures (much more efficient)
and mempool_resize()
- Jens Axboe: bio queue locking
|
|
- Rui Sousa: emu10k1 module fixes, remove joystick part.
- Alan Cox: driver merges
- Andrea Arcangeli: alpha updates
- David Woodhouse: up_and_exit -> complete_and_exit
- David Miller: sparc and network update
- Andrew Morton: update 3c59x driver
- Neil Brown: NFS export VFAT, knfsd cleanups, raid fixes
- Ben Collins: ieee1394 updates
- Paul Mackerras: PPC update
- me: make sure we don't lose position bits in "filldir()"
|