| Age | Commit message (Collapse) | Author |
|
The commit block was intended to have several copies of the header, but
due to a bug it never had them, and nothing actually checks for them. So
just remove the useless loop.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
|
|
Some devices - notably dm and md - can change their behaviour in response
to BIO_RW_BARRIER requests. They might start out accepting such requests
but on reconfiguration, they find out that they cannot any more.
ext3 (and other filesystems) deal with this by always testing if
BIO_RW_BARRIER requests fail with EOPNOTSUPP, and retrying the write
requests without the barrier (probably after waiting for any pending writes
to complete).
However there is a bug in the handling for this for ext3.
When ext3 (jbd actually) decides to submit a BIO_RW_BARRIER request, it
sets the buffer_ordered flag on the buffer head. If the request completes
successfully, the flag STAYS SET.
Other code might then write the same buffer_head after the device has been
reconfigured to not accept barriers. This write will then fail, but the
"other code" is not ready to handle EOPNOTSUPP errors and the error will be
treated as fatal.
This can be seen without having to reconfigure a device at exactly the
wrong time by putting:
	if (buffer_ordered(bh))
		printk("OH DEAR, an ordered buffer\n");
in the while loop in "commit phase 5" of journal_commit_transaction.
If it ever prints the "OH DEAR ..." message (as it does sometimes for
me), then that request could (in different circumstances) have failed
with EOPNOTSUPP, but that isn't tested for.
My proposed fix is to clear the buffer_ordered flag after it has been
used, as in the following patch.
Signed-off-by: Neil Brown <neilb@suse.de>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The break_lock data structure and code for spinlocks is quite nasty.
Not only does it double the size of a spinlock but it changes locking to
a potentially less optimal trylock.
Put all of that under CONFIG_GENERIC_LOCKBREAK, and introduce a
__raw_spin_is_contended that uses the lock data itself to determine whether
there are waiters on the lock, to be used if CONFIG_GENERIC_LOCKBREAK is
not set.
Rename need_lockbreak to spin_needbreak, make it use spin_is_contended to
decouple it from the spinlock implementation, and make it typesafe (rwlocks
do not have any need_lockbreak sites -- why do they even get bloated up
with that break_lock then?).
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Before we start committing a transaction, we call
__journal_clean_checkpoint_list() to cleanup transaction's written-back
buffers.
If this call happens to remove all of them (and there were already some
buffers), __journal_remove_checkpoint() will decide to free the transaction
because it isn't (yet) a committing transaction and soon we fail some
assertion - the transaction really isn't ready to be freed :).
We change the check in __journal_remove_checkpoint() to free only a
transaction in T_FINISHED state. The locking there is subtle though (as
everywhere in JBD ;(). We use j_list_lock to protect the check and a
subsequent call to __journal_drop_transaction() and do the same in the end
of journal_commit_transaction() which is the only place where a transaction
can get to T_FINISHED state.
Probably I'm too paranoid here and such locking is not really necessary -
checkpoint lists are processed only from log_do_checkpoint(), where a
transaction must already be committed to be processed, or from
__journal_clean_checkpoint_list(), where kjournald itself calls it and thus
the transaction cannot change state either. Better to be safe if something
changes in the future...
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
We should really call journal_abort() and not __journal_abort_hard() in
case of errors. The latter call does not record the error in the journal
superblock, and thus the filesystem won't later be marked as containing
errors (and a user could happily mount it without any warning).
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
JBD: Replace slab allocations with page allocations
JBD allocates memory for committed_data and frozen_data from slab. However,
JBD should not pass slab pages down to the block layer. Use page allocator
pages instead. This also prepares JBD for the large blocksize patchset.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Mingming Cao <cmm@us.ibm.com>
|
|
We also have to check that the second checkpoint list is non-empty before
dropping the transaction.
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Chuck Ebbert <cebbert@redhat.com>
Cc: Kirill Korotaev <dev@openvz.org>
Cc: <linux-ext4@vger.kernel.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Remove includes of <linux/smp_lock.h> where it is not used/needed.
Suggested by Al Viro.
Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc,
sparc64, and arm (all 59 defconfigs).
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
In the current jbd code, if a buffer on BJ_SyncData list is dirty and not
locked, the buffer is refiled to BJ_Locked list, submitted to the IO and
waited for IO completion.
But the fsstress test exposed a case where a buffer that had already been
submitted for IO just before the buffer_dirty(bh) check was never waited
on for IO completion.
The following patch solves this problem: if a buffer may have been
submitted for IO before the buffer_dirty(bh) check and is still being
written to disk, it is refiled to the BJ_Locked list.
Signed-off-by: Hisashi Hifumi <hifumi.hisashi@oss.ntt.co.jp>
Cc: Jan Kara <jack@ucw.cz>
Cc: "Stephen C. Tweedie" <sct@redhat.com>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Many files include the filename at the beginning; several used a wrong one.
Signed-off-by: Uwe Zeisberger <Uwe_Zeisberger@digi.com>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
|
|
The original commit code assumes that when a buffer on the BJ_SyncData
list is locked, it is being written to disk. But this is not true, and
hence it can lead to potential data loss on crash. Also, the code didn't
account for the fact that journal_dirty_data() can steal buffers from the
committing transaction, and hence could write buffers that no longer
belong to the committing transaction. Finally, one buffer could possibly
be written out several times.
The patch below tries to solve these problems by a complete rewrite of the
data commit code. We go through buffers on t_sync_datalist, lock buffers
needing write out and store them in an array. Buffers are also immediately
refiled to BJ_Locked list or unfiled (if the write out is completed). When
the array is full or we have to block on buffer lock, we submit all
accumulated buffers for IO.
[suitable for 2.6.18.x around the 2.6.19-rc2 timeframe]
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
JBD currently allocates commit and frozen buffers from slabs. With
CONFIG_SLAB_DEBUG, it's possible for an allocation to cross a page
boundary, causing IO problems.
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=200127
So, instead of allocating these from regular slabs, manage allocation from
JBD's own slabs and disable slab debug for those slabs.
[akpm@osdl.org: cleanups]
Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Fix possible assertion failure in journal_commit_transaction() on
jh->b_next_transaction == NULL (when we are processing BJ_Forget list and
buffer is not jbddirty).
!jbddirty buffers can be placed on the BJ_Forget list, for example by
journal_forget() or by __dispose_buffer(); generally such a buffer means
that it has been freed by this transaction.
Freed buffers should not be reallocated until the transaction has committed
(that's why we have the assertion there) but they *can* be reallocated when
the transaction has already been committed to disk and we are just
processing the BJ_Forget list (as soon as we remove b_committed_data from
the bitmap bh, ext3 will be able to reallocate buffers freed by the
committing transaction). So we also have to account for the case where the
buffer has been reallocated and b_next_transaction has already been set.
And one more subtle point: it can happen that we manage to reallocate the
buffer and also mark it jbddirty. Then we also add the freed buffer to the
checkpoint list of the committing transaction. But that should do no harm.
Non-jbddirty buffers should be filed to the BJ_Reserved list, not the
BJ_Metadata list. It can actually happen that we refile such buffers
during the commit phase, when, in the running transaction, we reallocate
blocks deleted in the committing transaction (and that can happen if the
committing transaction has already written all the data and is just
cleaning up the BJ_Forget list).
Signed-off-by: Jan Kara <jack@suse.cz>
Acked-by: "Stephen C. Tweedie" <sct@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
This patch reverts commit f93ea411b73594f7d144855fd34278bcf34a9afc:
[PATCH] jbd: split checkpoint lists
This broke journal_flush() for OCFS2, which is its method of being sure
that metadata is sent to disk for another node.
And two related commits 8d3c7fce2d20ecc3264c8d8c91ae3beacdeaed1b and
43c3e6f5abdf6acac9b90c86bf03f995bf7d3d92 with the subjects:
[PATCH] jbd: log_do_checkpoint fix
[PATCH] jbd: remove_transaction fix
These seem to be incremental bugfixes on the original patch and as such are
no longer needed.
Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
Cc: Jan Kara <jack@ucw.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
We also have to check that the second checkpoint list is non-empty before
dropping the transaction.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
This is the fs/ part of the big kfree cleanup patch.
Remove pointless checks for NULL prior to calling kfree() in fs/.
Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
We must be sure that the current data in the buffer are sent to disk.
Hence we have to call ll_rw_block() with SWRITE.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Fix a race between journal_commit_transaction() and other places, such as
journal_unmap_buffer(), that add buffers to the transaction's t_forget
list. We have to protect against such places by holding j_list_lock even
when traversing the t_forget list. The fact that other places can only add
buffers to the list makes the locking easier. OTOH the lock ranking
complicates things...
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Fix destruction of in-use journal_head
journal_put_journal_head() can destroy a journal_head at any time as
long as the jh's b_jcount is zero and b_transaction is NULL. It has no
locking protection against the rest of the journaling code, as the lock
it uses to protect b_jcount and bh->b_private is not used elsewhere in
jbd.
However, there are small windows where b_transaction temporarily gets set
to NULL during normal operations; typically this happens in
__journal_unfile_buffer(jh);
__journal_file_buffer(jh, ...);
call pairs, as __journal_unfile_buffer() will set b_transaction to NULL
and __journal_file_buffer() re-sets it afterwards. A truncate running
in parallel can lead to journal_unmap_buffer() destroying the jh if it
occurs between these two calls.
Fix this by adding a variant of __journal_unfile_buffer() which is only
used for these temporary jh unlinks, and which leaves the b_transaction
field intact so that we never leave a window open where b_transaction is
NULL.
Additionally, trap this error if it does occur, by checking against
jh->b_jlist being non-null when we destroy a jh.
Signed-off-by: Stephen Tweedie <sct@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
journal_commit_transaction() is 720 lines long. This patch pulls about 55
of them out into their own function, removes a goto and cleans up the
control flow a little.
Signed-off-by: Matthew Wilcox <matthew@wil.cx>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Dynamically allocate the holding array for kjournald writeback rather
than allocating it on the stack.
Signed-off-by: Alex Tomas <alex@clusterfs.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Fix a credits leak in journal_release_buffer().
The idea is to charge a buffer in journal_dirty_metadata(), not in
journal_get_*_access(). Each buffer has a flag that
journal_dirty_metadata() sets on the buffer.
Signed-off-by: Alex Tomas <alex@clusterfs.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
I found bugs in the error handling of functions around the ext3 file
system which cause inadequate completion of synchronous write I/O
operations when disk I/O failures occur. Both 2.4 and 2.6 have this
problem.
I carried out the following experiment:
1. Mount an ext3 file system on a SCSI disk in ordered mode.
2. Open a file on the file system with the O_SYNC|O_RDWR|O_TRUNC|O_CREAT flags.
3. Write 512 bytes of data to the file by calling write() every 5 seconds,
and examine the return values from write().
4. Disconnect the SCSI cable, and examine messages from the kernel.
After the SCSI cable is disconnected, write() must fail. But the result
was different: write() succeeded for a while even though kernel messages
reported SCSI I/O errors.
Applying the following modifications solved the above problem.
Signed-off-by: Hisashi Hifumi <hifumi.hisashi@lab.ntt.co.jp>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
The attached patch fixes long scheduling latencies in the ext3 code, and it
also cleans up the existing lock-break functionality to use the new
primitives.
This patch has been in the -VP patchset for quite some time.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
With the demise of intermezzo, the journal callback stuff in jbd is
entirely unused (neither ext3 nor ocfs2 use it), and thus will only bitrot
and bloat the kernel with code and data structure growth. If intermezzo
ever gets resurrected, this will be the least of the problems they have to
face (both with the generic kernel and with jbd).
Signed-off-by: Arjan van de Ven <arjan@infradead.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Processes can sleep in do_get_write_access(), waiting for buffers to be
removed from the BJ_Shadow state. We did this by doing a wake_up_buffer() in
the commit path and sleeping on the buffer in do_get_write_access().
With the filtered bit-level wakeup code this doesn't work properly any
more - the wake_up_buffer() accidentally wakes up tasks which are sleeping
in lock_buffer() as well. Those tasks now implicitly assume that the
buffer came unlocked. Net effect: bogus I/O errors when reading journal
blocks, because the buffer isn't up to date yet. Hence the recent spate of
journal_bmap() failure reports.
The patch creates a new jbd-private BH flag purely for this wakeup function.
So a wake_up_bit(..., BH_Unshadow) doesn't wake up someone who is waiting for
a wake_up_bit(BH_Lock).
JBD was the only user of wake_up_buffer(), so remove it altogether.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Signed-off-by: Al Viro <viro@parcelfarce.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Mount with "mount -o barrier=1" to enable barriers.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
From: Chris Mason <mason@suse.com>
jbd needs to wait for any io to complete on the buffer before changing the
end_io function. Using set_buffer_locked means that it can change the
end_io function while the page is in the middle of writeback, and the
writeback bit on the page will never get cleared.
Since we set the buffer dirty earlier on, if the page was previously dirty,
pdflush or memory pressure might trigger a writepage call, which will race
with jbd's set_buffer_locked.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Fix a problem discovered by Jeff Mahoney <jeffm@suse.com>, based on an initial
patch from Chris Mason <mason@suse.com>.
journal_get_descriptor_buffer() is used to obtain a regular old buffer_head
against the blockdev mapping. The caller will populate that bh by hand and
will then submit it for writing.
But there are problems:
a) The function sets bh->b_state nonatomically. But this buffer is
accessible to other CPUs via pagecache lookup.
b) The function sets the buffer dirty and then the caller populates it and
then it is submitted for I/O. Wrong order: there's a window in which the
VM could write the buffer before it is fully populated.
c) The function fails to set the buffer uptodate after zeroing it. And one
caller forgot to mark it uptodate as well. So if the VM happens to decide
to write the containing page back __block_write_full_page() encounters a
dirty, not uptodate buffer, which is an illegal state. This was generating
buffer_error() warnings before we removed buffer_error().
Leaving the buffer not uptodate also means that a concurrent reader of
/dev/hda1 could cause physical I/O against the buffer, scribbling on what
we just put in it.
So journal_get_descriptor_buffer() is changed to mark the buffer
uptodate, under the buffer lock.
I considered changing journal_get_descriptor_buffer() to return a locked
buffer but there doesn't seem to be a need for this, and both callers end up
using ll_rw_block() anyway, which requires that the buffer be unlocked again.
Note that the journal_get_descriptor_buffer() callers dirty these buffers with
set_buffer_dirty(). That's a bit naughty, because it could create dirty
buffers against a clean page - an illegal state. They really should use
mark_buffer_dirty() to dirty the page and inode as well. But all callers will
immediately write and clean the buffer anyway, so we can safely leave this
optimising cheat in place.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Almost everywhere that JBD removes a buffer from the transaction lists,
the caller then nulls out jh->b_transaction. Sometimes the caller does
that without holding the locks which are defined to protect b_transaction.
This makes me queasy.
So change things so that __journal_unfile_buffer() nulls out b_transaction
inside both j_list_lock and jbd_lock_bh_state().
It cleans things up a bit, too.
|
|
Fix a few buglets spotted by Jeff Mahoney <jeffm@suse.com>. We're currently
only checking for I/O errors against journal buffers if they were locked when
they were first inspected.
We need to check buffer_uptodate() even if the buffers were already unlocked.
|
|
For data=ordered, kjournald at commit time has to write out and wait upon
a long list of buffers. It does this in a rather awkward way with a single
list. It causes complexity and long lock hold times, and makes the
addition of rescheduling points quite hard.
So what we do instead (based on Chris Mason's suggestion) is to add a new
buffer list (t_locked_list) to the journal. It contains buffers which have
been placed under I/O.
So as we walk the t_sync_datalist list we move buffers over to t_locked_list
as they are written out.
When t_sync_datalist is empty we may then walk t_locked_list waiting for the
I/O to complete.
As a side-effect this means that we can remove the nasty synchronous wait in
journal_dirty_data which is there to avoid the kjournald livelock which would
otherwise occur when someone is continuously dirtying a buffer.
|
|
There's some nasty code in commit which deals with a lock ranking
problem. Currently, if it fails to get the lock when the local variable
`bufs' is zero, we forget to write out some ordered-data buffers. So a
subsequent crash+recovery could yield stale data in existing files.
Fix it by correctly restarting the t_sync_datalist search.
|
|
The locking rules say that b_committed_data is covered by
jbd_lock_bh_state(), so implement that during the start of commit, while
throwing away unused shadow buffers.
I don't expect that there is really a race here, but them's the rules.
|
|
Sometimes kjournald has to refile a huge number of buffers because someone
else wrote them out beforehand - they are all clean.
This happens under a lock, and scheduling latencies of 88 milliseconds on
a 2.7GHz CPU were observed.
The patch forward-ports a little bit of the 2.4 low-latency patch to fix this
problem.
Worst-case on ext3 is now sub-half-millisecond, except for when the RCU
dentry reaping softirq cuts in :(
|
|
Plug the two-megabyte-per-day memory leak.
|
|
We're getting assertion failures in commit in data=journal mode.
journal_unmap_buffer() has unexpectedly donated this buffer to the committing
transaction, and the commit-time assertion doesn't expect that to happen. It
doesn't happen in 2.4 because both paths are under lock_journal().
Simply remove the assertion: the commit code will uncheckpoint the buffer and
then recheckpoint it if needed.
|
|
From: Alex Tomas <bzzz@tmi.comex.ru>
start_this_handle() takes into account t_outstanding_credits when
calculating log free space, but journal_next_log_block() also accounts for
blocks being logged. Hence, blocks are accounted twice. This effectively
reduces the amount of log space available to transactions and forces more
commits.
Fix it by decrementing t_outstanding_credits each time we allocate a new
journal block.
|
|
- remove accidental debug code from ext3 commit.
- /proc/profile documentation fix (Randy Dunlap)
- use sb_breadahead() in ext2_preread_inode()
- unused var in mpage_writepages()
|
|
	CPU0				CPU1

	journal_get_write_access(bh)
	(Add buffer to t_reserved_list)
					journal_get_write_access(bh)
					(It's already on t_reserved_list:
					 nothing to do)
	(We decide we don't want to
	 journal the buffer after all)
	journal_release_buffer()
	(It gets pulled off the transaction)
					journal_dirty_metadata()
					(The buffer isn't on the reserved
					 list!  The kernel explodes)
Simple fix: just leave the buffer on t_reserved_list in
journal_release_buffer(). If nobody ends up claiming the buffer then it will
get thrown away at start of transaction commit.
|
|
We need to unconditionally brelse() the buffer in there, because
journal_remove_journal_head() leaves a ref behind.
release_buffer_page() does that. Call it all the time because we can usually
strip the buffers and free the page even if it was not marked buffer_freed().
Mainly affects data=journal mode
|
|
ext3 and JBD still have enormous numbers of lines which end in tabs. Fix
them all up.
|
|
With data=ordered it is often the case that a quick write-and-truncate
will leave large numbers of pages on the page LRU with no ->mapping, and
with attached buffers, because ext3 was not ready to let the pages go at
the time of truncation.
These pages are trivially reclaimable, but their seeming absence makes the VM
overcommit accounting confused (they don't count as "free", nor as
pagecache). And they make the /proc/meminfo stats look odd.
So what we do here is to try to strip the buffers from these pages as the
buffers exit the journal commit.
|
|
start_this_handle() can decide to add a handle to a transaction, but
kjournald may then move that transaction into the commit phase.
Extend the coverage of j_state_lock so that start_this_handle()'s
examination of journal->j_state is atomic wrt journal_commit_transaction().
|
|
Plug a conceivable race with the freeing up of transactions, and add some
more debug checks.
|
|
This filesystem-wide sleeping lock is no longer needed. Remove it.
|
|
lock_kernel() is no longer needed in JBD. Remove all the lock_kernel() calls
from fs/jbd/.
Here is where I get to say "ex-parrot".
|
|
Remove the remaining sleep_on() calls from JBD.
|