<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/fs/jbd2, branch v3.18.74</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.18.74</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.18.74'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2016-11-24T02:28:38Z</updated>
<entry>
<title>jbd2: fix incorrect unlock on j_list_lock</title>
<updated>2016-11-24T02:28:38Z</updated>
<author>
<name>Taesoo Kim</name>
<email>tsgatesv@gmail.com</email>
</author>
<published>2016-10-13T03:19:18Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=74407793c5d017cfa295a226f06160331c2bc29e'/>
<id>urn:sha1:74407793c5d017cfa295a226f06160331c2bc29e</id>
<content type='text'>
[ Upstream commit 559cce698eaf4ccecb2213b2519ea3a0413e5155 ]

When 'jh-&gt;b_transaction == transaction' (asserted by the check below)

  J_ASSERT_JH(jh, (jh-&gt;b_transaction == transaction || ...

'journal-&gt;j_list_lock' will be incorrectly unlocked, since
the lock is acquired only at the end of the if / else-if
statements (the else case is missing).

Signed-off-by: Taesoo Kim &lt;tsgatesv@gmail.com&gt;
Signed-off-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
Reviewed-by: Andreas Dilger &lt;adilger@dilger.ca&gt;
Fixes: 6e4862a5bb9d12be87e4ea5d9a60836ebed71d28
Cc: stable@vger.kernel.org # 3.14+
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
</content>
</entry>
<entry>
<title>jbd2: fix FS corruption possibility in jbd2_journal_destroy() on umount path</title>
<updated>2016-04-18T12:49:27Z</updated>
<author>
<name>OGAWA Hirofumi</name>
<email>hirofumi@mail.parknet.co.jp</email>
</author>
<published>2016-03-10T04:47:25Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=44d77496ba729e62328930e9f949adb32b4c3af7'/>
<id>urn:sha1:44d77496ba729e62328930e9f949adb32b4c3af7</id>
<content type='text'>
[ Upstream commit c0a2ad9b50dd80eeccd73d9ff962234590d5ec93 ]

On the umount path, jbd2_journal_destroy() writes the latest transaction
ID (-&gt;j_tail_sequence) to be used at the next mount.

The bug is that -&gt;j_tail_sequence does not hold the latest transaction
ID in some cases, so at the next mount there is a chance of conflicting
with remaining (not yet overwritten) transactions.

	mount (id=10)
	write transaction (id=11)
	write transaction (id=12)
	umount (id=10) &lt;= the bug doesn't write the latest ID

	mount (id=10)
	write transaction (id=11)
	crash

	mount
	[recovery process]
		transaction (id=11)
		transaction (id=12) &lt;= valid transaction ID, but old commit
                                       must not replay

As shown above, this bug can cause recovery failure or FS corruption.

So why doesn't -&gt;j_tail_sequence point to the latest ID?

Because if checkpoint transactions were reclaimed by memory pressure
(i.e. bdev_try_to_free_page()), -&gt;j_tail_sequence is not updated.
(Another case is __jbd2_journal_clean_checkpoint_list() being called
with an empty transaction.)

So in the above cases, -&gt;j_tail_sequence does not point to the latest
transaction ID on the umount path. In addition, the REQ_FLUSH for the
checkpoint is not issued either.

So, to fix this problem with minimal changes, this patch updates
-&gt;j_tail_sequence and issues REQ_FLUSH.  (With more complex changes,
some optimizations would be possible, for example avoiding unnecessary
REQ_FLUSH.)

BTW,

	journal-&gt;j_tail_sequence =
		++journal-&gt;j_transaction_sequence;

The increment of -&gt;j_transaction_sequence seems to be unnecessary,
but ext3 does the same.
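The effect of the fix can be modeled with a toy journal in userspace C
(the struct and helpers below are hypothetical stand-ins for illustration,
not jbd2 code; only the field names mirror jbd2's):

```c
/* toy model of the umount-path bug described above */
struct toy_journal {
	unsigned int j_tail_sequence;        /* ID saved for the next mount */
	unsigned int j_transaction_sequence; /* next transaction ID */
};

static void commit_transaction(struct toy_journal *j)
{
	j->j_transaction_sequence++;         /* id=11, id=12, ... */
}

/* buggy umount: checkpoints reclaimed by memory pressure never updated
 * j_tail_sequence, and destroy does not fix it up */
static unsigned int umount_buggy(const struct toy_journal *j)
{
	return j->j_tail_sequence;           /* stale ID written out */
}

/* fixed umount: fold the latest ID back in, as the patch does (the
 * real fix also issues REQ_FLUSH) */
static unsigned int umount_fixed(struct toy_journal *j)
{
	j->j_tail_sequence = ++j->j_transaction_sequence;
	return j->j_tail_sequence;
}
```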

Signed-off-by: OGAWA Hirofumi &lt;hirofumi@mail.parknet.co.jp&gt;
Signed-off-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin &lt;sasha.levin@oracle.com&gt;
</content>
</entry>
<entry>
<title>ext4, jbd2: ensure entering into panic after recording an error in superblock</title>
<updated>2016-01-21T16:23:28Z</updated>
<author>
<name>Daeho Jeong</name>
<email>daeho.jeong@samsung.com</email>
</author>
<published>2015-10-18T21:02:56Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=8fecc1e2c4b4a71abbbead8afe3098fce2863569'/>
<id>urn:sha1:8fecc1e2c4b4a71abbbead8afe3098fce2863569</id>
<content type='text'>
[ Upstream commit 4327ba52afd03fc4b5afa0ee1d774c9c5b0e85c5 ]

If an ext4 filesystem uses JBD2 journaling and an error occurs,
journaling is aborted first, the error number is recorded in the JBD2
superblock and, finally, the system enters the panic state under the
"errors=panic" option.  But in rare cases this sequence gets twisted,
as in the figure below, and the system can enter the panic state
(which means a system reset in mobile environments) before the error
has been recorded in the journal superblock. In this case, e2fsck
cannot recognize that a filesystem failure occurred in the previous
run, and the corruption is never fixed.

Task A                        Task B
ext4_handle_error()
-&gt; jbd2_journal_abort()
  -&gt; __journal_abort_soft()
    -&gt; __jbd2_journal_abort_hard()
    | -&gt; journal-&gt;j_flags |= JBD2_ABORT;
    |
    |                         __ext4_abort()
    |                         -&gt; jbd2_journal_abort()
    |                         | -&gt; __journal_abort_soft()
    |                         |   -&gt; if (journal-&gt;j_flags &amp; JBD2_ABORT)
    |                         |           return;
    |                         -&gt; panic()
    |
    -&gt; jbd2_journal_update_sb_errno()

Tested-by: Hobin Woo &lt;hobin.woo@samsung.com&gt;
Signed-off-by: Daeho Jeong &lt;daeho.jeong@samsung.com&gt;
Signed-off-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin &lt;sasha.levin@oracle.com&gt;
</content>
</entry>
<entry>
<title>jbd2: avoid infinite loop when destroying aborted journal</title>
<updated>2015-11-13T18:14:15Z</updated>
<author>
<name>Jan Kara</name>
<email>jack@suse.com</email>
</author>
<published>2015-07-28T18:57:14Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=ffb04e43c08ae23a4b3995ee77cc7fcc17a7afba'/>
<id>urn:sha1:ffb04e43c08ae23a4b3995ee77cc7fcc17a7afba</id>
<content type='text'>
[ Upstream commit 841df7df196237ea63233f0f9eaa41db53afd70f ]

Commit 6f6a6fda2945 "jbd2: fix ocfs2 corrupt when updating journal
superblock fails" changed jbd2_cleanup_journal_tail() to return EIO
when the journal is aborted. That makes the logic in
jbd2_log_do_checkpoint() bail out, which is fine, except that
jbd2_journal_destroy() expects jbd2_log_do_checkpoint() to always make
progress in cleaning the journal. Without that, jbd2_journal_destroy()
just spins in an infinite loop.

Fix jbd2_journal_destroy() to clean up the journal checkpoint lists if
jbd2_log_do_checkpoint() fails with an error.

Reported-by: Eryu Guan &lt;guaneryu@gmail.com&gt;
Tested-by: Eryu Guan &lt;guaneryu@gmail.com&gt;
Fixes: 6f6a6fda294506dfe0e3e0a253bb2d2923f28f0a
Signed-off-by: Jan Kara &lt;jack@suse.com&gt;
Signed-off-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
Signed-off-by: Sasha Levin &lt;sasha.levin@oracle.com&gt;
</content>
</entry>
<entry>
<title>jbd2: fix ocfs2 corrupt when updating journal superblock fails</title>
<updated>2015-07-04T03:02:30Z</updated>
<author>
<name>Joseph Qi</name>
<email>joseph.qi@huawei.com</email>
</author>
<published>2015-06-15T18:36:01Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=1215720518ca3601baf1dd2a13f89aaf5511abf2'/>
<id>urn:sha1:1215720518ca3601baf1dd2a13f89aaf5511abf2</id>
<content type='text'>
[ Upstream commit 6f6a6fda294506dfe0e3e0a253bb2d2923f28f0a ]

If updating the journal superblock fails after the journal data has
been flushed, the error is swallowed and the caller is misled into
treating it as a normal case.  In ocfs2, the checkpoint is then
treated as successful and the other node can get the lock to update.
Since sb_start still points to the old log block, the other node will
rewrite the journal data during journal recovery. Thus the new updates
will be overwritten and ocfs2 gets corrupted.  So in the above case we
have to return the error, and ocfs2_commit_cache will take care of it
and prevent the other node from updating first.  Only after recovering
the journal can it apply the new updates.
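A minimal userspace sketch of the error-propagation change (the
function names and the sb_update_result stand-in below are hypothetical,
chosen for illustration only):

```c
#include <errno.h>

static int sb_update_result;	/* stand-in for the real superblock I/O */

static int update_journal_superblock(void)
{
	return sb_update_result;
}

/* old shape: the write error is dropped, so the caller (e.g.
 * ocfs2_commit_cache) believes the checkpoint completed */
static int journal_flush_old(void)
{
	update_journal_superblock();
	return 0;
}

/* new shape: hand the error back so the caller can hold off the other
 * node until journal recovery has run */
static int journal_flush_new(void)
{
	int err = update_journal_superblock();
	if (err)
		return err;
	return 0;
}
```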

The issue discussion mail can be found at:
https://oss.oracle.com/pipermail/ocfs2-devel/2015-June/010856.html
http://comments.gmane.org/gmane.comp.file-systems.ext4/48841

[ Fixed bug in patch which allowed a non-negative error return from
  jbd2_cleanup_journal_tail() to leak out of jbd2_journal_flush(); this
  was causing xfstests ext4/306 to fail. -- Ted ]

Reported-by: Yiwen Jiang &lt;jiangyiwen@huawei.com&gt;
Signed-off-by: Joseph Qi &lt;joseph.qi@huawei.com&gt;
Signed-off-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
Tested-by: Yiwen Jiang &lt;jiangyiwen@huawei.com&gt;
Cc: Junxiao Bi &lt;junxiao.bi@oracle.com&gt;
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin &lt;sasha.levin@oracle.com&gt;
</content>
</entry>
<entry>
<title>jbd2: use GFP_NOFS in jbd2_cleanup_journal_tail()</title>
<updated>2015-07-04T03:02:30Z</updated>
<author>
<name>Dmitry Monakhov</name>
<email>dmonakhov@openvz.org</email>
</author>
<published>2015-06-15T04:18:02Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=c84ed5e549f31ce4567a601702ce9af94826ea7b'/>
<id>urn:sha1:c84ed5e549f31ce4567a601702ce9af94826ea7b</id>
<content type='text'>
[ Upstream commit b4f1afcd068f6e533230dfed00782cd8a907f96b ]

jbd2_cleanup_journal_tail() can be invoked by jbd2__journal_start(),
so allocations should be done with GFP_NOFS.

[Full stack trace snipped from 3.10-rh7]
[&lt;ffffffff815c4bd4&gt;] dump_stack+0x19/0x1b
[&lt;ffffffff8105dba1&gt;] warn_slowpath_common+0x61/0x80
[&lt;ffffffff8105dcca&gt;] warn_slowpath_null+0x1a/0x20
[&lt;ffffffff815c2142&gt;] slab_pre_alloc_hook.isra.31.part.32+0x15/0x17
[&lt;ffffffff8119c045&gt;] kmem_cache_alloc+0x55/0x210
[&lt;ffffffff811477f5&gt;] ? mempool_alloc_slab+0x15/0x20
[&lt;ffffffff811477f5&gt;] mempool_alloc_slab+0x15/0x20
[&lt;ffffffff81147939&gt;] mempool_alloc+0x69/0x170
[&lt;ffffffff815cb69e&gt;] ? _raw_spin_unlock_irq+0xe/0x20
[&lt;ffffffff8109160d&gt;] ? finish_task_switch+0x5d/0x150
[&lt;ffffffff811f1a8e&gt;] bio_alloc_bioset+0x1be/0x2e0
[&lt;ffffffff8127ee49&gt;] blkdev_issue_flush+0x99/0x120
[&lt;ffffffffa019a733&gt;] jbd2_cleanup_journal_tail+0x93/0xa0 [jbd2] --&gt;GFP_KERNEL
[&lt;ffffffffa019aca1&gt;] jbd2_log_do_checkpoint+0x221/0x4a0 [jbd2]
[&lt;ffffffffa019afc7&gt;] __jbd2_log_wait_for_space+0xa7/0x1e0 [jbd2]
[&lt;ffffffffa01952d8&gt;] start_this_handle+0x2d8/0x550 [jbd2]
[&lt;ffffffff811b02a9&gt;] ? __memcg_kmem_put_cache+0x29/0x30
[&lt;ffffffff8119c120&gt;] ? kmem_cache_alloc+0x130/0x210
[&lt;ffffffffa019573a&gt;] jbd2__journal_start+0xba/0x190 [jbd2]
[&lt;ffffffff811532ce&gt;] ? lru_cache_add+0xe/0x10
[&lt;ffffffffa01c9549&gt;] ? ext4_da_write_begin+0xf9/0x330 [ext4]
[&lt;ffffffffa01f2c77&gt;] __ext4_journal_start_sb+0x77/0x160 [ext4]
[&lt;ffffffffa01c9549&gt;] ext4_da_write_begin+0xf9/0x330 [ext4]
[&lt;ffffffff811446ec&gt;] generic_file_buffered_write_iter+0x10c/0x270
[&lt;ffffffff81146918&gt;] __generic_file_write_iter+0x178/0x390
[&lt;ffffffff81146c6b&gt;] __generic_file_aio_write+0x8b/0xb0
[&lt;ffffffff81146ced&gt;] generic_file_aio_write+0x5d/0xc0
[&lt;ffffffffa01bf289&gt;] ext4_file_write+0xa9/0x450 [ext4]
[&lt;ffffffff811c31d9&gt;] ? pipe_read+0x379/0x4f0
[&lt;ffffffff811b93f0&gt;] do_sync_write+0x90/0xe0
[&lt;ffffffff811b9b6d&gt;] vfs_write+0xbd/0x1e0
[&lt;ffffffff811ba5b8&gt;] SyS_write+0x58/0xb0
[&lt;ffffffff815d4799&gt;] system_call_fastpath+0x16/0x1b

Signed-off-by: Dmitry Monakhov &lt;dmonakhov@openvz.org&gt;
Signed-off-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin &lt;sasha.levin@oracle.com&gt;
</content>
</entry>
<entry>
<title>jbd2: fix r_count overflows leading to buffer overflow in journal recovery</title>
<updated>2015-06-10T17:42:24Z</updated>
<author>
<name>Darrick J. Wong</name>
<email>darrick.wong@oracle.com</email>
</author>
<published>2015-05-14T23:11:50Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d4842a5455e20003780bd81c429065756ed1ebcb'/>
<id>urn:sha1:d4842a5455e20003780bd81c429065756ed1ebcb</id>
<content type='text'>
[ Upstream commit e531d0bceb402e643a4499de40dd3fa39d8d2e43 ]

The journal revoke block recovery code does not check r_count for
sanity, which means that an evil value of r_count could result in
the kernel reading off the end of the revoke table and into whatever
garbage lies beyond.  This could crash the kernel, so fix that.

However, in testing this fix, I discovered that the code to write
out the revoke tables was also not correctly checking whether the
block was full -- the current offset check is fine as long as the
revoke table space size is a multiple of the record size, but this
is not true when either of journal_csum_v[23] is set.
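A sanity check of the kind described above might look like this in
userspace C (the block, header, and record sizes are assumptions for
illustration, not jbd2's actual on-disk layout):

```c
#include <stdint.h>

#define TOY_BLOCK_SIZE  4096u	/* assumed journal block size */
#define TOY_HEADER_SIZE   16u	/* assumed revoke-block header size */
#define TOY_RECORD_SIZE    8u	/* one 64-bit revoked block number */

/* r_count counts the bytes used in the revoke block, header included;
 * an evil value must not send the reader past the end of the block */
static int r_count_valid(uint32_t r_count)
{
	if (r_count < TOY_HEADER_SIZE || r_count > TOY_BLOCK_SIZE)
		return 0;
	/* must also describe a whole number of records */
	return (r_count - TOY_HEADER_SIZE) % TOY_RECORD_SIZE == 0;
}
```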

Signed-off-by: Darrick J. Wong &lt;darrick.wong@oracle.com&gt;
Signed-off-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
Reviewed-by: Jan Kara &lt;jack@suse.cz&gt;
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin &lt;sasha.levin@oracle.com&gt;
</content>
</entry>
<entry>
<title>ext4: fix NULL pointer dereference when journal restart fails</title>
<updated>2015-06-10T17:42:23Z</updated>
<author>
<name>Lukas Czerner</name>
<email>lczerner@redhat.com</email>
</author>
<published>2015-05-14T22:55:18Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d08570f3f052876a7a4472b66daac86c6a11b6b0'/>
<id>urn:sha1:d08570f3f052876a7a4472b66daac86c6a11b6b0</id>
<content type='text'>
[ Upstream commit 9d506594069355d1fb2de3f9104667312ff08ed3 ]

Currently when a journal restart fails, we set the handle's
h_transaction to NULL to indicate that the handle has been effectively
aborted. We handle this situation quietly in jbd2_journal_stop() and
just free the handle and exit, because everything else was done before
we attempted (and failed) to restart the journal.

Unfortunately there are a number of problems with that approach,
introduced with commit 41a5b913197c ("jbd2: invalidate handle if
jbd2_journal_restart() fails").

First of all in ext4 jbd2_journal_stop() will be called through
__ext4_journal_stop() where we would try to get a hold of the superblock
by dereferencing h_transaction which in this case would lead to NULL
pointer dereference and crash.

In addition we free the handle regardless of its refcount, which is
bad as well, because others up the call chain will still reference the
handle, so we might end up referencing already-freed memory.

Moreover, it is expected that we will see aborted as well as detached
handles in some of the journalling functions as the error propagates
up the stack, so it is unnecessary to WARN_ON every time we get a
detached handle.

And finally we might leak memory by forgetting to free the reserved
handle in jbd2_journal_stop() in the case where the handle was
detached from the transaction (h_transaction is NULL).

Fix the NULL pointer dereference in __ext4_journal_stop() by just
calling jbd2_journal_stop() quietly as suggested by Jan Kara. Also fix
the potential memory leak in jbd2_journal_stop() and use proper
handle refcounting before we attempt to free it to avoid use-after-free
issues.

And finally remove all WARN_ON(!transaction) from the code so that we do
not get random traces when something goes wrong because when journal
restart fails we will get to some of those functions.

Cc: stable@vger.kernel.org
Signed-off-by: Lukas Czerner &lt;lczerner@redhat.com&gt;
Signed-off-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
Reviewed-by: Jan Kara &lt;jack@suse.cz&gt;
Signed-off-by: Sasha Levin &lt;sasha.levin@oracle.com&gt;
</content>
</entry>
<entry>
<title>jbd2: fix regression where we fail to initialize checksum seed when loading</title>
<updated>2014-12-02T02:57:06Z</updated>
<author>
<name>Darrick J. Wong</name>
<email>darrick.wong@oracle.com</email>
</author>
<published>2014-12-02T00:22:23Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=32f3869184d498850d36b7e6aa3b9f5260ea648a'/>
<id>urn:sha1:32f3869184d498850d36b7e6aa3b9f5260ea648a</id>
<content type='text'>
When we're enabling journal features, we cannot use the predicate
jbd2_journal_has_csum_v2or3() because we haven't yet set the sb
feature flag fields!  Moreover, we just finished loading the shash
driver, so the test is unnecessary; calculate the seed always.

Without this patch, we fail to initialize the checksum seed the first
time we turn on journal_checksum, which means that all journal blocks
written during that first mount are corrupt.  Transactions written
after the second mount will be fine, since the feature flag will be
set in the journal superblock.  xfstests generic/{034,321,322} are the
regression tests.

(This is important for 3.18.)

Signed-off-by: Darrick J. Wong &lt;darrick.wong@oracle.com&gt;
Reported-by: Eric Whitney &lt;enwlinux@gmail.com&gt;
Signed-off-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
</content>
</entry>
<entry>
<title>jbd2: use a better hash function for the revoke table</title>
<updated>2014-10-30T14:53:17Z</updated>
<author>
<name>Theodore Ts'o</name>
<email>tytso@mit.edu</email>
</author>
<published>2014-10-30T14:53:17Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d48458d4a768cece43f80a081a26cf912877da9c'/>
<id>urn:sha1:d48458d4a768cece43f80a081a26cf912877da9c</id>
<content type='text'>
The old hash function didn't work well for 64-bit block numbers, and
used undefined (negative) shift right behavior.  Use the generic
64-bit hash function instead.
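The replacement is the kernel's generic hash_64(); a userspace sketch of
the same multiplicative scheme (the exact golden-ratio constant is taken
from later kernels and should be treated as an assumption here):

```c
#include <stdint.h>

/* 64-bit golden-ratio constant as used by later kernels' hash_64() */
#define GOLDEN_RATIO_64 0x61C8864680B583EBull

/* multiply and keep the top bits: well mixed even for large 64-bit
 * block numbers, and no signed (undefined) right shifts anywhere */
static uint32_t toy_hash_64(uint64_t val, unsigned int bits)
{
	return (uint32_t)((val * GOLDEN_RATIO_64) >> (64 - bits));
}
```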

Signed-off-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
Reported-by: Andrey Ryabinin &lt;a.ryabinin@samsung.com&gt;
</content>
</entry>
</feed>
