<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/fs, branch v2.6.30.7</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v2.6.30.7</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v2.6.30.7'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2009-09-15T17:45:25Z</updated>
<entry>
<title>nilfs2: fix preempt count underflow in nilfs_btnode_prepare_change_key</title>
<updated>2009-09-15T17:45:25Z</updated>
<author>
<name>Ryusuke Konishi</name>
<email>konishi.ryusuke@lab.ntt.co.jp</email>
</author>
<published>2009-08-29T19:21:41Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=820fdfd8431c2b06723ce2c42806f3cef0ddb2e5'/>
<id>urn:sha1:820fdfd8431c2b06723ce2c42806f3cef0ddb2e5</id>
<content type='text'>
commit b1f1b8ce0a1d71cbc72f7540134d52b79bd8f5ac upstream.

This fixes the following preempt count underflow, reported by users
under the title "[NILFS users] segctord problem" (Message-ID:
&lt;949415.6494.qm@web58808.mail.re1.yahoo.com&gt; and Message-ID:
&lt;debc30fc0908270825v747c1734xa59126623cfd5b05@mail.gmail.com&gt;):

 WARNING: at kernel/sched.c:4890 sub_preempt_count+0x95/0xa0()
 Hardware name: HP Compaq 6530b (KR980UT#ABC)
 Modules linked in: bridge stp llc bnep rfcomm l2cap xfs exportfs nilfs2 cowloop loop vboxnetadp vboxnetflt vboxdrv btusb bluetooth uvcvideo videodev v4l1_compat v4l2_compat_ioctl32 arc4 snd_hda_codec_analog ecb iwlagn iwlcore rfkill lib80211 mac80211 snd_hda_intel snd_hda_codec ehci_hcd uhci_hcd usbcore snd_hwdep snd_pcm tg3 cfg80211 psmouse snd_timer joydev libphy ohci1394 snd_page_alloc hp_accel lis3lv02d ieee1394 led_class i915 drm i2c_algo_bit video backlight output i2c_core dm_crypt dm_mod
 Pid: 4197, comm: segctord Not tainted 2.6.30-gentoo-r4-64 #7
 Call Trace:
  [&lt;ffffffff8023fa05&gt;] ? sub_preempt_count+0x95/0xa0
  [&lt;ffffffff802470f8&gt;] warn_slowpath_common+0x78/0xd0
  [&lt;ffffffff8024715f&gt;] warn_slowpath_null+0xf/0x20
  [&lt;ffffffff8023fa05&gt;] sub_preempt_count+0x95/0xa0
  [&lt;ffffffffa04ce4db&gt;] nilfs_btnode_prepare_change_key+0x11b/0x190 [nilfs2]
  [&lt;ffffffffa04d01ad&gt;] nilfs_btree_assign_p+0x19d/0x1e0 [nilfs2]
  [&lt;ffffffffa04d10ad&gt;] nilfs_btree_assign+0xbd/0x130 [nilfs2]
  [&lt;ffffffffa04cead7&gt;] nilfs_bmap_assign+0x47/0x70 [nilfs2]
  [&lt;ffffffffa04d9bc6&gt;] nilfs_segctor_do_construct+0x956/0x20f0 [nilfs2]
  [&lt;ffffffff805ac8e2&gt;] ? _spin_unlock_irqrestore+0x12/0x40
  [&lt;ffffffff803c06e0&gt;] ? __up_write+0xe0/0x150
  [&lt;ffffffff80262959&gt;] ? up_write+0x9/0x10
  [&lt;ffffffffa04ce9f3&gt;] ? nilfs_bmap_test_and_clear_dirty+0x43/0x60 [nilfs2]
  [&lt;ffffffffa04cd627&gt;] ? nilfs_mdt_fetch_dirty+0x27/0x60 [nilfs2]
  [&lt;ffffffffa04db5fc&gt;] nilfs_segctor_construct+0x8c/0xd0 [nilfs2]
  [&lt;ffffffffa04dc3dc&gt;] nilfs_segctor_thread+0x15c/0x3a0 [nilfs2]
  [&lt;ffffffffa04dbe20&gt;] ? nilfs_construction_timeout+0x0/0x10 [nilfs2]
  [&lt;ffffffff80252633&gt;] ? add_timer+0x13/0x20
  [&lt;ffffffff802370da&gt;] ? __wake_up_common+0x5a/0x90
  [&lt;ffffffff8025e960&gt;] ? autoremove_wake_function+0x0/0x40
  [&lt;ffffffffa04dc280&gt;] ? nilfs_segctor_thread+0x0/0x3a0 [nilfs2]
  [&lt;ffffffffa04dc280&gt;] ? nilfs_segctor_thread+0x0/0x3a0 [nilfs2]
  [&lt;ffffffff8025e556&gt;] kthread+0x56/0x90
  [&lt;ffffffff8020cdea&gt;] child_rip+0xa/0x20
  [&lt;ffffffff8025e500&gt;] ? kthread+0x0/0x90
  [&lt;ffffffff8020cde0&gt;] ? child_rip+0x0/0x20

This problem was caused by a missing radix_tree_preload() call in
the retry path of the nilfs_btnode_prepare_change_key() function.

Reported-by: Eric A &lt;eric225125@yahoo.com&gt;
Reported-by: Jerome Poulin &lt;jeromepoulin@gmail.com&gt;
Signed-off-by: Ryusuke Konishi &lt;konishi.ryusuke@lab.ntt.co.jp&gt;
Tested-by: Jerome Poulin &lt;jeromepoulin@gmail.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>JFFS2: add missing verify buffer allocation/deallocation</title>
<updated>2009-09-15T17:45:23Z</updated>
<author>
<name>Massimo Cirillo</name>
<email>maxcir@gmail.com</email>
</author>
<published>2009-08-27T08:44:09Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=6c135b4e133017a8aeef1534107ee4b1f11d3265'/>
<id>urn:sha1:6c135b4e133017a8aeef1534107ee4b1f11d3265</id>
<content type='text'>
commit bc8cec0dff072f1a45ce7f6b2c5234bb3411ac51 upstream.

The function jffs2_nor_wbuf_flash_setup() doesn't allocate the verify buffer
when CONFIG_JFFS2_FS_WBUF_VERIFY is defined, causing a kernel panic when
that macro is enabled and the verify function is called. Similarly,
jffs2_nor_wbuf_flash_cleanup() must free the buffer when
CONFIG_JFFS2_FS_WBUF_VERIFY is enabled.
The following patch fixes the problem and applies to the 2.6.30 kernel.

Signed-off-by: Massimo Cirillo &lt;maxcir@gmail.com&gt;
Signed-off-by: Artem Bityutskiy &lt;Artem.Bityutskiy@nokia.com&gt;
Signed-off-by: David Woodhouse &lt;David.Woodhouse@intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>ocfs2: ocfs2_write_begin_nolock() should handle len=0</title>
<updated>2009-09-09T03:34:12Z</updated>
<author>
<name>Sunil Mushran</name>
<email>sunil.mushran@oracle.com</email>
</author>
<published>2009-09-04T18:12:01Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=e38d22518cad8f3e1805bf17482b8bd2a5ec7de0'/>
<id>urn:sha1:e38d22518cad8f3e1805bf17482b8bd2a5ec7de0</id>
<content type='text'>
commit 8379e7c46cc48f51197dd663fc6676f47f2a1e71 upstream.

The bug was introduced by mainline commit e7432675f8ca868a4af365759a8d4c3779a3d922
and causes ocfs2_write_begin_nolock() to oops when len=0.

Signed-off-by: Sunil Mushran &lt;sunil.mushran@oracle.com&gt;
Signed-off-by: Joel Becker &lt;joel.becker@oracle.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>xfs: fix spin_is_locked assert on uni-processor builds</title>
<updated>2009-09-09T03:33:57Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@infradead.org</email>
</author>
<published>2009-08-19T18:43:02Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=85481d89414655115270dc891d8604d299269c58'/>
<id>urn:sha1:85481d89414655115270dc891d8604d299269c58</id>
<content type='text'>
upstream commit a8914f3a6d72c97328597a556a99daaf5cc288ae

Without SMP or preemption, spin_is_locked() always returns false,
so we can't assert on it.  Instead use assert_spin_locked(),
which does the right thing on all builds.


Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Eric Sandeen &lt;sandeen@sandeen.net&gt;
Reported-by: Johannes Engel &lt;jcnengel@googlemail.com&gt;
Tested-by: Johannes Engel &lt;jcnengel@googlemail.com&gt;
Signed-off-by: Felix Blyakher &lt;felixb@sgi.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>xfs: fix freeing of inodes not yet added to the inode cache</title>
<updated>2009-09-09T03:33:56Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@infradead.org</email>
</author>
<published>2009-08-19T18:43:01Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=f3cae744a8cae0b07d2798cac77a4be1e222409d'/>
<id>urn:sha1:f3cae744a8cae0b07d2798cac77a4be1e222409d</id>
<content type='text'>
upstream commit b36ec0428a06fcbdb67d61e9e664154e5dd9a8c7

When freeing an inode that lost the race to be added to the inode cache,
we must not call into -&gt;destroy_inode, because that would delete the
inode that won the race from the inode cache radix tree.

This patch splits a new xfs_inode_free helper out of xfs_ireclaim
and uses that plus __destroy_inode to make sure we really only free
the memory allocated for the inode that lost the race, and do not mess
with the inode cache state.

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Eric Sandeen &lt;sandeen@sandeen.net&gt;
Reported-by: Alex Samad &lt;alex@samad.com.au&gt;
Reported-by: Andrew Randrianasulu &lt;randrik@mail.ru&gt;
Reported-by: Stephane &lt;sharnois@max-t.com&gt;
Reported-by: Tommy &lt;tommy@news-service.com&gt;
Reported-by: Miah Gregory &lt;mace@darksilence.net&gt;
Reported-by: Gabriel Barazer &lt;gabriel@oxeva.fr&gt;
Reported-by: Leandro Lucarella &lt;llucax@gmail.com&gt;
Reported-by: Daniel Burr &lt;dburr@fami.com.au&gt;
Reported-by: Nickolay &lt;newmail@spaces.ru&gt;
Reported-by: Michael Guntsche &lt;mike@it-loops.com&gt;
Reported-by: Dan Carley &lt;dan.carley+linuxkern-bugs@gmail.com&gt;
Reported-by: Michael Ole Olsen &lt;gnu@gmx.net&gt;
Reported-by: Michael Weissenbacher &lt;mw@dermichi.com&gt;
Reported-by: Martin Spott &lt;Martin.Spott@mgras.net&gt;
Reported-by: Christian Kujau &lt;lists@nerdbynature.de&gt;
Tested-by: Michael Guntsche &lt;mike@it-loops.com&gt;
Tested-by: Dan Carley &lt;dan.carley+linuxkern-bugs@gmail.com&gt;
Tested-by: Christian Kujau &lt;lists@nerdbynature.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>vfs: add __destroy_inode</title>
<updated>2009-09-09T03:33:56Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@infradead.org</email>
</author>
<published>2009-08-19T18:43:00Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=e22d4dae5a805ca986063fa304d2125b98910fc2'/>
<id>urn:sha1:e22d4dae5a805ca986063fa304d2125b98910fc2</id>
<content type='text'>
backport of upstream commit 2e00c97e2c1d2ffc9e26252ca26b237678b0b772

When we want to tear down an inode that lost the add to the cache race
in XFS we must not call into -&gt;destroy_inode because that would delete
the inode that won the race from the inode cache radix tree.

This patch provides the __destroy_inode helper needed to fix this;
the actual fix will be in the next patch.  As XFS was the only reason
destroy_inode was exported, we shift the export to the new __destroy_inode.

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Eric Sandeen &lt;sandeen@sandeen.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>vfs: fix inode_init_always calling convention</title>
<updated>2009-09-09T03:33:54Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@infradead.org</email>
</author>
<published>2009-08-19T18:42:59Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b1abc2814585a3a13ce725ba62f62e9dc531010c'/>
<id>urn:sha1:b1abc2814585a3a13ce725ba62f62e9dc531010c</id>
<content type='text'>
backport of upstream commit 54e346215e4fe2ca8c94c54e546cc61902060510

Currently inode_init_always calls into -&gt;destroy_inode if the additional
initialization fails.  That's not only counter-intuitive because
inode_init_always did not allocate the inode structure, but in case of
XFS it's actively harmful as -&gt;destroy_inode might delete the inode from
a radix-tree that has never been added.  This in turn might end up
deleting the inode for the same inum that has been instantiated by
another process and cause lots of subtle problems.

Also in the case of re-initializing a reclaimable inode in XFS it would
free an inode we still want to keep alive.

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Eric Sandeen &lt;sandeen@sandeen.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>ocfs2: Initialize the cluster we're writing to in a non-sparse extend</title>
<updated>2009-09-09T03:33:52Z</updated>
<author>
<name>Sunil Mushran</name>
<email>sunil.mushran@oracle.com</email>
</author>
<published>2009-08-06T23:12:58Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=9baf278cca4043a1312f3a40bf17b979b6238ebc'/>
<id>urn:sha1:9baf278cca4043a1312f3a40bf17b979b6238ebc</id>
<content type='text'>
commit e7432675f8ca868a4af365759a8d4c3779a3d922 upstream.

In a non-sparse extend, we correctly allocate (and zero) the clusters between
the old_i_size and pos, but we don't zero the portions of the cluster we're
writing to outside of pos&lt;-&gt;len.

The fix handles clustersize &gt; pagesize and blocksize &lt; pagesize.

[Cleaned up by Joel Becker.]

Signed-off-by: Sunil Mushran &lt;sunil.mushran@oracle.com&gt;
Signed-off-by: Joel Becker &lt;joel.becker@oracle.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>kernel_read: redefine offset type</title>
<updated>2009-09-09T03:33:22Z</updated>
<author>
<name>Mimi Zohar</name>
<email>zohar@linux.vnet.ibm.com</email>
</author>
<published>2009-08-21T18:32:48Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=c80dd82ff1f0a39fb8153f0c6719cfca8e4493a0'/>
<id>urn:sha1:c80dd82ff1f0a39fb8153f0c6719cfca8e4493a0</id>
<content type='text'>
commit 6777d773a463ac045d333b989d4e44660f8d92ad upstream.

vfs_read() offset is defined as loff_t, but kernel_read()
offset is only defined as unsigned long. Redefine
kernel_read() offset as loff_t.

Signed-off-by: Mimi Zohar &lt;zohar@us.ibm.com&gt;
Signed-off-by: James Morris &lt;jmorris@namei.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>mm: fix hugetlb bug due to user_shm_unlock call</title>
<updated>2009-09-09T03:33:20Z</updated>
<author>
<name>Hugh Dickins</name>
<email>hugh.dickins@tiscali.co.uk</email>
</author>
<published>2009-08-24T15:30:28Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=00ba05d158a04889546092400b5d76443d01f112'/>
<id>urn:sha1:00ba05d158a04889546092400b5d76443d01f112</id>
<content type='text'>
commit 353d5c30c666580347515da609dd74a2b8e9b828 upstream.

2.6.30's commit 8a0bdec194c21c8fdef840989d0d7b742bb5d4bc removed
user_shm_lock() calls in hugetlb_file_setup() but left the
user_shm_unlock() call in shm_destroy().

In detail:
Assume that can_do_hugetlb_shm() returns true, so user_shm_lock() is
not called in hugetlb_file_setup(). However, user_shm_unlock() is
still called unconditionally in shm_destroy(); the subsequent
atomic_dec_and_lock(&amp;up-&gt;__count) in free_uid() is then executed, and
if up-&gt;__count reaches zero, cleanup_user_struct() is also scheduled.

Note that sched_destroy_user() is empty if CONFIG_USER_SCHED is not set.
However, the ref counter up-&gt;__count unexpectedly goes non-positive and
the corresponding structs are freed even though there are live
references to them, resulting in a kernel oops after many
shmget(SHM_HUGETLB)/shmctl(IPC_RMID) cycles with CONFIG_USER_SCHED set.

Hugh changed Stefan's suggested patch: can_do_hugetlb_shm() at the
time of shm_destroy() may give a different answer than at the time
of hugetlb_file_setup().  He also fixed newseg()'s no_id error path,
which had been missing a user_shm_unlock() ever since it came in 2.6.9.

Reported-by: Stefan Huber &lt;shuber2@gmail.com&gt;
Signed-off-by: Hugh Dickins &lt;hugh.dickins@tiscali.co.uk&gt;
Tested-by: Stefan Huber &lt;shuber2@gmail.com&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
</feed>
