<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/include/linux/nvme.h, branch v3.18.22</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.18.22</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.18.22'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2014-06-13T14:43:34Z</updated>
<entry>
<title>NVMe: Fix hot cpu notification dead lock</title>
<updated>2014-06-13T14:43:34Z</updated>
<author>
<name>Keith Busch</name>
<email>keith.busch@intel.com</email>
</author>
<published>2014-06-11T17:51:35Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=f3db22feb5de6b98b7bae924c2d4b6c8d65bedae'/>
<id>urn:sha1:f3db22feb5de6b98b7bae924c2d4b6c8d65bedae</id>
<content type='text'>
There is a potential deadlock if a cpu event occurs during nvme probe,
since probe registers with hot cpu notification. This fixes the race by
having the module register for notification outside of probe rather
than having each device register.

The actual work is done in a scheduled work queue instead of in the
notifier since assigning IO queues has the potential to block if the
driver creates additional queues.

Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
Signed-off-by: Matthew Wilcox &lt;matthew.r.wilcox@intel.com&gt;
</content>
</entry>
<entry>
<title>NVMe: Rename io_timeout to nvme_io_timeout</title>
<updated>2014-06-04T03:04:30Z</updated>
<author>
<name>Matthew Wilcox</name>
<email>matthew.r.wilcox@intel.com</email>
</author>
<published>2014-06-04T03:04:30Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=bd67608a6127c994e897c49cc4f72d9095925301'/>
<id>urn:sha1:bd67608a6127c994e897c49cc4f72d9095925301</id>
<content type='text'>
It's positively immoral to have a global variable called 'io_timeout'.
Keep the module parameter called io_timeout, though.

Signed-off-by: Matthew Wilcox &lt;matthew.r.wilcox@intel.com&gt;
</content>
</entry>
<entry>
<title>NVMe: Flush with data support</title>
<updated>2014-05-05T14:54:02Z</updated>
<author>
<name>Keith Busch</name>
<email>keith.busch@intel.com</email>
</author>
<published>2014-04-29T17:41:29Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=53562be74bd06bbe74d2acf3caca5398f8eeb160'/>
<id>urn:sha1:53562be74bd06bbe74d2acf3caca5398f8eeb160</id>
<content type='text'>
It is possible a filesystem may send a flush flagged bio with write
data. There is no such composite NVMe command, so the driver sends flush
and write separately.

The device is allowed to execute these commands in any order, so it was
possible for the driver to end the bio after the write completed but
while the flush was still active. We don't want to let a filesystem
believe a flush succeeded before it really has; this could cause data
corruption on a power loss between these events. To fix this, this
patch splits the flush and write into chained bios.

Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
Signed-off-by: Matthew Wilcox &lt;matthew.r.wilcox@intel.com&gt;
</content>
</entry>
<entry>
<title>NVMe: Configure support for block flush</title>
<updated>2014-05-05T14:53:53Z</updated>
<author>
<name>Keith Busch</name>
<email>keith.busch@intel.com</email>
</author>
<published>2014-04-29T17:41:28Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=a7d2ce2832d84e0182585f63bf96ca7323b3aee7'/>
<id>urn:sha1:a7d2ce2832d84e0182585f63bf96ca7323b3aee7</id>
<content type='text'>
This configures an nvme request_queue as flush capable if the device
has a volatile write cache present.

Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
Signed-off-by: Matthew Wilcox &lt;matthew.r.wilcox@intel.com&gt;
</content>
</entry>
<entry>
<title>NVMe: Update copyright headers</title>
<updated>2014-05-05T14:41:25Z</updated>
<author>
<name>Matthew Wilcox</name>
<email>matthew.r.wilcox@intel.com</email>
</author>
<published>2014-04-11T14:37:39Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=8757ad65d30f009fe0beeb2d70d3cd834cb998f2'/>
<id>urn:sha1:8757ad65d30f009fe0beeb2d70d3cd834cb998f2</id>
<content type='text'>
Make the copyright dates accurate and remove the final paragraph that
includes the address of the FSF.

Signed-off-by: Matthew Wilcox &lt;matthew.r.wilcox@intel.com&gt;
</content>
</entry>
<entry>
<title>Merge git://git.infradead.org/users/willy/linux-nvme</title>
<updated>2014-04-11T23:45:59Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2014-04-11T23:45:59Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=3e8072d48b2dd0898e99698018b2045f8cd49965'/>
<id>urn:sha1:3e8072d48b2dd0898e99698018b2045f8cd49965</id>
<content type='text'>
Pull NVMe driver updates from Matthew Wilcox:
 "Various updates to the NVMe driver.  The most user-visible change is
  that drive hotplugging now works and CPU hotplug while an NVMe drive
  is installed should also work better"

* git://git.infradead.org/users/willy/linux-nvme:
  NVMe: Retry failed commands with non-fatal errors
  NVMe: Add getgeo to block ops
  NVMe: Start-stop nvme_thread during device add-remove.
  NVMe: Make I/O timeout a module parameter
  NVMe: CPU hot plug notification
  NVMe: per-cpu io queues
  NVMe: Replace DEFINE_PCI_DEVICE_TABLE
  NVMe: Fix divide-by-zero in nvme_trans_io_get_num_cmds
  NVMe: IOCTL path RCU protect queue access
  NVMe: RCU protected access to io queues
  NVMe: Initialize device reference count earlier
  NVMe: Add CONFIG_PM_SLEEP to suspend/resume functions
</content>
</entry>
<entry>
<title>NVMe: Retry failed commands with non-fatal errors</title>
<updated>2014-04-10T21:11:59Z</updated>
<author>
<name>Keith Busch</name>
<email>keith.busch@intel.com</email>
</author>
<published>2014-04-03T22:45:23Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=edd10d33283899fb15d99a290dcc9ceb3604ca78'/>
<id>urn:sha1:edd10d33283899fb15d99a290dcc9ceb3604ca78</id>
<content type='text'>
For commands returned with failed status, queue these for resubmission
and continue retrying them until success or for a limited amount of
time. The final timeout was arbitrarily chosen so requests can't be
retried indefinitely.

Since these are requeued on the nvmeq that submitted the command, the
callbacks have to take an nvmeq instead of an nvme_dev as a parameter
so that we can use the locked queue to append the iod to retry later.

The nvme_iod can conveniently be used to track how long we've been trying
to successfully complete an iod request. The nvme_iod also provides the
nvme prp dma mappings, so I had to move a few things around so we can
keep those mappings.

Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
[fixed checkpatch issue with long line]
Signed-off-by: Matthew Wilcox &lt;matthew.r.wilcox@intel.com&gt;
</content>
</entry>
<entry>
<title>NVMe: Make I/O timeout a module parameter</title>
<updated>2014-04-10T21:04:38Z</updated>
<author>
<name>Keith Busch</name>
<email>keith.busch@intel.com</email>
</author>
<published>2014-04-04T17:43:36Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b355084a891985d4cd0ca23b1a83366af2c4232d'/>
<id>urn:sha1:b355084a891985d4cd0ca23b1a83366af2c4232d</id>
<content type='text'>
Increase the default timeout to 30 seconds to match SCSI.

Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
[use byte instead of ushort]
Signed-off-by: Matthew Wilcox &lt;matthew.r.wilcox@intel.com&gt;
</content>
</entry>
<entry>
<title>NVMe: CPU hot plug notification</title>
<updated>2014-04-10T21:03:42Z</updated>
<author>
<name>Keith Busch</name>
<email>keith.busch@intel.com</email>
</author>
<published>2014-03-24T16:46:26Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=33b1e95c90447ea73e37e837ea0268a894919f19'/>
<id>urn:sha1:33b1e95c90447ea73e37e837ea0268a894919f19</id>
<content type='text'>
Registers with hot cpu notification to rebalance, and potentially allocate
additional, io queues.

Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
Signed-off-by: Matthew Wilcox &lt;matthew.r.wilcox@intel.com&gt;
</content>
</entry>
<entry>
<title>NVMe: per-cpu io queues</title>
<updated>2014-04-10T21:03:15Z</updated>
<author>
<name>Keith Busch</name>
<email>keith.busch@intel.com</email>
</author>
<published>2014-03-24T16:46:25Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=42f614201e80ff4cfb8b285d7190149a8e1e6cec'/>
<id>urn:sha1:42f614201e80ff4cfb8b285d7190149a8e1e6cec</id>
<content type='text'>
The device's IO queues are associated with CPUs, so we can use a per-cpu
variable to map a qid to a cpu. This provides a convenient way
to optimally assign queues to multiple cpus when the device supports
fewer queues than the host has cpus. The previous implementation may
have assigned these poorly in such situations. This patch addresses
this by sharing queues among cpus that are "close" together and should
have a lower lock contention penalty.

Signed-off-by: Keith Busch &lt;keith.busch@intel.com&gt;
Signed-off-by: Matthew Wilcox &lt;matthew.r.wilcox@intel.com&gt;
</content>
</entry>
</feed>
