<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/drivers/block/zram, branch next/master</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=next%2Fmaster</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=next%2Fmaster'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2026-04-17T06:12:47Z</updated>
<entry>
<title>zram: reject unrecognized type= values in recompress_store()</title>
<updated>2026-04-17T06:12:47Z</updated>
<author>
<name>Andrew Stellman</name>
<email>astellman@stellman-greene.com</email>
</author>
<published>2026-04-07T15:30:27Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=ee57526a455e72c7bbb461eabc9785d95b323c46'/>
<id>urn:sha1:ee57526a455e72c7bbb461eabc9785d95b323c46</id>
<content type='text'>
recompress_store() parses the type= parameter with three if statements
checking for "idle", "huge", and "huge_idle".  An unrecognized value
silently falls through with mode left at 0, causing the recompression pass
to run with no slot filter — processing all slots instead of the
intended subset.

Add a !mode check after the type parsing block to return -EINVAL for
unrecognized values, consistent with the function's other parameter
validation.
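
The described control flow can be modelled with a small sketch (flag values and the function name are illustrative, not the kernel's; whether an absent type= selects a default is not shown):

```python
RECOMPRESS_IDLE = 1   # illustrative flag values, not the kernel's
RECOMPRESS_HUGE = 2
EINVAL = 22

def parse_recompress_type(type_str):
    """Sketch of the type= parsing described above."""
    mode = 0
    if type_str == "idle":
        mode = RECOMPRESS_IDLE
    if type_str == "huge":
        mode = RECOMPRESS_HUGE
    if type_str == "huge_idle":
        mode = RECOMPRESS_IDLE | RECOMPRESS_HUGE

    # The fix: an unrecognized value no longer falls through with
    # mode == 0 (i.e. no slot filter); it is rejected instead.
    if not mode:
        return -EINVAL
    return mode
```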

Link: https://lore.kernel.org/20260407153027.42425-1-astellman@stellman-greene.com
Signed-off-by: Andrew Stellman &lt;astellman@stellman-greene.com&gt;
Suggested-by: Sergey Senozhatsky &lt;senozhatsky@chromium.org&gt;
Reviewed-by: Sergey Senozhatsky &lt;senozhatsky@chromium.org&gt;
Cc: Jens Axboe &lt;axboe@kernel.dk&gt;
Cc: Minchan Kim &lt;minchan@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>zram: do not forget to endio for partial discard requests</title>
<updated>2026-04-17T06:12:43Z</updated>
<author>
<name>Sergey Senozhatsky</name>
<email>senozhatsky@chromium.org</email>
</author>
<published>2026-03-31T07:42:44Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=18f48870a7fe43f98ed09a1428c6ba13535be538'/>
<id>urn:sha1:18f48870a7fe43f98ed09a1428c6ba13535be538</id>
<content type='text'>
As reported by Qu Wenruo and Avinesh Kumar, the following

 getconf PAGESIZE
 65536
 blkdiscard -p 4k /dev/zram0

takes literally forever to complete.  zram doesn't support partial
discards and just returns immediately w/o doing any discard work in such
cases.  The problem is that we forget to endio on our way out, so
blkdiscard sleeps forever in submit_bio_wait().  Fix this by jumping to
the end_bio label, which calls bio_endio().
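
A minimal model of the control flow (not the kernel's actual bio plumbing; alignment math and slot freeing are elided):

```python
class Bio:
    def __init__(self):
        self.ended = False

def bio_endio(bio):
    # completes the bio; wakes the waiter in submit_bio_wait()
    bio.ended = True

def zram_bio_discard(bio, partial):
    """Shape of the fixed function described above."""
    if not partial:
        pass  # ... free the fully covered slots ...
    # end_bio: reached on both paths after the fix; before it, the
    # partial-discard path returned here without calling bio_endio(),
    # so the caller slept forever.
    bio_endio(bio)
```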

Link: https://lore.kernel.org/20260331074255.777019-1-senozhatsky@chromium.org
Fixes: 0120dd6e4e20 ("zram: make zram_bio_discard more self-contained")
Signed-off-by: Sergey Senozhatsky &lt;senozhatsky@chromium.org&gt;
Reported-by: Qu Wenruo &lt;wqu@suse.com&gt;
Closes: https://lore.kernel.org/linux-block/92361cd3-fb8b-482e-bc89-15ff1acb9a59@suse.com
Tested-by: Qu Wenruo &lt;wqu@suse.com&gt;
Reported-by: Avinesh Kumar &lt;avinesh.kumar@suse.com&gt;
Closes: https://bugzilla.suse.com/show_bug.cgi?id=1256530
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Cc: Brian Geffon &lt;bgeffon@google.com&gt;
Cc: Jens Axboe &lt;axboe@kernel.dk&gt;
Cc: Minchan Kim &lt;minchan@kernel.org&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>zram: change scan_slots to return void</title>
<updated>2026-04-05T20:53:30Z</updated>
<author>
<name>Sergey Senozhatsky</name>
<email>senozhatsky@chromium.org</email>
</author>
<published>2026-03-17T03:23:19Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=cba82993308dc66403c5c3dd27712a58e6fe3aa8'/>
<id>urn:sha1:cba82993308dc66403c5c3dd27712a58e6fe3aa8</id>
<content type='text'>
scan_slots_for_writeback() and scan_slots_for_recompress() work in a "best
effort" fashion: if they cannot allocate memory for a new pp-slot
candidate, they simply return, and post-processing selects the slots that
were successfully scanned thus far.  The scan_slots functions never return
errors and their callers never check the return status, so convert them to
return void.
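
The best-effort pattern can be sketched as follows (names are illustrative; `alloc` stands in for the kernel allocation that may fail, and the void return becomes None here):

```python
def scan_slots(candidates, nr_slots, alloc):
    """Collect pp-slot candidates until an allocation fails, then stop.
    Callers never inspected a return value, hence the void return."""
    for idx in range(nr_slots):
        slot = alloc(idx)
        if slot is None:
            return           # best effort: keep what was scanned so far
        candidates.append(slot)
```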

Link: https://lkml.kernel.org/r/20260317032349.753645-1-senozhatsky@chromium.org
Signed-off-by: Sergey Senozhatsky &lt;senozhatsky@chromium.org&gt;
Reviewed-by: SeongJae Park &lt;sj@kernel.org&gt;
Cc: Jens Axboe &lt;axboe@kernel.dk&gt;
Cc: Minchan Kim &lt;minchan@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>zram: propagate read_from_bdev_async() errors</title>
<updated>2026-04-05T20:53:30Z</updated>
<author>
<name>Sergey Senozhatsky</name>
<email>senozhatsky@chromium.org</email>
</author>
<published>2026-03-16T01:53:32Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=bf989ade270d4ca65e73d5fc1ab5e4d2ef472e80'/>
<id>urn:sha1:bf989ade270d4ca65e73d5fc1ab5e4d2ef472e80</id>
<content type='text'>
When read_from_bdev_async() fails to chain a bio - for instance, fails to
allocate a request or bio - we need to propagate the error condition so
that the upper layer is aware of it.  zram already does that by setting
-&gt;bi_status to BLK_STS_IOERR, but only for sync reads.  Change the
async read path to return its error status so that async errors are also
handled.
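
A model of the change (names and error codes are illustrative, not the kernel's exact code): the async setup path returns its error instead of swallowing it, and the caller records the failure just as the sync path already did.

```python
EIO = 5
BLK_STS_IOERR = "BLK_STS_IOERR"   # illustrative stand-in

def read_from_bdev_async(can_alloc):
    if not can_alloc:
        return -EIO      # previously lost; now propagated
    # ... chain and submit the bio ...
    return 0

def zram_read(bio_status, can_alloc):
    err = read_from_bdev_async(can_alloc)
    if err:
        bio_status["status"] = BLK_STS_IOERR   # upper layer now sees it
    return err
```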

Link: https://lkml.kernel.org/r/20260316015354.114465-1-senozhatsky@chromium.org
Signed-off-by: Sergey Senozhatsky &lt;senozhatsky@chromium.org&gt;
Suggested-by: Brian Geffon &lt;bgeffon@google.com&gt;
Acked-by: Brian Geffon &lt;bgeffon@google.com&gt;
Cc: Minchan Kim &lt;minchan@kernel.org&gt;
Cc: Richard Chang &lt;richardycc@google.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>zram: optimize LZ4 dictionary compression performance</title>
<updated>2026-04-05T20:53:30Z</updated>
<author>
<name>gao xu</name>
<email>gaoxu2@honor.com</email>
</author>
<published>2026-03-13T02:41:14Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=f0f6f787143068b23c5808e7a63aef03601f1377'/>
<id>urn:sha1:f0f6f787143068b23c5808e7a63aef03601f1377</id>
<content type='text'>
Calling `LZ4_loadDict()` repeatedly in zram causes significant overhead
due to its internal dictionary pre-processing.  This commit introduces a
template stream mechanism that pre-processes the dictionary only once,
when the dictionary is initially set or modified, and then cheaply copies
this state for each subsequent compression.

Verification Test Items:
Test Platform: android16-6.12
1. Collect Anonymous Page Dataset
1) Apply the following patch:
static bool zram_meta_alloc(struct zram *zram, u64 disksize)
	if (!huge_class_size)
-		huge_class_size = zs_huge_class_size(zram-&gt;mem_pool);
+		huge_class_size = 0;

2) Install multiple apps and run monkey testing until SwapFree is close to 0.

3) Execute the following command to export data:
dd if=/dev/block/zram0 of=/data/samples/zram_dump.img bs=4K

2. Train Dictionary
Since LZ4 does not have a dedicated dictionary training tool, the zstd
tool can be used for training[1]. The command is as follows:
zstd --train /data/samples/* --split=4096 --maxdict=64KB -o /vendor/etc/dict_data

3. Test Code
adb shell "dd if=/data/samples/zram_dump.img of=/dev/test_pattern bs=4096 count=131072 conv=fsync"
adb shell "swapoff /dev/block/zram0"
adb shell "echo 1 &gt; /sys/block/zram0/reset"
adb shell "echo lz4 &gt; /sys/block/zram0/comp_algorithm"
adb shell "echo dict=/vendor/etc/dict_data   &gt;  /sys/block/zram0/algorithm_params"
adb shell "echo 6G &gt; /sys/block/zram0/disksize"
echo "Start Compression"
adb shell "taskset 80 dd if=/dev/test_pattern of=/dev/block/zram0 bs=4096 count=131072 conv=fsync"
echo.
echo "Start Decompression"
adb shell "taskset 80 dd if=/dev/block/zram0 of=/dev/output_result bs=4096 count=131072 conv=fsync"
echo "mm_stat:"
adb shell "cat /sys/block/zram0/mm_stat"
echo.
Note: To ensure stable test results, it is best to lock the CPU frequency
before executing the test.

LZ4 supports dictionaries up to 64KB.  Below are the test results for
compression speeds at various dictionary sizes:
dict_size       base      patch
  4 KB        156M/s     219M/s
  8 KB        136M/s     217M/s
 16 KB         98M/s     214M/s
 32 KB         66M/s     225M/s
 64 KB         38M/s     224M/s

When an LZ4 compression dictionary is enabled, compression speed is
negatively impacted by the dictionary's size; larger dictionaries result
in slower compression.  This patch eliminates the influence of dictionary
size on compression speed, ensuring consistent performance regardless of
dictionary scale.
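
The template-stream idea can be modelled with a toy sketch (DictState and the seeding loop are illustrative, not LZ4's actual internals): the expensive pre-processing runs once per dictionary change, and each compression starts from a cheap copy of that fixed-size state.

```python
import copy

class DictState:
    """Stand-in for LZ4's internal, fixed-size dictionary state."""
    def __init__(self):
        self.table = {}

_template = None   # pre-processed once per dictionary change

def set_dictionary(data):
    """Expensive pre-processing, done once (stands in for LZ4_loadDict)."""
    global _template
    state = DictState()
    for i, byte in enumerate(data):
        state.table[byte] = i          # toy model of hash-table seeding
    _template = state

def begin_compress():
    """Per-compression setup: copy the template instead of re-running
    the pre-processing, so the per-page cost no longer depends on how
    large the dictionary is."""
    return copy.deepcopy(_template)
```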

Link: https://lkml.kernel.org/r/698181478c9c4b10aa21b4a847bdc706@honor.com
Link: https://github.com/lz4/lz4?tab=readme-ov-file [1]
Signed-off-by: gao xu &lt;gaoxu2@honor.com&gt;
Acked-by: Sergey Senozhatsky &lt;senozhatsky@chromium.org&gt;
Cc: Jens Axboe &lt;axboe@kernel.dk&gt;
Cc: Minchan Kim &lt;minchan@kernel.org&gt;
Cc: Suren Baghdasaryan &lt;surenb@google.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>zram: unify and harden algo/priority params handling</title>
<updated>2026-04-05T20:53:25Z</updated>
<author>
<name>Sergey Senozhatsky</name>
<email>senozhatsky@chromium.org</email>
</author>
<published>2026-03-11T08:42:49Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=301f3922009658ee353b3177bc186a12d36b8dd3'/>
<id>urn:sha1:301f3922009658ee353b3177bc186a12d36b8dd3</id>
<content type='text'>
We have two functions that accept algo= and priority= params -
algorithm_params_store() and recompress_store().  This patch unifies and
hardens handling of those parameters.

There are 4 possible cases:

- only priority= provided [recommended]
  We need to verify that provided priority value is
  within permitted range for each particular function.

- both algo= and priority= provided
  We cannot prioritize one over the other.  All we should
  do is verify that zram is configured the way that
  user-space expects it to be: namely, that zram indeed
  has compressor algo= set up at the given priority=.

- only algo= provided [not recommended]
  We should lookup priority in compressors list.

- none provided [not recommended]
  Just use function's defaults.
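
The four cases above can be sketched as one resolution function (names are illustrative; `configured` maps priority slots to algorithm names, None meaning unused):

```python
EINVAL = 22

def resolve_params(algo, prio, configured):
    """Sketch of the four-case validation described above."""
    if prio is not None and algo is None:
        # case 1: priority only - range check
        if prio not in range(len(configured)):
            return -EINVAL
        return prio
    if prio is not None and algo is not None:
        # case 2: both - verify algo really sits at that priority
        if prio not in range(len(configured)) or configured[prio] != algo:
            return -EINVAL
        return prio
    if algo is not None:
        # case 3: algo only - look up its priority
        for i, name in enumerate(configured):
            if name == algo:
                return i
        return -EINVAL
    # case 4: none provided - fall back to the function's default
    return 0
```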

Link: https://lkml.kernel.org/r/20260311084312.1766036-7-senozhatsky@chromium.org
Signed-off-by: Sergey Senozhatsky &lt;senozhatsky@chromium.org&gt;
Suggested-by: Minchan Kim &lt;minchan@kernel.org&gt;
Cc: Brian Geffon &lt;bgeffon@google.com&gt;
Cc: gao xu &lt;gaoxu2@honor.com&gt;
Cc: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>zram: remove chained recompression</title>
<updated>2026-04-05T20:53:24Z</updated>
<author>
<name>Sergey Senozhatsky</name>
<email>senozhatsky@chromium.org</email>
</author>
<published>2026-03-11T08:42:48Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=cedfa028b54e584532026888dec94039b62b3d1f'/>
<id>urn:sha1:cedfa028b54e584532026888dec94039b62b3d1f</id>
<content type='text'>
Chained recompression has unpredictable behavior and is not useful in
practice.

First, systems usually configure just one alternative recompression
algorithm, which has slower compression/decompression but better
compression ratio.  A single alternative algorithm doesn't need chaining.

Second, even with multiple recompression algorithms, chained recompression
is suboptimal.  If a lower priority algorithm succeeds, the page is never
attempted with a higher priority algorithm, leading to worse memory
savings.  If a lower priority algorithm fails, the page is still attempted
with a higher priority algorithm, wasting resources on the failed lower
priority attempt.

In either case, the system would be better off targeting a specific
priority directly.

Chained recompression also significantly complicates the code.  Remove it.

Link: https://lkml.kernel.org/r/20260311084312.1766036-6-senozhatsky@chromium.org
Signed-off-by: Sergey Senozhatsky &lt;senozhatsky@chromium.org&gt;
Cc: Brian Geffon &lt;bgeffon@google.com&gt;
Cc: gao xu &lt;gaoxu2@honor.com&gt;
Cc: Jens Axboe &lt;axboe@kernel.dk&gt;
Cc: Minchan Kim &lt;minchan@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>zram: drop -&gt;num_active_comps</title>
<updated>2026-04-05T20:53:24Z</updated>
<author>
<name>Sergey Senozhatsky</name>
<email>senozhatsky@chromium.org</email>
</author>
<published>2026-03-11T08:42:46Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=5004a27edba5987bd75fe84c40b1b486ffae8f99'/>
<id>urn:sha1:5004a27edba5987bd75fe84c40b1b486ffae8f99</id>
<content type='text'>
It's not entirely correct to use -&gt;num_active_comps as the max-prio
limit: -&gt;num_active_comps only counts the configured algorithms, it
does not track the maximum configured priority.  For instance, in the
following theoretical example:

    [lz4] [nil] [nil] [deflate]

-&gt;num_active_comps is 2, while the actual max-prio is 3.

Drop -&gt;num_active_comps and use ZRAM_MAX_COMPS instead.
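
The commit's example can be checked with a small sketch (the slot layout is illustrative):

```python
ZRAM_MAX_COMPS = 4

def num_active_comps(comps):
    """Counts configured algorithms - what num_active_comps tracked."""
    return sum(1 for c in comps if c is not None)

def max_prio(comps):
    """Highest configured priority - what the limit actually needs."""
    m = -1
    for i, c in enumerate(comps):
        if c is not None:
            m = i
    return m
```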

Link: https://lkml.kernel.org/r/20260311084312.1766036-4-senozhatsky@chromium.org
Signed-off-by: Sergey Senozhatsky &lt;senozhatsky@chromium.org&gt;
Suggested-by: Minchan Kim &lt;minchan@kernel.org&gt;
Cc: Brian Geffon &lt;bgeffon@google.com&gt;
Cc: gao xu &lt;gaoxu2@honor.com&gt;
Cc: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>zram: do not autocorrect bad recompression parameters</title>
<updated>2026-04-05T20:53:24Z</updated>
<author>
<name>Sergey Senozhatsky</name>
<email>senozhatsky@chromium.org</email>
</author>
<published>2026-03-11T08:42:45Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=ed19b9d5504f3f7adb68ad8d8db96c390c8570e5'/>
<id>urn:sha1:ed19b9d5504f3f7adb68ad8d8db96c390c8570e5</id>
<content type='text'>
Do not silently autocorrect a bad recompression priority parameter value;
error out instead.

Link: https://lkml.kernel.org/r/20260311084312.1766036-3-senozhatsky@chromium.org
Signed-off-by: Sergey Senozhatsky &lt;senozhatsky@chromium.org&gt;
Suggested-by: Minchan Kim &lt;minchan@kernel.org&gt;
Cc: Brian Geffon &lt;bgeffon@google.com&gt;
Cc: gao xu &lt;gaoxu2@honor.com&gt;
Cc: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>zram: do not permit params change after init</title>
<updated>2026-04-05T20:53:24Z</updated>
<author>
<name>Sergey Senozhatsky</name>
<email>senozhatsky@chromium.org</email>
</author>
<published>2026-03-11T08:42:44Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=241f9005b1c81c2637eef2c836a03c83b4f3eeb9'/>
<id>urn:sha1:241f9005b1c81c2637eef2c836a03c83b4f3eeb9</id>
<content type='text'>
Patch series "zram: recompression cleanups and tweaks", v2.

This series is a somewhat random mix of fixups, recompression cleanups and
improvements, partly based on internal conversations.  A few patches in
the series remove unexpected or confusing behaviour, e.g. auto-correction
of a bad priority= param for recompression, which should always have been
an error.  It also removes "chained recompression", which has tricky,
unexpected and confusing behaviour at times.  We also unify and harden the
handling of algo/priority params.  There is also the addition of a missing
device lock in algorithm_params_store(), which previously permitted
modification of algo params while the device is active.


This patch (of 6):

First, algorithm_params_store(), like any sysfs handler, should grab
device lock.

Second, like any write() sysfs handler, it should grab device lock in
exclusive mode.

Third, it should not permit changing the algos' parameters after device
init, as this doesn't make sense - we cannot compress with one C/D dict
and then just switch to a different one, for example.

Another thing to notice is that algorithm_params_store() accesses device's
-&gt;comp_algs for algo priority lookup, which should be protected by device
lock in exclusive mode in general.
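
The hardened handler can be sketched as follows (names are illustrative, not the kernel's exact identifiers): an exclusive lock around the whole store, plus an init-time rejection.

```python
import threading

EBUSY = 16

class ZramDev:
    def __init__(self):
        self.lock = threading.Lock()   # stands in for the exclusive device lock
        self.init_done = False
        self.comp_algs = {}

def algorithm_params_store(dev, prio, params):
    """Sketch of the guarded sysfs handler described above."""
    with dev.lock:                      # exclusive mode
        # Reject changes after init: data already compressed with the
        # old params (e.g. a C/D dict) could not be read back after a
        # switch.
        if dev.init_done:
            return -EBUSY
        dev.comp_algs[prio] = params    # comp_algs access, now protected
        return 0
```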

Link: https://lkml.kernel.org/r/20260311084312.1766036-1-senozhatsky@chromium.org
Link: https://lkml.kernel.org/r/20260311084312.1766036-2-senozhatsky@chromium.org
Fixes: 4eac932103a5 ("zram: introduce algorithm_params device attribute")
Signed-off-by: Sergey Senozhatsky &lt;senozhatsky@chromium.org&gt;
Acked-by: Brian Geffon &lt;bgeffon@google.com&gt;
Cc: gao xu &lt;gaoxu2@honor.com&gt;
Cc: Jens Axboe &lt;axboe@kernel.dk&gt;
Cc: Minchan Kim &lt;minchan@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
</feed>
