| author | Michael S. Tsirkin <mst@redhat.com> | 2025-12-30 08:04:15 -0500 |
|---|---|---|
| committer | Michael S. Tsirkin <mst@redhat.com> | 2026-01-28 15:32:16 -0500 |
| commit | 29615fe3fb5015a96a14cfa43bd168034719ddeb (patch) | |
| tree | e1ba57e28837c8203be1cae43ce5614e08d894a1 /scripts | |
| parent | f9108dee782fe45318a2c9f007fb72ab370d476d (diff) | |
gpio: virtio: fix DMA alignment
The res and ires buffers, in struct virtio_gpio_line and struct
vgpio_irq_line respectively, are used for DMA_FROM_DEVICE via
virtqueue_add_sgs(). However, even though these buffers are tagged
____cacheline_aligned, adjacent members of these structs can still
share a DMA cacheline on platforms where ARCH_DMA_MINALIGN >
L1_CACHE_BYTES (e.g. arm64, with 128-byte DMA alignment but 64-byte
cache lines).
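
For reference, a rough sketch of the layout described above (simplified
from drivers/gpio/gpio-virtio.c; surrounding members and exact field
order are recalled from the driver and may differ in detail):

```c
/* Simplified sketch, not a verbatim copy of the driver. The response
 * buffers (res, ires) are written by the device (DMA_FROM_DEVICE). */
struct virtio_gpio_line {
	struct mutex lock;		/* protects the in-flight request */
	struct completion completion;
	struct virtio_gpio_request req ____cacheline_aligned;
	struct virtio_gpio_response res ____cacheline_aligned;	/* device-written */
	unsigned int rxlen;
};

struct vgpio_irq_line {
	u8 type;
	bool disabled;
	bool masked;
	bool queued;

	struct virtio_gpio_irq_request ireq ____cacheline_aligned;
	struct virtio_gpio_irq_response ires ____cacheline_aligned;	/* device-written */
};
```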
The existing ____cacheline_aligned annotation aligns to L1_CACHE_BYTES,
which is not always sufficient for DMA alignment. For example, with
L1_CACHE_BYTES = 32 and ARCH_DMA_MINALIGN = 128:
- irq_lines[0].ires sits at offset 128
- irq_lines[1].type sits at offset 192
so both fall in the same 128-byte DMA cacheline [128, 256).
When the device writes to irq_lines[0].ires while the CPU concurrently
modifies any of the irq_lines[1].type/disabled/masked/queued flags,
corruption can occur on non-cache-coherent platforms.
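
The effect can be illustrated with a small standalone (non-kernel)
program. The field types and sizes below are simplified assumptions, so
the printed offsets differ from the 128/192 quoted above, but the
conclusion is the same: 32-byte member alignment cannot keep the
device-written ires out of the 128-byte DMA granule that also holds the
next element's flags.

```c
/* Standalone illustration only: the field sizes are placeholders, not
 * the real virtio-gpio definitions. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

#define L1_CACHE_BYTES		32	/* assumed, per the example above */
#define ARCH_DMA_MINALIGN	128	/* assumed, per the example above */

struct irq_request  { uint8_t buf[8]; };	/* placeholder size */
struct irq_response { uint8_t buf[8]; };	/* placeholder size */

struct vgpio_irq_line {
	uint8_t type;
	bool disabled, masked, queued;
	struct irq_request  ireq __attribute__((aligned(L1_CACHE_BYTES)));
	struct irq_response ires __attribute__((aligned(L1_CACHE_BYTES)));
};

int main(void)
{
	/* Offsets within an irq_lines[] array, relative to its start. */
	size_t ires0 = offsetof(struct vgpio_irq_line, ires);
	size_t type1 = sizeof(struct vgpio_irq_line);	/* irq_lines[1].type */

	printf("irq_lines[0].ires at offset %zu -> DMA granule %zu\n",
	       ires0, ires0 / ARCH_DMA_MINALIGN);
	printf("irq_lines[1].type at offset %zu -> DMA granule %zu\n",
	       type1, type1 / ARCH_DMA_MINALIGN);
	printf("share a DMA granule: %s\n",
	       ires0 / ARCH_DMA_MINALIGN == type1 / ARCH_DMA_MINALIGN ?
	       "yes" : "no");
	return 0;
}
```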
Fix this by using __dma_from_device_group_begin()/end() annotations on
the DMA buffers. Drop ____cacheline_aligned: it is not required to keep
the request and response isolated from each other, and keeping it would
only increase the memory cost.
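
A sketch of what the fixed layout might look like. Only the macro names
and the removal of ____cacheline_aligned come from this log; the
bracketing style and (lack of) macro arguments are assumptions, not
taken from the actual patch:

```c
/* Hypothetical sketch of the fix described above; the exact macro
 * usage is assumed. */
struct vgpio_irq_line {
	u8 type;
	bool disabled;
	bool masked;
	bool queued;

	struct virtio_gpio_irq_request ireq;

	/* The device writes here (DMA_FROM_DEVICE); keep the buffer in
	 * its own ARCH_DMA_MINALIGN-sized region so CPU stores to the
	 * flags above can never dirty the same DMA cacheline. */
	__dma_from_device_group_begin();
	struct virtio_gpio_irq_response ires;
	__dma_from_device_group_end();
};
```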
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Message-ID: <ba7e025a6c84aed012421468d83639e5dae982b0.1767601130.git.mst@redhat.com>
Acked-by: Bartosz Golaszewski <bartosz.golaszewski@oss.qualcomm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
