| | | |
|---|---|---|
| author | Timur Tabi <ttabi@nvidia.com> | 2025-11-13 17:03:22 -0600 |
| committer | Lyude Paul <lyude@redhat.com> | 2025-11-24 17:53:22 -0500 |
| commit | 04d98b3452331fa53ec3b698b66273af6ef73288 | |
| tree | 871539f70f2d6060f8dedee8beaff1fbd3909a2c | |
| parent | f3a1d69f9b388271986f4efe1fd775df15b443c1 | |
drm/nouveau: restrict the flush page to a 32-bit address
The flush page DMA address is stored in a special register that is not
associated with the GPU's standard DMA range. For example, on Turing,
the GPU's MMU can handle 47-bit addresses, but the flush page address
register is limited to 40 bits.
At the point during device initialization when the flush page is
allocated, the DMA mask is still at its default of 32 bits. So even
though it's unlikely that the flush page could exist above a 40-bit
address, the dma_map_page() call could fail, e.g. if IOMMU is disabled
and the address is above 32 bits. The simplest way to satisfy all
constraints is to allocate the page in the DMA32 zone. Since the flush
page is literally just a page, this is an acceptable limitation. The
alternative is to temporarily set the DMA mask to 40 (or 52 for Hopper
and later) bits, but that could have unforeseen side effects.
In situations where the flush page is allocated above 32 bits and IOMMU
is disabled, you will get an error like this:
nouveau 0000:65:00.0: DMA addr 0x0000000107c56000+4096 overflow (mask ffffffff, bus limit 0).
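As a rough sketch of the approach described above (not the exact nouveau
patch; the function and variable names here are illustrative), allocating
the flush page from the DMA32 zone and mapping it looks like this:

```c
#include <linux/gfp.h>
#include <linux/dma-mapping.h>

static struct page *flush_page;
static dma_addr_t flush_addr;

static int alloc_flush_page(struct device *dev)
{
	/* GFP_DMA32 keeps the page below the 4 GiB boundary, which also
	 * satisfies the 40-bit limit of the flush-page register. */
	flush_page = alloc_page(GFP_KERNEL | GFP_DMA32 | __GFP_ZERO);
	if (!flush_page)
		return -ENOMEM;

	flush_addr = dma_map_page(dev, flush_page, 0, PAGE_SIZE,
				  DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, flush_addr)) {
		__free_page(flush_page);
		return -ENOMEM;
	}
	return 0;
}
```

Because the page is allocated below 4 GiB, dma_map_page() succeeds even
while the device's DMA mask is still at its 32-bit default.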
Fixes: 5728d064190e ("drm/nouveau/fb: handle sysmem flush page from common code")
Signed-off-by: Timur Tabi <ttabi@nvidia.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Lyude Paul <lyude@redhat.com>
Link: https://patch.msgid.link/20251113230323.1271726-1-ttabi@nvidia.com
