Diffstat (limited to 'Documentation/gpu/amdgpu')
-rw-r--r--  Documentation/gpu/amdgpu/amd-hardware-list-info.rst          4
-rw-r--r--  Documentation/gpu/amdgpu/apu-asic-info-table.csv            35
-rw-r--r--  Documentation/gpu/amdgpu/debugfs.rst                         4
-rw-r--r--  Documentation/gpu/amdgpu/dgpu-asic-info-table.csv           58
-rw-r--r--  Documentation/gpu/amdgpu/display/dc-glossary.rst             2
-rw-r--r--  Documentation/gpu/amdgpu/display/display-contributing.rst    4
-rw-r--r--  Documentation/gpu/amdgpu/display/programming-model-dcn.rst   2
-rw-r--r--  Documentation/gpu/amdgpu/driver-core.rst                     4
-rw-r--r--  Documentation/gpu/amdgpu/index.rst                           1
-rw-r--r--  Documentation/gpu/amdgpu/process-isolation.rst               2
-rw-r--r--  Documentation/gpu/amdgpu/userq.rst                         203
11 files changed, 263 insertions, 56 deletions
diff --git a/Documentation/gpu/amdgpu/amd-hardware-list-info.rst b/Documentation/gpu/amdgpu/amd-hardware-list-info.rst
index 1786544fe7c1..e72f4ff770c4 100644
--- a/Documentation/gpu/amdgpu/amd-hardware-list-info.rst
+++ b/Documentation/gpu/amdgpu/amd-hardware-list-info.rst
@@ -10,7 +10,7 @@ Accelerated Processing Units (APU) Info
.. csv-table::
:header-rows: 1
- :widths: 3, 2, 2, 1, 1, 1, 1
+ :widths: 3, 2, 2, 1, 1, 1, 1, 1
:file: ./apu-asic-info-table.csv
Discrete GPU Info
@@ -18,6 +18,6 @@ Discrete GPU Info
.. csv-table::
:header-rows: 1
- :widths: 3, 2, 2, 1, 1, 1
+ :widths: 3, 2, 2, 1, 1, 1, 1, 1
:file: ./dgpu-asic-info-table.csv
diff --git a/Documentation/gpu/amdgpu/apu-asic-info-table.csv b/Documentation/gpu/amdgpu/apu-asic-info-table.csv
index 1d50b539677f..dee5f663a47f 100644
--- a/Documentation/gpu/amdgpu/apu-asic-info-table.csv
+++ b/Documentation/gpu/amdgpu/apu-asic-info-table.csv
@@ -1,17 +1,18 @@
-Product Name, Code Reference, DCN/DCE version, GC version, VCE/UVD/VCN version, SDMA version, MP0 version
-Radeon R* Graphics, CARRIZO/STONEY, DCE 11, 8, VCE 3 / UVD 6, 3, n/a
-Ryzen 3000 series / AMD Ryzen Embedded V1*/R1* with Radeon Vega Gfx, RAVEN/PICASSO, DCN 1.0, 9.1.0, VCN 1.0, 4.1.0, 10.0.0
-Ryzen 4000 series, RENOIR, DCN 2.1, 9.3, VCN 2.2, 4.1.2, 11.0.3
-Ryzen 3000 series / AMD Ryzen Embedded V1*/R1* with Radeon Vega Gfx, RAVEN2, DCN 1.0, 9.2.2, VCN 1.0.1, 4.1.1, 10.0.1
-SteamDeck, VANGOGH, DCN 3.0.1, 10.3.1, VCN 3.1.0, 5.2.1, 11.5.0
-Ryzen 5000 series / Ryzen 7x30 series, GREEN SARDINE / Cezanne / Barcelo / Barcelo-R, DCN 2.1, 9.3, VCN 2.2, 4.1.1, 12.0.1
-Ryzen 6000 series / Ryzen 7x35 series / Ryzen 7x36 series, YELLOW CARP / Rembrandt / Rembrandt-R, 3.1.2, 10.3.3, VCN 3.1.1, 5.2.3, 13.0.3
-Ryzen 7000 series (AM5), Raphael, 3.1.5, 10.3.6, 3.1.2, 5.2.6, 13.0.5
-Ryzen 9000 series (AM5), Granite Ridge, 3.1.5, 10.3.6, 3.1.2, 5.2.6, 13.0.5
-Ryzen 7x45 series (FL1), Dragon Range, 3.1.5, 10.3.6, 3.1.2, 5.2.6, 13.0.5
-Ryzen 7x20 series, Mendocino, 3.1.6, 10.3.7, 3.1.1, 5.2.7, 13.0.8
-Ryzen 7x40 series, Phoenix, 3.1.4, 11.0.1 / 11.0.4, 4.0.2, 6.0.1, 13.0.4 / 13.0.11
-Ryzen 8x40 series, Hawk Point, 3.1.4, 11.0.1 / 11.0.4, 4.0.2, 6.0.1, 13.0.4 / 13.0.11
-Ryzen AI 300 series, Strix Point, 3.5.0, 11.5.0, 4.0.5, 6.1.0, 14.0.0
-Ryzen AI 350 series, Krackan Point, 3.5.0, 11.5.2, 4.0.5, 6.1.2, 14.0.4
-Ryzen AI Max 300 series, Strix Halo, 3.5.1, 11.5.1, 4.0.6, 6.1.1, 14.0.1
+Product Name, Code Reference, DCN/DCE version, GC version, VCE/UVD/VCN version, SDMA version, MP0 version, MP1 version
+Radeon R* Graphics, CARRIZO/STONEY, DCE 11, 8, VCE 3 / UVD 6, 3, n/a, 8
+Ryzen 3000 series / AMD Ryzen Embedded V1*/R1* with Radeon Vega Gfx, RAVEN/PICASSO, DCN 1.0, 9.1.0, VCN 1.0, 4.1.0, 10.0.0, 10.0.0
+Ryzen 4000 series, RENOIR, DCN 2.1, 9.3, VCN 2.2, 4.1.2, 11.0.3, 12.0.1
+Ryzen 3000 series / AMD Ryzen Embedded V1*/R1* with Radeon Vega Gfx, RAVEN2, DCN 1.0, 9.2.2, VCN 1.0.1, 4.1.1, 10.0.1, 10.0.1
+SteamDeck, VANGOGH, DCN 3.0.1, 10.3.1, VCN 3.1.0, 5.2.1, 11.5.0, 11.5.0
+Ryzen 5000 series / Ryzen 7x30 series, GREEN SARDINE / Cezanne / Barcelo / Barcelo-R, DCN 2.1, 9.3, VCN 2.2, 4.1.1, 12.0.1, 12.0.1
+Ryzen 6000 series / Ryzen 7x35 series / Ryzen 7x36 series, YELLOW CARP / Rembrandt / Rembrandt-R, 3.1.2, 10.3.3, VCN 3.1.1, 5.2.3, 13.0.3, 13.0.3
+Ryzen 7000 series (AM5), Raphael, 3.1.5, 10.3.6, 3.1.2, 5.2.6, 13.0.5, 13.0.5
+Ryzen 9000 series (AM5), Granite Ridge, 3.1.5, 10.3.6, 3.1.2, 5.2.6, 13.0.5, 13.0.5
+Ryzen 7x45 series (FL1), Dragon Range, 3.1.5, 10.3.6, 3.1.2, 5.2.6, 13.0.5, 13.0.5
+Ryzen 7x20 series, Mendocino, 3.1.6, 10.3.7, 3.1.1, 5.2.7, 13.0.8, 13.0.8
+Ryzen 7x40 series, Phoenix, 3.1.4, 11.0.1 / 11.0.4, 4.0.2, 6.0.1, 13.0.4 / 13.0.11, 13.0.4 / 13.0.11
+Ryzen 8x40 series, Hawk Point, 3.1.4, 11.0.1 / 11.0.4, 4.0.2, 6.0.1, 13.0.4 / 13.0.11, 13.0.4 / 13.0.11
+Ryzen AI 300 series, Strix Point, 3.5.0, 11.5.0, 4.0.5, 6.1.0, 14.0.0, 14.0.0
+Ryzen AI 330 series, Krackan Point, 3.6.0, 11.5.3, 4.0.5, 6.1.3, 14.0.5, 14.0.5
+Ryzen AI 350 series, Krackan Point, 3.5.0, 11.5.2, 4.0.5, 6.1.2, 14.0.4, 14.0.4
+Ryzen AI Max 300 series, Strix Halo, 3.5.1, 11.5.1, 4.0.6, 6.1.1, 14.0.1, 14.0.1
diff --git a/Documentation/gpu/amdgpu/debugfs.rst b/Documentation/gpu/amdgpu/debugfs.rst
index 5150d0a95658..151d8bfc79e2 100644
--- a/Documentation/gpu/amdgpu/debugfs.rst
+++ b/Documentation/gpu/amdgpu/debugfs.rst
@@ -94,7 +94,7 @@ amdgpu_error_<name>
-------------------
Provides an interface to set an error code on the dma fences associated with
-ring <name>. The error code specified is propogated to all fences associated
+ring <name>. The error code specified is propagated to all fences associated
with the ring. Use this to inject a fence error into a ring.
amdgpu_pm_info
@@ -165,7 +165,7 @@ GTT memory.
amdgpu_regs_*
-------------
-Provides direct access to various register aperatures on the GPU. Used
+Provides direct access to various register apertures on the GPU. Used
by tools like UMR to access GPU registers.
amdgpu_regs2
diff --git a/Documentation/gpu/amdgpu/dgpu-asic-info-table.csv b/Documentation/gpu/amdgpu/dgpu-asic-info-table.csv
index d2f10ee69dfc..bfd44c6e052a 100644
--- a/Documentation/gpu/amdgpu/dgpu-asic-info-table.csv
+++ b/Documentation/gpu/amdgpu/dgpu-asic-info-table.csv
@@ -1,28 +1,30 @@
-Product Name, Code Reference, DCN/DCE version, GC version, VCN version, SDMA version
-AMD Radeon (TM) HD 8500M/ 8600M /M200 /M320 /M330 /M335 Series, HAINAN, --, 6, --, --
-AMD Radeon HD 7800 /7900 /FireGL Series, TAHITI, DCE 6, 6, VCE 1 / UVD 3, --
-AMD Radeon R7 (TM|HD) M265 /M370 /8500M /8600 /8700 /8700M, OLAND, DCE 6, 6, VCE 1 / UVD 3, --
-AMD Radeon (TM) (HD|R7) 7800 /7970 /8800 /8970 /370/ Series, PITCAIRN, DCE 6, 6, VCE 1 / UVD 3, --
-AMD Radeon (TM|R7|R9|HD) E8860 /M360 /7700 /7800 /8800 /9000(M) /W4100 Series, VERDE, DCE 6, 6, VCE 1 / UVD 3, --
-AMD Radeon HD M280X /M380 /7700 /8950 /W5100, BONAIRE, DCE 8, 7, VCE 2 / UVD 4.2, 1
-AMD Radeon (R9|TM) 200 /390 /W8100 /W9100 Series, HAWAII, DCE 8, 7, VCE 2 / UVD 4.2, 1
-AMD Radeon (TM) R(5|7) M315 /M340 /M360, TOPAZ, *, 8, --, 2
-AMD Radeon (TM) R9 200 /380 /W7100 /S7150 /M390 /M395 Series, TONGA, DCE 10, 8, VCE 3 / UVD 5, 3
-AMD Radeon (FirePro) (TM) R9 Fury Series, FIJI, DCE 10, 8, VCE 3 / UVD 6, 3
-Radeon RX 470 /480 /570 /580 /590 Series - AMD Radeon (TM) (Pro WX) 5100 /E9390 /E9560 /E9565 /V7350 /7100 /P30PH, POLARIS10, DCE 11.2, 8, VCE 3.4 / UVD 6.3, 3
-Radeon (TM) (RX|Pro WX) E9260 /460 /V5300X /550 /560(X) Series, POLARIS11, DCE 11.2, 8, VCE 3.4 / UVD 6.3, 3
-Radeon (RX/Pro) 500 /540(X) /550 /640 /WX2100 /WX3100 /WX200 Series, POLARIS12, DCE 11.2, 8, VCE 3.4 / UVD 6.3, 3
-Radeon (RX|TM) (PRO|WX) Vega /MI25 /V320 /V340L /8200 /9100 /SSG MxGPU, VEGA10, DCE 12, 9.0.1, VCE 4.0.0 / UVD 7.0.0, 4.0.0
-AMD Radeon (Pro) VII /MI50 /MI60, VEGA20, DCE 12, 9.4.0, VCE 4.1.0 / UVD 7.2.0, 4.2.0
-MI100, ARCTURUS, *, 9.4.1, VCN 2.5.0, 4.2.2
-MI200 Series, ALDEBARAN, *, 9.4.2, VCN 2.6.0, 4.4.0
-MI300 Series, AQUA_VANJARAM, *, 9.4.3, VCN 4.0.3, 4.4.2
-AMD Radeon (RX|Pro) 5600(M|XT) /5700 (M|XT|XTB) /W5700, NAVI10, DCN 2.0.0, 10.1.10, VCN 2.0.0, 5.0.0
-AMD Radeon (Pro) 5300 /5500XTB/5500(XT|M) /W5500M /W5500, NAVI14, DCN 2.0.0, 10.1.1, VCN 2.0.2, 5.0.2
-AMD Radeon RX 6800(XT) /6900(XT) /W6800, SIENNA_CICHLID, DCN 3.0.0, 10.3.0, VCN 3.0.0, 5.2.0
-AMD Radeon RX 6700 XT / 6800M / 6700M, NAVY_FLOUNDER, DCN 3.0.0, 10.3.2, VCN 3.0.0, 5.2.2
-AMD Radeon RX 6600(XT) /6600M /W6600 /W6600M, DIMGREY_CAVEFISH, DCN 3.0.2, 10.3.4, VCN 3.0.16, 5.2.4
-AMD Radeon RX 6500M /6300M /W6500M /W6300M, BEIGE_GOBY, DCN 3.0.3, 10.3.5, VCN 3.0.33, 5.2.5
-AMD Radeon RX 7900 XT /XTX, , DCN 3.2.0, 11.0.0, VCN 4.0.0, 6.0.0
-AMD Radeon RX 7800 XT, , DCN 3.2.0, 11.0.3, VCN 4.0.0, 6.0.3
-AMD Radeon RX 7600M (XT) /7700S /7600S, , DCN 3.2.1, 11.0.2, VCN 4.0.4, 6.0.2
+Product Name, Code Reference, DCN/DCE version, GC version, VCN version, SDMA version, MP0 version, MP1 version
+AMD Radeon (TM) HD 8500M/ 8600M /M200 /M320 /M330 /M335 Series, HAINAN, --, 6, --, --, --, 6
+AMD Radeon HD 7800 /7900 /FireGL Series, TAHITI, DCE 6, 6, VCE 1 / UVD 3, --, --, 6
+AMD Radeon R7 (TM|HD) M265 /M370 /8500M /8600 /8700 /8700M, OLAND, DCE 6, 6, -- / UVD 3, --, --, 6
+AMD Radeon (TM) (HD|R7) 7800 /7970 /8800 /8970 /370/ Series, PITCAIRN, DCE 6, 6, VCE 1 / UVD 3, --, --, 6
+AMD Radeon (TM|R7|R9|HD) E8860 /M360 /7700 /7800 /8800 /9000(M) /W4100 Series, VERDE, DCE 6, 6, VCE 1 / UVD 3, --, --, 6
+AMD Radeon HD M280X /M380 /7700 /8950 /W5100, BONAIRE, DCE 8, 7, VCE 2 / UVD 4.2, 1, --, 7
+AMD Radeon (R9|TM) 200 /390 /W8100 /W9100 Series, HAWAII, DCE 8, 7, VCE 2 / UVD 4.2, 1, --, 7
+AMD Radeon (TM) R(5|7) M315 /M340 /M360, TOPAZ, *, 8, --, 2, n/a, 7
+AMD Radeon (TM) R9 200 /380 /W7100 /S7150 /M390 /M395 Series, TONGA, DCE 10, 8, VCE 3 / UVD 5, 3, n/a, 7
+AMD Radeon (FirePro) (TM) R9 Fury Series, FIJI, DCE 10, 8, VCE 3 / UVD 6, 3, n/a, 7
+Radeon RX 470 /480 /570 /580 /590 Series - AMD Radeon (TM) (Pro WX) 5100 /E9390 /E9560 /E9565 /V7350 /7100 /P30PH, POLARIS10, DCE 11.2, 8, VCE 3.4 / UVD 6.3, 3, n/a, 7
+Radeon (TM) (RX|Pro WX) E9260 /460 /V5300X /550 /560(X) Series, POLARIS11, DCE 11.2, 8, VCE 3.4 / UVD 6.3, 3, n/a, 7
+Radeon (RX/Pro) 500 /540(X) /550 /640 /WX2100 /WX3100 /WX200 Series, POLARIS12, DCE 11.2, 8, VCE 3.4 / UVD 6.3, 3, n/a, 7
+Radeon (RX|TM) (PRO|WX) Vega /MI25 /V320 /V340L /8200 /9100 /SSG MxGPU, VEGA10, DCE 12, 9.0.1, VCE 4.0.0 / UVD 7.0.0, 4.0.0, 9.0.0, 9.0.0
+AMD Radeon (Pro) VII /MI50 /MI60, VEGA20, DCE 12, 9.4.0, VCE 4.1.0 / UVD 7.2.0, 4.2.0, 11.0.2, 11.0.2
+MI100, ARCTURUS, *, 9.4.1, VCN 2.5.0, 4.2.2, 11.0.4, 11.0.2
+MI200 Series, ALDEBARAN, *, 9.4.2, VCN 2.6.0, 4.4.0, 13.0.2, 13.0.2
+MI300 Series, AQUA_VANJARAM, *, 9.4.3, VCN 4.0.3, 4.4.2, 13.0.6, 13.0.6
+AMD Radeon (RX|Pro) 5600(M|XT) /5700 (M|XT|XTB) /W5700, NAVI10, DCN 2.0.0, 10.1.10, VCN 2.0.0, 5.0.0, 11.0.0, 11.0.0
+AMD Radeon (Pro) 5300 /5500XTB/5500(XT|M) /W5500M /W5500, NAVI14, DCN 2.0.0, 10.1.1, VCN 2.0.2, 5.0.2, 11.0.5, 11.0.5
+AMD Radeon RX 6800(XT) /6900(XT) /W6800, SIENNA_CICHLID, DCN 3.0.0, 10.3.0, VCN 3.0.0, 5.2.0, 11.0.7, 11.0.7
+AMD Radeon RX 6700 XT / 6800M / 6700M, NAVY_FLOUNDER, DCN 3.0.0, 10.3.2, VCN 3.0.0, 5.2.2, 11.0.11, 11.0.11
+AMD Radeon RX 6600(XT) /6600M /W6600 /W6600M, DIMGREY_CAVEFISH, DCN 3.0.2, 10.3.4, VCN 3.0.16, 5.2.4, 11.0.12, 11.0.12
+AMD Radeon RX 6500M /6300M /W6500M /W6300M, BEIGE_GOBY, DCN 3.0.3, 10.3.5, VCN 3.0.33, 5.2.5, 11.0.13, 11.0.13
+AMD Radeon RX 7900 XT /XTX, , DCN 3.2.0, 11.0.0, VCN 4.0.0, 6.0.0, 13.0.0, 13.0.0
+AMD Radeon RX 7800 XT, , DCN 3.2.0, 11.0.3, VCN 4.0.0, 6.0.3, 13.0.10, 13.0.10
+AMD Radeon RX 7600M (XT) /7700S /7600S, , DCN 3.2.1, 11.0.2, VCN 4.0.4, 6.0.2, 13.0.7, 13.0.7
+AMD Radeon RX 9070 (XT), , DCN 4.0.1, 12.0.1, VCN 5.0.0, 7.0.1, 14.0.3, 14.0.3
+AMD Radeon RX 9060 XT, , DCN 4.0.1, 12.0.0, VCN 5.0.0, 7.0.0, 14.0.2, 14.0.2
diff --git a/Documentation/gpu/amdgpu/display/dc-glossary.rst b/Documentation/gpu/amdgpu/display/dc-glossary.rst
index 7dc034e9e586..cbe737d1fcea 100644
--- a/Documentation/gpu/amdgpu/display/dc-glossary.rst
+++ b/Documentation/gpu/amdgpu/display/dc-glossary.rst
@@ -5,7 +5,7 @@ DC Glossary
On this page, we try to keep track of acronyms related to the display
component. If you do not find what you are looking for, look at the
'Documentation/gpu/amdgpu/amdgpu-glossary.rst'; if you cannot find it anywhere,
-consider asking in the amdgfx and update this page.
+consider asking on the amd-gfx mailing list and update this page.
.. glossary::
diff --git a/Documentation/gpu/amdgpu/display/display-contributing.rst b/Documentation/gpu/amdgpu/display/display-contributing.rst
index 36f3077eee00..2f741c52dce5 100644
--- a/Documentation/gpu/amdgpu/display/display-contributing.rst
+++ b/Documentation/gpu/amdgpu/display/display-contributing.rst
@@ -9,8 +9,8 @@ contribution to the display code, and for that, we say thank you :)
This page summarizes some of the issues you can help with; keep in mind that
this is a static page, and it is always a good idea to try to reach developers
-in the amdgfx or some of the maintainers. Finally, this page follows the DRM
-way of creating a TODO list; for more information, check
+on the amd-gfx mailing list or some of the maintainers. Finally, this page
+follows the DRM way of creating a TODO list; for more information, check
'Documentation/gpu/todo.rst'.
Gitlab issues
diff --git a/Documentation/gpu/amdgpu/display/programming-model-dcn.rst b/Documentation/gpu/amdgpu/display/programming-model-dcn.rst
index c1b48d49fb0b..bc7de97a746f 100644
--- a/Documentation/gpu/amdgpu/display/programming-model-dcn.rst
+++ b/Documentation/gpu/amdgpu/display/programming-model-dcn.rst
@@ -100,7 +100,7 @@ represents the connected display.
For historical reasons, we used the name `dc_link`, which gives the
wrong impression that this abstraction only deals with physical connections
that the developer can easily manipulate. However, this also covers
- conections like eDP or cases where the output is connected to other devices.
+ connections like eDP or cases where the output is connected to other devices.
There are two structs that are not represented in the diagram since they were
elaborated in the DCN overview page (check the DCN block diagram :ref:`Display
diff --git a/Documentation/gpu/amdgpu/driver-core.rst b/Documentation/gpu/amdgpu/driver-core.rst
index 81256318e93c..3ce276272171 100644
--- a/Documentation/gpu/amdgpu/driver-core.rst
+++ b/Documentation/gpu/amdgpu/driver-core.rst
@@ -65,7 +65,7 @@ SDMA (System DMA)
GC (Graphics and Compute)
This is the graphics and compute engine, i.e., the block that
- encompasses the 3D pipeline and and shader blocks. This is by far the
+ encompasses the 3D pipeline and shader blocks. This is by far the
largest block on the GPU. The 3D pipeline has tons of sub-blocks. In
addition to that, it also contains the CP microcontrollers (ME, PFP, CE,
MEC) and the RLC microcontroller. It's exposed to userspace for user mode
@@ -210,4 +210,4 @@ IP Blocks
:doc: IP Blocks
.. kernel-doc:: drivers/gpu/drm/amd/include/amd_shared.h
- :identifiers: amd_ip_block_type amd_ip_funcs DC_DEBUG_MASK
+ :identifiers: amd_ip_block_type amd_ip_funcs DC_FEATURE_MASK DC_DEBUG_MASK
diff --git a/Documentation/gpu/amdgpu/index.rst b/Documentation/gpu/amdgpu/index.rst
index bb2894b5edaf..45523e9860fc 100644
--- a/Documentation/gpu/amdgpu/index.rst
+++ b/Documentation/gpu/amdgpu/index.rst
@@ -12,6 +12,7 @@ Next (GCN), Radeon DNA (RDNA), and Compute DNA (CDNA) architectures.
module-parameters
gc/index
display/index
+ userq
flashing
xgmi
ras
diff --git a/Documentation/gpu/amdgpu/process-isolation.rst b/Documentation/gpu/amdgpu/process-isolation.rst
index 6b6d70e357a7..25b06ffefc33 100644
--- a/Documentation/gpu/amdgpu/process-isolation.rst
+++ b/Documentation/gpu/amdgpu/process-isolation.rst
@@ -26,7 +26,7 @@ Example of enabling enforce isolation on a GPU with multiple partitions:
$ cat /sys/class/drm/card0/device/enforce_isolation
1 0 1 0
-The output indicates that enforce isolation is enabled on zeroth and second parition and disabled on first and fourth parition.
+The output indicates that enforce isolation is enabled on zeroth and second partition and disabled on first and third partition.
For devices with a single partition or those that do not support partitions, there will be only one element:
diff --git a/Documentation/gpu/amdgpu/userq.rst b/Documentation/gpu/amdgpu/userq.rst
new file mode 100644
index 000000000000..ca3ea71f7888
--- /dev/null
+++ b/Documentation/gpu/amdgpu/userq.rst
@@ -0,0 +1,203 @@
+==================
+ User Mode Queues
+==================
+
+Introduction
+============
+
+Similar to the KFD, GPU engine queues move into userspace. The idea is to let
+user processes manage their submissions to the GPU engines directly, bypassing
+IOCTL calls to the driver to submit work. This reduces overhead and also allows
+the GPU to submit work to itself. Applications can set up work graphs of jobs
+across multiple GPU engines without needing trips through the CPU.
+
+UMDs directly interface with firmware via per-application shared memory areas.
+The main vehicle for this is the queue. A queue is a ring buffer with a read
+pointer (rptr) and a write pointer (wptr). The UMD writes IP specific packets
+into the queue and the firmware processes those packets, kicking off work on the
+GPU engines. The CPU in the application (or another queue or device) updates
+the wptr to tell the firmware how far into the ring buffer to process packets
+and the rptr provides feedback to the UMD on how far the firmware has progressed
+in executing those packets. When the wptr and the rptr are equal, the queue is
+idle.
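+
+A minimal sketch of this submission model from the UMD's point of view, assuming
+the ring buffer, wptr buffer, rptr shadow, and doorbell have already been mapped
+into the process; all of the names below are illustrative, not actual UAPI or
+libdrm symbols:
+
+.. code-block:: c
+
+   #include <stdbool.h>
+   #include <stdint.h>
+
+   /* Illustrative userspace view of a user queue; not real UAPI. */
+   struct userq_view {
+           uint32_t *ring;              /* ring buffer (ring_size dwords)         */
+           uint64_t *wptr;              /* wptr buffer, fetched by the firmware   */
+           volatile uint64_t *rptr;     /* rptr shadow, written by the firmware   */
+           volatile uint64_t *doorbell; /* this queue's slot in the doorbell page */
+           uint64_t ring_size;          /* in dwords, power of two                */
+           uint64_t next;               /* next free dword index (monotonic)      */
+   };
+
+   /* Copy IP specific packets into the ring, publish the wptr, and ring the
+    * doorbell.  Memory barriers are omitted for brevity. */
+   static void userq_submit(struct userq_view *q, const uint32_t *pkt, unsigned int n)
+   {
+           for (unsigned int i = 0; i < n; i++)
+                   q->ring[(q->next + i) & (q->ring_size - 1)] = pkt[i];
+           q->next += n;
+           *q->wptr = q->next;      /* tell the firmware how far to process     */
+           *q->doorbell = q->next;  /* wake the firmware so it fetches the wptr */
+   }
+
+   /* The queue is idle when the firmware has caught up with the wptr. */
+   static bool userq_idle(const struct userq_view *q)
+   {
+           return *q->rptr == q->next;
+   }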
+
+Theory of Operation
+===================
+
+The various engines on modern AMD GPUs support multiple queues per engine, with
+scheduling firmware that dynamically schedules user queues on the available
+hardware queue slots. When the number of user queues exceeds the number of
+available hardware queue slots, the scheduling firmware dynamically maps and
+unmaps queues based on priority and time quanta. The state of each user queue
+is managed in the kernel driver in an MQD (Memory Queue Descriptor). This is a
+buffer in GPU accessible memory that stores the state of a user queue. The
+scheduling firmware uses the MQD to load the queue state into an HQD (Hardware
+Queue Descriptor) when a user queue is mapped. Each user queue requires a
+number of additional buffers which represent the ring buffer and any metadata
+needed by the engine for runtime operation. On most engines this consists of
+the ring buffer itself, a rptr buffer (where the firmware will shadow the rptr
+to userspace), a wptr buffer (where the application will write the wptr for the
+firmware to fetch), and a doorbell. A doorbell is a piece of one of the
+device's MMIO BARs which can be mapped to specific user queues. When the
+application writes to the doorbell, it will signal the firmware to take some
+action. Writing to the doorbell wakes the firmware and causes it to fetch the
+wptr and start processing the packets in the queue. Each 4K page of the doorbell
+BAR supports specific offset ranges for specific engines. The doorbell of a
+queue must be mapped into the aperture aligned to the IP used by the queue
+(e.g., GFX, VCN, SDMA, etc.). These doorbell apertures are set up via NBIO
+registers. Doorbells are 32 bit or 64 bit (depending on the engine) chunks of
+the doorbell BAR. A 4K doorbell page provides 512 64-bit doorbells for up to
+512 user queues. A subset of each page is reserved for each IP type supported
+on the device. The user can query the doorbell ranges for each IP via the INFO
+IOCTL. See the IOCTL Interfaces section for more information.
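+
+The layout arithmetic is simple; a sketch of turning a per-IP doorbell index
+into a byte offset within a 4K doorbell page, assuming the IP's index range has
+already been obtained from the INFO IOCTL (the range parameters here are
+placeholders, not actual UAPI fields):
+
+.. code-block:: c
+
+   #include <stdint.h>
+
+   #define DOORBELL_PAGE_SIZE   4096u
+   #define DOORBELL_ENTRY_SIZE  8u  /* 64-bit doorbells on modern engines */
+   /* 4096 / 8 = 512 doorbells per page. */
+   #define DOORBELLS_PER_PAGE   (DOORBELL_PAGE_SIZE / DOORBELL_ENTRY_SIZE)
+
+   /*
+    * ip_first/ip_last describe the slice of the page reserved for one IP type,
+    * as reported by the driver.  Returns the byte offset into the mapped
+    * doorbell page, or -1 if the index is outside the IP's range.
+    */
+   static int64_t doorbell_byte_offset(uint32_t index, uint32_t ip_first,
+                                       uint32_t ip_last)
+   {
+           if (index < ip_first || index > ip_last || index >= DOORBELLS_PER_PAGE)
+                   return -1;
+           return (int64_t)index * DOORBELL_ENTRY_SIZE;
+   }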
+
+When an application wants to create a user queue, it allocates the necessary
+buffers for the queue (ring buffer, wptr and rptr, context save areas, etc.).
+These can be separate buffers or all part of one larger buffer. The application
+would map the buffer(s) into its GPUVM and use the GPU virtual addresses for
+the areas of memory it wants to use for the user queue. It would also
+allocate a doorbell page for the doorbells used by the user queues. The
+application would then populate the MQD in the USERQ IOCTL structure with the
+GPU virtual addresses and doorbell index it wants to use. The user can also
+specify the attributes for the user queue (priority, whether the queue is secure
+for protected content, etc.). The application would then call the USERQ
+CREATE IOCTL to create the queue using the specified MQD details in the IOCTL.
+The kernel driver then validates the MQD provided by the application and
+translates the MQD into the engine specific MQD format for the IP. The IP
+specific MQD would be allocated and the queue would be added to the run list
+maintained by the scheduling firmware. Once the queue has been created, the
+application can write packets directly into the queue, update the wptr, and
+write to the doorbell offset to kick off work in the user queue.
+
+When the application is done with the user queue, it would call the USERQ
+FREE IOCTL to destroy it. The kernel driver would preempt the queue and
+remove it from the scheduling firmware's run list. Then the IP specific MQD
+would be freed and the user queue state would be cleaned up.
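+
+Putting the two flows together, a hedged sketch of the application-side queue
+lifecycle; every type and helper below is an illustrative stand-in for the
+UMD's own buffer management code and for the USERQ IOCTL described later in
+this document, not the actual UAPI:
+
+.. code-block:: c
+
+   #include <stdint.h>
+
+   struct userq_bufs {
+           uint64_t ring_va, rptr_va, wptr_va, csa_va; /* GPU virtual addresses     */
+           uint32_t doorbell_index;                    /* slot in the doorbell page */
+   };
+
+   /* Hypothetical UMD helpers: GEM allocation, GPUVM mapping, doorbell
+    * allocation, and wrappers around the USERQ CREATE/FREE opcodes. */
+   int alloc_and_map_userq_buffers(int drm_fd, struct userq_bufs *b);
+   uint32_t alloc_gfx_doorbell(int drm_fd);
+   int userq_ioctl_create(int drm_fd, const struct userq_bufs *b, uint32_t *queue_id);
+   void userq_ioctl_free(int drm_fd, uint32_t queue_id);
+
+   static int create_gfx_userq(int drm_fd, struct userq_bufs *b, uint32_t *queue_id)
+   {
+           int r;
+
+           /* 1. Allocate the ring, rptr, wptr, and context save buffers and
+            *    map them into the process's GPUVM. */
+           r = alloc_and_map_userq_buffers(drm_fd, b);
+           if (r)
+                   return r;
+
+           /* 2. Pick a doorbell slot from the range reserved for GFX. */
+           b->doorbell_index = alloc_gfx_doorbell(drm_fd);
+
+           /* 3. Fill in the MQD-like description and ask the kernel to create
+            *    the queue; the kernel validates the addresses, builds the IP
+            *    specific MQD, and adds the queue to the firmware run list. */
+           return userq_ioctl_create(drm_fd, b, queue_id);
+   }
+
+   static void destroy_userq(int drm_fd, uint32_t queue_id)
+   {
+           /* The kernel preempts the queue, removes it from the run list,
+            * and frees the IP specific MQD. */
+           userq_ioctl_free(drm_fd, queue_id);
+   }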
+
+Some engines may also require the aggregated doorbell if the engine does not
+support doorbells from unmapped queues. The aggregated doorbell is a special
+page of doorbell space which wakes the scheduler. In cases where the engine may
+be oversubscribed, some queues may not be mapped. If the doorbell is rung when
+the queue is not mapped, the engine firmware may miss the request. Some
+scheduling firmware may work around this by polling wptr shadows when the
+hardware is oversubscribed; other engines may support doorbell updates from
+unmapped queues. In the event that one of these options is not available, the
+kernel driver will map a page of aggregated doorbell space into each GPUVM
+space. The UMD will then update the doorbell and wptr as normal and then write
+to the aggregated doorbell as well.
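+
+On the UMD side the only change is one extra store after the normal doorbell
+write; continuing the illustrative userq_view sketch from the introduction,
+with agg_doorbell as an assumed pointer into the mapped aggregated doorbell
+page:
+
+.. code-block:: c
+
+   /* Illustrative: ring both the queue doorbell and the aggregated doorbell. */
+   static void userq_submit_agg(struct userq_view *q, volatile uint64_t *agg_doorbell,
+                                const uint32_t *pkt, unsigned int n)
+   {
+           userq_submit(q, pkt, n);  /* packets, wptr, and queue doorbell */
+           *agg_doorbell = 1;        /* wake the scheduler even if this queue
+                                      * is currently unmapped */
+   }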
+
+Special Packets
+---------------
+
+In order to support legacy implicit synchronization, as well as mixed user and
+kernel queues, we need a synchronization mechanism that is secure. Because
+kernel queues or memory management tasks depend on kernel fences, we need a way
+for user queues to update memory that the kernel can use for a fence, that can't
+be messed with by a bad actor. To support this, we've added a protected fence
+packet. This packet works by writing a monotonically increasing value to
+a memory location that only privileged clients have write access to. User
+queues only have read access. When this packet is executed, the memory location
+is updated and other queues (kernel or user) can see the results. The
+user application would submit this packet in their command stream. The actual
+packet format varies from IP to IP (GFX/Compute, SDMA, VCN, etc.), but the
+behavior is the same. The packet submission is handled in userspace. The
+kernel driver sets up the privileged memory used by each user queue when the
+application creates the queue.
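+
+Conceptually, a consumer of such a fence just compares the last value written
+by the protected fence packet with the sequence number it is waiting for; a
+minimal sketch, assuming fence_addr is the read-only mapping of the privileged
+fence location:
+
+.. code-block:: c
+
+   #include <stdbool.h>
+   #include <stdint.h>
+
+   /*
+    * The fence value only ever increases and only privileged clients (the
+    * engine executing the protected fence packet) can write it, so a plain
+    * read is enough to decide whether a given submission has completed.
+    */
+   static bool userq_fence_signaled(const volatile uint64_t *fence_addr,
+                                    uint64_t wait_seqno)
+   {
+           return *fence_addr >= wait_seqno;
+   }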
+
+
+Memory Management
+=================
+
+It is assumed that all buffers mapped into the GPUVM space for the process are
+valid when engines on the GPU are running. The kernel driver will only allow
+user queues to run when all buffers are mapped. If there is a memory event that
+requires buffer migration, the kernel driver will preempt the user queues,
+migrate buffers to where they need to be, update the GPUVM page tables and
+invalidate the TLB, and then resume the user queues.
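+
+The sequence described above can be summarized as follows; the types and
+function names are purely illustrative placeholders for the driver's internal
+eviction path, not actual amdgpu symbols:
+
+.. code-block:: c
+
+   struct gpuvm;  /* stand-in for the per-process VM state */
+
+   void preempt_user_queues(struct gpuvm *vm);
+   void migrate_buffers(struct gpuvm *vm);
+   void update_page_tables(struct gpuvm *vm);
+   void invalidate_tlb(struct gpuvm *vm);
+   int resume_user_queues(struct gpuvm *vm);
+
+   static int handle_memory_event(struct gpuvm *vm)
+   {
+           preempt_user_queues(vm);       /* stop the user queues first         */
+           migrate_buffers(vm);           /* move buffers where they need to be */
+           update_page_tables(vm);        /* rewrite the GPUVM mappings         */
+           invalidate_tlb(vm);            /* flush stale translations           */
+           return resume_user_queues(vm); /* only now let the queues run again  */
+   }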
+
+Interaction with Kernel Queues
+==============================
+
+Depending on the IP and the scheduling firmware, you can enable kernel queues
+and user queues at the same time; however, you are limited by the number of HQD
+slots. Kernel queues are always mapped, so any work that goes into kernel queues
+will take priority. This limits the available HQD slots for user queues.
+
+Not all IPs will support user queues on all GPUs. As such, UMDs will need to
+support both user queues and kernel queues depending on the IP. For example, a
+GPU may support user queues for GFX, compute, and SDMA, but not for VCN, JPEG,
+and VPE. The kernel driver provides a way to
+determine if user queues and kernel queues are supported on a per IP basis.
+UMDs can query this information via the INFO IOCTL and determine whether to use
+kernel queues or user queues for each IP.
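+
+A hedged sketch of how a UMD might branch on that information; the query helper
+is a hypothetical wrapper around the INFO IOCTL, and the actual query
+identifiers and result fields are defined by the amdgpu UAPI header:
+
+.. code-block:: c
+
+   #include <stdbool.h>
+
+   /* Hypothetical result of an INFO query for one IP type. */
+   struct ip_queue_caps {
+           bool user_queues;    /* user mode queues supported         */
+           bool kernel_queues;  /* kernel (CS IOCTL) queues supported */
+   };
+
+   /* Hypothetical wrapper around the INFO IOCTL. */
+   int query_ip_queue_caps(int drm_fd, unsigned int ip_type, struct ip_queue_caps *caps);
+
+   static bool use_user_queues(int drm_fd, unsigned int ip_type)
+   {
+           struct ip_queue_caps caps = {0};
+
+           if (query_ip_queue_caps(drm_fd, ip_type, &caps))
+                   return false;  /* fall back to kernel queues */
+           /* Prefer user queues when the IP supports them. */
+           return caps.user_queues;
+   }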
+
+Queue Resets
+============
+
+For most engines, queues can be reset individually; GFX, compute, and SDMA
+queues all support per-queue reset. When a hung queue is detected, it can be
+reset either via the scheduling firmware or MMIO. Since there are no kernel
+fences for most user queues, hangs will usually only be detected when some other
+event happens; e.g., a memory event which requires migration of buffers. When
+the queues are preempted, a hung queue will fail to preempt. The driver will
+then look up the queues that failed to preempt, reset them, and record which
+queues are hung.
+
+On the UMD side, we will add a USERQ QUERY_STATUS IOCTL to query the queue
+status. The UMD will provide the queue id in the IOCTL and the kernel driver
+will check if it has already recorded the queue as hung (e.g., due to failed
+preemption) and report back the status.
+
+IOCTL Interfaces
+================
+
+GPU virtual addresses used for queues and related data (rptrs, wptrs, context
+save areas, etc.) should be validated by the kernel mode driver to prevent the
+user from specifying invalid GPU virtual addresses. If the user provides
+invalid GPU virtual addresses or doorbell indices, the IOCTL should return an
+error. These buffers should also be tracked in the kernel driver so
+that if the user attempts to unmap the buffer(s) from the GPUVM, the unmap call
+would return an error.
+
+INFO
+----
+There are several new INFO queries related to user queues. They report the size
+of the metadata needed for a user queue (e.g., context save areas
+or shadow buffers), whether kernel queues, user queues, or both are supported
+for each IP type, and the offsets for each IP type in each doorbell page.
+
+USERQ
+-----
+The USERQ IOCTL is used for creating, freeing, and querying the status of user
+queues. It supports 3 opcodes:
+
+1. CREATE - Create a user queue. The application provides an MQD-like structure
+ that defines the type of queue and associated metadata and flags for that
+ queue type. Returns the queue id.
+2. FREE - Free a user queue.
+3. QUERY_STATUS - Query the status of a queue. Used to check if the queue is
+   healthy or not, e.g., if the queue has been reset (WIP); see the sketch
+   after this list.
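+
+A hedged sketch of how a UMD might use QUERY_STATUS after noticing that work on
+a queue has stopped making progress; the wrapper and its fields are illustrative
+assumptions, not the final UAPI:
+
+.. code-block:: c
+
+   #include <stdbool.h>
+   #include <stdint.h>
+
+   /* Hypothetical wrapper around the USERQ QUERY_STATUS opcode. */
+   int userq_query_status(int drm_fd, uint32_t queue_id, bool *hung);
+
+   /* Decide whether the queue has to be torn down and recreated. */
+   static bool userq_needs_recreate(int drm_fd, uint32_t queue_id)
+   {
+           bool hung = false;
+
+           if (userq_query_status(drm_fd, queue_id, &hung))
+                   return true;  /* treat a failed query as fatal for the queue */
+           return hung;          /* true if the kernel recorded a reset         */
+   }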
+
+USERQ_SIGNAL
+------------
+The USERQ_SIGNAL IOCTL is used to provide a list of sync objects to be signaled.
+
+USERQ_WAIT
+----------
+The USERQ_WAIT IOCTL is used to provide a list of sync objects to be waited on.
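+
+A hedged sketch of pairing the two IOCTLs to order work between a producing and
+a consuming user queue; the userq_signal/userq_wait wrappers are illustrative
+assumptions around the two IOCTLs, while drmSyncobjCreate is the usual libdrm
+helper for creating sync object handles:
+
+.. code-block:: c
+
+   #include <stdint.h>
+   #include <xf86drm.h>  /* drmSyncobjCreate() */
+
+   /* Hypothetical wrappers around USERQ_SIGNAL and USERQ_WAIT. */
+   int userq_signal(int drm_fd, uint32_t queue_id, const uint32_t *syncobjs, unsigned int n);
+   int userq_wait(int drm_fd, uint32_t queue_id, const uint32_t *syncobjs, unsigned int n);
+
+   /* The producer signals a sync object; the consumer waits on it. */
+   static int link_queues(int drm_fd, uint32_t producer_q, uint32_t consumer_q)
+   {
+           uint32_t syncobj;
+           int r = drmSyncobjCreate(drm_fd, 0, &syncobj);
+
+           if (r)
+                   return r;
+           r = userq_signal(drm_fd, producer_q, &syncobj, 1);
+           if (!r)
+                   r = userq_wait(drm_fd, consumer_q, &syncobj, 1);
+           return r;
+   }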
+
+Kernel and User Queues
+======================
+
+In order to properly validate and test performance, we have a driver option to
+select what type of queues are enabled (kernel queues, user queues or both).
+The user_queue driver parameter allows you to enable kernel queues only (0),
+user queues and kernel queues (1), and user queues only (2). Enabling user
+queues only will free up static queue assignments that would otherwise be used
+by kernel queues for use by the scheduling firmware. Some kernel queues are
+required for kernel driver operation and they will always be created. When the
+kernel queues are not enabled, they are not registered with the drm scheduler
+and the CS IOCTL will reject any incoming command submissions which target those
+queue types. The kernel-queues-only setting mirrors the current behavior on
+all existing GPUs.
+Enabling both queues allows for backwards compatibility with old userspace while
+still supporting user queues.