
Commit 394554b

[Impeller] Document threading configuration with Vulkan. (#44874)
Moving bits and pieces of the presentation this morning into docs so we can keep them up to date.
1 parent 1165842 commit 394554b

File tree

1 file changed: +61 -0 lines

impeller/docs/vulkan_threading.md

Lines changed: 61 additions & 0 deletions
# Threading in Vulkan

The Vulkan backend uses a dedicated concurrent worker pool that is created
alongside the Vulkan context.

Unlike other pools such as the IO worker pool, long-running tasks may **not**
be posted to this pool. Frame workloads can be, and often are, distributed to
the workers in this pool, so a long-running task (such as texture
decompression) could block frame-critical tasks. Reserving this pool for
frame-critical work is itself a workaround for the separate limitation of not
being able to specify a QoS for individual tasks, and the restriction may be
lifted in the future.

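As a rough sketch of this pattern only (not the actual Impeller or fml API; the
`ConcurrentWorkerPool` name and its `PostTask` interface are illustrative
assumptions), a pool to which only short, frame-sized tasks are posted might
look like this:

```cpp
// Illustrative sketch: a generic concurrent worker pool, not the real
// Impeller/fml implementation.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ConcurrentWorkerPool {
 public:
  explicit ConcurrentWorkerPool(size_t worker_count) {
    for (size_t i = 0; i < worker_count; ++i) {
      workers_.emplace_back([this] { WorkerMain(); });
    }
  }

  ~ConcurrentWorkerPool() {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      shutdown_ = true;
    }
    cv_.notify_all();
    for (auto& worker : workers_) {
      worker.join();
    }
  }

  // Only short, frame-sized tasks (e.g. pipeline setup) should be posted.
  // Long-running work such as texture decompression belongs elsewhere, since
  // it could starve frame-critical tasks queued behind it.
  void PostTask(std::function<void()> task) {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      tasks_.push(std::move(task));
    }
    cv_.notify_one();
  }

 private:
  void WorkerMain() {
    while (true) {
      std::function<void()> task;
      {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return shutdown_ || !tasks_.empty(); });
        if (shutdown_ && tasks_.empty()) {
          return;
        }
        task = std::move(tasks_.front());
        tasks_.pop();
      }
      task();  // Run outside the lock so other workers can keep dequeuing.
    }
  }

  std::vector<std::thread> workers_;
  std::queue<std::function<void()>> tasks_;
  std::mutex mutex_;
  std::condition_variable cv_;
  bool shutdown_ = false;
};
```
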
There is also a separate component called the fence waiter, which operates on
its own thread. The purpose of the fence waiter is to ensure that a resource's
reference count is held for at least as long as the GPU command buffer(s) that
access that resource.

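A minimal sketch of the idea, assuming a stand-in `Fence` flag in place of a
real `VkFence` and a placeholder `Resource` type (none of these names are from
the actual implementation):

```cpp
// Illustrative sketch: the real fence waiter blocks on Vulkan fences
// (vkWaitForFences); a polled atomic flag keeps this example self-contained.
#include <algorithm>
#include <atomic>
#include <chrono>
#include <memory>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>

struct Fence {
  std::atomic<bool> signaled{false};  // Stand-in for a GPU-signaled VkFence.
};

struct Resource {};  // Stand-in for a texture, buffer, etc.

class FenceWaiter {
 public:
  FenceWaiter() : thread_([this] { WaiterMain(); }) {}

  ~FenceWaiter() {
    running_ = false;
    thread_.join();
  }

  // The waiter holds a strong reference to the resource, so the resource
  // lives at least as long as the GPU work guarded by the fence.
  void AddFence(std::shared_ptr<Fence> fence, std::shared_ptr<Resource> resource) {
    std::lock_guard<std::mutex> lock(mutex_);
    pending_.emplace_back(std::move(fence), std::move(resource));
  }

 private:
  void WaiterMain() {
    while (running_) {
      {
        std::lock_guard<std::mutex> lock(mutex_);
        // Drop entries whose fences have signaled; releasing the shared_ptr
        // here is what finally allows the resource to be collected.
        pending_.erase(std::remove_if(pending_.begin(), pending_.end(),
                                      [](const auto& entry) {
                                        return entry.first->signaled.load();
                                      }),
                       pending_.end());
      }
      std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
  }

  std::mutex mutex_;
  std::vector<std::pair<std::shared_ptr<Fence>, std::shared_ptr<Resource>>> pending_;
  std::atomic<bool> running_{true};
  std::thread thread_;  // Declared last so it starts after the members above.
};
```
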
Resource collection and pooling happen on yet another thread, called the
resource manager. Touching the allocators is a potentially expensive
operation, and performing collection in a frame workload or on the fence
waiter thread could cause jank.

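A minimal sketch of that split, again with illustrative names
(`ResourceManager`, `Reclaim`) rather than the actual Impeller API: callers
only enqueue, and the expensive allocator work happens on the manager's thread.

```cpp
// Illustrative sketch: collection and pooling run on a dedicated thread so
// allocator work never happens on frame-critical threads or the fence waiter.
#include <condition_variable>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

struct Resource {};  // Stand-in for an allocator-backed resource.

class ResourceManager {
 public:
  ResourceManager() : thread_([this] { ManagerMain(); }) {}

  ~ResourceManager() {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      running_ = false;
    }
    cv_.notify_one();
    thread_.join();
  }

  // Called by the fence waiter (or any other thread); just enqueues and
  // returns immediately so the caller never touches the allocators.
  void Reclaim(std::unique_ptr<Resource> resource) {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      to_collect_.push_back(std::move(resource));
    }
    cv_.notify_one();
  }

 private:
  void ManagerMain() {
    while (true) {
      std::vector<std::unique_ptr<Resource>> batch;
      {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !running_ || !to_collect_.empty(); });
        if (!running_ && to_collect_.empty()) {
          return;
        }
        batch.swap(to_collect_);
      }
      // Destroy (or return to a pool) outside the lock; this is where the
      // potentially expensive allocator work happens.
      batch.clear();
    }
  }

  std::mutex mutex_;
  std::condition_variable cv_;
  std::vector<std::unique_ptr<Resource>> to_collect_;
  bool running_ = true;
  std::thread thread_;  // Declared last so it starts after the members above.
};
```
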
With this overview, the total number of threads used by the Impeller Vulkan
backend is the number of workers in the concurrent worker pool plus two more:
one for the fence waiter and one for the resource manager. For example, with
four concurrent workers the backend uses six threads in total.

A summary of the interactions between the various threads is drawn below:

```mermaid
sequenceDiagram
  participant rt as Render Thread
  participant worker1 as Concurrent Worker 1
  participant worker2 as Concurrent Worker 2
  participant fence_waiter as Fence Waiter
  participant resource_manager as Resource Manager
  participant gpu as GPU
  rt->>+worker1: Setup PSO 1
  rt->>+worker2: Setup PSO n
  worker1-->>-rt: Done
  worker2-->>-rt: Done
  Note over rt,resource_manager: Application launch
  loop One Frame
    activate rt
    rt->>+worker2: Frame Workload
    activate fence_waiter
    rt->>fence_waiter: Resource 1 owned by GPU
    worker2-->>-rt: Done
    rt->>fence_waiter: Resource 2 owned by GPU
    rt->>gpu: Submit GPU Commands
    deactivate rt
  end
  activate gpu
  gpu-->>fence_waiter: GPU Work Done
  fence_waiter->>resource_manager: Collect/Pool Resources
  deactivate fence_waiter
  activate resource_manager
  deactivate gpu
  deactivate resource_manager
```
