This document explains how to run and maintain the TypeGPU resolve benchmarks. These benchmarks are executed in a GitHub Actions virtual machine and help track performance across releases.
- **Manual Workflow Trigger**: The benchmark workflow should be triggered manually to ensure that the latest tag is present.
  > **Note:** The script iterates through all releases and currently takes ~1 hour to complete (as of [email protected]).
- **Repository Permissions**: The GitHub Actions runner must have permission to push updates to this repository to store new benchmark results.
- **Benchmark Storage**: Benchmark data is stored in the `benchmarks` directory.
  - Each file corresponds to a single release.
  - Each file contains a JSON object with the following structure:

    ```
    {
      resolutionMetrics: [
        {
          resolveDuration: number,
          compileDuration: number,
          wgslSize: number,
        }
      ]
    }
    ```
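The schema above can be modeled in TypeScript when consuming the stored files. This is a minimal sketch; the type names (`ResolutionMetric`, `BenchmarkFile`) and the `meanResolveDuration` helper are illustrative assumptions, not TypeGPU exports.

```typescript
// Shape of a single measurement, as described by the schema above.
// Field semantics are assumptions inferred from the field names.
interface ResolutionMetric {
  resolveDuration: number; // time spent resolving, presumably in ms
  compileDuration: number; // time spent compiling, presumably in ms
  wgslSize: number;        // size of the generated WGSL output
}

// One benchmark file corresponds to a single release.
interface BenchmarkFile {
  resolutionMetrics: ResolutionMetric[];
}

// Example consumer: average the resolve duration across all metrics
// in one release's benchmark file.
function meanResolveDuration(file: BenchmarkFile): number {
  const total = file.resolutionMetrics.reduce(
    (sum, m) => sum + m.resolveDuration,
    0,
  );
  return total / file.resolutionMetrics.length;
}

const sample: BenchmarkFile = {
  resolutionMetrics: [
    { resolveDuration: 12, compileDuration: 30, wgslSize: 2048 },
    { resolveDuration: 8, compileDuration: 26, wgslSize: 1024 },
  ],
};

console.log(meanResolveDuration(sample)); // 10
```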
- **Installing Dependencies**: We use `pnpm` as the package manager, so install dependencies with:

  ```sh
  pnpm install
  ```

  Also, make sure that you have `deno` and `python` installed.
- **Running Benchmarks Locally**: To run the benchmarks locally:

  ```sh
  pnpm run measure
  ```
- **Plotting Results**: To generate benchmark plots:
  - Create a Python 3 virtual environment.
  - Run:

    ```sh
    pnpm run local:env && pnpm run plot
    ```
- **Fetching Plots**: Generated plot PNG files are stored in the `plots` directory and can be fetched for display on the main website.
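A website consumer might locate a release's plot by building its URL from the release tag. This is a hypothetical sketch: the base URL and the assumption that plot files are named after the release tag are placeholders, not documented conventions.

```typescript
// Hypothetical helper for building the URL of a release's plot PNG.
// Both the base URL and the `<release>.png` naming scheme are assumptions.
function plotUrl(baseUrl: string, release: string): string {
  return `${baseUrl}/plots/${encodeURIComponent(release)}.png`;
}

console.log(plotUrl("https://example.com", "0.4.0"));
// https://example.com/plots/0.4.0.png
```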