perf: VQE ablation tests #1416
Conversation
Branch updated from 66efa2c to 216e3bf.
@wsmoses could there be a pass in this benchmark is mostly only
Yeah potentially, definitely merits investigation. cc @avik-pal, especially since the FFT-related issue is similar.
Can you check the MLIR for each case and look at the trace under xprof? I have seen the numbers be quite noisy (and sometimes make zero sense, with wildly different timings for the same MLIR).

Best,
Avik
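
As an editorial aside, here is a minimal sketch of how the lowered MLIR could be dumped per ablation case using Reactant's `@code_hlo`. The `∇expectation`, `params_re`, `observable_re`, and `coef_re` names are taken from the profiling snippet further down this thread, and the output file name is an arbitrary choice.

```julia
# Editorial sketch (not from the PR): dump the lowered module for one
# configuration so the MLIR can be diffed across ablation cases.
using Reactant

mod = @code_hlo ∇expectation(params_re, observable_re, coef_re)
open("vqe_gradient.mlir", "w") do io
    show(io, mod)   # write the textual MLIR module to disk
end
```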
Here is the MLIR generated for each case: vqe-mlir.zip

I'm running into problems with xprof. I'm doing the following:

```julia
Reactant.with_profiler(@__DIR__; trace_host=true, trace_device=true) do
    ∇f_xla = @compile compile_options = Reactant.DefaultXLACompileOptions(; sync=true) ∇expectation(
        params_re, observable_re, coef_re
    )
    for _ in 1:100
        ∇f_xla(params_re, observable_re, coef_re)
    end
end
```
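
As an editorial aside, a minimal sketch (not from the PR) for cross-checking the profiler's numbers with plain wall-clock timings; it assumes `∇f_xla` has been compiled with `sync=true` as above, so each call blocks until the device work finishes.

```julia
# Editorial sketch: rough per-iteration wall-clock times to sanity-check
# against the (sometimes noisy) xprof trace.
using Statistics  # for median

times = Float64[]
for _ in 1:100
    push!(times, @elapsed ∇f_xla(params_re, observable_re, coef_re))
end
println("min = ", minimum(times), " s, median = ", median(times), " s")
```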
The gradients look quite strange. Did some loop or vector op get scalarized?
Ahh, that's because I was zero-initializing the parameters and using just one Hamiltonian term; the ket and the bra are then effectively orthogonal, so the primal value and the gradients are zero. Random initialization gives better, though still small, gradients. I would most likely need to add more Hamiltonian terms, but right now we just run the gradient function sequentially over all the Hamiltonian terms (with some parallelization via MPI) and then sum the results. Batching over the Hamiltonian terms requires more work, but this benchmark (which you can think of as running 1 epoch with 1 sample) is a good reflection of what we do.

tldr: The zero gradients were a numerical issue, not a bug introduced by a pass.
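
For illustration, here is a minimal sketch of the sequential reduction described above; `hamiltonian_terms` is a hypothetical collection of (observable, coefficient) pairs, while `∇f_xla` and `params_re` refer to the profiling snippet earlier in the thread.

```julia
# Editorial sketch (not the actual benchmark code): accumulate the gradient by
# running the compiled gradient function once per Hamiltonian term and summing.
function total_gradient(∇f_xla, params_re, hamiltonian_terms)
    total = nothing
    for (observable_re, coef_re) in hamiltonian_terms
        g = ∇f_xla(params_re, observable_re, coef_re)  # gradient of a single term
        total = total === nothing ? g : total .+ g
    end
    return total
end
# In the MPI setup, each rank handles a subset of terms and the partial sums
# are combined with a reduction (e.g. MPI.Allreduce).
```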