
hir-ty: add incremental benchmarks #19934


Open
davidbarsky wants to merge 1 commit into master from davidbarsky/add-some-incremental-benchmarks
Conversation

davidbarsky (Contributor)

We didn't have these before, but we probably should. The only benchmarks here are for trait solving, but I can extend this to infer as well.

(Builds atop #19914.)

@rustbot rustbot added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Jun 5, 2025
@davidbarsky davidbarsky requested a review from Veykril June 5, 2025 18:34
@ChayimFriedman2 (Contributor) left a comment

I'm quite confused. What is the point of those benchmarks? If we want to test incrementality, we should check salsa's logs. But it looks like you want to measure raw speed. Why?

@@ -96,6 +95,8 @@ use crate::{
signatures::VariantFields,
};

pub use crate::db::DefDatabase;
Contributor

The db module is public, no need for this.

Contributor Author

whoops, my bad.


#[salsa_macros::db]
#[derive(Clone)]
pub(crate) struct BenchmarkDB {
Contributor

Maybe we need to do something about all the boilerplate involved in declaring a db.

Member

Can't we reuse the TestDB? Why do we need a new type?

fn benchmark_incremental_struct_addition(c: &mut Criterion) {
c.bench_function("incremental_struct_addition", |b| {
b.iter_batched(
|| {
Contributor

I'm afraid the cost of the setup will dwarf the cost of the benchmarked computation, preventing criterion from accurately measuring the time.

Contributor Author

The first closure, thankfully, is setup for the second closure, which is the computation that's actually benchmarked.

Contributor

I know that; what I meant is that the setup is too costly relative to the actual computation, and it runs on every iteration, so it can make the benchmarking inaccurate.

Contributor Author

Oh, I’m sorry: I misunderstood. I have noticed this is somewhat noisy, but I chalked it up to my work-managed computer being bad.
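For context, the point of contention is that `iter_batched` excludes the setup closure from the measurement, yet the setup still runs for every iteration. A minimal sketch of that shape using only std (the `time_batched` helper is a hypothetical stand-in for Criterion's `iter_batched`, not its actual implementation):

```rust
use std::hint::black_box;
use std::time::{Duration, Instant};

// Hypothetical helper mimicking the shape of Criterion's `iter_batched`:
// `setup` produces a fresh input for each iteration and is excluded from
// the measurement; only `routine` is timed.
fn time_batched<T, O>(
    iters: u32,
    mut setup: impl FnMut() -> T,
    mut routine: impl FnMut(T) -> O,
) -> Duration {
    let mut measured = Duration::ZERO;
    for _ in 0..iters {
        let input = setup(); // runs every iteration, but is not timed
        let start = Instant::now();
        let out = routine(input); // the only part that counts
        measured += start.elapsed();
        black_box(out); // keep the result from being optimized away
    }
    measured
}

fn main() {
    let elapsed = time_batched(
        100,
        || (0u64..1_000).collect::<Vec<_>>(), // comparatively heavy setup
        |v| v.iter().sum::<u64>(),            // cheap measured routine
    );
    println!("routine-only time over 100 iters: {elapsed:?}");
}
```

Even though setup time never lands in `measured`, a setup that is much heavier than the routine stretches each wall-clock iteration, which limits how many samples fit in the benchmark's time budget and makes a tiny routine harder to time accurately; that appears to be the inaccuracy being pointed at.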

@davidbarsky (Contributor Author)

> I'm quite confused. What is the point of those benchmarks? If we want to test incrementality, we should check salsa's logs. But it looks like you want to measure raw speed. Why?

On Salsa, we've had success with http://codspeed.io/, and I wanted to set that up here with proper, useful benchmarks that measure incremental performance. The examples I have here are nowhere near sufficient for reasonable performance measurements, but this contains some useful bones.
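The general shape of an incremental benchmark like the ones discussed here: build the database and warm the queries in setup, apply a small edit, then measure only the re-check after the edit. A toy sketch of that pattern with a hand-rolled revision-keyed memo standing in for salsa (all names hypothetical, not rust-analyzer's actual API):

```rust
use std::time::Instant;

// Toy stand-in for an incremental database: source text plus a revision
// counter, with one memoized "query" that recomputes only when its cached
// revision is stale.
struct ToyDb {
    source: String,
    revision: u64,
    cached: Option<(u64, usize)>, // (revision it was computed at, result)
}

impl ToyDb {
    fn new(source: &str) -> Self {
        Self { source: source.to_string(), revision: 0, cached: None }
    }

    // The "edit": append a new item and bump the revision, invalidating caches.
    fn apply_edit(&mut self, text: &str) {
        self.source.push_str(text);
        self.revision += 1;
    }

    // The "query": count `struct` items, memoized per revision.
    fn struct_count(&mut self) -> usize {
        if let Some((rev, n)) = self.cached {
            if rev == self.revision {
                return n; // cache hit, nothing recomputed
            }
        }
        let n = self.source.matches("struct ").count();
        self.cached = Some((self.revision, n));
        n
    }
}

fn main() {
    // Setup: build the db and warm the query so the timing below sees only
    // the incremental re-check, not the cold computation.
    let mut db = ToyDb::new("struct A;\nstruct B;\n");
    db.struct_count();

    // The edit whose incremental cost we want to measure.
    db.apply_edit("struct C;\n");

    let start = Instant::now();
    let n = db.struct_count(); // recomputes: the revision changed
    println!("structs: {n}, re-check took {:?}", start.elapsed());
}
```

In the real benchmarks the warm-up and edit would live in `iter_batched`'s setup closure and the post-edit query in the measured closure, with salsa deciding what actually needs recomputation.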

@davidbarsky davidbarsky force-pushed the davidbarsky/add-some-incremental-benchmarks branch from d8b8ad1 to b926d5d on June 5, 2025 at 19:51
@Veykril (Member) left a comment

I think the source snippets here are likely way too small/simple to give us any interesting numbers.

@davidbarsky (Contributor Author)

> I think the source snippets here are likely way too small/simple to give us any interesting numbers.

Didn't see this, sorry! How large do you think the snippet would need to be? I imagine we'd want to keep it to one file, which would involve inlining dependencies if we're to use anything existing...
