hir-ty: add incremental benchmarks #19934
Conversation
I'm quite confused. What is the point of these benchmarks? If we want to test incrementality, we should check salsa's logs. But it looks like you want to measure raw speed; why?
crates/hir-def/src/lib.rs (Outdated)
@@ -96,6 +95,8 @@ use crate::{
    signatures::VariantFields,
};

pub use crate::db::DefDatabase;
The `db` module is public, no need for this.
whoops, my bad.
#[salsa_macros::db]
#[derive(Clone)]
pub(crate) struct BenchmarkDB {
Maybe we need to do something about all the boilerplate involved in declaring a db.
Can't we reuse the `TestDB`? Why do we need a new type?
fn benchmark_incremental_struct_addition(c: &mut Criterion) {
    c.bench_function("incremental_struct_addition", |b| {
        b.iter_batched(
            || {
I'm afraid the cost of the setup will dwarf the cost of the benchmarked computation, not allowing criterion to accurately measure the time.
The first closure, thankfully, is just the setup for the second closure, which is the computation actually being benchmarked.
I know that, what I meant is that the setup is too costly relative to the actual computation, and it runs on every iteration, so it can make the benchmarking inaccurate.
Oh, I’m sorry: I misunderstood. I have noticed this is somewhat noisy, but I chalked it up to my work-managed computer being bad.
On Salsa, we've had success with http://codspeed.io/, and I wanted to set that up with proper, useful benchmarks that measure incremental performance. The examples I have here are nowhere near sufficient for reasonable performance measurements, but this contains some useful bones.
Force-pushed from d8b8ad1 to b926d5d.
I think the source snippets here are likely way too small/simple to give us any interesting numbers.
Didn't see this, sorry! At which size do you think the snippet would be sufficiently large? I imagine we'd want to keep it at one file, which would involve inlining dependencies if we're to use anything existing...
We didn't have these before, but we probably should. The only benchmarks here are for trait solving, but I can extend this to `infer` as well. (Builds atop #19914.)