Ignore execution_time regressions when binaries have same hash #140
```diff
@@ -10,3 +10,4 @@ lnt/server/ui/static/docs
 test_run_tmp
 tests/**/Output
 venv
+*~
```
A new migration file is added:

```python
"""Adds an ignore_same_hash column to the sample fields table and sets it to
true for execution_time.
"""

from sqlalchemy import Column, Integer, update

from lnt.server.db.migrations.util import introspect_table
from lnt.server.db.util import add_column


def upgrade(engine):
    ignore_same_hash = Column("ignore_same_hash", Integer, default=0)
    add_column(engine, "TestSuiteSampleFields", ignore_same_hash)

    test_suite_sample_fields = introspect_table(engine, "TestSuiteSampleFields")
    set_init_value = update(test_suite_sample_fields).values(ignore_same_hash=0)
    set_exec_time = (
        update(test_suite_sample_fields)
        .where(test_suite_sample_fields.c.Name == "execution_time")
        .values(ignore_same_hash=1)
    )

    with engine.begin() as trans:
        trans.execute(set_init_value)
        trans.execute(set_exec_time)
```
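In plain SQL terms, the migration amounts to one `ALTER TABLE` plus two `UPDATE`s. A minimal sketch using the stdlib `sqlite3` module against a simplified stand-in for the `TestSuiteSampleFields` table (the real LNT schema has more columns; this only mirrors the three statements):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified stand-in table: LNT's real TestSuiteSampleFields has more columns.
conn.execute('CREATE TABLE "TestSuiteSampleFields" ("Name" TEXT)')
conn.executemany('INSERT INTO "TestSuiteSampleFields" VALUES (?)',
                 [("execution_time",), ("compile_time",), ("score",)])

# Step 1: add the new column (what add_column does under the hood).
conn.execute('ALTER TABLE "TestSuiteSampleFields" '
             'ADD COLUMN "ignore_same_hash" INTEGER DEFAULT 0')
# Step 2: backfill a concrete 0 for all existing rows (set_init_value).
conn.execute('UPDATE "TestSuiteSampleFields" SET "ignore_same_hash" = 0')
# Step 3: opt execution_time in (set_exec_time).
conn.execute('UPDATE "TestSuiteSampleFields" SET "ignore_same_hash" = 1 '
             'WHERE "Name" = ?', ("execution_time",))

rows = dict(conn.execute(
    'SELECT "Name", "ignore_same_hash" FROM "TestSuiteSampleFields"'))
print(rows)  # {'execution_time': 1, 'compile_time': 0, 'score': 0}
```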
In the comparison logic (`ComparisonResult`):

```diff
@@ -54,7 +54,8 @@ class ComparisonResult:
     def __init__(self, aggregation_fn,
                  cur_failed, prev_failed, samples, prev_samples,
                  cur_hash, prev_hash, cur_profile=None, prev_profile=None,
-                 confidence_lv=0.05, bigger_is_better=False):
+                 confidence_lv=0.05, bigger_is_better=False,
+                 ignore_same_hash=False):
         self.aggregation_fn = aggregation_fn

         # Special case: if we're using the minimum to aggregate, swap it for
@@ -103,6 +104,7 @@ def __init__(self, aggregation_fn,
         self.confidence_lv = confidence_lv
         self.bigger_is_better = bigger_is_better
+        self.ignore_same_hash = ignore_same_hash

     def __repr__(self):
         """Print this ComparisonResult's constructor.
@@ -118,7 +120,8 @@ def __repr__(self):
             self.samples,
             self.prev_samples,
             self.confidence_lv,
-            bool(self.bigger_is_better))
+            bool(self.bigger_is_better),
+            bool(self.ignore_same_hash))

     def __json__(self):
         simple_dict = self.__dict__
@@ -176,6 +179,12 @@ def get_value_status(self, confidence_interval=2.576,
         elif self.prev_failed:
             return UNCHANGED_PASS

+        # Ignore changes if the hash of the binary is the same and the field is
+        # sensitive to the hash, e.g. execution time.
+        if self.ignore_same_hash:
+            if self.cur_hash and self.prev_hash and self.cur_hash == self.prev_hash:
+                return UNCHANGED_PASS
+
         # Always ignore percentage changes below MIN_PERCENTAGE_CHANGE %, for now, we just don't
         # have enough time to investigate that level of stuff.
         if ignore_small and abs(self.pct_delta) < MIN_PERCENTAGE_CHANGE:
```

Review thread on the new same-hash check:

**Contributor:** Not sure about completely ignoring this: if runs of the same binary show changes, that can be a good indication of the noise level, and changed binaries may be affected by the same noise. Not sure if it's possible, but it may be good to display the results for binaries with the same hash separately.

**Contributor (author):** FWIW, LNT already detects noisy results based on the stddev and ignores them; the code for that is later in this function. This also only affects when regressions are flagged, i.e. the "Run-over-run changes detail > performance regressions - execution time" table at the top. You can still see the differences between runs in the test results table below when you check "show all values", which will reveal the noisy tests.

**Member:** I tend to agree with @fhahn, I don't really understand why we'd ignore subsequent results entirely. I also don't fully understand the impact of this change: for multi-valued runs (e.g. running the same program multiple times and submitting multiple execution times for it), what does this PR change, if anything? I'm not familiar with how

**Contributor (author):** This is addressing a long-standing FIXME; see above in the code. LNT flags improvements and regressions when a significant change is detected between runs. It still always saves all the results of each run, and you can always view them; this only determines what is flagged to the user in the regressions list, i.e. this page: https://cc-perf.igalia.com/db_default/v4/nts/regressions/?state=0

It ignores changes that aren't significant or are likely noise, e.g. smaller than MIN_PERCENTAGE_CHANGE. For runs with multiple samples, it also uses the standard deviation and the Mann-Whitney U test to discard changes that are statistically likely to be noise. LNT has always done this to remove false positives from the list of regressions. That list is what you read on a daily basis in the LNT reports sent out by email etc., so the regressions should be as actionable as possible.

Some tests that are only slightly noisy still slip through the statistical checks, but given that the binary hasn't changed, we shouldn't flag them as regressions. Here's an example from cc-perf.igalia.com, where the colour of each run indicates the binary hash: the Equivalencing-flt binary hasn't changed over the past 7 runs, yet 3 improvements are detected in the green boxes. This PR would stop them from being flagged, while still flagging the improvements in miniFE above, because those hashes differ.
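To make the behaviour of the new branch concrete, here is a toy stand-in for the flagging decision (a hypothetical `classify` helper and threshold, not LNT's actual `get_value_status`, which additionally applies the statistical checks described in the review thread):

```python
UNCHANGED_PASS = "UNCHANGED_PASS"
REGRESSED = "REGRESSED"

def classify(pct_delta, cur_hash, prev_hash, ignore_same_hash,
             min_percentage_change=0.01):
    """Toy version of the flagging logic: runs of byte-identical binaries
    are never flagged when the metric opts in via ignore_same_hash."""
    if ignore_same_hash and cur_hash and prev_hash and cur_hash == prev_hash:
        return UNCHANGED_PASS
    if abs(pct_delta) < min_percentage_change:
        return UNCHANGED_PASS
    return REGRESSED

# A 5% slowdown on identical binaries is treated as noise...
print(classify(0.05, "abc", "abc", ignore_same_hash=True))  # UNCHANGED_PASS
# ...but the same slowdown across different binaries is still flagged.
print(classify(0.05, "abc", "def", ignore_same_hash=True))  # REGRESSED
```

Metrics that do not opt in (`ignore_same_hash=False`) keep the old behaviour unchanged.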
The flag is plumbed through to both comparison paths:

```diff
@@ -355,7 +364,8 @@ def get_comparison_result(self, runs, compare_runs, test_id, field,
                              prev_values, cur_hash, prev_hash,
                              cur_profile, prev_profile,
                              self.confidence_lv,
-                             bigger_is_better=field.bigger_is_better)
+                             bigger_is_better=field.bigger_is_better,
+                             ignore_same_hash=field.ignore_same_hash)
         return r

     def get_geomean_comparison_result(self, run, compare_to, field, tests):
@@ -385,7 +395,8 @@ def get_geomean_comparison_result(self, run, compare_to, field, tests):
             cur_hash=cur_hash,
             prev_hash=prev_hash,
             confidence_lv=0,
-            bigger_is_better=field.bigger_is_better)
+            bigger_is_better=field.bigger_is_better,
+            ignore_same_hash=field.ignore_same_hash)

     def _load_samples_for_runs(self, session, run_ids, only_tests):
         # Find the set of new runs to load.
```
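The author's reply mentions that LNT filters noise on multi-sample runs with the Mann-Whitney U test. As a rough from-scratch illustration of the statistic itself (not LNT's implementation, and omitting the p-value step that decides significance):

```python
from itertools import product

def mann_whitney_u(xs, ys):
    """U statistic for xs vs ys: the number of (x, y) pairs where
    x > y, counting ties as half a win."""
    u = 0.0
    for x, y in product(xs, ys):
        if x > y:
            u += 1.0
        elif x == y:
            u += 0.5
    return u

# Hypothetical execution-time samples from two runs of the same binary.
prev = [10.1, 10.3, 9.9, 10.2]
cur = [10.2, 10.0, 10.3, 9.8]
# U close to the midpoint n*m/2 = 8.0 means no evidence of a real shift,
# so a change this small would be discarded as noise.
print(mann_whitney_u(cur, prev))  # 7.0
```

A U near 0 or near `n*m` would instead indicate a genuine shift between the sample sets.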
The `execution_time` metric opts in via the test-suite schema:

```diff
@@ -16,6 +16,7 @@ metrics:
     display_name: Execution Time
     unit: seconds
     unit_abbrev: s
+    ignore_same_hash: true

 - name: execution_status
   type: Status
 - name: score
```

**Contributor:** If we go with this, it should also be applied to `score` for consistency.

There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This is for what exactly, some kind of temp file?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yeah, for emacs and some other editors. I didn't mean to include this in this PR though, will take out.