The current setup gives us a lot of flexibility in what we run (e.g. only Quarkus, only Spring, specific versions, which comparisons, etc.). However, when we run a job we usually want every permutation/combination of runtimes and tests because of the way we store the results (both in Horreum and in https://github.com/quarkusio/benchmarks).
The challenge is that a "full" run takes 4+ hours and blocks the entire lab environment while it's running. It would be nice if we could split it into multiple "sub-jobs" that are part of the same run, and then "combine" the results in Horreum and https://github.com/quarkusio/benchmarks once all the sub-runs are done.
Note
The term "sub-run" could be purely conceptual where we "glue" multiple runs together or it could be an actual thing that exists. I'm trying to separate theory from implementation.
This way each sub-run could be scheduled independently in the lab and other jobs besides ours could compete for compute time (we want to be good citizens after all!).
Ideally we would continue to publish results incrementally. Currently we treat each run as a standalone thing with a unique date/time, and that whole run then becomes the "latest". We would need to rethink how we do that. There will probably be some work within https://github.com/quarkusio/benchmarks as well once we settle on a plan.
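One way to make the "sub-run" idea concrete is a small gatekeeper that only combines and publishes once every expected sub-run has reported under a shared run id. This is just a minimal Python sketch of the concept, not the actual tooling; the names (`run_id`, the sub-run ids, the `rps` metric) are all assumptions for illustration:

```python
from datetime import datetime, timezone

# Hypothetical sketch: each sub-run reports its results tagged with a
# shared run_id, and once every expected sub-run has reported we "glue"
# them into one combined run. All names here are illustrative.
def combine_subruns(run_id, expected_sub_ids, reported):
    """Return the combined run once every expected sub-run has reported,
    otherwise None (the run is still incomplete)."""
    missing = set(expected_sub_ids) - set(reported)
    if missing:
        return None  # at least one sub-run hasn't finished yet
    results = {}
    for sub_id in expected_sub_ids:
        results[sub_id] = reported[sub_id]
    return {
        "run_id": run_id,  # the shared id is what ties the sub-runs together
        "completed_at": datetime.now(timezone.utc).isoformat(),
        "results": results,
    }

# With only one of two sub-runs reported, the run isn't combinable yet:
partial = {"quarkus-jvm": {"rps": 1200}}
print(combine_subruns("2024-05-01", ["quarkus-jvm", "spring"], partial))  # None
```

Whether the "combine" step lives in the lab tooling, in Horreum itself, or in the publishing step for https://github.com/quarkusio/benchmarks is exactly the theory-vs-implementation question above; the only hard requirement the sketch captures is that nothing becomes "latest" until the full set of sub-runs is present.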