add benchmark configuration and case for codeflash #1254
Conversation
tests/inference/unit_tests/benchmarks/core/test_speed_benchmark.py
Hi @aseembits93, is this ready for review?

Hi @grzegorz-roboflow! It's ready for review; please share any feedback, which is invaluable to us.

We are working on resolving the failing tests.
```python
model = get_model(model_id="yolov8n-640", api_key=None)
# ... (lines between these two were not shown in the review snippet)
benchmark(model.infer, images)
```
is this symbol imported?
Hi @PawelPeczek-Roboflow, we follow the pytest-benchmark style of writing tests (https://pytest-benchmark.readthedocs.io/en/latest/) but override the `benchmark` keyword to suit our style of benchmarking. The `benchmark` keyword is injected by Codeflash during execution. This file is executed only by Codeflash during `codeflash --benchmark`; it sits outside any test subdirectory, so it won't be encountered by any CI workflow except the Codeflash workflow.
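For context, here is a minimal sketch of a test in this style, assuming synthetic inputs (the actual file's images and iteration count may differ):

```python
import numpy as np
from inference import get_model

# NOTE: `benchmark` is deliberately not imported; Codeflash injects this
# callable at execution time, mirroring pytest-benchmark's fixture convention.
def test_model_inference_speed(benchmark):
    # Load the model once, outside the timed region.
    model = get_model(model_id="yolov8n-640", api_key=None)
    # Synthetic 640x640 RGB frames; the real benchmark's inputs are an
    # assumption in this sketch.
    images = [np.zeros((640, 640, 3), dtype=np.uint8) for _ in range(10)]
    # The injected `benchmark` callable times calls to model.infer(images).
    benchmark(model.infer, images)
```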
Hi @grzegorz-roboflow,
This pull request sets up a Codeflash benchmark optimization workflow that automatically optimizes code in future pull requests that touch benchmarked code.
When Codeflash discovers optimizations, it guarantees their correctness and reports the expected performance improvements for each benchmark.
We've set up the following inference benchmark to be optimized in CI:

```
inference benchmark python-package-speed -m yolov8n-seg-640 -bi 10000
```
We believe Roboflow would benefit from a broader set of end-to-end benchmarks that reflect real workflows.
To support this initiative, we'd like Roboflow to define benchmarks for workflows you consider important to optimize.
These benchmarks should be lightweight enough to run quickly and should follow the pytest-benchmark format, as in the sketch below. (Note: Codeflash uses its own pytest plugin rather than the actual pytest-benchmark plugin.)
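As a hedged illustration, a lightweight benchmark in that format might look like this (the model ID matches the CI command above; the test name and synthetic input are assumptions):

```python
import numpy as np
from inference import get_model

def test_segmentation_speed(benchmark):
    # Model ID taken from the CI benchmark command above.
    model = get_model(model_id="yolov8n-seg-640", api_key=None)
    # A synthetic 640x640 image stands in for real workflow inputs.
    image = np.zeros((640, 640, 3), dtype=np.uint8)
    # `benchmark` is provided by Codeflash's own pytest plugin,
    # not the actual pytest-benchmark plugin.
    benchmark(model.infer, image)
```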
Let us know if you have any questions or need help setting this up!