(CMSIS-NN Integration) Testing strategy #13944
Replies: 4 comments 1 reply
-
We follow the same setup as in XNNPACK, except obviously running on an FVP. Given I want to eventually convert …
-
Meta: @AdrianLundell, if you want, you can sketch up a small design doc and we can put it up as an RFC here as well. Or just add more details in this post.
-
@AdrianLundell At the moment, I am using this dedicated script (based on run.sh) to test the newly added custom kernels (with CMSIS-NN integration).
-
I have created a PoC CortexMTester in ~30 LoC, which I was also able to hook into the tester framework by creating a flow for it; I think it is reasonable for all MCUs to do a similar implementation. You essentially have to define a …

For the test suite, we can use the backends/test suite as said, but I think it is also useful to define relevant tests per MCU. I suggest we define them pytest style using the … Note that this means I think we should remove all other testing added to the run.sh scripts and so on, and keep only tests defined in Python using the tester.

When we have a couple of ops ready, we can create a sample script, as in the mentioned ethos_u_minimal_example, for a complete model and run that, both as an example for users and as a test of the regular build without semihosting.

I might have changed my mind on 1 (the kernel tests), as it kind of leads to a lot of unnecessary work compared to just running all ops e2e, at least for ops that are just about adding glue for already tested kernels, such as for CMSIS-NN. I still would like to add a CMake flow for building the existing tests, however, to make them easy for everyone to use.

What do you think about this? @digantdesai Is this clear and does it sound reasonable to you?
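To make the per-MCU idea concrete, here is a hedged pytest-style sketch. The MCU names, the op table, and `is_supported` are all illustrative assumptions, not the cortex_m backend's real support matrix or API:

```python
# Hypothetical sketch of defining relevant tests per MCU in pytest style.
# PER_MCU_OPS and is_supported are illustrative assumptions, not real APIs.
import pytest

# Ops each MCU configuration is expected to run end-to-end on the FVP.
PER_MCU_OPS = {
    "cortex-m55": ["aten.add.Tensor", "aten.linear.default"],
    "cortex-m4": ["aten.add.Tensor"],
}


def is_supported(mcu: str, op: str) -> bool:
    """Decide whether a (mcu, op) combination should be tested."""
    return op in PER_MCU_OPS.get(mcu, [])


@pytest.mark.parametrize("mcu", sorted(PER_MCU_OPS))
@pytest.mark.parametrize("op", ["aten.add.Tensor", "aten.linear.default"])
def test_op_on_mcu(mcu, op):
    if not is_supported(mcu, op):
        pytest.skip(f"{op} not supported on {mcu}")
    # A real test would lower `op` via the tester and run the .pte on the FVP.
```

Unsupported combinations then show up as explicit skips in the pytest report rather than silently untested gaps.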
-
Let's use this issue to align on the testing for the Cortex-M backend. I suggest the following tests, but I am happy to hear other opinions:
Kernel tests
All kernels should have GTests, similar to the quantization ops, to separate testing of the kernel logic from the full ExecuTorch framework. These tests should be easy to build and run on an FVP through some helper script. This should simplify debugging of kernels significantly.
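As a sketch of what such a helper script could look like, the snippet below builds the FVP invocation for a kernel-test binary. The default FVP binary name and the `-C` options follow the Corstone-300 FVP conventions used elsewhere around the Arm backend, but they are assumptions here and should be checked against the local FVP installation:

```python
# Hypothetical helper for running a kernel GTest binary on an FVP.
# Binary name and -C options are assumed Corstone-300 FVP conventions.
import subprocess


def fvp_command(elf_path, fvp_bin="FVP_Corstone_SSE-300_Ethos-U55"):
    """Build the command line that runs `elf_path` on the FVP."""
    return [
        fvp_bin,
        "-C", "cpu0.semihosting-enable=1",  # let GTest report via semihosting
        "-C", "mps3_board.visualisation.disable-visualisation=1",
        "--application", elf_path,
    ]


def run_kernel_tests(elf_path):
    """Run the GTest binary on the FVP and return its exit code."""
    return subprocess.call(fvp_command(elf_path))
```

Keeping the command construction in one function makes it easy to point the same script at a different FVP or core configuration.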
Operator integration tests
We should write a cortex_m implementation of https://github.com/pytorch/executorch/blob/main/backends/test/harness/tester.py to get the kernel integration tests for free, reusing some code from the Arm backend to run .pte files on an FVP using semihosting.
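Structurally, this is a chain of stages over an evolving artifact. The sketch below mirrors that pipeline idea in plain Python; the class and stage names are assumptions, and the real harness would drive torch.export, the lowering passes, and the FVP runner instead of the placeholder dict updates:

```python
# Hedged structural sketch of a CortexMTester stage pipeline.
# Names are assumptions; real stages would call export/lowering/runtime APIs.


class Stage:
    name = "stage"

    def run(self, artifact):
        raise NotImplementedError


class Export(Stage):
    name = "export"

    def run(self, artifact):
        # A real stage would torch.export the eager module.
        return {**artifact, "exported": True}


class ToEdgeTransformAndLower(Stage):
    name = "lower"

    def run(self, artifact):
        # A real stage would run the cortex_m passes and partitioner.
        return {**artifact, "lowered_to": "cortex_m"}


class RunOnFVP(Stage):
    name = "run"

    def run(self, artifact):
        # A real stage would launch the FVP with semihosting and compare
        # the .pte outputs against eager-mode reference results.
        return {**artifact, "ran_on": "fvp"}


class CortexMTester:
    def __init__(self, module):
        self.artifact = {"module": module}
        self.history = []

    def _apply(self, stage):
        self.artifact = stage.run(self.artifact)
        self.history.append(stage.name)
        return self

    def export(self):
        return self._apply(Export())

    def to_edge_transform_and_lower(self):
        return self._apply(ToEdgeTransformAndLower())

    def run_method_and_compare_outputs(self):
        return self._apply(RunOnFVP())
```

A test would then read as one fluent chain, e.g. `CortexMTester(module).export().to_edge_transform_and_lower().run_method_and_compare_outputs()`, which is what lets the generic test suite run unchanged against any backend that provides such a flow.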
Python code tests
We should have some level of unit testing of the Python code. We can reuse the tester for easily counting the expected number of ops before/after passes. For testing the dialect ops, we should write some helper functions for asserting that they produce the expected shapes and results (if such helpers do not already exist).
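The op-counting part can be a very small helper. In this hedged sketch, ops are plain strings standing in for torch.fx node targets, and both helper names are assumptions rather than existing tester APIs:

```python
# Hedged helper sketch for asserting op counts before/after a pass.
# Ops are plain strings standing in for torch.fx node targets.


def count_ops(ops, expected):
    """Return {op: actual count} for every op named in `expected`."""
    return {op: sum(1 for o in ops if o == op) for op in expected}


def assert_op_counts(ops, expected):
    """Fail with a readable diff if the actual counts deviate."""
    actual = count_ops(ops, expected)
    assert actual == expected, f"expected {expected}, got {actual}"
```

A pass test would then assert, for example, that two quantize/add/dequantize clusters collapse into one fused op after the pass (with the fused op name depending on the dialect).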
Network tests
In principle we can reuse the tester framework for running full network tests as well, instead of having to keep integrating cortex_m support into the run.sh script / arm_aot_compiler. We should still have a proper full-flow test, which is also useful as an example for users; in that case I would suggest something close to https://github.com/pytorch/executorch/blob/main/examples/arm/ethos_u_minimal_example.ipynb but for cortex_m.
Thoughts?
@digantdesai @psiddh @per @zingo