Proper Memory Allocator for Android QNN EP on GPU? #26890
brady-cherish asked this question in EP Q&A · Unanswered
Replies: 0 comments
What is the proper way to allocate memory/buffers (`Ort::MemoryInfo` and/or `Ort::Allocator`) for inference on an Adreno GPU when using the C++ APIs and the QNN EP? I've seen behavior where I create input tensors with varying data, yet subsequent inferences produce the same, repeated output.
Any help or pointers to appropriate resources would be greatly appreciated.
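For reference, here is a minimal sketch of the pattern commonly used with the QNN EP: inputs are created as ordinary host (CPU) tensors and the EP handles the device transfer itself. The model path, input/output names, shape, and the `backend_path` value below are assumptions for illustration; the key point is that `Ort::Value::CreateTensor` wraps the user buffer without copying, so the buffer must be refilled with the current data before every `Run()` (reusing a stale buffer is one way to get identical outputs across inferences).

```cpp
#include <onnxruntime_cxx_api.h>
#include <algorithm>
#include <array>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "qnn-example");

  Ort::SessionOptions opts;
  // Register the QNN EP. "backend_path" selects the QNN backend library;
  // check your build's docs for the correct library name for Adreno GPU.
  opts.AppendExecutionProvider("QNN", {{"backend_path", "libQnnGpu.so"}});

  Ort::Session session(env, "model.onnx", opts);  // hypothetical model path

  // Inputs are supplied as host tensors; a plain CPU MemoryInfo is what
  // CreateTensor expects here. The EP copies data to the device internally.
  Ort::MemoryInfo mem_info =
      Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);

  std::array<int64_t, 4> shape{1, 3, 224, 224};  // hypothetical input shape
  std::vector<float> input_data(1 * 3 * 224 * 224);

  const char* input_names[] = {"input"};    // hypothetical I/O names
  const char* output_names[] = {"output"};

  for (int i = 0; i < 3; ++i) {
    // Refill the buffer with this iteration's data BEFORE running; the tensor
    // below is a view over this buffer, not a copy.
    std::fill(input_data.begin(), input_data.end(), static_cast<float>(i));

    Ort::Value input = Ort::Value::CreateTensor<float>(
        mem_info, input_data.data(), input_data.size(),
        shape.data(), shape.size());

    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               input_names, &input, 1,
                               output_names, 1);
    // outputs[0] now holds this iteration's result.
  }
  return 0;
}
```

If outputs still repeat with fresh data in the buffer, it is worth verifying the issue reproduces on the CPU EP to rule out a model- or binding-level cause before suspecting the QNN allocator path.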