Replies: 4 comments 2 replies
-
I personally test my dev work on an "old" Jetson Xavier NX. It was an extreme ordeal to compile it myself with Python 3.10 at most, and the result does not work 100% on every torch function, but at least E2A recognized it with CUDA. However, you need to change some default settings in the native install script (or in the Docker settings if you use Docker).
In native mode, once the script has installed everything, just quit it and log into its Python virtual env. Then replace torch with your Jetson build: pip install torch-for-jetson.etc... For Docker, here is the wiki to do it:
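For the native route, the swap amounts to roughly the following sketch. The venv path and the wheel file name here are assumptions for illustration, not the project's actual layout; grab the wheel matching your JetPack and Python version from NVIDIA's Jetson PyTorch downloads:

```shell
# Enter the virtual env the install script created
# (path is hypothetical; adjust to wherever the script put it)
source ./python_env/bin/activate

# Remove the generic CPU/x86 torch the script installed
pip uninstall -y torch

# Install NVIDIA's Jetson build instead
# (file name is an example; use the wheel for your JetPack + Python version)
pip install ./torch-2.3.0-cp310-cp310-linux_aarch64.whl

# Sanity check: torch should now see the GPU
python -c "import torch; print(torch.cuda.is_available())"
```

These are device-specific setup commands, so treat them as a template rather than something to copy verbatim.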
-
Right now it's JetPack 5... could that be my issue? It's Ubuntu 20.04 (eek). Can PYTHON_VERSION be set via an ENV variable? I followed the Docker Compose instructions in that wiki without any luck.
-
At that point it seems easier to flash to JetPack 6, right? Lots of benefit there, not much drawback, especially if using Docker Compose.
-
If it's possible for your Jetson, absolutely yes. You'll be able to upgrade CUDA and all the components needed to work with torch 2.3+, and by the way, NVIDIA already offers a prebuilt package for you at that point (my link above).
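Before flashing, it's worth confirming which JetPack release the device is currently on. On a standard L4T image, the `nvidia-jetpack` meta-package carries the version:

```shell
# Prints the installed JetPack version (e.g. 5.1.x vs 6.x)
apt-cache show nvidia-jetpack | grep -m1 Version
```

If this reports a 5.x version, flashing to JetPack 6 (via NVIDIA SDK Manager) is what unlocks the newer CUDA stack.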
-
I have a Jetson AGX Orin dev kit I'm using to try to develop some tools for higher education. I ran across your project while looking for a solution for learners with various disabilities, for whom audiobooks, or a combination of audio and textual input, work better than one or the other (or who cannot use a text-based version at all). I can easily run the CPU version on my own hardware, but I was hoping to take advantage of the capabilities of something like the Jetson AGX Orin to offer this as an option for disability services.
I did try setting TORCH_VERSION, as well as building with the runtime set to nvidia (which is configured as the global runtime for Docker), as part of the compose file, but no dice. I'm not entirely sure what would be required to detect the onboard GPU or use a custom build of PyTorch, but that would be an awesome improvement, and with the Orin Nano priced at $250, I'd bet it would be for a lot of others running low-powered devices designed for AI.
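For reference, the compose-side changes described above come down to something like the sketch below. The service name, image placeholder, and the exact meaning of TORCH_VERSION are assumptions drawn from this thread, not the project's documented schema:

```yaml
services:
  app:                        # hypothetical service name
    image: <project-image>    # placeholder; use the project's published image
    runtime: nvidia           # route the container through the NVIDIA runtime
    environment:
      - TORCH_VERSION=2.3     # version mentioned in this thread; match your JetPack
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

On Jetson, the `nvidia` runtime (nvidia-container-runtime configured as Docker's default) is what exposes the integrated GPU to the container; without it, torch inside the container will only ever see the CPU.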