Replies: 5 comments 1 reply
-
the biggest issue for now is to replace the torch library installed by default by coqui-tts, which for Python 3.12 is either a CPU build or a GPU build targeting CUDA 12.5, meaning if your GPU is not supported by CUDA 12.5 it won't be recognized. The temporary fix would be to uninstall torch under the eb2ab python_env (conda activate ./python_env) and then find the right torch CUDA version for your GPU.
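If it helps, a rough sketch of that swap could look like the following (the cu121 index URL is only an example, not the one you necessarily need; pick the wheel index from pytorch.org that matches your GPU and driver):

```bash
# activate the project's conda environment (path may differ on your install)
conda activate ./python_env

# remove the torch build that coqui-tts pulled in
pip uninstall -y torch torchaudio

# reinstall a build matching your CUDA version; cu121 here is just an example,
# use the index URL from pytorch.org that fits your GPU/driver
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu121
```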
-
Oh, by the way, you said you are renting a GPU at HF, right? You can test whether the GPU is detected by running python gpu_test.py or python gpu_notebook_test.py in the tools folder.
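If you can't locate those scripts, a quick one-liner does roughly the same check (this is just a generic torch check, not the exact contents of gpu_test.py):

```bash
conda activate ./python_env
# prints whether CUDA is available, the CUDA version torch was built with, and the device name
python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda, torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no GPU')"
```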
-
Thanks for replying. You'll have to bear with me; Hugging Face and a lot of this stuff is new to me, so I'm trying to learn as I go. I was not sure how to interact with Hugging Face to run these commands. I ended up using SSH, but the python command was not present in tools.
-
ChatGPT or other tools can be helpful for you. I'm sorry I don't have time to train users here ;)
-
Thank you for the reply. I did not expect nor want you to help in that regard. I was just giving some information. Thanks for all your help and all the amazing work you're doing.
-
I'm running a copy in Hugging Face and I really like it. I'm just wondering about the best setup to use (bearing in mind cost). I'm running on T4 medium and I have a book that's about average length: using wc -w in Linux I come up with 99946 words, for whatever that's worth.
Anyway, I'm wondering about conversion time. It's at about 50% and has been running for about 6000 seconds or so. FYI, I've selected the Morgan Freeman model and I've selected GPU.
I'm just getting into HF; I tried kokoro, but I love ebook2audiobook's ability to clone voices. I tried it with some sample text and it sounded great.
Just wondering if my time so far is on point. Do I need to change processors? Or am I doing something wrong?
Thanks in advance.
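For what it's worth, a back-of-the-envelope linear extrapolation from those numbers (actual speed varies with chunk length and model, so treat this as a rough guess only):

```bash
# 50% done after ~6000 s -> roughly 12000 s (~3h20m) total
echo $(( 6000 * 100 / 50 ))   # estimated total seconds
# throughput so far: ~99946/2 words in 6000 s, i.e. roughly 8 words per second
echo $(( 99946 / 2 / 6000 ))  # approximate words per second
```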