When I run Faster-Whisper-XXL Pro with the mb-roformer model, I get the following warning:
"faster_whisper\roformer_attend.py:73: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263."
The command I used was:
"%dp%faster-whisper-xxl.exe" %file_list% -pp -o source --batch_recursive --check_files --standard -f srt -l Japanese -m large-v2 -o source --temperature 0 --word_timestamps True --ff_vocal_extract mb-roformer --vad_method pyannote_v3
The warning appears regardless of which vad_method I use.
Is this an issue that needs to be fixed? If so, what do I need to do to correct it?
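
For reference, my understanding is that this warning comes from PyTorch's scaled_dot_product_attention falling back from the flash-attention kernel to another backend, so it may be informational rather than an error. Below is a minimal sketch (assuming PyTorch 2.x; the suppression filter is my own workaround, not anything built into Faster-Whisper-XXL) that checks which SDPA backends the installed build enables and optionally silences the message:

```python
# Minimal sketch (PyTorch 2.x assumed): inspect which scaled-dot-product-
# attention backends this build enables, and optionally hide the warning.
import warnings

import torch

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # Flash attention is only one of several SDPA backends; when it is
    # unavailable, PyTorch falls back to another kernel and emits the warning.
    print("flash SDP enabled:        ", torch.backends.cuda.flash_sdp_enabled())
    print("mem-efficient SDP enabled:", torch.backends.cuda.mem_efficient_sdp_enabled())
    print("math SDP enabled:         ", torch.backends.cuda.math_sdp_enabled())

# Workaround (my assumption): suppress the message without changing behavior;
# the fallback kernel still runs and the transcription output is unaffected.
warnings.filterwarnings(
    "ignore",
    message=".*Torch was not compiled with flash attention.*",
    category=UserWarning,
)
```

If the maintainers confirm the warning is harmless, something like the filter above could also be applied inside roformer_attend.py itself.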