## transcribe

Simple script that uses OpenAI's Whisper to transcribe audio files from your local folders.

## Note
This implementation and guide are mostly intended for researchers who are not familiar with programming and want a way to transcribe their files locally, without an internet connection, as is often required within ethical data practices and frameworks. Two examples are shown: a normal workflow with an internet connection, and one in which the model is loaded first, via openai-whisper, so that the transcription can then be done without being connected to the internet. There is now also a GUI implementation; read below for more information.

### Instructions

#### Requirements

1. This script was made and tested in an Anaconda environment with Python 3.10. I recommend this method if you're not familiar with Python.
See [here](https://docs.anaconda.com/anaconda/install/index.html) for instructions. You might need administrator rights.

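If you prefer to keep things isolated, one possible way to set up such an environment from the Anaconda prompt is shown below (the environment name `transcribe` is arbitrary):

```
conda create -n transcribe python=3.10
conda activate transcribe
```
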
2. Whisper requires some additional libraries. The [setup](https://github.com/openai/whisper#setup) page states: "The codebase also depends on a few Python packages, most notably HuggingFace Transformers for their fast tokenizer implementation and ffmpeg-python for reading audio files."

Users might not need to specifically install Transformers. However, a conda installation might be needed for ffmpeg[^1], which takes care of setting up PATH variables. From the Anaconda prompt, type or copy the following:

```
conda install -c conda-forge ffmpeg-python
```

3. The main functionality comes from openai-whisper. See their [page](https://github.com/openai/whisper) for details. As of 2023-03-22 you can install via:

```
pip install -U openai-whisper
```

4. There is an option to run a batch file, which launches a GUI built on Tkinter and ttkthemes. If you use this option, make sure both are installed in your Python build. You can install them via pip:

```
pip install tk
```

and

```
pip install ttkthemes
```

#### Using the script

This is a simple script with no installation. You can either clone the repository with

```
git clone https://github.com/soderstromkr/transcribe.git
```

and use the example.ipynb template, **OR** download the `transcribe.py` file into your work folder. You can then import it into another script or notebook for use; I recommend [Jupyter Notebook](https://jupyter.org/) for new users, see the example below. (Remember to keep `transcribe.py` and `example.ipynb` in the same working folder.)

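As a rough illustration of the import-and-call pattern only, the snippet below is hypothetical: the real function name and parameters are defined in `transcribe.py` and demonstrated in `example.ipynb`.

```
# Hypothetical sketch -- check example.ipynb for the actual interface.
import transcribe  # requires transcribe.py in the same working folder

# Illustrative call only; the real argument names may differ.
transcribe.transcribe(folder="audio_files", model="base")
```
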
#### Example with Jupyter Notebook

See [example](example.ipynb) for an implementation in a Jupyter Notebook. There is also an example of a simple [workaround](example_no_internet.ipynb) for transcribing while offline.

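For context, the offline workaround relies on openai-whisper downloading the model weights once and caching them locally (typically under `~/.cache/whisper`), after which transcription itself needs no connection. A minimal sketch using openai-whisper directly, where the model size and file path are placeholders rather than the repository's defaults:

```
import whisper

# First run (online): downloads and caches the model weights locally,
# so later runs can load the same model without a connection.
model = whisper.load_model("base")

# Transcribe a local audio file; the path is a placeholder.
result = model.transcribe("audio_files/interview_01.mp3")
print(result["text"])
```
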
#### Using the GUI

You can also run the GUI version from your terminal by running `python GUI.py`, or with the batch file called run_Windows.bat (for Windows users); just make sure to add your conda path to it. If you want to download a model first and then go offline for transcription, I recommend running the model on the default sample folder, which will download the model locally.

The GUI should look like this:



or this, on a Mac, by running `python GUI.py` or `python3 GUI.py`:


[^1]: Advanced users can use `pip install ffmpeg-python` but be ready to deal with some [PATH issues](https://stackoverflow.com/questions/65836756/python-ffmpeg-wont-accept-path-why), which I encountered in Windows 11.
|