Commit 1ea5187: merge pull request #1 from bjornekstrom/main ("README.md formatting suggestions"), 2 parents 4e1c709 + 0051ceb

2 files changed: +39, -12 lines

README.md (39 additions, 12 deletions)
## transcribe

Simple script that uses OpenAI's Whisper to transcribe audio files from your local folders.

## Note

This implementation and guide are aimed mostly at researchers who are not familiar with programming and want a way to transcribe their files locally, without an internet connection, as is often required by ethical data practices and frameworks. Two examples are shown: a normal workflow with an internet connection, and one in which the model is loaded first, via openai-whisper, so that transcription can then be done without being connected to the internet. There is now also a GUI implementation; read below for more information.
### Instructions

#### Requirements
1. This script was made and tested in an Anaconda environment with Python 3.10. I recommend this method if you're not familiar with Python.

See [here](https://docs.anaconda.com/anaconda/install/index.html) for instructions. You might need administrator rights.
2. Whisper requires some additional libraries. The [setup](https://github.com/openai/whisper#setup) page states: "The codebase also depends on a few Python packages, most notably HuggingFace Transformers for their fast tokenizer implementation and ffmpeg-python for reading audio files."

Users might not need to specifically install Transformers. However, a conda installation might be needed for ffmpeg[^1], which takes care of setting up PATH variables. From the Anaconda Prompt, type or copy the following:

```
conda install -c conda-forge ffmpeg-python
```
3. The main functionality comes from openai-whisper. See their [page](https://github.com/openai/whisper) for details. As of 2023-03-22 you can install via:

```
pip install -U openai-whisper
```
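Once openai-whisper is installed, you can also call it from Python directly. A minimal sketch (the file name, sample path, and model size below are placeholders, not part of this repository's script):

```python
def transcribe_file(audio_path, model_name="base"):
    """Transcribe one audio file with Whisper and return the recognized text."""
    import whisper  # imported lazily so this module loads even before whisper is installed

    model = whisper.load_model(model_name)  # downloads the model weights on first use
    result = model.transcribe(audio_path)
    return result["text"]
```

For example, `text = transcribe_file("my_interview.mp3")` would transcribe a single file with the small multilingual "base" model.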
4. There is an option to run a batch file, which launches a GUI built on Tkinter and ttkthemes. If using these options, make sure they are installed in your Python build. You can install them via pip:
```
pip install tk
```

and

```
pip install ttkthemes
```
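If you're unsure whether the GUI dependencies made it into your environment, a quick sanity check (assuming the import names are `tkinter`, which ships with most Python builds, and `ttkthemes`):

```python
import importlib.util

# Report whether each GUI dependency can be imported; find_spec() checks
# availability without actually importing the module.
for module in ("tkinter", "ttkthemes"):
    found = importlib.util.find_spec(module) is not None
    print(f"{module}: {'installed' if found else 'missing'}")
```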
#### Using the script

This is a simple script with no installation. You can either clone the repository with

```
git clone https://github.com/soderstromkr/transcribe.git
```

and use the `example.ipynb` template to use the script,

**OR** download the `transcribe.py` file into your work folder. Then you can either import it into another script or notebook. I recommend [Jupyter Notebook](https://jupyter.org/) for new users; see the example below. (Remember to have `transcribe.py` and `example.ipynb` in the same working folder.)

#### Example with Jupyter Notebook

See [example](example.ipynb) for an implementation in Jupyter Notebook. There is also an example of a simple [workaround](example_no_internet.ipynb) for transcribing while offline.
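The offline workaround relies on Whisper's local model cache: `load_model()` stores the downloaded weights under `~/.cache/whisper` (or `$XDG_CACHE_HOME/whisper`) the first time it runs, and later calls read from that folder without network access. A small sketch to check whether a model is already cached locally:

```python
import os

# Default location where openai-whisper caches downloaded model weights.
cache_dir = os.path.join(
    os.getenv("XDG_CACHE_HOME", os.path.expanduser("~/.cache")), "whisper"
)

# A model is ready for offline use once its .pt file is in the cache.
cached_models = (
    [f for f in os.listdir(cache_dir) if f.endswith(".pt")]
    if os.path.isdir(cache_dir)
    else []
)
print("cached models:", cached_models or "none yet")
```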
#### Using the GUI

You can also run the GUI version from your terminal with `python GUI.py`, or with the batch file called run_Windows.bat (for Windows users); just make sure to add your conda path to it. If you want to download a model first and then go offline for transcription, I recommend running the model on the default sample folder, which will download the model locally.

The GUI should look like this:
![python GUI.py](gui_jpeg.jpg?raw=true)

or this, on a Mac, by running `python GUI.py` or `python3 GUI.py`:

![python GUI Mac.py](gui-mac.png)

[^1]: Advanced users can use `pip install ffmpeg-python`, but should be ready to deal with some [PATH issues](https://stackoverflow.com/questions/65836756/python-ffmpeg-wont-accept-path-why), which I encountered in Windows 11.

gui-mac.png (324 KB)
